id: stringlengths 1–5
prompt: listlengths 1–1
completion: listlengths 1–1
Label: stringclasses, 3 values
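The header above is the column schema for the records that follow. As a reading aid, here is a minimal Python sketch of that record structure, assuming the rows are JSON-like objects whose fields match the schema (the field names `id`, `prompt`, `completion`, and `Label` come from the header; the `Record` class and the truncated example values are illustrative only):

```python
from dataclasses import dataclass
from typing import Dict, List

# One chat turn inside the `prompt` / `completion` lists; per the schema,
# both lists contain exactly one message in this dump.
Message = Dict[str, str]  # {"role": "user" | "assistant", "content": "..."}

@dataclass
class Record:
    id: str                    # string id, 1-5 characters (e.g. "900")
    prompt: List[Message]      # single user message: article context + ICO question
    completion: List[Message]  # single assistant message: the given answer
    Label: str                 # one of 3 label classes (e.g. "significantly increased")

# Illustrative construction of the first record below, with content truncated:
example = Record(
    id="900",
    prompt=[{"role": "user", "content": "You are an expert medical assistant ... **Question:** ..."}],
    completion=[{"role": "assistant", "content": "**Answer:** significantly increased\n"}],
    Label="significantly increased",
)
```

The records below (ids 900, 961, 967) follow this layout.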
900
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Evaluation of the functional efficacy of an antioxidative probiotic in healthy volunteers\n\n ABSTRACT.BACKGROUND:\nIn persons without clinical symptom it is difficult to assess an impact of probiotics regarding its effect on health. We evaluated the functional efficacy of the probiotic Lactobacillus fermentum ME-3 in healthy volunteers by measuring the influence of two different formulations on intestinal lactoflora, fecal recovery of the probiotic strain and oxidative stress markers of blood and urine after 3 weeks consumption.\n\nABSTRACT.METHODS:\nTwo 3-week healthy volunteer trials were performed. Open placebo controlled (OPC) study participants (n = 21) consumed either goat milk or by L. fermentum ME-3 fermented goat milk (daily dose 11.8 log CFU (Colony Forming Units). Double blind randomised placebo controlled (DBRP) study participants (n = 24) received either capsules with L. fermentum ME-3 (daily of dose 9.2 CFU) or placebo capsules. The faecal lactoflora composition, faecal ME-3 recovery, effect of the consumption on intestinal lactoflora, and oxidative stress markers of blood (total antioxidative activity; total antioxidative status and glutathione red-ox ratio) was measured.\n\nABSTRACT.RESULTS:\nME-3 was well tolerated and a significant increase in total faecal lactobacilli yet no predominance of ME-3 was detected in all study groups. Faecal recovery of ME-3 was documented by molecular methods only in fermented milk group, however the significant improvement of blood TAA (Total Antioxidative Activity) and TAS (Total Antioxidative Status) indices was seen both in case of fermented goat milk and capsules\", yet glutathione re-ox ratio values decreased only in case of fermented by ME-3 goat milk.\n\nABSTRACT.CONCLUSION:\nThe functional efficacy of both consumed formulations of an antioxidative probiotic L. fermentum ME-3 is proved by the increase of the intestinal lactobacilli counts providing putative defence against enteric infections and by reduction of the oxidative stress indices of blood and urine of healthy volunteers. In non-diseased host the probiotic health claims can be assessed by improvement of some measurable laboratory indices of well-established physiological functions of host, e.g. markers of antioxidative defence system.\n\nBODY.BACKGROUND:\nProbiotics are defined as live microbial food supplements, which beneficially influence human health [1,2]. Widely accepted probiotics contain different lactic acid producing bacteria of human origin: bifidobacteria, lactobacilli or enterococci. Nowadays the concept of functional foods, incl. probiotic food and dietary supplements implies to their ability to beneficially influence body functions in order to improve the state of well-being and health and reduce the risk of disease [2,3]. 
The important areas of human physiology that are relevant to functional food science according ILSI (International Life Science Institute) and FUFOSE (The European Commission Concerted Action on Functional Food Science in Europe) are besides others, the modulation of basic metabolic processes and defence against high-grade oxidative stress [4,5]. Human nutrition is clearly associated with oxidative metabolism, which beside production of energy is involved in a number of vital functions of the host. For example, under physiological conditions the reactive species (including peroxyl radicals, nitric oxide radical, superoxide anion) figure a crucial role in primary immune defense of the human body by phagocytic cells against harmful microorganisms [6,7]. On the other hand, a prolonged excess of reactive species is highly damaging for the host biomolecules and cells, resulting in dysbalance of the functional antioxidative network of the organism and leading to substantial escalation of pathological inflammation [8]. By our knowledge, no systematic studies have been performed to approve the functional efficacy of different formulations of probiotic on the antioxidative defence system of a healthy human. In our previous study Lactobacillus fermentum ME-3 (DSM 14241) [9-11], expressed strong antimicrobial activity against Gram-positive and Gram-negative entero- and uropathogens [12,13]. The cells and cell lysate of L. fermentum ME-3 possessed substantial antioxidative potency [14]. In an animal experiment ME-3 suppressed the excessive oxidative stress reaction caused by Salmonella infection in intestinal mucosa and thus improved the gut mucosal antioxidative status [15]. The antioxidative effect of L. fermentum ME-3 on human body oxidative stress markers was confirmed by our pilot study with fermented goat milk [16]. The aim present study was to evaluate the functional efficacy of the probiotic strain L fermentum ME-3 in the human gastrointestinal tract (GIT) of healthy volunteers. The faecal recovery, effect of two different formulations on total faecal lactoflora and oxidative stress markers of blood and urine were compared after 3 weeks consumption.\n\nBODY.METHODS.FORMULATIONS:\nThe efficacy of two different formulations (experimental fermented goat milk and probiotic capsules) on the human body oxidative stress markers was evaluated. Lactobacillus fermentum ME-3, a probiotic strain of healthy human intestinal origin [17], has been identified by biochemical and molecular methods [9]. The patent application has been submitted to the Estonian Patent Agency (Application No. 0356/01PV) as well as to the International Bureau of World Intellectual Property Organization (WIPO) (Application No. WO03002131) [11]. L. fermentum ME-3 was used as freeze-dried powder in capsulated form and in fermented milk.\n\nBODY.METHODS.FORMULATIONS.CAPSULES:\nGelatine coated capsules were manufactured by the Tallinn Pharmaceutical Company. The freshly prepared probiotic capsules contained 9.0 logs CFU of L. fermentum ME-3 per capsule in addition to 250 mg of saccharose and microcellulose. Identical placebo capsules contained only saccharose and microcellulose. All capsules were stored at +4°C.\n\nBODY.METHODS.FORMULATIONS.SURVIVAL OF ME-3 IN CAPSULE:\nSurvival of ME-3 in capsule was monitored during 12 months at +4°C. The content of one capsule was dissolved aseptically in 2 ml of 0.9% NaCl solution. The suspension was vortexed, serially diluted and seeded 0.1 ml on de Man-Rogosa-Sharpe (MRS) agar medium (OXOID, U.K.) 
and incubated 48 hours at 37°C microaerobically (10% CO2). The number of colonies was counted and the viable cell count in capsule was calculated.\n\nBODY.METHODS.FORMULATIONS.EXPERIMENTAL FERMENTED MILK:\nThree different lots of experimental fermented goat milk was prepared for the 3-week trial with healthy volunteers in order to establish the health effects of ME-3 consumption. The study group was supplied with fresh product once a week. Experimental fermented milk was prepared as described previously [16] by combining the probiotic strain with two supportive lactobacilli cultures L. plantarum LB-4 and L. buchneri S-15. L. buchneri strain S1-5 decreased the specific taste of the goat milk. L. plantarum LB-4 was included as a strong producer of exopolysaccharides, which gives the fermented milk a cream-like consistence and delightful acidity. The goat milk was inoculated with 2% mixture of Lactobacillus strains and incubated at 37°C for 24 hours. Theproduct, ready to use, was cooled and stored at 4°C.\n\nBODY.METHODS.FORMULATIONS.SURVIVAL OF L. FERMENTUM ME-3 IN FERMENTED GOAT MILK:\nTo measure the viable cell count of ME-3 in fermented goat milk, samples were taken at the end of fermentation (before cooling the product), and after 24 h, 32 h, 48 h and 7 days from the preparation, when the product was stored at 4°C. The amount of 0.5 ml of the fermented milk was serially diluted in saline and plated on MRS agar medium and incubated for 48 h at 37°C in microaerobic conditions.\n\nBODY.METHODS.DESIGN OF HUMAN VOLUNTEER TRIALS:\nTwo healthy volunteer (n 45) trials, particularly open placebo controlled (OPC) study and double blind randomized placebo controlled (DBRP) study were carried on to evaluate the functional efficacy of L. fermentum ME-3 in the human body. The inclusion criteria included the wish to participate, no known health problems, and no medical conditions requiring drug therapy, no other yogurts or no special diets. The subjects with a history of GIT disease, food allergy and acute infection, use of any antimicrobial agent within the last month or use of any regular concomitant medication including antioxidant vitamins and anti-inflammatory non-steroidal drugs were excluded. The members of the trial were daily questioned about their general welfare, intestinal function (general welfare, gut gas production, stool frequency) and putative adverse effects. The withdrawal criteria from the trials included acute infections during the study. Reasons for dropout were the unwillingness to proceed with the study or relocation to new area. The blood samples (6 ml) from the antecubital vein, faecal and urine samples were collected before and at the end of all clinical trials. Participants of all trials gave informed consent to the study protocols approved by the Ethical Committee of Tartu University.\n\nBODY.METHODS.DESIGN OF HUMAN VOLUNTEER TRIALS.OPEN PLACEBO CONTROLLED FERMENTED GOAT MILK TRIAL:\nThe study participants were 5 men and 16 women, mean age 50 years (range 35–60). During three weeks of the trial the study group (3 males and 13 females) consumed daily 150 ml fermented goat milk. The daily dose of probiotic Lactobacillus strain was 11.2 to 11.8 log CFU per person. The control group (1 male and 4 females) consumed the same dose of fresh goat milk.\n\nBODY.METHODS.DESIGN OF HUMAN VOLUNTEER TRIALS.PROBIOTIC CAPSULE TRIAL:\nA DBRP study (ISRCTN 53164826) was carried out as follows. 
The study group consisted of 15 men and 9 women, mean age 52 years (range 40–60) allocated according to their wish to participate and randomly divided by an independent person and computer program for two groups. The study group members (8 males and 4 females) took three probiotic containing capsules (8.4 log CFU per capsule) two times daily (the daily dose 9.2 log CFU) during three weeks. The placebo group (7 males and 5 females) received identical capsules without the probiotic strain. Faecal samples of all participants to assess change in faecal lactoflora and the persistence of the ingested probiotic strain were collected before and at the end of trial. Several laboratory indices of blood and urine were measured before and after the consumption of ME-3. Here we report on changes in human body oxidative stress markers as total antioxidative activity (TAA), total antioxidative status (TAS) and glutathione red-ox ratio (GSH/GSSG) from blood serum.\n\nBODY.METHODS.MICROBIOLOGICAL ANALYSES OF FAECES:\nThe total count of lactobacilli and the count of L. fermentum were evaluated in faecal samples. The faecal samples were collected at day 0 and 21 in both trials. Samples were kept at -80°C before analyzed. Serial dilutions (10-2–10-9) of the weighed faecal samples were prepared with phosphate buffer (pH 7.2) and 0.05 ml of aliquots was seeded onto MRS agar medium [17]. The plates were incubated at 37°C for 4 days microaerobically in 10% CO2 environment (incubator IG 150, Jouan, France). The catalase negative colonies were selected on the basis of typical for LAB colony morphology, cells microscopy and Gram staining. The count of Lactobacillus species was expressed in log10 colony forming units per gram faeces (log10 CFU/g) and percentage (relative share) in the total count of lactobacilli. The detection level of lactobacilli was a 3.0 log CFU/g faeces. The relative amount of L. fermentum, colonizing the gastrointestinal tract of persons in the study groups was expressed as a proportion of the total count (%), using the Bioquant program [18]). The program gives output data for every microorganism as an absolute count (log10 CFU/g) and their percentage in the total count with its normal values.\n\nBODY.METHODS.AP-PCR TYPING:\nThe putative ME-3 isolates were typed by arbitrarily primed polymerase chain reaction (AP-PCR). Genomic DNA was extracted from 24 h old cultures, cultivated on MRS agar microaerobically with the QIAamp DNA Mini Kit 50 (QIAGEN GmbH., Hilden, Germany) according to the manufacturers instructions. AP-PCR typing was done with two primers: ERIC1R (5'-ATGTAAGCTCCT GGGGATTCAC-3') and ERIC2 (5'-AAGTAAGTGACTGGGGTGAGCG -3') (DNA Technology A/S, Aarhus, Denmark). A 30 μl volume of reaction mixture consisted of 10 × PCR buffer (Fermentas, Vilnius, Lithuania), 2.5 mM MgCl2 (Fermentas, Vilnius, Lithuania), 200 μM deoxynucleoside triphosphate mixture (dATP, dGTP, dTTP and dCTP, Amersham Pharmacia Biotech, Freiburg, Germany) 0, 60 μg of each primer and 2.5 U Taq DNA Polymerase (Fermentas, Vilnius, Lithuania,) and 5 μl of extracted DNA according to Matsumiya et al. [19]. The PCR mixture was subjected to thermal cycling 35 cycles of denaturation at 95°C for 1 min, annealing at 35°C for 1 min, and extension at 74°C for 2 min, with a final extension at 74°C for 5 min with the PTC-200 thermal cycler (Eppendorf AG, Hamburg, Germany). 
The PCR products were separated by electrophoresis in a horizontal 2% agarose gel containing 0.1 μl/ml ethidium bromide in Tris-acetic acid-EDTA (TAE) buffer (40 mM Tris, 20 mM boric acid, 1 mM EDTA, pH 8.3) (Bio-Rad Laboratories, Hercules, USA) at constant voltage of 120 V. A 1 kb ladder (GeneRuler, Fermentas, Vilnius, Lithuania) was used as a base pair size marker. The banding patterns of isolates were visualized with UV light and compared with that of L. fermentum ME-3 strain.\n\nBODY.METHODS.MEASUREMENT OF HUMAN BODY OXIDATIVE STRESS STATUS:\nBlood serum was analysed for total antioxidative activity TAA, total antioxidative status TAS and glutathione red/ox ratio (oxidized glutathione and reduced glutathione, GSSG/GSH). TAA of the serum was assessed by the linolenic acid test (LA-test) described previously [16]. This test evaluates the ability of the sample to inhibit lipid peroxidation. TAS of the serum was measured with a commercially available kit (TAS, Randox Laboratories Ltd. Ardmore, UK) as described elsewhere [16], water-soluble vitamin E (Trolox) serving as a standard. This method is based on the inhibition of the absorbance of the ferrylmyoglobin radicals of 2,2'-azinobis-ethylbenzothiazoline 6-sulfonate (ABTS+) generated by activation of metmyoglobin peroxidase with H2O2. The cellular oxidative stress markers as total glutathione and oxidized glutathione were measured using the method of Griffith [20] as described elsewhere [16]. The glutathione content was calculated on the basis of a standard curve generated with known concentration of glutathione. Amount of GSH (μg/ml) was calculated as a difference between the total glutathione and GSSG (total glutathione – GSSG). The glutathione red/ox ratio was expressed as GSSG/GSH.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nThe computer program Sigma Stat for Windows 2.0 (Jandel Corporation, USA) was applied. The counts of faecal lactoflora were compared by using Student's t-test and Mann-Whitney rank sum test. Changes in oxidative stress markers of blood sera (TAA, TAS and glutathione red-ox ratio) were evaluated by Student's t-test, paired t-test and Mann-Whitney rank sum test. The choice of tests was made automatically according to the distribution of the data. Both microbial and biochemical markers were given as mean and standard deviation. One-way ANOVA test was performed to compare the effect of different formulation on TAA, TAS and faecal lactoflora parameters. Differences were considered statistically significant if the value was p < 0.05.\n\nBODY.RESULTS.SURVIVAL OF ME-3 IN FORMULATIONS:\nIn capsule after approximately 1-log drop after one week from the production of the capsules, the viable count of the probiotic strain remained stable at the level of 8.4 log CFU per capsule. Additional results have shown at +4°C the stability of the freeze-dried capsulated culture at least 17 months from the production. In fermented goat-milk the cell count of the probiotic strain varied insignificantly from 9.0 to 9.7 log CFU/ml from one preparation to the other. The viable count of ME-3 in the fermented goat milk was found to remain stable at least during 7 days of storage at 4°C.\n\nBODY.RESULTS.HUMAN VOLUNTEER TRIALS:\nNo dropouts were registered during volunteer trials, yet one participant was withdrawn from the probiotic capsule trial due to acute respiratory viral infection. 
Besides, no adverse affects in general welfare or changes in GI functionality were assessed during the trial.\n\nBODY.RESULTS.HUMAN VOLUNTEER TRIALS.CHANGES IN TOTAL LAB COUNT:\nThe consumption of both ME-3 fermented milk and ME-3 capsule significantly increased the total count of lactobacilli in faeces as compared to the initial levels (Fig. 1). In opposite, in the group of volunteers consuming non-fermented goat-milk there was even a decrease in total LAB counts during the 3-week trial and no changes were found in capsule placebo group. Figure 1Increase of total fecal counts of lactobacilli in healthy volunteers consuming of ME-3 in fermented goat milk and probiotic capsule. 1 – Day 0, 2 – Day 21 Significantly different from pre-treatment values (Student's t-test): * p < 0.05; Significantly different from control (Student's t-test): ‡ p = 0.01\n\nBODY.RESULTS.HUMAN VOLUNTEER TRIALS.RECOVERY OF THE PROBIOTIC STRAIN:\nIn goat milk group L. fermentum as a species appeared in fecal samples of all individuals (n = 16) after consumption of fermented goat milk (Table 1). The AP-PCR confirmed the recovery of ME-3 in the faeces of all study group members (Fig. 2). However, in different trials the administration of ME-3 strain did not lead to the predominance of Lactobacillus fermentum species (Table 1).In the probiotic capsule trial the strain ME-3 was not detectable between L. fermentum isolates by AP-PCR. Table 1 Changes in fecal recovery of L. fermentum during healthy human volunteer trials L. fermentum Groups * Prevalence (%) † Count (log10) ‡ Proportion (%) Day 0 Day 21 Day 0 Day 21 Day 0 Day 21 Goat milk trial, ME-3 (n = 16) 25 (4/16) 100 (16/16) 7.0 ± 0.7 7.3 ± 1.4** 21 13 Control (n = 5) - 20 (1/5) - 3.6 - 28 Capsule trial, ME-3 (n = 11) 16.7(2/12) 33.3 (2/12) 4.3 ± 0.5 5.8 ± 1.6 4 9 Placebo (n = 12) 25 (3/12) 16.7 (2/12) 6.3 ± 2.5 8.0 ± 1.6 11 19 * Percentage of subjects with fecal L. fermentum inside the group ** Significantly different from the pre-treatment values (paired t-test): p < 0.001 † Median value ± SD ‡ Proportion of L. fermentum among fecal LAB Figure 2Confirmation for the survival of L. fermentum ME-3 in GIT in subjects receiving ME-3 fermented goat milk by AP-PCR in a horizontal 2% agarose gel. a) From the left: M – DNA 1 kb Ladder, Line 1 -L. bervis ATCC 14869, Line 2 – L. buchneri ATCC 4005, Line 3 – L. reuteri DSM 20016, Line 4 – L. fermentum ATCC 14931, Line 5 – L. fermentum ME-3 b) From the left: M – molecular weight marker, Line 1 ...16 – ME-3 like profiles from feces of goat milk trial study group participants\n\nBODY.RESULTS.HUMAN VOLUNTEER TRIALS.ANTIOXIDATIVE HEALTH EFFECT OF ME-3:\nThe positive effect on the blood ox stress markers as TAA and TAS was seen in the case of both formulations (Fig. 3). Particularly, the additive increase in goat milk group was 6% and 9% respectively as compared to control group, however only 4% for TAA and 2.5% for TAS in probiotic capsule study group as compared to placebo. Figure 3Effect of ME-3 consumption in fermented goat milk and capsules on human blood oxidative stress markers a) TAA (%) and b) TAS(mmol/l). 1 – Day 0, 2 – Day 21 Significantly different from pre-treatment values: *p < 0.05 (paired t-test); **p ≤ 0.01 (Student's t-test and paired t-test);ME-3 goat milk effect different from the effect of the ME-3 in capsule-form (ANOVA): ‡p ≤ 0.001 The effect of goat-milk consumption on the TAA and TAS values was significantlyhigher (p < 0.001) than by the consumption of the capsulated probiotic (Fig. 3). 
The decrease of the glutathione red-ox ratio was significant in both groups: the study Group (from 0.15 ± 0.01 to 0.11 ± 0.04 μg/ml, p < 0.01) and control (from 0.14 ± 0.03 to 0.11 ± 0.02 μg/ml, p < 0.01) in the goat milk trial. When the probiotic was consumed in capsulated form, no significant decrease was noticed in the glutathione red-ox ratio.\n\nBODY.DISCUSSION:\nOur aim was to evaluate the functional efficacy of the antimicrobial and antioxidative probiotic L. fermentum ME-3 in normal population with variable food intake. First of all the safety of the L. fermentum strain ME-3 was confirmed as no adverse side effects were registered in volunteers. Even relatively high (>1011 CFU) doses of consumed ME-3 had no negative impact on the hosts' general well-being. Lactobacillus fermentum as species, used in various food applications, has a well-established history of safe use and is evaluated as GRAS according to the Food and Drug Administration of the USA [21]. Second, a clear improvement of laboratory indices of antioxidative defence system of a healthy host was documented, using both formulations as fermented by L. fermentum ME-3 goat-milk and probiotic capsules. This effect was simultaneous with the increase of intestinal lactoflora of healthy volunteers even without necessity for faecal recovery of the strain. In the human population, persons without clinical symptoms have still a quite different health status, including stability, capacity and potency of antioxidative defence to counteract sufficiently to oxidative stress-caused adverse effects [7]. If a probiotic is able to exhibit a positive functionality on oxidative stress-related indices, it helps both to stabilize and promote the potency of the whole body antioxidative defence system in subclinical situations without disease symptoms. That in turn may have an impact for lowering the risk of atherosclerotic damage of blood vessels associated with several cardiovascular and neurodegenerative diseases [22-24]. In our study of healthy volunteers for validation of the antioxidative functionality of probiotic, four well-known oxidative stress markers of blood were chosen [25-27]. The state of the lipid fraction (including also LDL) in the antioxidative defence system of the blood is evaluated by TAA. TAS on the other hand reflects more the antioxidativity of the water-soluble fraction of the human blood. Among the measured blood sera markers both the TAA and TAS values were also improved in the two different study groups. However, there was found a significantly lower improvement of TAA and TAS values in cases of capsule than fermented goat-milk where the recovery of the strains was assessed by AP-PCR. Similarly, the reduction of the glutathione red-ox ratio was detected after the consumption of fermented by ME-3 goat-milk but not with the capsule. The crucial non-enzymatic cellular antioxidant is GSH [28] present in the millimolar range mainly in the red blood cells, liver, pancreas, kidneys, spleen, eyes, lungs and intestinal cells [29]. The oxidized form of glutathione becomes even at low concentrations toxic, and therefore in the cells the glutathione red-ox ratio is kept as low as possible. In the case of inflammation this balance is shifted towards the oxidized form, indicating non-physiological intracellular oxidative stress. Thus, our study shows that there is a good association between the mode of formulation of probiotic and expression of its functional properties inside the healthy host. 
The antioxidative potential of the food supplement containing ME-3 was excellent, as reisolates of the strain from capsule expressed significantly higher TAA in comparison with the base values of the strain in vitro (data not shown). Unexpectedly, the shifts in the antioxidativity markers in blood serum of participants of the probiotic capsule trial were less pronounced in comparison with ME-3 fermented goat milk. Particularly, the explanation for more expressed positive shifts in oxidative stress markers of volunteers of the fermented goat milk trial could be due to the synergistic effect of the probiotic and the substrate. Milk is not just a carrier for the probiotic Lactobacillus strain, but contains natural \"lactogenic\" factors like lactose, minerals, vitamins and other components that enhance the metabolic activity of ingested probiotic strain in GIT. Both fermented goat milk and goat milk elevated the values of TAA and TAS The goat milk contains different biomolecules (e.g. casomorphins, lactorphins, casokinins, etc) having certain antioxidative properties, which can contribute to consumers' plasma antioxidative capacity [30-36]. This was proved by some antioxidative effect also in persons consuming non-fermented goat milk. However, the elevation these indices were remarkably more expressed in the fermented goat milk group, thus the goat milk fermentation with L. fermentum ME-3 results in additive increase of total antioxidativity. Therefore, the provisional FAO regulations [37] suggesting the need for health claims by specified formulations of probiotic seem to be of the utmost importance. Additionally, in our study with experimental fermented milk the average daily dose of L. fermentum ME-3 being 11.5 logs CFU was clearly higher than that of capsule (max 9.5 log CFU). It is possible that the dose excesses the amount of bacteria necessary for interacting with intestinal mucosa and the unattached lactobacilli are excreted with faeces. The finding of Saxelin and colleagues confirmed that the faecal recovery of the probiotic strain started from the consumption of more than 9.0 log CFU daily doses of capsulated LGG [38]. To our surprise, in the present study the similar dose did not result in faecal recovery of the strain. It is possible that the ME-3 strain germinated mainly in some upper parts of intestinal tract where the advantageous conditions for survival and metabolic activity of probiotic lactobacilli were present. Using molecular tools, Marteau et al. showed that lactobacilli figuring only 7% of faecal microflora performed up to 30% of microbial communities in human colon [39]. If administered in lower quantities as in case of capsule trial, ME-3 did not reach the detectable level in faecal samples. Yet, its presence in gut was proved by the positive antioxidative health effect in blood but not in urine. Therefore it is understandable that the higher load of metabolically active probiotic bacteria in goat-milk resulted also in their faecal recovery and the highest impact on the oxidative stress indices. Moreover, in our study the positive impact of ME-3 consumption on the host lactoflora was proved by the increase of faecal lactobacilli counts in all participants of human volunteer studies. In experimental settings the high counts of intestinal lactobacilli have been shown as an important defensive factor against enteric infections [40,41]. 
Though up to now the period of consumption of probiotics has not been defined, the 3-week ingestion of fermented goat-milk and capsule seemed enough for reaching the aims. It is important to mention that after consumption of ME-3, a strain with high antagonistic activity, neither the species nor the strain predominated among total lactoflora. This shows a well-granted microbial balance inside the gut, which cannot be disturbed by high load of probiotic bacteria. Apparently, the interconnected advanced metabolism of large gut microbiota keeps the proportions of different species quite stable. Some other investigators have obtained similar results showing the proportional increase of different microbial populations (bifidobacteria, coliforms) after administration of Lactobacillus sp. probiotic [42,43]. Thus, the functional efficacy of different formulations of anti-infectious and antioxidative probiotic L. fermentum ME-3 were proved both by the increase of the lactobacilli counts providing putative defence against infectious agents in gut and by reduction of the oxidative stress indices of blood and urine of healthy volunteers. Further, studies evaluating the efficacy of ME-3 as adjunct to conventional therapy in patients with atherosclerotic damages and a high-grade oxidative stress are ongoing.\n\nBODY.CONCLUSION:\nIn non-diseased host, the probiotic health claims can be assessed by improvement of some measurable laboratory indices of well-established physiological functions of organism. In our case, the possibility for augmentation of the antioxidative defence system by the probiotic L. fermentum ME-3 in normal population can be proposed.\n\nBODY.COMPETING INTERESTS:\nMarika Mikelsaar, Mihkel Zilmer, Tiiu Kullisaar, Heidi Annuk (Hynes) and Epp Songisepp are sharing the Estonian patent application: no. EE 2001 00356 29.06.01 and International Patent application: no. WO03002131.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nEpp Songisepp, and Pirje Hütt have been in charge of the microbiological analysis. The former has been also in charge of analyzing the results and writing of the manuscript. Jaak Kals was responsible for performance and management of the volunteer trials. Tiiu Kullisaar has been in charge of the biochemical analysis and writing the manuscript. Mihkel Zilmer has conducted the biochemical estimations and writing the manuscript. Reet Mändar has been in charge of the molecular analysis and revising the manuscript. Marika Mikelsaar is the main conductor of the L. fermentum ME-3 research; for this paper she has been in charge of the clinical trial design and writing the manuscript.\n\n**Question:** Compared to goat milk what was the result of L. fermentum ME-3 fermented goat milk on Total Antioxidative Activity?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ]
significantly increased
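One data-quality note worth flagging: the instruction embedded in each prompt asks for "significantly increase", "no significant difference" or "significantly decrease", whereas the stored completions and Label values use the past-tense forms ("significantly increased", "significantly decreased"), as in the record above. A small normalization helper along the following lines could reconcile the two when scoring; the function name and matching rules are assumptions for illustration, not part of the dataset:

```python
def normalize_label(text: str) -> str:
    """Map either tense of the three answer options onto one canonical form."""
    t = text.lower().replace("**answer:**", "").strip().rstrip(".")
    if "no significant difference" in t:
        return "no significant difference"
    if "increase" in t:   # covers both "increase" and "increased"
        return "significantly increase"
    if "decrease" in t:   # covers both "decrease" and "decreased"
        return "significantly decrease"
    return t  # leave unrecognized strings untouched

# The completion and Label of record 900 then agree with the prompt's option list:
assert normalize_label("**Answer:** significantly increased\n") == "significantly increase"
assert normalize_label("significantly increase") == normalize_label("significantly increased")
```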
961
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A randomized trial of a lifestyle intervention in obese endometrial cancer survivors: quality of life outcomes and mediators of behavior change\n\n ABSTRACT.BACKGROUND:\nTo examine the effects of a 6 month lifestyle intervention on quality of life, depression, self-efficacy and eating behavior changes in overweight and obese endometrial cancer survivors.\n\nABSTRACT.METHODS:\nEarly stage endometrial cancer survivors were randomized to intervention (n = 23) or usual care (n = 22) groups. Chi-square, Student's t-test and repeated measures analysis of variance were used in intent-to-treat analyses. Outcomes were also examined according to weight loss.\n\nABSTRACT.RESULTS:\nMorbidly obese patients had significantly lower self-efficacy, specifically when feeling physical discomfort. There was a significant improvement for self-efficacy related to social pressure (p = .03) and restraint (p = .02) in the LI group. There was a significant difference for emotional well-being quality of life (p = .02), self-efficacy related to negative emotions (p < .01), food availability (p = .03), and physical discomfort (p = .01) in women who lost weight as compared to women who gained weight. Improvement in restraint was also reported in women who lost weight (p < .01).\n\nABSTRACT.CONCLUSION:\nThis pilot lifestyle intervention had no effect on quality of life or depression but did improve self-efficacy and some eating behaviors.\n\nABSTRACT.TRIAL REGISTRATION:\n; NCT00420979\n\nBODY.BACKGROUND:\nEndometrial cancer is the most common gynecologic cancer in the United States and obesity is the most significant risk factor for development of the disease [1]. A recent prospective study reported that 68% of women with early endometrial cancer were obese which is markedly increased compared to older reports [2-5]. Adding to this escalation is the increased rate of obesity in the female population (18.1%) [6]. When assessing obesity associated cancers, it appears that endometrial cancer patients are the most morbid as most have stage I disease yet are at significant risk for premature death [7]. While the impact of weight on cancer recurrence does not appear to be a factor in endometrial cancer, obese endometrial cancer survivors have a higher mortality from causes not related to cancer [8]. Endometrial cancer survivors have numerous co-morbidities related to their obesity which include hypertension (HTN), diabetes mellitus (DM), cardiovascular disease (CVD), osteoarthritis (OA) and pulmonary disease [5]. Improving medical co-morbidities through weight management in survivors may lead to improved overall quality of life and survival. Differences in quality of life (QOL) between obese and non-obese endometrial cancer survivors are related to physical health. 
In a prospective examination of QOL, general health status, and obesity in women with early stage endometrial cancer, [9] women with a body mass index (BMI) less than 30 (non-obese) had increased physical QOL scores. There was also an inverse relationship between the global physical health composite score and BMI; with higher BMI associated with a declining physical QOL score. A recent cross-sectional survey of Canadian endometrial cancer survivors revealed that those patients with morbid obesity had a QOL score three times lower than women with a normal BMI [10]. This suggests that if weight is decreased, survivors QOL may be improved. What is unknown is the effect a survivorship lifestyle intervention trial may have on QOL, psychological health and eating behavior. We have previously reported the effects of a six-month lifestyle intervention on weight loss, exercise behavior, and nutrient intake changes in overweight and obese endometrial cancer survivors [11]. We found that a lifestyle intervention program in obese endometrial cancer survivors is feasible and efficacious – resulting in sustained behavior change and weight loss over a one year period. The present query was conducted in order to examine the intervention's effect on QOL outcomes, depression, self-efficacy and eating behavior, possible mediators of behavior change. We hypothesized that a lifestyle intervention program would improve these outcomes in obese endometrial cancer survivors. We also conducted an exploratory analysis comparing women who lost weight with women who gained weight.\n\nBODY.METHODS.STUDY DESIGN & PATIENT RECRUITMENT:\nThe study was a prospective randomized controlled trial comparing a lifestyle intervention (LI) consisting of exercise and nutritional counseling with cognitive-behavior modification to usual care (UC) in overweight and obese endometrial cancer survivors. An invitation letter was sent to women included in the cancer registry from the Ireland Cancer Center at University Hospitals Case Medical Center diagnosed with stage I-II endometrial cancer from 2001–2004. The 6 month intervention was delivered to the LI group while the UC group received only an informational brochure outlining the benefits of proper nutrition and physical activity. Institutional review board approval was granted and informed consent and authorization to use protected health information (HIPAA) was obtained from all patients prior to study procedures. Feasibility, eligibility criteria, and details of the intervention have been published elsewhere [11]. Randomization assignment was stratified by BMI (<40 versus ≥ 40). Prior analysis revealed that demographics and clinical characteristics were equivalent between groups [11].\n\nBODY.METHODS.INTERVENTION PROGRAM:\nThe counseling protocol was administered by the study registered dietitian (RD), primary investigator (PI), and psychologist. It followed a stepwise, phased approach using strategies outlined by social cognitive theory, indicating that the optimal intervention for a major behavior change should focus on establishing short-term goals, and enabling the person to build self-efficacy [12-16]. The intervention covered topics related to nutrition and exercise and met weekly for six weeks, biweekly for one month, and monthly for 3 months. Participants were contacted by the RD either by phone, email, or newsletter every week that the group did not meet. 
The psychologist met with the group during 2 of the 11 sessions and topics included cognitive and behavioral self-management strategies for weight loss and stress management. The PI met with both LI and UC participants at 3, 6 and 12 months.\n\nBODY.METHODS.MEASURES:\nPatient demographic and clinical data was obtained at baseline and prior to randomization. QOL and self-efficacy were assessed at baseline and at 3, 6, and 12 months. Eating behavior and depression was assessed only at baseline and 12 months. QOL was measured by the Functional Assessment of Cancer Therapy-General (FACT-G), a valid and reliable questionnaire evaluating physical, functional, family-social, and emotional well-being domains [17,18]. A fatigue subscale (-F) and an endometrial symptom subscale (-En) were also used [19,20]. Functional status was measured with the SF-36, a comprehensive short-form survey designed to measure functional health and well-being [21]. Depression was measured by the Beck Depression Inventory (BDI) which is well-validated and frequencly used in lifestyle research studies [22]. Self-efficacy, an individual's judgement regarding his/her ability to perform certain behaviors, was measured with the Weight Efficacy Life-Style (WEL) questionnaire. This scale specifically evaluates self-efficacy judgments specific to eating behaviors in five situational factors: negative emotions, food availability, social pressure, physical discomfort, and positive activities [23,24]. This measure has evidenced adequate psychometric properties, including internal consistency coefficients ranging from 0.76 to 0.90 [23]. Eating behavior was measured by the Three-Factor Eating Questionnaire (TFEQ), a self-assessment questionnaire developed to measure cognitive and behavioral components of eating [25,26]. Responses are aggregated into three scales, cognitive restraint, disinhibition and hunger. Cognitive restraint is designed to measure dietary restraint, that is, control over food intake in order to influence body weight and body shape. Disinhibition measures episodes of loss of control over eating, while the hunger scale is concerned with subjective feelings of hunger and food cravings. Higher numbers reflect increased restraint, decreased tendency to overeat in the presence of disinhibitors (i.e. stress, mood, alcohol) and decreased perception of hunger. All three subscales have demonstrated high internal consistency and reliability [27,28]. Patient anthropometric data including weight, waist circumference, and BMI were measured at all time points manually by the study dietitian. BMI was computed as patient weight in kilograms divided by the square of their height in meters. Participants were categorized according to World Health Organization guidelines: < 18.5 (underweight), 18.5 to 24.9 (healthy weight), 25.0 to 29.9 (overweight), Class I – 30.0 to 34.9, Class II – 35.0–39.9, and Class III or morbid obesity ≥ 40.0 kg/m2 [29].\n\nBODY.METHODS.STATISTICAL ANALYSES:\nPatient demographic, clinical variables and baseline values were compared between groups and by BMI (25.0 – 39.9 versus ≥ 40) by use of independent samples t-test or chi-square test for proportions. 
Primary analysis used repeated measures analysis of variance (ANOVA) with the 3, 6 and 12 month data as outcomes and the appropriate baseline measurement as a covariate to test for the main effect of group (LI versus UC, intention-to-treat analyses) on QOL outcome measures (FACT-G; physical, functional, social, emotional well-being, fatigue and endometrial subscales) and self-efficacy (WEL; negative emotions, food availability, social pressure, positive activities, physical discomfort). Change (from baseline to 12 months) in eating behavior (TFEQ) and depression measures were compared using paired samples t-test for each group and independent samples t-test for the difference in change between groups. Missing data for participants who did not complete all study assessments was handled according to the last and next method or previous row mean method as recommended by Engels et al [30]. Imputation was done on between 15–19% of values for the various QOL and eating behavior measures. The percentage of missing data for these measures was less than that imputed for weight values as some patients opted to mail these surveys back as data were self-report. Data were also analyzed using last observation carried forward and completers only approaches. There were no substantive differences among the three approaches and thus we present results of the first approach only. QOL outcomes were also examined according to whether patients lost weight or their weight remained stable/gained over the course of the year as an ancillary stratified analyses. We were interested in examining if QOL outcomes differed between these two groups as there were some women in the UC group who lost weight and likewise women in the LI group who did not lose weight [31]. SPSS for Windows (version 14.0) was used for statistical analyses (SPSS Inc., Chicago, IL). Approximately 25 patients per group were needed to provide 80% power to detect a difference between groups in mean weight change from baseline to twelve months of 5 kg (11 pounds) or greater, representing approximately 5% weight loss for an obese female (alpha = 0.05, two-sided, SD = 5.0) [32]. Five percent weight change is considered clinically relevant and a recommended goal for weight loss over 6 months [33,34]. Weight change was the primary endpoint of this feasibility trial and the study was powered based on this endpoint. In addition, 25 patients/group were needed to detect a large standardized effect size (d = 0.80) with power of 0.80 and a two-tailed alpha of 0.05. A large standardized effect size on the FACT-G is approximately 10 points based on a standard deviation of 12 and exceeds the 7 point MID identified for this scale [18,35].\n\nBODY.RESULTS:\nForty-five patients were enrolled; 23 were randomized to the LI group and 22 to the UC group. In summary, most patients were Caucasian with an average age of 55 years. As reported previously, at 12 months, the LI group lost 3.5 kg compared to a 1.4 kg gain in the UC group (p = .02) and increased their exercise by 16.4 metabolic equivalents (METS) compared to a decrease of 1.3 METS in the UC group (p < .001) [11]. Average time since diagnosis was 2 years and mean BMI was 42.3 kg/m2 [11]. Fifty-three percent (24/45) of participants reported being very overweight during the previous ten years and 15/45 (33%) reported being moderately overweight. 
Patients with a BMI > 40 had increased abdominal obesity with a mean (SD) waist circumference of 125.6 (15.9) cm as compared to those with a BMI < 40 (mean 98.5 cm (8.2), p < 0.01). Waist/hip ratio did not differ between morbidly obese and BMI < 40 patients (mean 0.84 (SD = .07). Co-morbidities related to obesity (hypertension, diabetes, CVD, arthritis and metabolic syndrome) were common in these patients [11]. Five patients had prior bariatric surgery prior to enrollment (LI; 4, UC: 1) with surgeries performed 3–6 years before the study began.\n\nBODY.RESULTS.BASELINE DIFFERENCES BASED ON BODY MASS INDEX:\nBaseline QOL (FACT-G, SF-36), BDI, WEL and TFEQ scores did not differ between the LI and UC groups. However significant differences were observed when patients were categorized according to BMI (Table 1). Total QOL (FACT-G) was not significantly lower in morbidly obese women [BMI ≥ 40: 78.0 (SD = 14.4) vs. BMI < 40: 83.7 (SD = 11.4); p = 0.14], however the difference was consistent with minimally important differences (5–6 points) [18]. SF-36 physical composite score was decreased in morbidly obese women (p = 0.04). At baseline 7/45 (15%) patients reported mild depressive symptoms, and 4/45 (9%) reported moderate depressive symptomatology. Beck depression score did not differ according to BMI (Table 1). Table 1 Quality of life, depression, self-efficacy and eating inventory scores at baseline (n = 45) Variables Overall (n = 45) Body mass index < 40 (n = 23) Body mass index ≥ 40 (n = 22) p value * Quality of Life FACT-G (0–108) 80.1 (12.1) 83.7 (9.6) 78.0 (14.1) 0.14 Physical well-being (0–28) 23.9 (3.2) 24.8 (3.2) 23.2 (3.3) 0.12 Social well-being (0–24) 16.5 (3.1) 16.6 (2.6) 16.5 (3.2) 0.90 Emotional well-being (0–24) 18.6 (4.1) 19.4 (3.9) 17.9 (4.4) 0.27 Functional well-being (0–28) 21.2 (5.4) 22.9 (4.0) 20.4 (5.4) 0.10 Endometrial (0–64) 53.5 (5.8) 52.8 (5.0) 54.6 (6.6) 0.32 Fatigue (0–52) 37.9 (9.7) 40.2 (9.7) 36.2 (9.8) 0.19 Physical composite score, SF36 43.9 (10.7) 47.1 (8.7) 40.6 (11.7) 0.04 Mental composite score, SF36 50.0 (10.9) 50.6 (9.8) 49.3 (12.2) 0.70 Beck depression inventory (0–63) 10.3 (7.9) 8.8 (6.5) 11.4 (9.0) 0.29 Weight self-efficacy  Negative Emotions (0–36) 18.8 (10.8) 20.4 (9.7) 15.5 (10.4) 0.12  Food Availability (0–36) 18.3 (9.3) 20.0 (8.8) 15.4 (8.9) 0.09  Social Pressure (0–36) 24.2 (8.7) 25.0 (7.2) 22.3 (9.7) 0.31  Physical Discomfort (0–36) 26.5 (8.3) 28.8 (5.8) 23.4 (9.4) 0.03  Positive Activities (0–36) 24.3 (7.3) 25.5 (5.0) 22.3 (8.5) 0.15  Total (0–180) 112.1 (36.8) 119.6 (26.7) 98.8 (39.0) 0.05 Three-factor eating questionnaire  Restraint (0–21) 10.3 (4.9) 12.8 (4.4) 7.5 (3.8) < 0.01  Disinhibition (0–16) 7.9 (3.6) 7.3 (3.8) 9.2 (3.0) 0.10  Perceived Hunger (0–14) 5.6 (3.4) 5.6 (3.4) 6.3 (3.6) 0.50 * Independent samples Student's t-test. Data are presented as the mean (standard deviation) FACT-G, Functional Assessment of Cancer Therapy-General; SF-36, Short-form Medical Outcomes Survey In regards to eating patterns, morbidly obese patients had significantly lower self-efficacy when feeling physical discomfort and decreased total self-efficacy (WEL) score. Restraint on the TFEQ questionnaire was decreased in patients with BMI > 40 as compared to BMI < 40.\n\nBODY.RESULTS.EFFECTS OF THE LIFESTYLE INTERVENTION (INTENTION-TO-TREAT ANALYSIS):\nRepeated measures ANOVA with baseline measurements as a covariate and 3, 6 and 12 months as outcomes revealed no group (LI versus UC) effects for QOL outcomes (Table 2). 
There was a significant group effect for self-efficacy related to social pressure and restraint improved in the LI group. The UC group had a significant change in depression from baseline. Change scores for depression or TFEQ, compared by independent samples t-test, however did not differ by group. Table 2 Effects of the lifestyle intervention on QOL and clinical outcomes (intention-to-treat analysis) Variables Baseline 3 mos 6 mos 12 mos Group effect P* Primary Quality of life outcome FACT-G (0–108) 0.37  Lifestyle intervention 80.6 (12.7) 81.1 (14.0) 82.4 (14.5) 80.1 (14.2)  Usual care 79.5 (13.1) 80.1 (15.5) 81.8 (13.2) 82.2 (14.9) Secondary QOL outcomes Physical well-being (0–28) 0.84  LI 23.9 (3.4) 23.6 (4.3) 24.0 (4.4) 23.3 (4.3)  UC 23.9 (3.0) 23.3 (4.5) 24.2 (4.1) 23.7 (4.1) Functional well-being (0–28)  LI 21.3 (5.2) 22.0 (5.6) 22.0 (5.7) 21.2 (6.2) 0.44  UC 21.0 (5.7) 22.0 95.6) 22.0 (5.0) 22.0 (6.1) Emotional well-being (0–24)  LI 18.3 (4.4) 19.1 (4.5) 19.3 (4.5) 18.8 (4.5) 0.98  UC 18.9 (3.9) 19.2 (4.0) 19.5 (4.1) 20.0 (3.7) Social/family well-being (0–28)  LI 17.2 (2.7) 16.6 (3.0) 17.2 (2.7) 16.9 (2.6) 0.17  UC 15.7 (3.3) 15.6 (3.4) 16.3 (3.0) 16.6 (3.4) Fatigue (0–52)  LI 37.5 (10.3) 38.5 (11.3) 39.2 (11.6) 38.5 (11.1) 0.99  UC 38.2 (9.2) 39.0 (10.4) 39.2 (11.5) 40.0 (9.3) Endometrial subscale (0–64)  LI 54.4 (5.4) 54.9 (6.7) 55.5 (6.8) 55.0 (6.7) 0.46  UC 52.3 (6.2) 51.9 (6.9) 52.6 (6.9) 53.6 (6.2) Weight self-efficacy  Negative emotion (0–36) 0.87  LI 17.0 (10.4) 21.7 (9.8) 23.1 (9.7) 21.9 (9.1)  UC 19.9 (10.8) 24.9 (9.9) 22.8 (10.0) 23.9 (10.9) Food Availability (0–36) 0.49  LI 17.5 (8.5) 23.1 (8.9) 23.2 (9.9) 22.4 (10.1)  UC 18.8 (10.3) 23.6 (9.8) 21.0 (9.5) 23.2 (9.9) Social pressure (0–36) 0.03  LI 23.9 (8.1) 28.2 (7.0) 28.4 (7.5) 27.9 (7.9)  UC 23.9 (9.3) 26.1 (8.5) 23.3 (8.0) 26.0 (7.3) Positive activities (0–36)  LI 23.2 (5.7) 27.5 (4.8) 27.8 (4.2) 27.3 (4.6) 0.10  UC 25.1 (8.5) 27.5 (8.1) 24.9 (7.7) 27.9 (5.0) Physical discomfort (0–36) 0.38  LI 25.8 (8.4) 28.5 (7.1) 29.1 (6.6) 29.7 (6.7)  UC 26.8 (8.2) 29.1 (6.5) 27.6 (6.7) 29.0 (6.4) Total (0–180) 0.19  LI 107.4 (31.5) 129.1 (30.4) 131.5 (31.7) 129.3 (31.3)  UC 114.5 (40.9) 131.1 (39.4) 119.6 (37.7) 130.0 (35.6) Beck depression inventory  LI 9.7 (6.8) - - 8.6 (7.2) 0.16  UC 10.6 (8.9) - - 8.3 (7.4) 0.02 Three-factor eating questionnaire Restraint (0–21) - -  LI 10.0 (4.8) - - 11.9 (5.1) 0.02  UC 10.3 (5.2) 11.1 (4.9) 0.24 Disinhibition (0–16) - -  LI 8.8 (2.8) - - 8.4 (3.3) 0.44  UC 7.1 (4.1) 7.0 (4.1) 0.68 Hunger (0–14) - -  LI 6.2 (3.3) 6.0 (2.7) 0.67  UC 5.1 (3.6) 4.2 (2.6) 0.13 Data are presented as the mean (standard deviation). LI (n = 23), UC (n = 22) * p value for repeated measures ANOVA with baseline measurement as covariate and 3, 6 and 12 months as outcomes or paired samples t-test (eating inventory and Beck depression inventory) Abbreviations: QoL, quality of life; FACT-G, Functional Assessment of Cancer Therapy-General; SF-36, Short-form Medical Outcomes Survey; BDI, Beck Depression Inventory; WEL, weight efficacy lifestyle; TFEQ, three-factor eating questionnaire; BMI, body mass index \n\nBODY.RESULTS.ASSOCIATION BETWEEN WEIGHT LOSS AND OUTCOMES (EXPLORATORY ANALYSIS):\nTable 3 presents an ancillary analysis for QOL, self-efficacy, depression and eating behavior outcomes. For this analysis we compared women who lost weight (WL; n = 21) versus those whose weight was the same or who gained weight (WG; n = 24) from baseline to twelve months. 
There was a significant effect (WL versus WG) for emotional well-being QOL, self-efficacy related to negative emotions, food availability, and physical discomfort as the WL group had higher scores. For self-efficacy related to negative emotions, there was a mean increase of 8.9 in women who lost weight versus 0.6 in those whose weight was stable/gained. Similarly, for self-efficacy related to food availability and physical discomfort, the weight loss group had an increase of 7.9 and 5.3 versus 1.9 and 1.0 in women who did not lose weight. Women who lost weight had improvement in depression (2.3 versus 1.2) and TFEQ restraint score (2.8 versus 0.1). Women who lost weight also had a lower disinhibition score at 12 months. Group differences for change in TFEQ restraint score were observed using independent samples t-test for comparison of change scores between WL and WG (p = .01), but not for depression. Table 3 Effects of weight loss on QOL and clinical outcomes (ancillary analyses) Variables Baseline 3 mos 6 mos 12 mos Group effect P* Primary QoL outcome FACT-G 0.26  WL 81.4 (14.0) 83.5 (15.7) 84.8 (14.7) 82.3 (15.4)  WG 78.9 (11.7) 78.1 (13.3) 79.8 (12.6) 80.1 (13.8) Secondary QoL outcomes Physical well-being (0–28) 0.79  WL 24.7 (3.3) 24.5 (4.6) 24.7 (4.7) 24.2 (4.8)  WG 23.1 (3.0) 22.5 (4.0) 23.5 (3.7) 22.8 (3.5) Functional well-being (0–28)  WL 21.9 (5.6) 22.8 (5.3) 23.3 (5.0) 21.9 (6.1) 0.65  WG 20.5 (5.2) 21.2 (5.8) 20.8 (5.4) 21.2 (6.2) Emotional well-being (0–24)  WL 18.2 (4.3) 20.0 (4.1) 20.0 (4.3) 19.5 (4.2) 0.02  WG 18.9 (4.0) 18.4 (4.2) 18.9 (4.2) 19.3 (4.1) Social/family well-being (0–28)  WL 16.6 (3.2) 16.2 (3.4) 17.0 (2.8) 16.7 (2.8) 0.98  WG 16.4 (3.0) 16.0 (3.0) 16.5 (2.9) 16.8 (3.2) Fatigue (0–52)  WL 39.0 (10.2) 41.6 (10.5) 41.4 (12.1) 42.0 (10.5) 0.07  WG 36.9 (9.3) 36.3 (10.5) 37.3 (10.7) 36.9 (9.5) Endometrial subscale (0–64)  WL 54.3 (6.6) 54.3 (7.7) 55.2 (8.1) 55.5 (6.9) 0.55  WG 52.8 (5.1) 52.7 (6.2) 53.1 (5.7) 53.3 (5.9) WEL Negative emotions (0–36) < 0.01  WL 16.7 (10.9) 24.3 (10.5) 25.6 (9.3) 25.6 (9.7)  WG 20.0 (10.3) 22.4 (9.5) 20.5 (3.3) 20.6 (9.8) Food Availability (0–36) 0.03  WL 16.5 (10.4) 24.0 (9.6) 23.8 (9.9) 24.4 (10.4)  WG 19.5 (8.3) 22.8 (9.0) 20.7 (9.4) 21.4 (9.4) Social pressure (0–36) 0.20  WL 25.2 (9.2) 28.4 (7.5) 28.1 (7.5) 29.1 (7.2)  WG 22.7 (8.1) 26.1 (8.0) 23.9 (8.2) 25.2 (7.6) Positive activities (0–36) 0.10  WL 23.0 (7.9) 27.5 (6.2) 27.5 (5.3) 27.8 (5.0)  WG 25.1 (6.5) 27.5 (6.9) 25.4 (7.0) 27.4 (4.6) Physical discomfort (0–36)  WL 25.5 (9.2) 29.6 (7.0) 30.0 (6.9) 30.8 (7.1) 0.01  WG 27.0 (7.4) 28.1 (6.5) 26.9 (6.2) 28.0 (5.7) Total (0–180) <0.01  WL 106.9 (37.4) 133.8 (36.0) 135.0 (33.8) 137.6 (33.9)  WG 114.3 (35.5) 126.9 (33.9) 117.4 (34.4) 122.6 (31.4) Beck depression inventory  WL 9.3 (7.0) - - 7.0 (6.5) 0.03  WG 10.9 (8.8) - - 9.7 (7.9) 0.14 TFEQ Restraint (0–21)  WL 10.1 (4.5) - - 12.9 (4.2) <0.01  WG 10.2 (5.4) - - 10.3 (5.3) 0.74 Disinhibition (0–16)  WL 7.8 (3.0) - - 6.9 (3.3) 0.05  WG 8.1 (4.1) - - 8.4 (4.1) 0.48 Hunger (0–14)  WL 6.1 (3.8) - - 5.1 (2.5) 0.21  WG 5.3 (3.1) - - 5.1 (3.1) 0.51 Data are presented as the mean (standard deviation). 
WL, n = 21; WG, n = 24 * p value for repeated measures ANOVA with baseline measurement as covariate and 3, 6 and 12 months as outcomes or paired samples t-test (eating inventory and Beck depression inventory) Abbreviations: QoL, quality of life; FACT-G, Functional Assessment of Cancer Therapy-General; SF-36, Short-form Medical Outcomes Survey; BDI, Beck Depression Inventory; WEL, weight efficacy lifestyle; TFEQ, three-factor eating questionnaire; BMI, body mass index \n\nBODY.DISCUSSION:\nLifestyle interventions are necessary for survivorship in obese endometrial cancer patients as they are at risk for premature death not due to cancer but secondary to poor cardiovascular health. The majority of patients in this study were morbidly obese and had lower physical health-related scores. The lifestyle intervention did not have any effect on global QOL outcomes, however self-efficacy, emotional well-being and certain eating behaviors improved with weight loss. Although the intervention was efficacious in promoting weight loss, the lack of influence on QOL was contrary to the hypothesis. It may be that a longer intervention, greater weight loss or increased exercise is needed to improve QOL. Functional, and psychological-related changes may require a longer term investment in a healthy lifestyle before reaching significance. Social cognitive theory [36] is a well-established explanation of health behavior change, and is frequently utilized in dietary and physical activity lifestyle interventions [37-39]. Social cognitive theory explains health behaviors in terms of reciprocal relationships between behavior, personal factors and environmental influences. The power of circumstance, of being diagnosed with endometrial cancer, can potentially launch new life courses, change a person's perception of their environment, improve health behavior-related expectations and influence reciprocal determinism. Thus, endometrial cancer patients may be amenable to a \"teachable moment.\" Self-efficacy is the social cognitive theory concept that represents one's judgment about her ability to successfully engage in a particular behavior and overcome barriers to achieve change [36]. Toobert et al performed a lifestyle intervention trial in women with CVD and found that fat intake decreased as their self-efficacy scores increased [40]. This research and those of our collaborators supports self-efficacy as a theoretical model and a possible mediator to improve lifestyle change in the obese [41,42]. We found increased self-efficacy as related to negative emotions, food availibiltiy and physical discomfort in those women who lost weight during the year. In addition, self-efficacy scores at twelve months remained increased, six months after the intervention had concluded. In terms of eating behavior, restraint was also improved in patients who lost weight. Weight losers, however had a lower disinhibition score, indicating an increase likelihood to overeat in the presence of disinhibitors. This was an unexpected finding and may indicate that there are still certain triggers that are evident and more attention to these is possibly needed. Different populations including endometrial cancer survivors, may need more intense interventions in order to change morbid patterns [24]. Limitations of this study include its small sample size, and lack of racial heterogeneity. There was a greater number of patients in the intervention group who had a prior history of bariatric surgery. 
These patients may have better self-efficacy and eating behavior patterns that could influence results, though the number of patients is too small to draw any conclusions. However, the findings should trigger clinicians to focus on positive lifestyle change, improving self-efficacy and decreasing co-morbidity in endometrial cancer survivors. An additional limitation was potential contamination of the control group, as depression improved in the control group over time. This may have been influenced by all study patients participating in an orientation meeting, in addition to meeting with the principal investigator at the 3 measurement time points. It may be that simply increased physician contact (without teaching) can improve outcome measures. Others have questioned whether improving QOL could be attributable to the increased attention (or increasing self-efficacy) given to cancer patients involved in exercise interventions [43]. QOL may improve temporally as patients experience more years of survivorship and travel farther from the challenging time of diagnosis. In addition, we can also hypothesize that the weight loss observed was not large enough to see a change in QOL. However, given that this was a feasibility study, the results are helpful in designing a future intervention trial.\n\nBODY.CONCLUSION:\nThis pilot lifestyle intervention had no effect on quality of life or depression, but did improve self-efficacy and restraint. A substantial number of endometrial cancer survivors are surviving their cancer; however, they are succumbing to other diseases correlated with obesity. The Gynecologic Oncology Group is considering adding QOL measures to their next large prospective endometrial cancer trial with long-term follow-up. Future directions will also consist of the measurement of 5-year outcomes in this study population [44]. Goals of upcoming projects are to decrease co-morbidity and increase overall survival in endometrial cancer survivors.\n\nBODY.COMPETING INTERESTS:\nThe authors declare that they have no competing interests.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nVVG, HG, MBK, JJ, EL and KC conceived of the study, and participated in its design and coordination. VVG, HG, and MBK implemented the study and were responsible for day-to-day conduct of the study. VVG, HG, and KC analyzed the data. VVG, HG, and KC drafted the manuscript; MBK, JJ and EL provided critical review. All authors read and approved the final manuscript.\n\n**Question:** Compared to no weight lost, what was the result of lost weight on depression?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly decreased\n" } ]
significantly decreased
967
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Absorption, Metabolism and Excretion of Cranberry (Poly)phenols in Humans: A Dose Response Study and Assessment of Inter-Individual Variability\n\n ABSTRACT:\nThe beneficial health effects of cranberries have been attributed to their (poly)phenol content. Recent studies have investigated the absorption, metabolism and excretion of cranberry (poly)phenols; however, little is known about whether they follow a dose response in vivo at different levels of intake. An acute double-blind randomized controlled trial in 10 healthy men with cranberry juices containing 409, 787, 1238, 1534 and 1910 mg total (poly)phenols was performed. Blood and urine were analyzed by UPLC-Q-TOF-MS. Sixty metabolites were identified in plasma and urine including cinnamic acids, dihydrocinnamic, flavonols, benzoic acids, phenylacetic acids, benzaldehydes, valerolactones, hippuric acids, catechols, and pyrogallols. Total plasma, but not excreted urinary (poly)phenol metabolites, exhibited a linear dose response (r2 = 0.74, p < 0.05), driven by caffeic acid 4-O-ß-d-glucuronide, quercetin-3-O-ß-d-glucuronide, ferulic acid 4-O-ß-d-glucuronide, 2,5-dihydroxybenzoic acid, 2,4-dihydroxybenzoic acid, ferulic acid, caffeic acid 3-O-ß-d-glucuronide, sinapic acid, ferulic acid 4-O-sulfate, 3-hydroxybenzoic acid, syringic acid, vanillic acid-4-O-sulfate, (4R)-5-(3′-hydroxyphenyl)-γ-valerolactone-4′-O-sulfate, 4-methylgallic acid-3-O-sulfate, and isoferulic acid 3-O-sulfate (all r2 ≥ 0.89, p < 0.05). Inter-individual variability of the plasma metabolite concentration was broad and dependent on the metabolite. Herein, we show that specific plasma (poly)phenol metabolites are linearly related to the amount of (poly)phenols consumed in cranberry juice. The large inter-individual variation in metabolite profile may be due to variations in the gut microbiome.\n\nBODY.1. INTRODUCTION:\nSeveral clinical trials have showed the beneficial effects of cranberry intake on human health [1,2,3,4,5,6,7]. However, while in vitro studies have shown dose-response effects of cranberry (poly)phenols on the inhibition of microbial invasion of gut and bladder epithelial cells [8,9] and in the cytotoxicity of ovarian cancer cells [10], very few human dose-response studies have been conducted. Howell et al. reported an ex vivo dose-dependent effect on E. coli anti-adhesion in urine after cranberry consumption [11], and more recently, cranberry powder intake has demonstrated a dose-dependent effect on reducing urinary tract symptoms in men [12]. However, dose-response studies with cranberry interventions that report improvements in human health not related to the urinary tract are scarce. 
We recently reported a dose-dependent enhancement in endothelial function after cranberry juice intake [13], while other works did not reveal any improvement in endothelial function or blood pressure after different doses of quercetin-3-O-glucoside, a flavonoid present in cranberries, although an increase in quercetin circulating metabolites was reported [14]. To the best of our knowledge, few works have investigated plasma (poly)phenol dose-response profiles after intake of (poly)phenol-rich foods. A statistically significant increase was seen in the maximum concentration (Cmax) and area under the curve (AUC) of pelargonidin-glucuronide, pelargonidin-3-O-glucoside and cyanidin-3-O-glucoside after increasing amounts of strawberry powder, which were correlated with insulin response and inflammatory markers [15]. However, not all (poly)phenols seem to follow a linear dose response. After green tea consumption, the sum of catechins detected in plasma did not follow a linear dose response, despite the fact that (-)-epicatechin and (-)-epigallocatechin gallate increased their plasma concentration proportionally with the dosage intake [16]. These results suggested an apparent saturation mechanism which was also reported for some plasma (poly)phenols after wild blueberry consumption [17]. We have previously reported that improvements in endothelial function after cranberry consumption correlated with plasma levels of phenolic metabolites, mostly deriving from phase II metabolism [13]. In this work, we evaluate whether the plasma and urine levels of 60 phenolic metabolites found after consumption of cranberry juice with five different (poly)phenol concentrations follow a dose response and we calculate the extent of their inter-individual variability.\n\nBODY.2. MATERIALS AND METHODS .2.1. CHEMICALS:\nSulfates and glucuronides of (poly)phenol standards were obtained from Toronto Research Chemicals (Toronto, ON, Canada). Kaempferol-3-O-ß-d-glucuronide was obtained from Extrasynthese (Genay, France). The 1-methylpyrogallol-O-sulfate, 2-methylpyrogallol-O-sulfate, 4-methylcatechol-O-sulfate, 4-methylgallic-3-O-sulfate, catechol-O-sulfate, pyrogallol-O-1-sulfate, pyrogallol-O-2-sulfate and vanillic acid-4-O-sulfate were kindly provided by Dr Claudia Nunes dos Santos and Dr Rita Ventura, and their synthesis has been described elsewhere [18]. The 2-, 3- and 4-hydroxyhippuric acids were purchased from Enamine (Kiev, Ukraine). Remaining compounds were obtained from Sigma-Aldrich Co. (Steinheim, Germany). Acetic acid was from Carl Roth (Karlsruhe, Germany) and Oasis HLB μElution plates (2 mg sorbent per well, 30 μm) were from Waters (Eschborn, Germany). Milli-Q system (Merck KGaA, Darmstadt, Germany) ultrapure water was used. Unless otherwise stated, all chemicals and reagents were obtained from Sigma-Aldrich Co. (Steinheim, Germany).\n\nBODY.2. MATERIALS AND METHODS .2.2. HUMAN STUDY :\nThe study protocol has been described previously [13]. In brief, ten healthy young individuals with ages between 18 and 35 years old participated in a cross-over randomized double-blinded control-matched clinical trial which was registered under the NIH [19] ClinicalTrials.gov website (NCT02517775). All procedures involving human subjects were approved by the University of Düsseldorf Research Ethics Committee (ref: 14-012) and all procedures were performed following the Declaration of Helsinki guidelines. Volunteers were instructed to follow a low-polyphenol diet and exclude berry intake for 24 h prior to each visit. 
Volunteers were fasted for at least 12 h prior to each intervention to reduce background diet interference as much as possible. During the study day, the same low (poly)phenol meal was giving to all volunteers together with the test drink, and no other food or drink was allowed until after 8 h post-consumption, except for water ad libitum. Participants consumed cranberry juice containing 409, 787, 1238, 1534 and 1910 mg total (poly)phenols (TP) in a random order with 1 week wash out. On the study day whole blood was collected at 0, 1, 2, 4, 6, 8 and 24 h and urine was collected 0–8 and 8–24 h after intake. Plasma was obtained by centrifuging whole-blood collected in EDTA-containing vacutainers for 15 min at 1800 g for at 4 °C; plasma samples were spiked with 2% formic acid. Ascorbic acid was added to the urine containers (3.75 g/2 L) and acidification with formic acid until pH 2.5 was achieved. Urine containers were kept in opaque cool bags with ice blocks. Plasma and urine aliquots were stored at −80 °C until analysis.\n\nBODY.2. MATERIALS AND METHODS .2.3. (POLY)PHENOL ANALYSIS IN PLASMA:\nPlasma was analyzed with a previous validated μ-SPE coupled with UPLC-Q-TOF-MS that allowed for high-throughput [20]. Briefly, 600 μL of plasma were centrifuged at 15,000× g for 15 min at 4 °C and 350 μL of the supernatant was diluted (1:1) with phosphoric acid 4% and spiked with taxifolin (50 nM) as an internal standard. 600 μL were loaded on a 96 well μ-SPE HLB plate, washed with 200 μL of water and 200 μL of 0.2% acetic acid and finally eluted with 60 μL of methanol. Extracted and concentrated plasma samples were analyzed with an Agilent 6550 iFunnel Accurate-Mass Quadrupole Time-of-Flight Mass Spectrometer (Q-TOF MS) (Agilent, Waldbronn, Germany) after separation on a 1290 Infinity UPLC system (Agilent, Waldbronn, Germany) using a Zorbax Eclipse Plus RRHD column 2.1 mm × 50 mm, 1.8 μm (Agilent, Waldbronn, Germany). The mobile phase consisted of 0.1% HCOOH (solvent A) and acetonitrile with 0.1% HCOOH (solvent B). The elution profile (flow rate of 0.4 mL/min) started at 1% solvent B and increased to 10% after 5 min, to 25% at 8 min and to 99% at 9.1 min. The percentage of solvent B was held constant for 0.9 min. The Q-TO-MS parameters were as follows: Negative mode, gas temperature 150 °C, gas flow 20 L/min, nebulizer 25 psig, sheath gas temperature 350 °C, sheath gas flow 12 L/min and Vcap 3000 V. Data were analyzed and processed using Mass Hunter Workstation Quantitative and Qualitative Analysis software (version B.06.00, Agilent, Waldbronn, Germany).\n\nBODY.2. MATERIALS AND METHODS .2.4. DATA TREATMENT AND STATISTICAL ANALYSIS:\nThe maximum plasma concentration (Cmax) and time needed to reach it (Tmax) and AUC of time vs. concentration (calculated using a trapezium method) were calculated using the PKSolver add-in software for Microsoft Excel [21]. Linear regression analysis was performed between (poly)phenol amount from each juice and the average AUC, giving a regression equation and r2 values. All statistical analysis was performed using GraphPad Prism V6. Significance was defined as p < 0.05.\n\nBODY.3. RESULTS.3.1. PLASMA KINETICS OF CRANBERRY (POLY)PHENOLS:\nA total of 60 compounds in plasma were identified and quantified post-consumption of cranberry juice containing 409, 787, 1238, 1534 and 1910 mg total (poly)phenols, as we have previously reported for the juice containing 787 mg of (poly)phenols [22]. 
These included cinnamic acids, dihydrocinnamic, flavonols, benzoic acids, phenylacetic acids, benzaldehydes, valerolactones, hippuric acids, catechols, and pyrogallols. Full details of the plasma kinetic profiles can be found in Supplementary Tables S1 and S2. In summary, plasma concentrations varied from low nM to mid μM. The maximum concentration in plasma (Cmax) ranged from 1–31,269 nM (after 409 mg TP), 1–263,152 nM (after 787 mg TP), 2–115,012 nM (after 1238 mg TP), 12–267,355 nM (after 1534 mg TP) and 5–390,606 nM (after 1910 mg TP). The highest Cmax for juices was achieved for 3-(4-hydroxyphenyl) propionic acid, followed by hippuric acid and catechol-O-sulfate. The time to reach Cmax (Tmax) varied depending on the metabolite and depending on the amount of (poly)phenols in the juice (Supplementary Figure S1). The level of TP in cranberry juice seemed to impact the Tmax of individual metabolites; for example, the Tmax for 3-(4-hydroxyphenyl) propionic acid varied between 7.2 h (409 mg TP juice) and 13.8 h (787 mg TP juice). The AUC of the plasma concentration against the time plot was calculated for each metabolite between 0 and 24 h and was between 2–3,119,972 nM*h (caffeic acid and 3-(4-hydroxyphenyl) propionic acid at 409 mg TP), 1–2,274,346 nM*h (caffeic acid and hippuric acid at 787 mg TP), 2–1,609,708 nM*h (caffeic acid and 3-(4-hydroxyphenyl) propionic acid at 1238 mg TP), 1–390,606 nM*h (caffeic acid and 3-(4-hydroxyphenyl) propionic acid at 1534 mg TP) and 5–2,467,256 nM*h (chlorogenic acid and 3-(4-hydroxyphenyl) propionic acid at 1910 mg TP). The compounds with the highest AUC for each dose were hippuric acid > catechol-O-sulfate > 2,3-dihydrobenzoic acid > phenylacetic acid.\n\nBODY.3. RESULTS.3.2. PLASMA DOSE RESPONSE :\nThe linear dose response for each plasma metabolite was assessed and of the 60 identified metabolites, 14 (poly)phenol metabolites displayed a linear dose-response curve (r2 ≥ 0.89) in plasma (Figure 1) with slopes significantly different from zero (p < 0.05): caffeic acid 4-O-ß-d-glucuronide and quercetin-3-O-ß-d-glucuronide displayed r2 > 0.98, while ferulic acid 4-O-ß-d-glucuronide, 2,5-dihydroxybenzoic acid, 2,4-dihydroxybenzoic acid, ferulic acid, caffeic acid 3-O-ß-d-glucuronide, sinapic acid, ferulic acid 4-O-sulfate, 3-hydroxybenzoic acid and syringic acid exhibited r2 > 0.90. Vanillic acid-4-O-sulfate, (4R)-5-(3′-hydroxyphenyl)-γ-valerolactone-4′-O-sulfate, 4-methylgallic acid-3-O-sulfate and isoferulic acid 3-O-sulfate had r2 values of almost 0.90. The sum of the (poly)phenol metabolites also exhibited a modest but significant (p < 0.05) correlation of r2 = 0.74 when plotted vs. (poly)phenol content in the interventions. Linear regression analysis was performed between the TP level in the intervention and the AUC of each class of compounds. The AUC for flavonols produced the highest r2 (0.97, p < 0.001), followed by the valerolactones (r2 = 0.89, p < 0.005), benzoic acids (r2 = 0.85, p < 0.01), cinnamic acids (r2 = 0.72, p < 0.05), phenylacetic acids (r2 = 0.71, p < 0.05), benzaldehydes (r2 = 0.65, p < 0.05); hippuric acids, pyrogallols, and catechols did not show a linear dose response.\n\nBODY.3. RESULTS.3.3. INTER-INDIVIDUAL VARIABILITY:\nThe coefficient of variation (CV) expressed as a percentage was calculated for Cmax and AUC for each individual plasma metabolite after consumption of juice containing 787 mg TP; the %CV was used to assess the inter-individual variability after consumption of the juice (Figure 2). 
The CV of the AUC for the total (sum of all 60) metabolites was 53% and the CV for Cmax was 51%. The CV for the Cmax varied between 43% for dihydroferulic acid 4-O-sulfate and 216% for vanillic acid. The CV for the AUC varied between 48% for 4-hydroxybenzaldehyde and 163% for 1-methylpyrogallol-O-sulfate.\n\nBODY.3. RESULTS.3.4. URINARY RECOVERY:\nThe same 60 (poly)phenol metabolites quantified in plasma were quantified in urine. Urinary excretion of (poly)phenol metabolites over the 24 h period post cranberry juice consumption is shown in Figure 3. Between 38 and 48 mg total urinary metabolites were excreted post-consumption of the five juices with varying TP content. The mean percentage recovery varied between 10.1% for the juice containing 406 mg TP down to 2.1% for the juice containing the highest TP content (1910 mg). The juice containing the least TP (409 mg) had a higher percentage excretion compared to all the other juices (p < 0.001). A positive, linear dose response for total excretion over 24 h of each urinary metabolite was assessed and of the 60 identified metabolites, 12 (poly)phenol metabolites displayed a positive linear dose-response curve (r2 ≥ 0.66) in urine with slopes significantly different from zero (p < 0.05); they were: 2,3-dihydrobenzoic acid (r2 = 0.79), 2,4-dihydrobenzoic acid (r2 = 0.77), dihydrocaffeic-3-O-sulfate (r2 = 0.66), ferulic-O-4-sulfate (r2 = 0.89), o-courmaric acid (r2 = 0.72), quercetin-3-O-ß-d-glucuronide (r2 = 0.89), 2,5-dihydroxybenzoic acid (r2 = 0.80), chlorogenic acid (r2 = 0.72), p-coumaric acid (r2 = 0.90), sinapic acid (r2 = 0.85), benzoic acid (r2 = 0.99), isoferulic acid (r2 = 0.74). The sum of (poly)phenol metabolites did not exhibit a linear dose response (r2 = 0.32, p = 0.25) when plotted vs. (poly)phenol content in the interventions.\n\nBODY.4. DISCUSSION:\nIn this study, we quantified 60 poly(phenol) metabolites in plasma and urine after consumption of cranberry juice containing five differing amounts of (poly)phenols. Other than previous work by our group where the same 60 metabolites were quantified in a single concentration of cranberry juice [22], this is a much larger number of metabolites than usually reported after cranberry juice consumption [23,24,25]. Further, here we present quantities of glucuronidated and sulfated metabolites in addition to the methylated and native metabolites, unlike many of the previous works looking at cranberry (poly)phenol metabolites. This study demonstrates that the most abundant plasma (poly)phenol metabolites post-ingestion of cranberry juice are 3-(4-hydroxyphenyl) propionic acid, hippuric acid, and catechol-O-sulfate; this was regardless of the amount of TP in the drink. Despite quantifying a comparatively large number of metabolites in this investigation, it should be noted that this is by no means the entirety of the cranberry (poly)phenol metabolome. Further understanding of the metabolic pathway and synthesis of further metabolites as standards is essential to fully elucidate this. This study investigated the relationship between the amount of (poly)phenols in cranberry juice and the levels of the corresponding metabolites in plasma. These data show that selected plasma (poly)phenol metabolites correlate with the amount of (poly)phenols ingested with cranberry juice, showing a linear dose response. However interestingly, this does not apply to all (poly)phenol metabolites. 
Although the correlation between the intake amount and AUC of the metabolite concentration is a parameter which is seldom assessed, of the 14 (poly)phenols identified as showing a dose response in this study, only three (2,4-dihydroxybenzoic acid, syringic acid, and ferulic acid) have previously been shown to correlate with the amount of (poly)phenols ingested [17]. However, this was after blueberry intake, which has a different native TP profile, and further, the investigators used an enzymatic treatment prior to quantification. Therefore, all glucuronidated, sulfated, and unconjugated metabolites would have been quantified together. Unlike the previous work after blueberry ingestion, no metabolites exhibited a significant negative correlation. Other investigations into the (poly)phenol dose relationship with metabolites do not report regression analysis in this way [15,16]; such an investigation in cranberry (poly)phenols has not previously been published. Despite the fact that less than a quarter of the 60 metabolites measured showed linear dose dependence, when assessing the sum of the plasma metabolites, these were positively correlated with the amount of (poly)phenols ingested in the drink. This suggests that those 14 metabolites are key in driving the overall correlation. Many of the metabolites assessed exhibited a very large inter-individual variability (CV up to 216%). However, each metabolite did not exhibit the same level of variability as has been discussed previously [26]. Plus, when looking at the total metabolites (sum of 60 quantified), the variability was much lower (53%), suggesting that it is not a matter of individuals poorly/effectively metabolizing poly(phenols) as a whole, but it is actually down to an individual's ability to generate specific metabolites. There are a number of factors that will influence the metabolism and absorption. These factors may include sex, genetic polymorphisms of transporters or metabolizing enzymes, environmental influences, and likely the composition of the gut microbiome. This inter-individual variability in relation to (poly)phenols has been suggested previously [27] and may be down to the variation in the make-up of the volunteer's gut microbiota. Recent work has shown that (poly)phenols are able to modulate the levels of gut bacteria, but it is also becoming more evident that the gut microbiota plays a very important role in (poly)phenol metabolism [28]. Thus, characterizing the native gut bacteria and investigating relationships between species of bacteria and specific metabolites will be necessary to understand this complex relationship between (poly)phenols and the gut microbiota. The urinary recovery of the (poly)phenols was low (≤10%), although in line with urinary recoveries after other (poly)phenol-rich foods. This could be due to not capturing the entire profile of (poly)phenol metabolites and hence underestimating urinary excretion, which is feasible. Further, fecal metabolites were not considered and it is possible that a big proportion of metabolites are lost via this route, in particular the high-molecular-weight proanthocyanidins, which are not absorbed to a big extent. The method of quantification of proanthocyanidins in cranberry that we used can also explain in part the low recovery, as previously discussed [22]. Here we used the OS4-dimethylaminocinnamaldehyde (DMAC) method rather than the often used BL DMAC method. 
The latter overestimates (poly)phenol levels due to the use of procyanidin A2 as a standard instead of a better representation of cranberry proanthocyanidin complexity; OS DMAC includes a standardized extract comprehending larger polymers [29]. Therefore, using what we believe is a more appropriate method of quantification of (poly)phenols in cranberry juice may explain the low urinary recovery. This is specifically important in regards to this study as proanthocyanidins contribute to ~30% of the total TP amount in the cranberry beverages tested. Although the excretion of 12 of the 60 metabolites exhibited a linear dose response, the sum of the 60 metabolites did not. Unlike the plasma metabolites, the response of the sum of the urinary metabolites seemed to plateau beyond 787 mg TP cranberry juice. Although excretion dose response is seldom investigated, similar plateaus in response to physical parameters (e.g., flow-mediated dilation) have been observed by us, in response to cranberry juice [13] and blueberry [30]. We acknowledge the limitations within this study, for example there may be metabolites that we have not measured which may impact the recovery. There are a number of compounds which escape phase I and II metabolism and undergo microbial catabolism. These compounds are transformed into small-molecular-weight compounds that are no longer polyphenol intake–specific (e.g., catechols, pyrogallols, etc.), hence making it impossible to know which are derived from the ingested (poly)phenols and which from the background diet. In addition, although the plasma was taken under the control of the analyst, most of the urine was collected under free-living conditions, and as our method of quantification was based solely on urine volume, this could have caused some error. Further, although participants were instructed regarding lifestyle and dietary adjustment prior to the study days, we had little control over this aspect. Therefore, this could have been an important factor increasing the inter-individual variation and also the differing Tmax between cranberry juices. Although in the study we intentionally investigated the effect of cranberry juice on healthy, young, male subjects, broadening the inclusion criteria (for example, including women) or modifying the inclusion criteria (deviating from healthy participants) could have impacted our findings and could be an interesting avenue of investigation. Taken together, we show that after consuming cranberry juice containing 409, 787, 1238, 1534, and 1910 mg total (poly)phenols, there is a linear dose-dependent increase in circulating plasma (poly)phenol metabolites, but not in urinary excretion. We note that not all individual metabolites exhibit the same effect with regard to linear regression, and we identify specific plasma metabolites that may be driving this overall effect. Further, herein we note the large inter-individual variation in the metabolic profile in response to (poly)phenol consumption and suggest that this may be in part due to inter-individual variation in the gut microbiome.\n\n**Question:** Compared to nothing what was the result of cranberry juices on flavonols, valerolactones, benzoic acids, cinnamic acids, phenylacetic acids, benzaldehydes ?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ]
significantly increased
968
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Absorption, Metabolism and Excretion of Cranberry (Poly)phenols in Humans: A Dose Response Study and Assessment of Inter-Individual Variability\n\n ABSTRACT:\nThe beneficial health effects of cranberries have been attributed to their (poly)phenol content. Recent studies have investigated the absorption, metabolism and excretion of cranberry (poly)phenols; however, little is known about whether they follow a dose response in vivo at different levels of intake. An acute double-blind randomized controlled trial in 10 healthy men with cranberry juices containing 409, 787, 1238, 1534 and 1910 mg total (poly)phenols was performed. Blood and urine were analyzed by UPLC-Q-TOF-MS. Sixty metabolites were identified in plasma and urine including cinnamic acids, dihydrocinnamic, flavonols, benzoic acids, phenylacetic acids, benzaldehydes, valerolactones, hippuric acids, catechols, and pyrogallols. Total plasma, but not excreted urinary (poly)phenol metabolites, exhibited a linear dose response (r2 = 0.74, p < 0.05), driven by caffeic acid 4-O-ß-d-glucuronide, quercetin-3-O-ß-d-glucuronide, ferulic acid 4-O-ß-d-glucuronide, 2,5-dihydroxybenzoic acid, 2,4-dihydroxybenzoic acid, ferulic acid, caffeic acid 3-O-ß-d-glucuronide, sinapic acid, ferulic acid 4-O-sulfate, 3-hydroxybenzoic acid, syringic acid, vanillic acid-4-O-sulfate, (4R)-5-(3′-hydroxyphenyl)-γ-valerolactone-4′-O-sulfate, 4-methylgallic acid-3-O-sulfate, and isoferulic acid 3-O-sulfate (all r2 ≥ 0.89, p < 0.05). Inter-individual variability of the plasma metabolite concentration was broad and dependent on the metabolite. Herein, we show that specific plasma (poly)phenol metabolites are linearly related to the amount of (poly)phenols consumed in cranberry juice. The large inter-individual variation in metabolite profile may be due to variations in the gut microbiome.\n\nBODY.1. INTRODUCTION:\nSeveral clinical trials have showed the beneficial effects of cranberry intake on human health [1,2,3,4,5,6,7]. However, while in vitro studies have shown dose-response effects of cranberry (poly)phenols on the inhibition of microbial invasion of gut and bladder epithelial cells [8,9] and in the cytotoxicity of ovarian cancer cells [10], very few human dose-response studies have been conducted. Howell et al. reported an ex vivo dose-dependent effect on E. coli anti-adhesion in urine after cranberry consumption [11], and more recently, cranberry powder intake has demonstrated a dose-dependent effect on reducing urinary tract symptoms in men [12]. However, dose-response studies with cranberry interventions that report improvements in human health not related to the urinary tract are scarce. 
We recently reported a dose-dependent enhancement in endothelial function after cranberry juice intake [13], while other works did not reveal any improvement in endothelial function or blood pressure after different doses of quercetin-3-O-glucoside, a flavonoid present in cranberries, although an increase in quercetin circulating metabolites was reported [14]. To the best of our knowledge, few works have investigated plasma (poly)phenol dose-response profiles after intake of (poly)phenol-rich foods. A statistically significant increase was seen in the maximum concentration (Cmax) and area under the curve (AUC) of pelargonidin-glucuronide, pelargonidin-3-O-glucoside and cyanidin-3-O-glucoside after increasing amounts of strawberry powder, which were correlated with insulin response and inflammatory markers [15]. However, not all (poly)phenols seem to follow a linear dose response. After green tea consumption, the sum of catechins detected in plasma did not follow a linear dose response, despite the fact that (-)-epicatechin and (-)-epigallocatechin gallate increased their plasma concentration proportionally with the dosage intake [16]. These results suggested an apparent saturation mechanism which was also reported for some plasma (poly)phenols after wild blueberry consumption [17]. We have previously reported that improvements in endothelial function after cranberry consumption correlated with plasma levels of phenolic metabolites, mostly deriving from phase II metabolism [13]. In this work, we evaluate whether the plasma and urine levels of 60 phenolic metabolites found after consumption of cranberry juice with five different (poly)phenol concentrations follow a dose response and we calculate the extent of their inter-individual variability.\n\nBODY.2. MATERIALS AND METHODS .2.1. CHEMICALS:\nSulfates and glucuronides of (poly)phenol standards were obtained from Toronto Research Chemicals (Toronto, ON, Canada). Kaempferol-3-O-ß-d-glucuronide was obtained from Extrasynthese (Genay, France). The 1-methylpyrogallol-O-sulfate, 2-methylpyrogallol-O-sulfate, 4-methylcatechol-O-sulfate, 4-methylgallic-3-O-sulfate, catechol-O-sulfate, pyrogallol-O-1-sulfate, pyrogallol-O-2-sulfate and vanillic acid-4-O-sulfate were kindly provided by Dr Claudia Nunes dos Santos and Dr Rita Ventura, and their synthesis has been described elsewhere [18]. The 2-, 3- and 4-hydroxyhippuric acids were purchased from Enamine (Kiev, Ukraine). Remaining compounds were obtained from Sigma-Aldrich Co. (Steinheim, Germany). Acetic acid was from Carl Roth (Karlsruhe, Germany) and Oasis HLB μElution plates (2 mg sorbent per well, 30 μm) were from Waters (Eschborn, Germany). Milli-Q system (Merck KGaA, Darmstadt, Germany) ultrapure water was used. Unless otherwise stated, all chemicals and reagents were obtained from Sigma-Aldrich Co. (Steinheim, Germany).\n\nBODY.2. MATERIALS AND METHODS .2.2. HUMAN STUDY :\nThe study protocol has been described previously [13]. In brief, ten healthy young individuals with ages between 18 and 35 years old participated in a cross-over randomized double-blinded control-matched clinical trial which was registered under the NIH [19] ClinicalTrials.gov website (NCT02517775). All procedures involving human subjects were approved by the University of Düsseldorf Research Ethics Committee (ref: 14-012) and all procedures were performed following the Declaration of Helsinki guidelines. Volunteers were instructed to follow a low-polyphenol diet and exclude berry intake for 24 h prior to each visit. 
Volunteers were fasted for at least 12 h prior to each intervention to reduce background diet interference as much as possible. During the study day, the same low (poly)phenol meal was giving to all volunteers together with the test drink, and no other food or drink was allowed until after 8 h post-consumption, except for water ad libitum. Participants consumed cranberry juice containing 409, 787, 1238, 1534 and 1910 mg total (poly)phenols (TP) in a random order with 1 week wash out. On the study day whole blood was collected at 0, 1, 2, 4, 6, 8 and 24 h and urine was collected 0–8 and 8–24 h after intake. Plasma was obtained by centrifuging whole-blood collected in EDTA-containing vacutainers for 15 min at 1800 g for at 4 °C; plasma samples were spiked with 2% formic acid. Ascorbic acid was added to the urine containers (3.75 g/2 L) and acidification with formic acid until pH 2.5 was achieved. Urine containers were kept in opaque cool bags with ice blocks. Plasma and urine aliquots were stored at −80 °C until analysis.\n\nBODY.2. MATERIALS AND METHODS .2.3. (POLY)PHENOL ANALYSIS IN PLASMA:\nPlasma was analyzed with a previous validated μ-SPE coupled with UPLC-Q-TOF-MS that allowed for high-throughput [20]. Briefly, 600 μL of plasma were centrifuged at 15,000× g for 15 min at 4 °C and 350 μL of the supernatant was diluted (1:1) with phosphoric acid 4% and spiked with taxifolin (50 nM) as an internal standard. 600 μL were loaded on a 96 well μ-SPE HLB plate, washed with 200 μL of water and 200 μL of 0.2% acetic acid and finally eluted with 60 μL of methanol. Extracted and concentrated plasma samples were analyzed with an Agilent 6550 iFunnel Accurate-Mass Quadrupole Time-of-Flight Mass Spectrometer (Q-TOF MS) (Agilent, Waldbronn, Germany) after separation on a 1290 Infinity UPLC system (Agilent, Waldbronn, Germany) using a Zorbax Eclipse Plus RRHD column 2.1 mm × 50 mm, 1.8 μm (Agilent, Waldbronn, Germany). The mobile phase consisted of 0.1% HCOOH (solvent A) and acetonitrile with 0.1% HCOOH (solvent B). The elution profile (flow rate of 0.4 mL/min) started at 1% solvent B and increased to 10% after 5 min, to 25% at 8 min and to 99% at 9.1 min. The percentage of solvent B was held constant for 0.9 min. The Q-TO-MS parameters were as follows: Negative mode, gas temperature 150 °C, gas flow 20 L/min, nebulizer 25 psig, sheath gas temperature 350 °C, sheath gas flow 12 L/min and Vcap 3000 V. Data were analyzed and processed using Mass Hunter Workstation Quantitative and Qualitative Analysis software (version B.06.00, Agilent, Waldbronn, Germany).\n\nBODY.2. MATERIALS AND METHODS .2.4. DATA TREATMENT AND STATISTICAL ANALYSIS:\nThe maximum plasma concentration (Cmax) and time needed to reach it (Tmax) and AUC of time vs. concentration (calculated using a trapezium method) were calculated using the PKSolver add-in software for Microsoft Excel [21]. Linear regression analysis was performed between (poly)phenol amount from each juice and the average AUC, giving a regression equation and r2 values. All statistical analysis was performed using GraphPad Prism V6. Significance was defined as p < 0.05.\n\nBODY.3. RESULTS.3.1. PLASMA KINETICS OF CRANBERRY (POLY)PHENOLS:\nA total of 60 compounds in plasma were identified and quantified post-consumption of cranberry juice containing 409, 787, 1238, 1534 and 1910 mg total (poly)phenols, as we have previously reported for the juice containing 787 mg of (poly)phenols [22]. 
These included cinnamic acids, dihydrocinnamic, flavonols, benzoic acids, phenylacetic acids, benzaldehydes, valerolactones, hippuric acids, catechols, and pyrogallols. Full details of the plasma kinetic profiles can be found in Supplementary Tables S1 and S2. In summary, plasma concentrations varied from low nM to mid μM. The maximum concentration in plasma (Cmax) ranged from 1–31,269 nM (after 409 mg TP), 1–263,152 nM (after 787 mg TP), 2–115,012 nM (after 1238 mg TP), 12–267,355 nM (after 1534 mg TP) and 5–390,606 nM (after 1910 mg TP). The highest Cmax for juices was achieved for 3-(4-hydroxyphenyl) propionic acid, followed by hippuric acid and catechol-O-sulfate. The time to reach Cmax (Tmax) varied depending on the metabolite and depending on the amount of (poly)phenols in the juice (Supplementary Figure S1). The level of TP in cranberry juice seemed to impact the Tmax of individual metabolites; for example, the Tmax for 3-(4-hydroxyphenyl) propionic acid varied between 7.2 h (409 mg TP juice) and 13.8 h (787 mg TP juice). The AUC of the plasma concentration against the time plot was calculated for each metabolite between 0 and 24 h and was between 2–3,119,972 nM*h (caffeic acid and 3-(4-hydroxyphenyl) propionic acid at 409 mg TP), 1–2,274,346 nM*h (caffeic acid and hippuric acid at 787 mg TP), 2–1,609,708 nM*h (caffeic acid and 3-(4-hydroxyphenyl) propionic acid at 1238 mg TP), 1–390,606 nM*h (caffeic acid and 3-(4-hydroxyphenyl) propionic acid at 1534 mg TP) and 5–2,467,256 nM*h (chlorogenic acid and 3-(4-hydroxyphenyl) propionic acid at 1910 mg TP). The compounds with the highest AUC for each dose were hippuric acid > catechol-O-sulfate > 2,3-dihydrobenzoic acid > phenylacetic acid.\n\nBODY.3. RESULTS.3.2. PLASMA DOSE RESPONSE :\nThe linear dose response for each plasma metabolite was assessed and of the 60 identified metabolites, 14 (poly)phenol metabolites displayed a linear dose-response curve (r2 ≥ 0.89) in plasma (Figure 1) with slopes significantly different from zero (p < 0.05): caffeic acid 4-O-ß-d-glucuronide and quercetin-3-O-ß-d-glucuronide displayed r2 > 0.98, while ferulic acid 4-O-ß-d-glucuronide, 2,5-dihydroxybenzoic acid, 2,4-dihydroxybenzoic acid, ferulic acid, caffeic acid 3-O-ß-d-glucuronide, sinapic acid, ferulic acid 4-O-sulfate, 3-hydroxybenzoic acid and syringic acid exhibited r2 > 0.90. Vanillic acid-4-O-sulfate, (4R)-5-(3′-hydroxyphenyl)-γ-valerolactone-4′-O-sulfate, 4-methylgallic acid-3-O-sulfate and isoferulic acid 3-O-sulfate had r2 values of almost 0.90. The sum of the (poly)phenol metabolites also exhibited a modest but significant (p < 0.05) correlation of r2 = 0.74 when plotted vs. (poly)phenol content in the interventions. Linear regression analysis was performed between the TP level in the intervention and the AUC of each class of compounds. The AUC for flavonols produced the highest r2 (0.97, p < 0.001), followed by the valerolactones (r2 = 0.89, p < 0.005), benzoic acids (r2 = 0.85, p < 0.01), cinnamic acids (r2 = 0.72, p < 0.05), phenylacetic acids (r2 = 0.71, p < 0.05), benzaldehydes (r2 = 0.65, p < 0.05); hippuric acids, pyrogallols, and catechols did not show a linear dose response.\n\nBODY.3. RESULTS.3.3. INTER-INDIVIDUAL VARIABILITY:\nThe coefficient of variation (CV) expressed as a percentage was calculated for Cmax and AUC for each individual plasma metabolite after consumption of juice containing 787 mg TP; the %CV was used to assess the inter-individual variability after consumption of the juice (Figure 2). 
The CV of the AUC for the total (sum of all 60) metabolites was 53% and the CV for Cmax was 51%. The CV for the Cmax varied between 43% for dihydroferulic acid 4-O-sulfate and 216% for vanillic acid. The CV for the AUC varied between 48% for 4-hydroxybenzaldehyde and 163% for 1-methylpyrogallol-O-sulfate.\n\nBODY.3. RESULTS.3.4. URINARY RECOVERY:\nThe same 60 (poly)phenol metabolites quantified in plasma were quantified in urine. Urinary excretion of (poly)phenol metabolites over the 24 h period post cranberry juice consumption is shown in Figure 3. Between 38 and 48 mg total urinary metabolites were excreted post-consumption of the five juices with varying TP content. The mean percentage recovery varied between 10.1% for the juice containing 406 mg TP down to 2.1% for the juice containing the highest TP content (1910 mg). The juice containing the least TP (409 mg) had a higher percentage excretion compared to all the other juices (p < 0.001). A positive, linear dose response for total excretion over 24 h of each urinary metabolite was assessed and of the 60 identified metabolites, 12 (poly)phenol metabolites displayed a positive linear dose-response curve (r2 ≥ 0.66) in urine with slopes significantly different from zero (p < 0.05); they were: 2,3-dihydrobenzoic acid (r2 = 0.79), 2,4-dihydrobenzoic acid (r2 = 0.77), dihydrocaffeic-3-O-sulfate (r2 = 0.66), ferulic-O-4-sulfate (r2 = 0.89), o-courmaric acid (r2 = 0.72), quercetin-3-O-ß-d-glucuronide (r2 = 0.89), 2,5-dihydroxybenzoic acid (r2 = 0.80), chlorogenic acid (r2 = 0.72), p-coumaric acid (r2 = 0.90), sinapic acid (r2 = 0.85), benzoic acid (r2 = 0.99), isoferulic acid (r2 = 0.74). The sum of (poly)phenol metabolites did not exhibit a linear dose response (r2 = 0.32, p = 0.25) when plotted vs. (poly)phenol content in the interventions.\n\nBODY.4. DISCUSSION:\nIn this study, we quantified 60 poly(phenol) metabolites in plasma and urine after consumption of cranberry juice containing five differing amounts of (poly)phenols. Other than previous work by our group where the same 60 metabolites were quantified in a single concentration of cranberry juice [22], this is a much larger number of metabolites than usually reported after cranberry juice consumption [23,24,25]. Further, here we present quantities of glucuronidated and sulfated metabolites in addition to the methylated and native metabolites, unlike many of the previous works looking at cranberry (poly)phenol metabolites. This study demonstrates that the most abundant plasma (poly)phenol metabolites post-ingestion of cranberry juice are 3-(4-hydroxyphenyl) propionic acid, hippuric acid, and catechol-O-sulfate; this was regardless of the amount of TP in the drink. Despite quantifying a comparatively large number of metabolites in this investigation, it should be noted that this is by no means the entirety of the cranberry (poly)phenol metabolome. Further understanding of the metabolic pathway and synthesis of further metabolites as standards is essential to fully elucidate this. This study investigated the relationship between the amount of (poly)phenols in cranberry juice and the levels of the corresponding metabolites in plasma. These data show that selected plasma (poly)phenol metabolites correlate with the amount of (poly)phenols ingested with cranberry juice, showing a linear dose response. However interestingly, this does not apply to all (poly)phenol metabolites. 
Although the correlation between the intake amount and AUC of the metabolite concentration is a parameter which is seldom assessed, of the 14 (poly)phenols identified as showing a dose response in this study, only three (2,4-dihydroxybenzoic acid, syringic acid, and ferulic acid) have previously been shown to correlate with the amount of (poly)phenols ingested [17]. However, this was after blueberry intake, which has a different native TP profile, and further, the investigators used an enzymatic treatment prior to quantification. Therefore, all glucuronidated, sulfated, and unconjugated metabolites would have been quantified together. Unlike the previous work after blueberry ingestion, no metabolites exhibited a significant negative correlation. Other investigations into the (poly)phenol dose relationship with metabolites do not report regression analysis in this way [15,16]; such an investigation in cranberry (poly)phenols has not previously been published. Despite the fact that less than a quarter of the 60 metabolites measured showed linear dose dependence, when assessing the sum of the plasma metabolites, these were positively correlated with the amount of (poly)phenols ingested in the drink. This suggests that those 14 metabolites are key in driving the overall correlation. Many of the metabolites assessed exhibited a very large inter-individual variability (CV up to 216%). However, each metabolite did not exhibit the same level of variability as has been discussed previously [26]. Plus, when looking at the total metabolites (sum of 60 quantified), the variability was much lower (53%), suggesting that it is not a matter of individuals poorly/effectively metabolizing poly(phenols) as a whole, but it is actually down to an individual's ability to generate specific metabolites. There are a number of factors that will influence the metabolism and absorption. These factors may include sex, genetic polymorphisms of transporters or metabolizing enzymes, environmental influences, and likely the composition of the gut microbiome. This inter-individual variability in relation to (poly)phenols has been suggested previously [27] and may be down to the variation in the make-up of the volunteer's gut microbiota. Recent work has shown that (poly)phenols are able to modulate the levels of gut bacteria, but it is also becoming more evident that the gut microbiota plays a very important role in (poly)phenol metabolism [28]. Thus, characterizing the native gut bacteria and investigating relationships between species of bacteria and specific metabolites will be necessary to understand this complex relationship between (poly)phenols and the gut microbiota. The urinary recovery of the (poly)phenols was low (≤10%), although in line with urinary recoveries after other (poly)phenol-rich foods. This could be due to not capturing the entire profile of (poly)phenol metabolites and hence underestimating urinary excretion, which is feasible. Further, fecal metabolites were not considered and it is possible that a big proportion of metabolites are lost via this route, in particular the high-molecular-weight proanthocyanidins, which are not absorbed to a big extent. The method of quantification of proanthocyanidins in cranberry that we used can also explain in part the low recovery, as previously discussed [22]. Here we used the OS4-dimethylaminocinnamaldehyde (DMAC) method rather than the often used BL DMAC method. 
The latter overestimates (poly)phenol levels due to the use of procyanidin A2 as a standard instead of a better representation of cranberry proanthocyanidin complexity; OS DMAC includes a standardized extract comprehending larger polymers [29]. Therefore, using what we believe is a more appropriate method of quantification of (poly)phenols in cranberry juice may explain the low urinary recovery. This is specifically important in regards to this study as proanthocyanidins contribute to ~30% of the total TP amount in the cranberry beverages tested. Although the excretion of 12 of the 60 metabolites exhibited a linear dose response, the sum of the 60 metabolites did not. Unlike the plasma metabolites, the response of the sum of the urinary metabolites seemed to plateau beyond 787 mg TP cranberry juice. Although excretion dose response is seldom investigated, similar plateaus in response to physical parameters (e.g., flow-mediated dilation) have been observed by us, in response to cranberry juice [13] and blueberry [30]. We acknowledge the limitations within this study, for example there may be metabolites that we have not measured which may impact the recovery. There are a number of compounds which escape phase I and II metabolism and undergo microbial catabolism. These compounds are transformed into small-molecular-weight compounds that are no longer polyphenol intake–specific (e.g., catechols, pyrogallols, etc.), hence making it impossible to know which are derived from the ingested (poly)phenols and which from the background diet. In addition, although the plasma was taken under the control of the analyst, most of the urine was collected under free-living conditions, and as our method of quantification was based solely on urine volume, this could have caused some error. Further, although participants were instructed regarding lifestyle and dietary adjustment prior to the study days, we had little control over this aspect. Therefore, this could have been an important factor increasing the inter-individual variation and also the differing Tmax between cranberry juices. Although in the study we intentionally investigated the effect of cranberry juice on healthy, young, male subjects, broadening the inclusion criteria (for example, including women) or modifying the inclusion criteria (deviating from healthy participants) could have impacted our findings and could be an interesting avenue of investigation. Taken together, we show that after consuming cranberry juice containing 409, 787, 1238, 1534, and 1910 mg total (poly)phenols, there is a linear dose-dependent increase in circulating plasma (poly)phenol metabolites, but not in urinary excretion. We note that not all individual metabolites exhibit the same effect with regard to linear regression, and we identify specific plasma metabolites that may be driving this overall effect. Further, herein we note the large inter-individual variation in the metabolic profile in response to (poly)phenol consumption and suggest that this may be in part due to inter-individual variation in the gut microbiome.\n\n**Question:** Compared to nothing what was the result of cranberry juices on 12 (poly)phenol metabolites?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ]
significantly increased
969
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: The implementation of a community-based aerobic walking program for mild to moderate knee osteoarthritis (OA): a knowledge translation (KT) randomized controlled trial (RCT): Part I: The Uptake of the Ottawa Panel clinical practice guidelines (CPGs)\n\n ABSTRACT.BACKGROUND:\nThe implementation of evidence based clinical practice guidelines on self-management interventions to patients with chronic diseases is a complex process. A multifaceted strategy may offer an effective knowledge translation (KT) intervention to promote knowledge uptake and improve adherence in an effective walking program based on the Ottawa Panel Evidence Based Clinical Practice Guidelines among individuals with moderate osteoarthritis (OA).\n\nABSTRACT.METHODS:\nA single-blind, randomized control trial was conducted. Patients with mild to moderate (OA) of the knee (n=222) were randomized to one of three KT groups: 1) Walking and Behavioural intervention (WB) (18 males, 57 females) which included the supervised community-based aerobic walking program combined with a behavioural intervention and an educational pamphlet on the benefits of walking for OA; 2) Walking intervention (W) (24 males, 57 females) wherein participants only received the supervised community-based aerobic walking program intervention and the educational pamphlet; 3) Self-directed control (C) (32 males, 52 females) wherein participants only received the educational pamphlet. One-way analyses of variance were used to test for differences in quality of life, adherence, confidence, and clinical outcomes among the study groups at each 3 month assessment during the 12-month intervention period and 6-month follow-up period.\n\nABSTRACT.RESULTS:\nShort-term program adherence was greater in WB compared to C (p<0.012) after 3 months. No statistical significance (p> 0.05) was observed for long-term adherence (6 to 12 months), and total adherence between the three groups. The three knowledge translation strategies demonstrated equivalent long-term results for the implementation of a walking program for older individuals with moderate OA. Lower dropout rates as well as higher retention rates were observed for WB at 12 and 18 months.\n\nABSTRACT.CONCLUSION:\nThe additional knowledge translation behavioural component facilitated the implementation of clinical practice guidelines on walking over a short-term period. More studies are needed to improve the long-term walking adherence or longer guidelines uptake on walking among participants with OA. Particular attention should be taken into account related to patient's characteristic and preference. 
OA can be managed through the implementation of a walking program based on clinical practice guidelines in existing community-based walking clubs as well as at home with the minimal support of an exercise therapist or a trained volunteer.\n\nABSTRACT.TRIAL REGISTRATION:\nCurrent Controlled Trials IRSCTNO9193542\n\nBODY.BACKGROUND:\nRising healthcare costs, limited resources, and the aging population have created new and growing challenges in the management of osteoarthritis (OA). The challenges associated with the management of OA are numerous as the prevalence of OA in the population continues to increase while healthcare resources remain limited \n[1-7]. It is therefore necessary to determine the most effective methods of integrating research evidence in order to optimize health outcomes. The purpose of this randomized controlled trial (RCT) was to compare 1) improvements in quality of life (QoL) and clinical outcomes (such as pain, mobility and endurance); 2) adherence rates; and 3) confidence and self-efficacy after the implementation of a 12-month supervised community-based aerobic walking program (SCAWP) based on the Ottawa Panel clinical practice guidelines (CPG) among three knowledge translation (KT) intervention arms. QoL, confidence, and self-efficacy were compared at 12 months (end of treatment) and at 18 months (6 months post-intervention). Adherence was compared during the intervention period at 3, 6, 9 and 12 months. The first part of this manuscript introduces each KT intervention and demonstrates the impact of knowledge (CPG) uptake of each by comparing outcomes influenced by the KT intervention such as adherence and behaviour change (confidence and self-efficacy). Knowledge application of each KT intervention is explained through the use of a theoretical framework: the Knowledge-To-Action Cycle (KTAC) \n[8,9]. The second part of this manuscript focuses on outcome evaluation, a specific phase in the KTAC framework used to guide this study. Part II demonstrates the effect of each intervention involving a walking program on outcomes which were influenced by the SCAWP, including QoL (primary outcome) and other clinical outcomes such as pain, mobility, and endurance. These outcomes are presented in Part II. Therefore, both parts of this manuscript were split according to the "Evaluate outcomes" phase of the KTAC framework. Part I focused on KT outcomes, which measured the success of CPG uptake through participants' adherence and behavioural change, while Part II focused on clinical outcomes, which measured the positive effect of the SCAWP due to the indirect CPG uptake/implementation/adoption by individuals with OA of the knee.\n\nBODY.BACKGROUND.KNOWLEDGE TRANSLATION:\nA major issue in health research today is finding effective and efficient ways to exchange knowledge between researchers, clinicians and the general public. Potential benefits to health which could be accrued through the application of research findings are not realized due to challenges in research uptake and knowledge translation \n[10]. KT is defined by Estabrooks et al. 
\n[11] as \"the process by which specific research-based knowledge (science) is implemented in practice.\" The available evidence suggests that with an effective KT strategy, uptake of evidence-based clinical practice guidelines (EBCPGs) can be effective in improving patient health outcomes \n[12].\n\nBODY.BACKGROUND.PHYSICAL ACTIVITY AND OSTEOARTHRITIS:\nThe promotion of physical activity (PA) is a priority for health organizations serving the general population \n[13,14] and is highly recommended for subgroups affected by chronic diseases \n[15-19]. Community-based PA combined with a behavioural modification and self-management interventions can reduce the risk of disability and negative consequences of inactivity related to OA \n[20,21]. The challenge is to develop PA programs that will encourage OA patients to initiate and maintain improvements in exercise behaviour over a long-term period \n[22]. The available evidence suggests that with an effective KT strategy, uptake of evidence-based clinical practice guidelines (CPGs) can be effective in improving patient health outcomes \n[12].\n\nBODY.BACKGROUND.SELECTING AN EFFECTIVE KT STRATEGY:\nMultifaceted interventions combining more than one KT strategy have tended to be more effective than using a single KT strategy alone \n[23,24] and have shown to have the greatest impact on CPG adherence and PA performance \n[25]. Behavioural interventions have been used for other chronic conditions to improve long-term adherence and maintenance of PA regimens with varying success. The efficacy of different behavioural interventions including patient education, health counselling, goal setting, and social/peer support, delivered separately and or in combination, have been examined in the management of arthritis \n[26-31].\n\nBODY.METHODS:\nThe following methodology is in full agreement with the Consolidated Standards of Reporting Trials (CONSORT) 2010 Statement criteria for reporting RCTs \n[32].\n\nBODY.METHODS.DESIGN:\nThis single blind RCT, funded by the Canadian institutes of Health Research, used a parallel group design (1:1:1). This community-based study was approved by the University of Ottawa Research Ethics Board and the City of Ottawa Public Health Research Ethics Board. This study implemented one of the Ottawa Panel CPG recommendations related to a supervised community-based aerobic walking programs (SCAWPs) among individuals with mild to moderate OA of the knee \n[33-35]. In order to facilitate an understanding of the KT process, Graham et al. developed a conceptual framework entitled \"the Knowledge-To-Action Cycle\" (Figure \n1) which provides an approach which combines commonalities from various planned-action theories \n[8,9]. This framework was used to guide the development of the three KT strategies used in this RCT (Additional file \n1). The framework demonstrates the dynamic process of knowledge creation and application \n[9]. Knowledge creation consists of three phases: knowledge inquiry, knowledge synthesis, and knowledge tools/products. The knowledge creation cycle of the framework demonstrates how knowledge is refined through each phase to provide useful information for end users. The action cycle of the framework consists of seven phases which may occur sequentially or simultaneously: 1) Identify the Knowledge-To-Action Cycle gaps; 2) Adapt Knowledge to local context; 3) Assess barriers to Knowledge Use; 4) Select, Tailor, Implement Interventions; 5) Monitor Knowledge Use; 6) Evaluate outcomes; and 7) Sustained Knowledge Use. 
For part I, the emphasis was placed on the \"Evaluate outcomes (KT outcomes)\" phase as participants' knowledge uptake was measured as adherence to the effective walking program for OA and attitude/behaviour change (self-efficacy and confidence) after participating in the SCAWP. Figure 1Knowledge-To-Action Cycle. This figure illustrates the Knowledge-To-Action Cycle Framework \n[8]. This figure is used with permission: Lost in knowledge translation: Time for a map. Graham ID, et al. Journal of Continuing Education in the Health Professions, 26(1). Copyright © 2006. The Alliance for Continuing Medical Education, The Society for Academic Continuing Medical Education, and The Association for Hospital Medical Education. The hypothesis was that the additional behavioural approach of the multifaceted KT intervention would: 1) increase short and long term adherence to the SCAWP; 2) reduce the drop-out rate; 3) influence behaviour change by improving self-efficacy, confidence, QoL and other clinical outcomes.\n\nBODY.METHODS.SAMPLE AND RECRUITMENT:\nA total of 222 adults with knee OA were recruited. Participants were eligible to participate in the study if he/she 1) had a confirmed diagnosis with mild to moderate unilateral or bilateral OA according to the American College of Rheumatology clinical and radiographic/magnetic resonance imagery criteria; 2) reported pain for at least 3 months; 3) expected his/her medication to change during the study period; 4) demonstrated an ability to ambulate for a minimum of 20 minutes, at their own pace with minimal reports of pain (≤ 3 out of 10 on a visual analogue pain rating scale) \n[36]; 4) were able to be treated as an out-patient; and 5) were available three times a week over a period of 12 months. Potential study subjects were excluded if they had: 1) Participated in regular physical or aerobic sports ≥ 2 times per week for more than 20 minutes per session during the previous 6 months; 2) severe OA of the knee or other weight bearing joints of the lower extremity; 3) no written consent from their physician to participate in the study; 4) pain at rest or at night; 5) received rehabilitation treatment, corticosteroids injection, or any other pain-related treatment besides medication for arthritis within the last 12 months; 6) uncontrolled hypertension (Systolic blood pressure >160 mm Hg confirmed by the screening initial VO2 max test at the Ottawa Heart Institute) 7) other illnesses, such as rheumatoid arthritis (judged by the patient or study physician to make participation in this study inadvisable); 8) significant cognitive deficit resulting in an inability to understand or comply with instructions; 9) surgery planned in the next year; 10) Intention to move away from Ottawa region in the next year; 11) an inability to communicate in English or French; or 12) an unwillingness to sign informed consent. The study coordinator assessed the inclusion/exclusion criteria of all potential participants by telephone. 
At the first visit, written informed consent was obtained if the patient met all inclusion criteria.\n\nBODY.METHODS.INTERVENTION:\nParticipants were randomly assigned to one of the three KT intervention groups using central randomization \n[37] and computer generated numbers \n[38]: 1) Walking and Behavioural intervention (WB) (18 males, 57 females) which included the SCAWP with a behavioural intervention and an educational pamphlet on the benefits of walking for OA; 2) Walking intervention (W) (24 males, 57 females) wherein participants only received the SCAWP intervention and the educational pamphlet; 3) Self-directed control (C) (32 males, 52 females) wherein participants only received the educational pamphlet. All 3 groups were provided with pedometers and log books to be completed to measure physical performance (walking in minutes) and additional PA aside from the walking sessions (Table \n1). The KT strategies were implemented over a 12-month duration and participants were assessed for additional 6 months (15 and 18-month follow-up assessments). Table 1 Summary of the KT strategies used KT strategies in each group WB or WalkingClub+ (supervised walking program + behavioural approach at the community-based Walking Club) W or WalkingClub (supervised walking program only at the community-based Walking Club) C or Self-directed Home-based or community based different than W and WB (unsupervised/self-directed Walking program only) EBP walking program (Ottawa Panel, 2005/2012) refs: \n[ 33 - 35 ] EBP walking program (Ottawa Panel, 2005/2012) refs: \n[ 33 - 35 ] Walking program (general info from pamphlet on walking and OA) Walking Club + effect (supervised program, team, daily monitored vital measures: BP, HR, #steps,) Walking Club effect (supervised program, team, daily monitored vital measures: BP, HR, #steps,) N/A Pedometer as measurement tool, but becomes a KT strategy Pedometer as measurement tool, but becomes a KT strategy Pedometer as measurement tool, but becomes a KT strategy Log book including 7-day PAR as measurement tool, but becomes a KT strategy Log book including 7-day PAR as measurement tool, but becomes a KT strategy Log book including 7-day PAR as measurement tool, but becomes a KT strategy $ compensation each walking session and at evaluation session $ compensation each walking session and at evaluation session $ compensation each logbook fulfilled and at evaluation session Study affiliation/participation (feel committed) Study affiliation/participation (feel committed) Study affiliation/participation (feel committed) Behavioural intervention (Baycrest PACE-ex: patient education + goal settings; PA counselling, telephone support) N/A N/A Following the successful randomization process, wherein no statistical difference was found between the three groups (with the exception of the \"physical role\" category of the SF-36: part II) (Tables \n2, \n3, \n4, \n5), research staff and evaluators were blinded to the treatment allocation. An independent evaluator was blinded to assess outcome measures. Due to the nature of the physical intervention, it was not practical to blind the study participants and PA specialists supervising the aerobic walking program \n[39]. 
Table 2 Subject demographics and baseline characteristics Characteristic Walking (n=79) Walking and Behavioural (n=69) Self-directed Control (n=74) Total Mean age (SD), yrs 63.9 (10.3) 63.9 (8.2) 62.3(6.8) 63.4 (8.6) Missing Data 0 0 0 0 Men/women, (%) 24 (30.4)/55 (69.9) 18 (26.1)/51 (73.9) 27 (36.5)/47 (63.5) 69 (31.1)/153 (68.9) Missing Data 0 0 0 0 Affected knee, n (%)         Right 33 (41.8) 31 (44.9) 25 (33.8) 89 (40.1) Left 31(39.2) 23 (33.3) 28 (37.8) 82 (36.9) Both side 15 (19.0) 15 (28.4) 21 (28.4) 51 (23.0) Missing Data 0 0 0 0 Mean duration of OA (SD), yrs 9.54 (8.09) 11.3 (9.7) 10.0 (9.9) 10.3 (9.26) Missing Data 0 0 0 0 Mean weight (SD), kg 80.7 (18.5) 83.1 (15.4) 83.0(15.8) 82.2 (16.6) Missing Data 0 1 0 0 Mean BMI (SD), kg/m² 29.4 (5.4) 30.3 (5.6) 29.9(5.3) 29.8 (5.4) Missing Data 0 2 4 6 Walking aid, n (%)         Yes 10 (12.7) 10 (14.5) 9 (12.2) 29 (13.1) No 69 (87.3) 58 (84.1) 64 (86.5) 191 (86.0) Missing Data 0 1 1 2 Racial background, n (%)         White 69 (87.3) 60 (87.0) 68 (91.9) 197 (88.7) Black 1 (1.3) 3 (4.3) 1 (1.4) 5 (2.3) Hispanic 2 (2.5) 2 (2.9) 4 (5.4) 8 (3.6) Asian or Pacific Islander 5 (6.3) 4 (5.8) 1 (1.4) 10 (4.5) American Indian or Alaskan native 1 (1.3) 0 (0) 0 (0) 1 (0.5) Other 1 (1.3) 0 (0) 0 (0) 1 (0.5) Missing Data 0 0 0 0 Marital status, n (%)         Married 46 (58.2) 36 (52.2) 44 (59.5) 126 (56.8) Separated 2 (2.5) 1 (1.4) 1 (1.4) 4 (1.8) Divorced 9 (11.4) 17 (24.6) 8 (10.8) 34 (15.3) Widowed 17 (21.5) 11 (15.9) 9 (12.2) 37 (16.7) Never Married 5 (6.3) 4 (5.8) 12 (16.2) 21 (9.5) Missing Data 0 0 0 0 Level of education, n (%)         Less than 7 yrs of school 2 (2.5) 1 (1.4) 1 (1.4) 4 (1.8) Grades 7 through 9 5 (6.3) 0 (0) 0 (0) 5 (2.3) Grades 10 through 11 7 (8.9) 4 (5.8) 5 (6.8) 16 (7.2) High school graduate 13 (16.5) 16 (23.2) 8 (10.8) 37 (16.7) 1 to 4 yrs of college 13 (16.5) 9 (13.0) 22 (29.7) 44 (19.8) College graduate 25 (31.6) 21 (30.4) 19 (25.7) 65 (29.3) Professional or Graduate school 14 (17.7) 18 (26.1) 19 (25.7) 51 (23.0) Missing Data 0 0 0 0 Yrs: years, OA: osteoarthritis, kg: kilograms, m: meters, %: validity percent. 
Table 3 Subject’s Medication at Baseline Medication Walking (W) only (n=79) Walking and Behavioural (WB) (n=69) Self-directed (C) (n=74) Oral hypoglycaemic agents         Total 2 (3) 5 (7) 9 (12)   Biguanide Class 2 (3) 4 (6) 9 (12)   Sulfonylureas Class 0 (0) 0 (0) 2 (3)   Thiazolidinedione 0 (0) 2 (3) 2 (3) Antihypertensive Agents         Total 22 (28) 15 (22) 15 (22)   Calcium channel blockers 6(8) 0(0) 7(9)   Antiotensin-converting enzyme 3 (4) 7 (10) 7 (9)   Angiotensin II receptor blockers 6 (8) 6 (9) 3 (4)   Beta-blocker 4 (5) 3 (4) 4 (5)   Alpha-blocker 3 (4) 0 (0) 0 (0)   Diuretic 5 (6) 8 (12) 5 (7) Antiarrhythmic 1 (1) 0 (0) 0 (0) Anti-anginal 1 (1) 0 (0) 0 (0) Anti-platelet agent 1 (1) 1 (1) 1 (1) Anticoagulant 0 (0) 2 (3) 0 (0) Phosphodiesterase type 5 inhibitors 0 (0) 1 (1) 0 (0) Statins 8 (10) 12 (17) 10 (14) Hormone         Total 11 (14) 10 (14) 11 (15)   Thyroid 5 (6) 5 (7) 8 (11)   Insulin 0 (0) 3 (4) 1 (1)   Corticosteroid 4 (5) 2 (3) 3 (4)   Progesterone 1 (1) 0 (0) 1 (1)   Androgen 1 (1) 0 (0) 0 (0)   Oestrogen 1 (1) 0 (0) 0 (0) Antithyroid 0 (0) 1 (1) 0(0) SERMs (Selective estrogen receptor modifiers) 1 (1) 0 (0) 0 (0) Beta 2-adrenergic receptor agonist (bronchodilator) 2(3) 1 (1) 3 (4) NSAIDs (Non-steroid anti-inflammatory drugs) 26 (33) 21 (30) 18 (24) COX-2 selective inhibitor 3 (4) 4 (6) 1 (1) NSAIDS       Betahistine 0 (0) 1 (1) 0 (0) DMARDs (Disease modifying antirheumatic drugs) 0 (0) 1 (1) 0 (0) Histamine antagonist 0 (0) 2 (3) 0 (0) Bisphosphonate 6 (8) 4 (6) 4 (5) Antibiotics 2 (3) 1 (1) 4 (5) Analgesics 1 (1) 1 (1) 3 (4) Opiates 0 (0) 0 (0) 2 (3) Hypnotic 4 (5) 3 (4) 4 (5) Antidepressant 7 (9) 5 (7) 7 (9) Psychostimulant 0 (0) 1 (1) 0 (0) Antipsychotic 0 (0) 1 (1) 0 (0) Anti-manic 0 (0) 1 (1) 0 (0) Anti-convulsant 1 (1) 1 (1) 0 (0) Protein pump inhibitor (PPI) 9 (11) 5 (6) 6 (8) Enzyme inhibitor 0 (0) 1 (1) 2 (3) Muscle relaxant 0 (0) 1 (1) 1 (1) Antispasmodic 0 (0) 1 (1) 0 (0) Anti-cholinergic 1 (1) 1 (1) 0 (0) Antineoplastic 2 (3) 1 (1) 0 (0) Antimetabolite 1 (1) 1 (1) 1 (1) Antifolate 0 (0) 1 (1) 1 (1) Latanoprost 0 (0) 1 (1) 0 (0) Viscosupplementation 0 (0) 1 (1) 0 (0) Xanthine 1 (1) 0 (0) 0 (0) Antifungals 0 (0) 0 (0) 1 (1) Antimalarial 0 (0) 1 (1) 1 (1) Nitrate 0 (0) 0 (0) 2 (3) Laxative 0 (0) 0 (0) 2 (3) Supplements 22 (28) 22 (28) 18 (24) Homeopathic Medication 1 (1) 1 (1) 1 (1) None 11 (14) 10 (13) 9 (12) Not Specific 21 (27) 21 (27) 20 (27) Missing Data         Total 4 (5) 8 (10) 17 (23)   No Evaluation* 2 (3) 4 (5) 7 (9)   No File** 2 (3) 4 (6) 10 (14) This table presents a list of medications which participants declared to be taking at baseline. Data is presented as n (%). Table 4 Summary of adherence based on attendance marked in trainers’ manuals and individual walkers’ logbooks   Walking N Mean±SD Walking & Behavioural N Mean±SD Self-directed (C) N Mean±SD Walking vs. Self-directed t -test P value Walking & Behavioural Vs. Self-directed t -test P value Walking & Behavioural Vs. 
Walking t -test P value 0-3 months 79 69 73 *(0.043) *(0.012) (0.514)   0.770±0.299 0.802±0.290 0.652±0.403       3-6 months 79 69 73 (0.242) (0.159) (0.774)   0.617±0.410 0.636±0.390 0.535±0.459       6-9 months 79 69 73 (0.421) (0.937) (0.363)   0.471±0.418 0.534±0.425 0.528±0.463       9-12 months 79 69 73 (0.549) (0.551) (0.989)   0.446±0.441 0.445±0.433 0.490±0.462       Total Adherence 79 69 73 (0.690) (0.413) (0.619)   0.576±0.346 0.604±0.342 0.551±0.420       Table 5 Summary of self-efficacy, measured with Stanford questionnaire on chronic disease for three study arms (continued)   Baseline 12 months 18 months   W WB C W WB C W WB C   N N N N N N N N N Variables Mean±SD Mean±SD Mean±SD Mean±SD Mean±SD Mean±SD Mean±SD Mean±SD Mean±SD         W.Vs. C(P) W.Vs. C(P) W.Vs. W(P) W.Vs. C(P) W.Vs. C(P) W.Vs. W(P) Coping 79 69 73 44 44 40 44 41 36 With 1.069±794 1.126±0.775 1.013±0.946 0.864±0.771 1.239±0.730 1.192±0.835 1.064±0.952 1.388±0.856 1.342±1.127 Symptoms       (0.091) (0.858) (0.057) (0.190) (0.793) (0.286) Confidence 79 69 73 44 44 39 43 41 36 About 7.406±1.719 7.827±1.488 7.858±1.512 7.826±1.551 7.682±1.785 8.064±1.575 7.690±1.920 7.546±1.848 8.015±1.476 Doing Things       (0.433) (0.297) (0.060) (0.235) (0.422) *(0.041) (Higher is better) W: Walking only group; WB: Walking and Behavioural Group; C: Self-directed group (unsupervised/self-directed); N: number of subjects in each comparative group; SD: standard deviation; vs: versus; data is presented as mean (standard deviation); p: p-value (statistical significance); * Statistically significant. \n\nBODY.METHODS.INTERVENTION.SUPERVISED WALKING PROGRAMS (WB & W GROUPS):\nThe SCAWPs took place at two walking club sites in Ottawa, Ontario, Canada and one in Gatineau, Québec, Canada. Participants took part in three weekly walking sessions over a 12-month period. Every walking session began with a 10-minute warm-up, consisting of light aerobic exercises, before engaging in the 45-minute aerobic walking phase. The walking sessions ended with a 10-minute cool-down consisting of light aerobic exercises and stretching. The target intensity of the walking phase was approximately 50% to 70% of the subjects' pre-determined maximum heart rate as recommended in The Ottawa Panel guidelines \n[33-35]. The SCAWP was divided into two stages: 1) A \"progressive aerobic phase\" wherein the duration and heart rate intensity progressively increased over time and 2) a \"maintenance aerobic phase\" wherein the duration and heart rate intensity remained constant for the remainder of the walking program (Additional file \n2). The selected dosage, frequency, intensity, and progression of the walking interventions were based on existing protocols proven effective for OA, described in several RCTs involving individuals with OA \n[33-35] (Additional file \n2). To avoid contamination between the two walking interventions, participants in the W group were instructed to walk in the mornings, while the WB group walked in the afternoons at the main walking site. In order to meet the prescribed target heart rate, heart rate monitors were provided to each participant prior to his/her walking session. As a safety precaution, participants who experienced prolonged pain lasting more than two hours within 24 hours after a walking session were asked to temporarily reduce their level of exercise until he/she felt comfortable enough to resume the appropriate duration, intensity and frequency of the walking program. 
Following randomization, the PA specialist met each subject in the walking groups to explain the SCAWP and its progression (Additional file \n2) and was present for a minimum of three weekly scheduled sessions over one year to supervise the subjects. The PA specialist was responsible for monitoring and recording attendance, blood pressure, heart rate (during and after the walking session), time duration, and number of steps (pedometer) at each walking session. Monetary compensation was given to each participant for attending each walking session.\n\nBODY.METHODS.INTERVENTION.BEHAVIOURAL INTERVENTION (WB GROUP):\nThe behavioural intervention was part of the multifaceted KT intervention (Table \n1) and was implemented using the adapted Program for Arthritis Control through Education and Exercise (PACEex) program \n[40]. The behavioural intervention was integrated into the PACEex program and consisted of the following components: (1) short- and long-term goal setting during PACEex classes; (2) an educational component, delivered by a trained instructor, involving consisting of instructional sessions about the benefits of PA, specifically walking; (3) monthly face-to-face counselling wherein participants received moral support/encouragement to adherence with PA. Potential barriers to program adherence were identified and self-management strategies were reviewed to overcome those barriers; (4) goal setting and telephone counselling were provided as an additional source of social support until the end of the supervised phase. As with the face-to-face counselling, barriers were identified and strategies were shared in an effort to promote the long-term maintenance of PA. In summary, the behavioural intervention consisted of twenty 2-hour group sessions discussing short-term goal setting and education of arthritis-related topics over a duration of 20 consecutive weeks. Individual long-term goal setting was discussed at the beginning of the PACEex program and was followed by monthly face-to-face meetings throughout the first 6 months of the program. The last six months of the 12-month supervised phase consisted of participants receiving counselling via telephone discussing long-term goals and barriers/facilitators to adhere to the walking program.\n\nBODY.METHODS.INTERVENTION.THE SELF-DIRECTED CONTROL (C GROUP):\nParticipants were invited to consult an educational pamphlet on walking and OA. One introductory session was provided to the participant wherein they were provided an educational pamphlet describing the benefits of walking for OA, a pedometer to monitor PA, and a logbook to record activity level and adherence while partaking in the self-directed walking program. Participants in this group received monetary compensation for the completion his/her log books. The self-directed (C) group did not have any contact with participants in the two other groups, avoiding potential contamination.\n\nBODY.METHODS.DATA COLLECTION:\nParticipants were assessed by an independent evaluator at baseline and at each 3 month interval (months 3, 6, 9, 12) during the intervention period. Participants were then assessed at 3 and 6 months post-intervention during the follow-up period (15 and 18-months). 
Participants were asked to complete a collection of validated questionnaires as well as perform physical tests at each assessment.\n\nBODY.RESULTS.KNOWLEDGE (CPG) UPTAKE OUTCOME MEASURES (KT OUTCOMES).ADHERENCE AND BEHAVIOUR CHANGE:\nSimilar to other studies \n[29,41,42], participant adherence was assessed as the number of attended walking sessions divided by the number of prescribed sessions (3 sessions per week). Among the W and WB groups, the number of attended walking sessions throughout the 12-month duration was recorded by the exercise therapist on site as well as through the use of participants' completed logbooks which provided information on the amount of performed PA during the 12-month intervention period. The C group's adherence was only assessed by the completed log books. Behaviour change was based on the concept of self-efficacy and was measured using the Stanford scale. In regards to the research question, behaviour change results were only reported at 12 and 18-months.\n\nBODY.RESULTS.KNOWLEDGE (CPG) UPTAKE OUTCOME MEASURES (KT OUTCOMES).ANALYSIS:\nThe analysis was conducted on an intention-to-treat basis. Descriptive statistics including mean, standard deviation and frequencies were used to summarize the study groups at baseline. A repeated measure mixed model was used to assess the change in adherence from baseline to end of treatment (12 months) among three study groups. Behavioural change outcomes were assessed from baseline to of end treatment (12 months) and at 6-month follow-up post-intervention (18 months). The model included variables such as intervention group, study month, and an interaction term between intervention group and study month. Missing data was assumed to be missing at random (MAR) in order to include incomplete data. Pairwise differences comparing each group to one another (W vs. C, W vs. WB, WB vs. C) were assessed. The repeated measure mixed model was used to compare the change of adherence from baseline to the 18- month follow-up. Statistical analyses were performed using SAS (version 9.2, SAS Institute Inc., Cary, North Carolina), and statistical significance was defined as p<0.05.\n\nBODY.RESULTS:\nBetween January 2007 and December 2008, 629 study participants were assessed for eligibility. Figure \n2 exhibits the flow of participants from recruitment to follow-up. The most common reasons for exclusion were: 1) time commitment (26.5%); 2) first diagnosed with OA by physician or X-Ray but later contradicted by MRI test result (18.1%); 3) loss of interest (14.2%); 4) severe OA of the weight bearing joint of LE (11.8%); 5) no medical clearance after VO2 max test (5.1%); 6) unable to contact (4.9%); 7) too active (2,9%); 8) knee replacement (2.7%); 9) age related (2.5%); 10) travelling too much (1.9%); 11) living too far (1.2%). Follow-up for the last recruited participant was completed in May 2010. Figure 2Study flow diagram. This figure provides a flow diagram of data collection from baseline to 18-months follow-up. The groups were similar in age, sex, ethnicity, OA duration and medication use (Table \n2 &3). Two hundred and twenty two subjects were included in the study. Women made up 153 of the 222 subjects. The mean age (±S.D.) was 63.4 (±8.6) years, the mean disease duration (±S.D.) was 10.3 (±9.3) years, the highest level of completed education was \"College graduate\" with 65 participants (±29.3) and \"Professional or graduate school\" with 51 participants (±23.0). 
There was no statistical difference between the three comparative groups at baseline for all the demographic variables.\n\nBODY.RESULTS.BLINDING:\nThe effectiveness of blinding the evaluator to treatment allocation was assessed through a questionnaire where she was asked to identify which group each participant belonged to. The blinding rate of the evaluator was high (98%).\n\nBODY.RESULTS.ADHERENCE AND ATTRITION:\nFrom baseline to 12 months, the adherence rates were expressed as percentages of adherence among walkers who did not drop out and were reported as means (+/−standard deviation) for each time period between groups (Table \n4). Statistically significant results for short-term adherence rates favoured participants in the WB group (80.2%) compared to the self-directed (C) group (65.2%) (p<0.012) after 3 months. For long term adherence (6 to 12 months), the WB group demonstrated superior adherence rates throughout each period compared to the other two groups, but results did not reach statistical significance. No statistical significance (p> 0.05) was observed for the total adherence rates between WB vs. W, W vs. C, and WB vs. C. During the 12-month intervention period, the dropout rates were 43.1% for the W group, 40.6% for the WB group and 49.3% for the self-directed (C) group. From baseline to 18 months, the dropout rates were 44.3% for the W group; 40.6% for the WB group and 52.1% for the self-directed (C) group (Figure \n2 and Additional files \n3 &4). The dropout rates were lower for the WB group at 12-months (36.2%) and 18-months (39.1%) compared to the W group (40.5% and 44.3%) and the self-directed (C) group (40.5% and 51.3%). The W and self-directed (C) groups had the highest dropout rates (40.5%) at 12 months while the self-directed (C) group had the highest at 18 months (52.1%).\n\nBODY.RESULTS.BEHAVIOUR CHANGE:\nAfter the 12 month intervention phase, no differences were found between the 3 comparative groups for any of the items on the Stanford scale (Table \n5). Moreover, variables related to coping with symptoms and confidence were higher among the WB group compared to the other two groups. At 18-months, statistically significant differences in scores between the WB and W groups revealed that the W group demonstrated an improved mean for the \"confidence about doing things\" variable (p=0.041). The self-directed group (C) demonstrated the highest mean \"confidence about doing things\" score 8.015± 1.476 (p=0.040) as it increased by 0.157 (p=0.048) when compared to baseline. The self-directed group (C) demonstrated the highest mean score 8.015± 1.476 (p=0.040).\n\nBODY.DISCUSSION.THE IMPACT OF THE MULTIFACETED KT INTERVENTIONS ON KNOWLEDGE (CPG) UPTAKE.ADHERENCE AND ATTRITION:\nThe additional behavioural approach of the multifaceted KT intervention (WB group) improved adherence to the SCAWP when compared to the other two groups over a short-term period. In addition, the WB group demonstrated reduced dropout rates compared to the W and self-directed (C) groups. Over a long-term period, the three groups were equivalent in improving behaviour change while adopting the Ottawa Panel recommendations on walking \n[33-35]. Adherence rates were higher among the WB group for each 3-month period compared to the other two groups. Improved adherence rates over the short-term period can be explained by the fact the PACEex program used in the behavioural intervention lasted for 20 consecutive weeks at the beginning of the study. 
The PACEex program \n[40] used in this study, allowed our participants to set general goals, which may have motivated participants to adhere to the walking program. Similar to a previous study \n[43], another reason may be that participants in the W and WB group lost motivation after the supervised phase, no longer having regular professional support from the PA specialist. The drop-out rate results are comparable to other long-term studies which have used an aerobic PA intervention with a behavioural intervention component \n[38,44-54]. Higher adherence rates were demonstrated at 2 to 4 months (85% - 90%) and at 10 to 18 months (50% - 90%). An initial 6-month center-based exercise program enhanced retention over short and the long-term periods while promoting short-term adherence and energy expenditure \n[55]. The moderate exercise intervention retained more subjects, but had little influence on adherence over a long-term period \n[56]. Home-based PA programs, such as self-directed walking, can achieve improved adherence rates compared to facility-based programs over a long period \n[55].\n\nBODY.DISCUSSION.BEHAVIOUR CHANGE:\nThere were no statistically significant results among the three groups at 12 months. These results concur with a similar study \n[53] and are normal in the context of a KT study which implemented an already proven effective walking program \n[33-35]. However, at 18-months, there were statistically significant results which favoured the self-directed group (C). One reason explaining why participants in the self-directed group (C) had higher confidence and self-efficacy scores may be due to the fact that since these individuals were not supervised, they may have been more independent, self-controlled, self-confident, and self-motivated. Participants in this group may have also had the opportunity, over the 12-month period, to develop confidence and self-efficacy skills compared to the two supervised groups during the unsupervised follow-up phase. The variable \"confidence about doing things\" decreased after 12 months for the WB group. Given the high amount of PA demonstrated by the WB group at 12 months, we were surprised with this result. On the other hand, at 18 months, statistically significant results favoured the W group compared to WB group for the variables \"confidence about doing things\". Again, these results about confidence were unexpected given that the WB group performed higher amounts of PA. Exercise therapists, pedometers, logbooks and social support from walking club members (Table \n1) assisted participants who had difficulties with activity adherence and maintenance \n[57]. Regular professional contact was recognised as the significant motivational factor for adherence to the supported walking program during the RCT \n[43]. Social support has been reported to be more successful in engaging participants in PA programs compared to programs which only provided written educational material \n[58,59]. The combination of using pedometers and goal-setting has been recognised as an effective KT tool for increasing PA \n[31,60-62]. However, despite the short-term effectiveness of pedometers and goal-setting, the success of these tools diminished over time. Other KT strategies are recommended to facilitate sustainability \n[62].\n\nBODY.DISCUSSION.LIMITATIONS:\nBarriers related to the implementation of SCAWP were identified and addressed to improve adherence for older individuals \n[57]. 
Acceptability (preferences, tolerance and accessibility) should be considered to promote PA adherence and to overcome associated barriers \n[63]. The most common barriers identified by participants with OA are perceptions about illness and recovery, transportation difficulties, family commitments and inconvenient timing \n[64]. Time commitment is also recognised as a significant barrier \n[55]. Additional barriers such as weather conditions including snow storms, freezing rain and humidity, a 3-month bus strike, and health problems of participants' partners may have contributed to study adherence. Adherence is influenced by participant's preference of intervention \n[65-70]. Unfortunately, with this study being a RCT, taking participants' preference into consideration was not possible due to the randomization procedure. This being said, randomisation may have contributed to higher dropout rates. The purpose of this KT study was to implement a scientifically proven effective walking program for OA \n[33-35]. Short term research suggests that self-management interventions \n[16,18], telephone counselling \n[22,46,71-73], peer support \n[13,74-76] and PA education \n[18,22,27,41,77] are effective behavioural strategies for improving self-efficacy. The long-term multifaceted behavioural strategy used in this study did not concur with the previous studies, as participants in the self-directed group (C group) were provided information and education on the benefits of PA. Methods of measurement for adherence for the W and WB groups were different than the self-directed (C) group. Observed attendance was recorded on site for each walking session for the W and WB groups by exercise therapists while adherence could only be measured in the self-directed group (C) using completed logbooks. The self-reporting of PA, which is commonly overstated, may have been subject to potential information bias. Selection bias may have occurred as our study population included individuals with mild to moderate OA, with a confirmed X-ray report. In addition, some participants were retired while some were still employed. Since participants were provided monetary compensation for their participation in the study, one may argue the generalizability of the results.\n\nBODY.DISCUSSION.IMPLICATIONS:\nThis KT RCT aimed to identify the best multifaceted strategy to implement a proven effective walking program for OA of the knee among older individuals. The epidemic public health problem of OA can be managed through the implementation of a proven effective walking program in existing community-based walking clubs as well as at home with minimal support. More studies are needed to improve the long-term walking adherence or knowledge uptake on SCAWP among participants with OA. Particular attention should focus on patients' characteristics and preferences.\n\nBODY.CONCLUSIONS:\nThis study partially supported our initial hypothesis regarding the impact of a multifaceted KT intervention as the combined walking and behavioural approach had greater benefits over the comparative groups resulting in improved long-term adherence to PA only over a short period (3-months). The WB group demonstrated to be the best KT strategy for reducing dropouts compared to the W group and self-directed (C) group. All three KT strategies were equivalent over the long-term period (up to 18 months) for improving behavioural change. However, the self-directed walking program (C group) was the least expensive to implement over a long-period (18 months). 
This RCT was a long-term adherence study as well as a KT study which addressed questions of clinical and scientific importance aimed at improving the understanding of effective KT strategies to promote the adoption and maintenance of a community-based walking program for OA.\n\nBODY.ABBREVIATIONS:\nOA: Osteoarthritis; QoL: Quality of Life; SCAWP: Supervised Community-based Aerobic Walking Program; CPG: Clinical Practice Guideline; KT: Knowledge Translation; KTAC: Knowledge-To-Action Cycle; EBCPG: Evidence-Based Clinical Practice Guidelines; CONSORT: Consolidated Standards of Reporting Trials; WB (group): Walking and Behavioural intervention; W (group): Walking intervention; C (group): Self-directed control intervention; RCT: Randomized Controlled Trial; PA: Physical Activity; PACE-ex: Program for Arthritis Control through Education and Exercise; AIMS2: Arthritis Impact Measurement Scale 2; SF-36: Short-Form 36 Health Survey; WOMAC: Western Ontario and McMaster Universities Osteoarthritis Index; MET: Multiples of basal metabolic rate; MAR: Missing At Random; CIHR: Canadian Institutes of Health Research.\n\nBODY.COMPETING INTERESTS:\nThe authors declare that they have no competing interests.\n\nBODY.AUTHORS’ CONTRIBUTIONS:\nLB is a Full Professor of Rehabilitation, an epidemiologist and the principal investigator of this study and primary author of this manuscript. GAW the co-principal investigator of this study, is senior biostatistician and director, Cardiovascular Research Methods Centre at the University of Ottawa Heart Institute, and is a leading expert in the design and analysis of clinical trials. He provided assistance with the methodology and statistical analysis of the study. GPK is a Full Professor of Physiology at the University of Ottawa, and director of the university's Human Performance and Environmental Medicine Research Laboratory and Professional Fitness and Lifestyle Consultant Certification Training program. He assisted with the methodology of the study. RR is a senior researcher at the University of Ottawa Heart Institute and provided experience in applying innovative behavioural approaches aimed at increasing PA in healthy and chronic diseased populations. AM is a health economist and assisted with the economic evaluation concept in the original proposal. PT is a rheumatologist, an epidemiologist and chief of the Cochrane Musculoskeletal Group. He has experience in conducting RCTs and meta-analyses. He provided assistance with OA outcome measures in the study. MH & CM developed the PACEex program and was in charge of training the PA specialist. GDA was the research coordinator of this study and assisted with the writing of this manuscript. LC is a biostatistician and performed the analyses of this study. All authors read and approved the final manuscript.\n\nBODY.PRE-PUBLICATION HISTORY:\nThe pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2458/12/871/prepub\n\nBODY.SUPPLEMENTARY MATERIAL:\nAdditional file 1The knowledge –to-action framework and KT interventions. This table demonstrates how each step of the KTAC framework is applied to the RCT.Click here for file Additional file 2Post-Randomization Walking Interventions. The supervised phase of the walking intervention lasted 52 weeks following randomization. The unsupervised (follow-up) phase of the walking intervention lasted 26 additional weeks. 
For the intervention (one year): 52 weeks × 3 sessions = 156 sessions attended; follow-up (six months): 26 weeks × 3 sessions = 78 sessions attended. N.B. This structured and supervised walking program is based on Ottawa Panel Grade A recommendations \n[33-35] and was implemented in groups W and WB (Implementation groups). Individuals with OA in the self-directed group (C) received a pamphlet on OA and walking (which recommends regular walking in an unsupervised/self-directed way) (Dissemination group) \n[25]. Additional file 3: Reasons of dropouts. This table presents the reasons why participants decided to withdraw from the study. Additional file 4: Dropout rates and corresponding retention rates at the 12- and 18-month time periods. This table presents the dropout rates and retention rates at the end of the intervention (12 months) and at follow-up (18 months).\n\n**Question:** Compared to Self-directed control (C) what was the result of Walking and Behavioural intervention (WB) on Short-term adherence to the program at 3 months?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ]
significantly increased
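The record above pairs a free-text assistant completion ("**Answer:** significantly increased") with the categorical Label "significantly increased", while the prompt's instruction lists the options in a different tense ("significantly increase" / "no significant difference" / "significantly decrease"). Below is a minimal Python sketch of how a record in this id / prompt / completion / Label shape might be parsed and scored against the three label classes; the record literal is truncated, and the helper names and normalisation rules are hypothetical illustrations, not part of the dataset.

```python
# Hypothetical sketch: parse one record of this dataset and compare the
# assistant's free-text answer against the stored Label. Field names follow
# the columns of the dump (prompt, completion, Label); everything else is an
# assumption for illustration only.
import re

# Truncated stand-in for the record that ends just above this block.
record = {
    "prompt": [{"role": "user", "content": "... Please output \"significantly increase\", "
                                           "\"no significant difference\" or \"significantly decrease\" ..."}],
    "completion": [{"role": "assistant", "content": "**Answer:** significantly increased\n"}],
    "Label": "significantly increased",
}

def extract_answer(completion_messages):
    """Strip the '**Answer:**' prefix from the single assistant message."""
    text = completion_messages[0]["content"]
    return re.sub(r"^\s*\*\*Answer:\*\*\s*", "", text).strip()

def normalise(text):
    """Map tense/wording variants onto the three Label classes."""
    t = text.strip().lower()
    if "no significant" in t:
        return "no significant difference"
    if "increase" in t:
        return "significantly increased"
    if "decrease" in t:
        return "significantly decreased"
    return t

if __name__ == "__main__":
    predicted = normalise(extract_answer(record["completion"]))
    gold = normalise(record["Label"])
    print(predicted, gold, predicted == gold)  # significantly increased significantly increased True
```

A normalisation step of this kind matters here because the option strings in the prompt and the stored labels differ in tense, so exact string matching against the prompt options would mark a correct answer as wrong.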
971
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: The implementation of a community-based aerobic walking program for mild to moderate knee osteoarthritis (OA): a knowledge translation (KT) randomized controlled trial (RCT): Part I: The Uptake of the Ottawa Panel clinical practice guidelines (CPGs)\n\n ABSTRACT.BACKGROUND:\nThe implementation of evidence based clinical practice guidelines on self-management interventions to patients with chronic diseases is a complex process. A multifaceted strategy may offer an effective knowledge translation (KT) intervention to promote knowledge uptake and improve adherence in an effective walking program based on the Ottawa Panel Evidence Based Clinical Practice Guidelines among individuals with moderate osteoarthritis (OA).\n\nABSTRACT.METHODS:\nA single-blind, randomized control trial was conducted. Patients with mild to moderate (OA) of the knee (n=222) were randomized to one of three KT groups: 1) Walking and Behavioural intervention (WB) (18 males, 57 females) which included the supervised community-based aerobic walking program combined with a behavioural intervention and an educational pamphlet on the benefits of walking for OA; 2) Walking intervention (W) (24 males, 57 females) wherein participants only received the supervised community-based aerobic walking program intervention and the educational pamphlet; 3) Self-directed control (C) (32 males, 52 females) wherein participants only received the educational pamphlet. One-way analyses of variance were used to test for differences in quality of life, adherence, confidence, and clinical outcomes among the study groups at each 3 month assessment during the 12-month intervention period and 6-month follow-up period.\n\nABSTRACT.RESULTS:\nShort-term program adherence was greater in WB compared to C (p<0.012) after 3 months. No statistical significance (p> 0.05) was observed for long-term adherence (6 to 12 months), and total adherence between the three groups. The three knowledge translation strategies demonstrated equivalent long-term results for the implementation of a walking program for older individuals with moderate OA. Lower dropout rates as well as higher retention rates were observed for WB at 12 and 18 months.\n\nABSTRACT.CONCLUSION:\nThe additional knowledge translation behavioural component facilitated the implementation of clinical practice guidelines on walking over a short-term period. More studies are needed to improve the long-term walking adherence or longer guidelines uptake on walking among participants with OA. Particular attention should be taken into account related to patient's characteristic and preference. 
OA can be managed through the implementation of a walking program based on clinical practice guidelines in existing community-based walking clubs as well as at home with the minimal support of an exercise therapist or a trained volunteer.\n\nABSTRACT.TRIAL REGISTRATION:\nCurrent Controlled Trials IRSCTNO9193542\n\nBODY.BACKGROUND:\nRising healthcare costs, limited resources, and the aging population have created new and growing challenges in the management of osteoarthritis (OA). The challenges associated with the management of OA are numerous as the prevalence of the population continues to increase while healthcare resources remain limited \n[1-7]. It is therefore necessary to determine the most effective methods of integrating research evidence in order to optimize health outcomes. The purpose of this randomized controlled trial (RCT) was to compare 1) improvements in quality of life (QoL) and clinical outcomes such as pain, mobility and endurance); 2) adherence rates; and 3) confidence and self-efficacy after the implementation of a 12-month supervised community-based aerobic walking program (SCAWP) based on the Ottawa Panel clinical practice guidelines (CPG) among three knowledge translation (KT) intervention arms. QoL, confidence, and self-efficacy were compared at 12- months (end of treatment) and at 18-months (6-months post-intervention). Adherence was compared during the intervention period at 3, 6, 9 and 12 months. The first part of this manuscript introduces each KT intervention and demonstrates the impact of knowledge (CPG) uptake of each by comparing outcomes influenced by the KT intervention such as adherene and behaviour change (confidence and self-efficacy). Knowledge application of each KT intervention is explained through the use a theoretical framework: Knowledge-To-Action Cycle (KTAC) \n[8,9]. The second part of this manuscript focuses on outcome evaluation, a specific phase in the KTAC framework used to guide this study. Part II demonstrates the effect of each intervention involving a walking program on outcomes which were influenced by the SCAWP including QoL (primary outcome) and other clinical outcomes such as pain, mobility, and endurance. These outcomes are exhibited in part II. Therefore, both parts of this manuscript were split according to the ''Evaluate outcomes\" phase of the KTAC framework. Part I focused on KT outcomes which measured the success of CPG uptake through participants' adherence and behavioural change while part II focused on clinical outcomes which measured the positive effect of SCAWP due to the indirect CPG uptake/implementation/adoption by individuals with OA of the knee.\n\nBODY.BACKGROUND.KNOWLEDGE TRANSLATION:\nA major issue in health research today is finding effective and efficient ways to exchange knowledge between researchers, clinicians and the general public. Potential benefits to health which could be accrued through the application of research findings are not realized due to challenges in research uptake and knowledge translation \n[10]. KT is defined by Estabrooks et al. 
\n[11] as \"the process by which specific research-based knowledge (science) is implemented in practice.\" The available evidence suggests that with an effective KT strategy, uptake of evidence-based clinical practice guidelines (EBCPGs) can be effective in improving patient health outcomes \n[12].\n\nBODY.BACKGROUND.PHYSICAL ACTIVITY AND OSTEOARTHRITIS:\nThe promotion of physical activity (PA) is a priority for health organizations serving the general population \n[13,14] and is highly recommended for subgroups affected by chronic diseases \n[15-19]. Community-based PA combined with a behavioural modification and self-management interventions can reduce the risk of disability and negative consequences of inactivity related to OA \n[20,21]. The challenge is to develop PA programs that will encourage OA patients to initiate and maintain improvements in exercise behaviour over a long-term period \n[22]. The available evidence suggests that with an effective KT strategy, uptake of evidence-based clinical practice guidelines (CPGs) can be effective in improving patient health outcomes \n[12].\n\nBODY.BACKGROUND.SELECTING AN EFFECTIVE KT STRATEGY:\nMultifaceted interventions combining more than one KT strategy have tended to be more effective than using a single KT strategy alone \n[23,24] and have shown to have the greatest impact on CPG adherence and PA performance \n[25]. Behavioural interventions have been used for other chronic conditions to improve long-term adherence and maintenance of PA regimens with varying success. The efficacy of different behavioural interventions including patient education, health counselling, goal setting, and social/peer support, delivered separately and or in combination, have been examined in the management of arthritis \n[26-31].\n\nBODY.METHODS:\nThe following methodology is in full agreement with the Consolidated Standards of Reporting Trials (CONSORT) 2010 Statement criteria for reporting RCTs \n[32].\n\nBODY.METHODS.DESIGN:\nThis single blind RCT, funded by the Canadian institutes of Health Research, used a parallel group design (1:1:1). This community-based study was approved by the University of Ottawa Research Ethics Board and the City of Ottawa Public Health Research Ethics Board. This study implemented one of the Ottawa Panel CPG recommendations related to a supervised community-based aerobic walking programs (SCAWPs) among individuals with mild to moderate OA of the knee \n[33-35]. In order to facilitate an understanding of the KT process, Graham et al. developed a conceptual framework entitled \"the Knowledge-To-Action Cycle\" (Figure \n1) which provides an approach which combines commonalities from various planned-action theories \n[8,9]. This framework was used to guide the development of the three KT strategies used in this RCT (Additional file \n1). The framework demonstrates the dynamic process of knowledge creation and application \n[9]. Knowledge creation consists of three phases: knowledge inquiry, knowledge synthesis, and knowledge tools/products. The knowledge creation cycle of the framework demonstrates how knowledge is refined through each phase to provide useful information for end users. The action cycle of the framework consists of seven phases which may occur sequentially or simultaneously: 1) Identify the Knowledge-To-Action Cycle gaps; 2) Adapt Knowledge to local context; 3) Assess barriers to Knowledge Use; 4) Select, Tailor, Implement Interventions; 5) Monitor Knowledge Use; 6) Evaluate outcomes; and 7) Sustained Knowledge Use. 
For part I, the emphasis was placed on the \"Evaluate outcomes (KT outcomes)\" phase as participants' knowledge uptake was measured as adherence to the effective walking program for OA and attitude/behaviour change (self-efficacy and confidence) after participating in the SCAWP. Figure 1Knowledge-To-Action Cycle. This figure illustrates the Knowledge-To-Action Cycle Framework \n[8]. This figure is used with permission: Lost in knowledge translation: Time for a map. Graham ID, et al. Journal of Continuing Education in the Health Professions, 26(1). Copyright © 2006. The Alliance for Continuing Medical Education, The Society for Academic Continuing Medical Education, and The Association for Hospital Medical Education. The hypothesis was that the additional behavioural approach of the multifaceted KT intervention would: 1) increase short and long term adherence to the SCAWP; 2) reduce the drop-out rate; 3) influence behaviour change by improving self-efficacy, confidence, QoL and other clinical outcomes.\n\nBODY.METHODS.SAMPLE AND RECRUITMENT:\nA total of 222 adults with knee OA were recruited. Participants were eligible to participate in the study if he/she 1) had a confirmed diagnosis with mild to moderate unilateral or bilateral OA according to the American College of Rheumatology clinical and radiographic/magnetic resonance imagery criteria; 2) reported pain for at least 3 months; 3) expected his/her medication to change during the study period; 4) demonstrated an ability to ambulate for a minimum of 20 minutes, at their own pace with minimal reports of pain (≤ 3 out of 10 on a visual analogue pain rating scale) \n[36]; 4) were able to be treated as an out-patient; and 5) were available three times a week over a period of 12 months. Potential study subjects were excluded if they had: 1) Participated in regular physical or aerobic sports ≥ 2 times per week for more than 20 minutes per session during the previous 6 months; 2) severe OA of the knee or other weight bearing joints of the lower extremity; 3) no written consent from their physician to participate in the study; 4) pain at rest or at night; 5) received rehabilitation treatment, corticosteroids injection, or any other pain-related treatment besides medication for arthritis within the last 12 months; 6) uncontrolled hypertension (Systolic blood pressure >160 mm Hg confirmed by the screening initial VO2 max test at the Ottawa Heart Institute) 7) other illnesses, such as rheumatoid arthritis (judged by the patient or study physician to make participation in this study inadvisable); 8) significant cognitive deficit resulting in an inability to understand or comply with instructions; 9) surgery planned in the next year; 10) Intention to move away from Ottawa region in the next year; 11) an inability to communicate in English or French; or 12) an unwillingness to sign informed consent. The study coordinator assessed the inclusion/exclusion criteria of all potential participants by telephone. 
At the first visit, written informed consent was obtained if the patient met all inclusion criteria.\n\nBODY.METHODS.INTERVENTION:\nParticipants were randomly assigned to one of the three KT intervention groups using central randomization \n[37] and computer generated numbers \n[38]: 1) Walking and Behavioural intervention (WB) (18 males, 57 females) which included the SCAWP with a behavioural intervention and an educational pamphlet on the benefits of walking for OA; 2) Walking intervention (W) (24 males, 57 females) wherein participants only received the SCAWP intervention and the educational pamphlet; 3) Self-directed control (C) (32 males, 52 females) wherein participants only received the educational pamphlet. All 3 groups were provided with pedometers and log books to be completed to measure physical performance (walking in minutes) and additional PA aside from the walking sessions (Table \n1). The KT strategies were implemented over a 12-month duration and participants were assessed for additional 6 months (15 and 18-month follow-up assessments). Table 1 Summary of the KT strategies used KT strategies in each group WB or WalkingClub+ (supervised walking program + behavioural approach at the community-based Walking Club) W or WalkingClub (supervised walking program only at the community-based Walking Club) C or Self-directed Home-based or community based different than W and WB (unsupervised/self-directed Walking program only) EBP walking program (Ottawa Panel, 2005/2012) refs: \n[ 33 - 35 ] EBP walking program (Ottawa Panel, 2005/2012) refs: \n[ 33 - 35 ] Walking program (general info from pamphlet on walking and OA) Walking Club + effect (supervised program, team, daily monitored vital measures: BP, HR, #steps,) Walking Club effect (supervised program, team, daily monitored vital measures: BP, HR, #steps,) N/A Pedometer as measurement tool, but becomes a KT strategy Pedometer as measurement tool, but becomes a KT strategy Pedometer as measurement tool, but becomes a KT strategy Log book including 7-day PAR as measurement tool, but becomes a KT strategy Log book including 7-day PAR as measurement tool, but becomes a KT strategy Log book including 7-day PAR as measurement tool, but becomes a KT strategy $ compensation each walking session and at evaluation session $ compensation each walking session and at evaluation session $ compensation each logbook fulfilled and at evaluation session Study affiliation/participation (feel committed) Study affiliation/participation (feel committed) Study affiliation/participation (feel committed) Behavioural intervention (Baycrest PACE-ex: patient education + goal settings; PA counselling, telephone support) N/A N/A Following the successful randomization process, wherein no statistical difference was found between the three groups (with the exception of the \"physical role\" category of the SF-36: part II) (Tables \n2, \n3, \n4, \n5), research staff and evaluators were blinded to the treatment allocation. An independent evaluator was blinded to assess outcome measures. Due to the nature of the physical intervention, it was not practical to blind the study participants and PA specialists supervising the aerobic walking program \n[39]. 
Table 2 Subject demographics and baseline characteristics Characteristic Walking (n=79) Walking and Behavioural (n=69) Self-directed Control (n=74) Total Mean age (SD), yrs 63.9 (10.3) 63.9 (8.2) 62.3(6.8) 63.4 (8.6) Missing Data 0 0 0 0 Men/women, (%) 24 (30.4)/55 (69.9) 18 (26.1)/51 (73.9) 27 (36.5)/47 (63.5) 69 (31.1)/153 (68.9) Missing Data 0 0 0 0 Affected knee, n (%)         Right 33 (41.8) 31 (44.9) 25 (33.8) 89 (40.1) Left 31(39.2) 23 (33.3) 28 (37.8) 82 (36.9) Both side 15 (19.0) 15 (28.4) 21 (28.4) 51 (23.0) Missing Data 0 0 0 0 Mean duration of OA (SD), yrs 9.54 (8.09) 11.3 (9.7) 10.0 (9.9) 10.3 (9.26) Missing Data 0 0 0 0 Mean weight (SD), kg 80.7 (18.5) 83.1 (15.4) 83.0(15.8) 82.2 (16.6) Missing Data 0 1 0 0 Mean BMI (SD), kg/m² 29.4 (5.4) 30.3 (5.6) 29.9(5.3) 29.8 (5.4) Missing Data 0 2 4 6 Walking aid, n (%)         Yes 10 (12.7) 10 (14.5) 9 (12.2) 29 (13.1) No 69 (87.3) 58 (84.1) 64 (86.5) 191 (86.0) Missing Data 0 1 1 2 Racial background, n (%)         White 69 (87.3) 60 (87.0) 68 (91.9) 197 (88.7) Black 1 (1.3) 3 (4.3) 1 (1.4) 5 (2.3) Hispanic 2 (2.5) 2 (2.9) 4 (5.4) 8 (3.6) Asian or Pacific Islander 5 (6.3) 4 (5.8) 1 (1.4) 10 (4.5) American Indian or Alaskan native 1 (1.3) 0 (0) 0 (0) 1 (0.5) Other 1 (1.3) 0 (0) 0 (0) 1 (0.5) Missing Data 0 0 0 0 Marital status, n (%)         Married 46 (58.2) 36 (52.2) 44 (59.5) 126 (56.8) Separated 2 (2.5) 1 (1.4) 1 (1.4) 4 (1.8) Divorced 9 (11.4) 17 (24.6) 8 (10.8) 34 (15.3) Widowed 17 (21.5) 11 (15.9) 9 (12.2) 37 (16.7) Never Married 5 (6.3) 4 (5.8) 12 (16.2) 21 (9.5) Missing Data 0 0 0 0 Level of education, n (%)         Less than 7 yrs of school 2 (2.5) 1 (1.4) 1 (1.4) 4 (1.8) Grades 7 through 9 5 (6.3) 0 (0) 0 (0) 5 (2.3) Grades 10 through 11 7 (8.9) 4 (5.8) 5 (6.8) 16 (7.2) High school graduate 13 (16.5) 16 (23.2) 8 (10.8) 37 (16.7) 1 to 4 yrs of college 13 (16.5) 9 (13.0) 22 (29.7) 44 (19.8) College graduate 25 (31.6) 21 (30.4) 19 (25.7) 65 (29.3) Professional or Graduate school 14 (17.7) 18 (26.1) 19 (25.7) 51 (23.0) Missing Data 0 0 0 0 Yrs: years, OA: osteoarthritis, kg: kilograms, m: meters, %: validity percent. 
Table 3 Subject’s Medication at Baseline Medication Walking (W) only (n=79) Walking and Behavioural (WB) (n=69) Self-directed (C) (n=74) Oral hypoglycaemic agents         Total 2 (3) 5 (7) 9 (12)   Biguanide Class 2 (3) 4 (6) 9 (12)   Sulfonylureas Class 0 (0) 0 (0) 2 (3)   Thiazolidinedione 0 (0) 2 (3) 2 (3) Antihypertensive Agents         Total 22 (28) 15 (22) 15 (22)   Calcium channel blockers 6(8) 0(0) 7(9)   Antiotensin-converting enzyme 3 (4) 7 (10) 7 (9)   Angiotensin II receptor blockers 6 (8) 6 (9) 3 (4)   Beta-blocker 4 (5) 3 (4) 4 (5)   Alpha-blocker 3 (4) 0 (0) 0 (0)   Diuretic 5 (6) 8 (12) 5 (7) Antiarrhythmic 1 (1) 0 (0) 0 (0) Anti-anginal 1 (1) 0 (0) 0 (0) Anti-platelet agent 1 (1) 1 (1) 1 (1) Anticoagulant 0 (0) 2 (3) 0 (0) Phosphodiesterase type 5 inhibitors 0 (0) 1 (1) 0 (0) Statins 8 (10) 12 (17) 10 (14) Hormone         Total 11 (14) 10 (14) 11 (15)   Thyroid 5 (6) 5 (7) 8 (11)   Insulin 0 (0) 3 (4) 1 (1)   Corticosteroid 4 (5) 2 (3) 3 (4)   Progesterone 1 (1) 0 (0) 1 (1)   Androgen 1 (1) 0 (0) 0 (0)   Oestrogen 1 (1) 0 (0) 0 (0) Antithyroid 0 (0) 1 (1) 0(0) SERMs (Selective estrogen receptor modifiers) 1 (1) 0 (0) 0 (0) Beta 2-adrenergic receptor agonist (bronchodilator) 2(3) 1 (1) 3 (4) NSAIDs (Non-steroid anti-inflammatory drugs) 26 (33) 21 (30) 18 (24) COX-2 selective inhibitor 3 (4) 4 (6) 1 (1) NSAIDS       Betahistine 0 (0) 1 (1) 0 (0) DMARDs (Disease modifying antirheumatic drugs) 0 (0) 1 (1) 0 (0) Histamine antagonist 0 (0) 2 (3) 0 (0) Bisphosphonate 6 (8) 4 (6) 4 (5) Antibiotics 2 (3) 1 (1) 4 (5) Analgesics 1 (1) 1 (1) 3 (4) Opiates 0 (0) 0 (0) 2 (3) Hypnotic 4 (5) 3 (4) 4 (5) Antidepressant 7 (9) 5 (7) 7 (9) Psychostimulant 0 (0) 1 (1) 0 (0) Antipsychotic 0 (0) 1 (1) 0 (0) Anti-manic 0 (0) 1 (1) 0 (0) Anti-convulsant 1 (1) 1 (1) 0 (0) Protein pump inhibitor (PPI) 9 (11) 5 (6) 6 (8) Enzyme inhibitor 0 (0) 1 (1) 2 (3) Muscle relaxant 0 (0) 1 (1) 1 (1) Antispasmodic 0 (0) 1 (1) 0 (0) Anti-cholinergic 1 (1) 1 (1) 0 (0) Antineoplastic 2 (3) 1 (1) 0 (0) Antimetabolite 1 (1) 1 (1) 1 (1) Antifolate 0 (0) 1 (1) 1 (1) Latanoprost 0 (0) 1 (1) 0 (0) Viscosupplementation 0 (0) 1 (1) 0 (0) Xanthine 1 (1) 0 (0) 0 (0) Antifungals 0 (0) 0 (0) 1 (1) Antimalarial 0 (0) 1 (1) 1 (1) Nitrate 0 (0) 0 (0) 2 (3) Laxative 0 (0) 0 (0) 2 (3) Supplements 22 (28) 22 (28) 18 (24) Homeopathic Medication 1 (1) 1 (1) 1 (1) None 11 (14) 10 (13) 9 (12) Not Specific 21 (27) 21 (27) 20 (27) Missing Data         Total 4 (5) 8 (10) 17 (23)   No Evaluation* 2 (3) 4 (5) 7 (9)   No File** 2 (3) 4 (6) 10 (14) This table presents a list of medications which participants declared to be taking at baseline. Data is presented as n (%). Table 4 Summary of adherence based on attendance marked in trainers’ manuals and individual walkers’ logbooks   Walking N Mean±SD Walking & Behavioural N Mean±SD Self-directed (C) N Mean±SD Walking vs. Self-directed t -test P value Walking & Behavioural Vs. Self-directed t -test P value Walking & Behavioural Vs. 
Walking t -test P value 0-3 months 79 69 73 *(0.043) *(0.012) (0.514)   0.770±0.299 0.802±0.290 0.652±0.403       3-6 months 79 69 73 (0.242) (0.159) (0.774)   0.617±0.410 0.636±0.390 0.535±0.459       6-9 months 79 69 73 (0.421) (0.937) (0.363)   0.471±0.418 0.534±0.425 0.528±0.463       9-12 months 79 69 73 (0.549) (0.551) (0.989)   0.446±0.441 0.445±0.433 0.490±0.462       Total Adherence 79 69 73 (0.690) (0.413) (0.619)   0.576±0.346 0.604±0.342 0.551±0.420       Table 5 Summary of self-efficacy, measured with Stanford questionnaire on chronic disease for three study arms (continued)   Baseline 12 months 18 months   W WB C W WB C W WB C   N N N N N N N N N Variables Mean±SD Mean±SD Mean±SD Mean±SD Mean±SD Mean±SD Mean±SD Mean±SD Mean±SD         W.Vs. C(P) W.Vs. C(P) W.Vs. W(P) W.Vs. C(P) W.Vs. C(P) W.Vs. W(P) Coping 79 69 73 44 44 40 44 41 36 With 1.069±794 1.126±0.775 1.013±0.946 0.864±0.771 1.239±0.730 1.192±0.835 1.064±0.952 1.388±0.856 1.342±1.127 Symptoms       (0.091) (0.858) (0.057) (0.190) (0.793) (0.286) Confidence 79 69 73 44 44 39 43 41 36 About 7.406±1.719 7.827±1.488 7.858±1.512 7.826±1.551 7.682±1.785 8.064±1.575 7.690±1.920 7.546±1.848 8.015±1.476 Doing Things       (0.433) (0.297) (0.060) (0.235) (0.422) *(0.041) (Higher is better) W: Walking only group; WB: Walking and Behavioural Group; C: Self-directed group (unsupervised/self-directed); N: number of subjects in each comparative group; SD: standard deviation; vs: versus; data is presented as mean (standard deviation); p: p-value (statistical significance); * Statistically significant. \n\nBODY.METHODS.INTERVENTION.SUPERVISED WALKING PROGRAMS (WB & W GROUPS):\nThe SCAWPs took place at two walking club sites in Ottawa, Ontario, Canada and one in Gatineau, Québec, Canada. Participants took part in three weekly walking sessions over a 12-month period. Every walking session began with a 10-minute warm-up, consisting of light aerobic exercises, before engaging in the 45-minute aerobic walking phase. The walking sessions ended with a 10-minute cool-down consisting of light aerobic exercises and stretching. The target intensity of the walking phase was approximately 50% to 70% of the subjects' pre-determined maximum heart rate as recommended in The Ottawa Panel guidelines \n[33-35]. The SCAWP was divided into two stages: 1) A \"progressive aerobic phase\" wherein the duration and heart rate intensity progressively increased over time and 2) a \"maintenance aerobic phase\" wherein the duration and heart rate intensity remained constant for the remainder of the walking program (Additional file \n2). The selected dosage, frequency, intensity, and progression of the walking interventions were based on existing protocols proven effective for OA, described in several RCTs involving individuals with OA \n[33-35] (Additional file \n2). To avoid contamination between the two walking interventions, participants in the W group were instructed to walk in the mornings, while the WB group walked in the afternoons at the main walking site. In order to meet the prescribed target heart rate, heart rate monitors were provided to each participant prior to his/her walking session. As a safety precaution, participants who experienced prolonged pain lasting more than two hours within 24 hours after a walking session were asked to temporarily reduce their level of exercise until he/she felt comfortable enough to resume the appropriate duration, intensity and frequency of the walking program. 
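The 50% to 70% intensity target described above is simple arithmetic, and a small hypothetical sketch may make the prescription concrete. The trial used each participant's pre-determined maximum heart rate; the age-predicted estimate (220 minus age) below is only an assumed stand-in for that measured value, and the function name is invented.

```python
# Hypothetical illustration of the 50-70% target heart rate zone used to set
# walking intensity. The age-predicted maximum (220 - age) is an assumption;
# the trial itself worked from each subject's pre-determined maximum heart rate.
def target_hr_zone(max_hr: float, low: float = 0.50, high: float = 0.70) -> tuple:
    """Return the (lower, upper) beats-per-minute bounds of the walking zone."""
    return (round(max_hr * low), round(max_hr * high))

# Example: a 64-year-old participant (close to the mean age of 63.4 years in
# Table 2), assuming an age-predicted maximum of 220 - 64 = 156 bpm.
print(target_hr_zone(220 - 64))  # -> (78, 109)
```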
Following randomization, the PA specialist met each subject in the walking groups to explain the SCAWP and its progression (Additional file \n2) and was present for a minimum of three weekly scheduled sessions over one year to supervise the subjects. The PA specialist was responsible for monitoring and recording attendance, blood pressure, heart rate (during and after the walking session), time duration, and number of steps (pedometer) at each walking session. Monetary compensation was given to each participant for attending each walking session.\n\nBODY.METHODS.INTERVENTION.BEHAVIOURAL INTERVENTION (WB GROUP):\nThe behavioural intervention was part of the multifaceted KT intervention (Table \n1) and was implemented using the adapted Program for Arthritis Control through Education and Exercise (PACEex) program \n[40]. The behavioural intervention was integrated into the PACEex program and consisted of the following components: (1) short- and long-term goal setting during PACEex classes; (2) an educational component, delivered by a trained instructor, consisting of instructional sessions about the benefits of PA, specifically walking; (3) monthly face-to-face counselling wherein participants received moral support and encouragement to adhere to PA. Potential barriers to program adherence were identified and self-management strategies were reviewed to overcome those barriers; (4) goal setting and telephone counselling were provided as an additional source of social support until the end of the supervised phase. As with the face-to-face counselling, barriers were identified and strategies were shared in an effort to promote the long-term maintenance of PA. In summary, the behavioural intervention consisted of twenty 2-hour group sessions discussing short-term goal setting and education on arthritis-related topics over a duration of 20 consecutive weeks. Individual long-term goal setting was discussed at the beginning of the PACEex program and was followed by monthly face-to-face meetings throughout the first 6 months of the program. The last six months of the 12-month supervised phase consisted of participants receiving counselling via telephone discussing long-term goals and barriers/facilitators to adhere to the walking program.\n\nBODY.METHODS.INTERVENTION.THE SELF-DIRECTED CONTROL (C GROUP):\nParticipants were invited to consult an educational pamphlet on walking and OA. One introductory session was provided to each participant, wherein they were given an educational pamphlet describing the benefits of walking for OA, a pedometer to monitor PA, and a logbook to record activity level and adherence while partaking in the self-directed walking program. Participants in this group received monetary compensation for the completion of his/her log books. The self-directed (C) group did not have any contact with participants in the two other groups, avoiding potential contamination.\n\nBODY.METHODS.DATA COLLECTION:\nParticipants were assessed by an independent evaluator at baseline and at each 3-month interval (months 3, 6, 9, 12) during the intervention period. Participants were then assessed at 3 and 6 months post-intervention during the follow-up period (15 and 18-months). 
Participants were asked to complete a collection of validated questionnaires as well as perform physical tests at each assessment.\n\nBODY.RESULTS.KNOWLEDGE (CPG) UPTAKE OUTCOME MEASURES (KT OUTCOMES).ADHERENCE AND BEHAVIOUR CHANGE:\nSimilar to other studies \n[29,41,42], participant adherence was assessed as the number of attended walking sessions divided by the number of prescribed sessions (3 sessions per week). Among the W and WB groups, the number of attended walking sessions throughout the 12-month duration was recorded by the exercise therapist on site as well as through the use of participants' completed logbooks, which provided information on the amount of performed PA during the 12-month intervention period. The C group's adherence was only assessed by the completed log books. Behaviour change was based on the concept of self-efficacy and was measured using the Stanford scale. In regard to the research question, behaviour change results were only reported at 12 and 18 months.\n\nBODY.RESULTS.KNOWLEDGE (CPG) UPTAKE OUTCOME MEASURES (KT OUTCOMES).ANALYSIS:\nThe analysis was conducted on an intention-to-treat basis. Descriptive statistics including mean, standard deviation and frequencies were used to summarize the study groups at baseline. A repeated measures mixed model was used to assess the change in adherence from baseline to end of treatment (12 months) among the three study groups. Behavioural change outcomes were assessed from baseline to end of treatment (12 months) and at 6-month follow-up post-intervention (18 months). The model included variables such as intervention group, study month, and an interaction term between intervention group and study month. Missing data were assumed to be missing at random (MAR) in order to include incomplete data. Pairwise differences comparing each group to one another (W vs. C, W vs. WB, WB vs. C) were assessed. The repeated measures mixed model was also used to compare the change in adherence from baseline to the 18-month follow-up. Statistical analyses were performed using SAS (version 9.2, SAS Institute Inc., Cary, North Carolina), and statistical significance was defined as p<0.05.\n\nBODY.RESULTS:\nBetween January 2007 and December 2008, 629 study participants were assessed for eligibility. Figure \n2 exhibits the flow of participants from recruitment to follow-up. The most common reasons for exclusion were: 1) time commitment (26.5%); 2) first diagnosed with OA by physician or X-ray but later contradicted by MRI test result (18.1%); 3) loss of interest (14.2%); 4) severe OA of the weight bearing joint of LE (11.8%); 5) no medical clearance after VO2 max test (5.1%); 6) unable to contact (4.9%); 7) too active (2.9%); 8) knee replacement (2.7%); 9) age related (2.5%); 10) travelling too much (1.9%); 11) living too far (1.2%). Follow-up for the last recruited participant was completed in May 2010. Figure 2: Study flow diagram. This figure provides a flow diagram of data collection from baseline to 18-months follow-up. The groups were similar in age, sex, ethnicity, OA duration and medication use (Tables \n2 and 3). Two hundred and twenty-two subjects were included in the study. Women made up 153 of the 222 subjects. The mean age (±S.D.) was 63.4 (±8.6) years, the mean disease duration (±S.D.) was 10.3 (±9.3) years, and the highest level of completed education was \"College graduate\" with 65 participants (29.3%) and \"Professional or graduate school\" with 51 participants (23.0%). 
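Before the baseline comparisons that follow, the adherence metric and the repeated measures mixed model described in the Methods above can be illustrated with a short sketch. It is hypothetical: the data frame, column names and random-intercept specification are assumptions, and it is not the trial's actual SAS analysis, which is reported only at the level of its model terms.

```python
# Hypothetical sketch of the adherence metric and mixed model described above.
# Column names and the random-intercept structure are assumptions; the trial's
# own analysis was run in SAS and is not reproduced here.
import pandas as pd
import statsmodels.formula.api as smf

def adherence_rate(sessions_attended: int, weeks: int, prescribed_per_week: int = 3) -> float:
    """Adherence = attended walking sessions / prescribed sessions."""
    return sessions_attended / (weeks * prescribed_per_week)

# e.g. 28 sessions attended over a 13-week (roughly 3-month) period
print(round(adherence_rate(28, weeks=13), 2))  # -> 0.72

def fit_adherence_model(df: pd.DataFrame):
    """Group, study month and their interaction as fixed effects, with an
    assumed random intercept per participant."""
    return smf.mixedlm("adherence ~ C(group) * month",
                       data=df, groups=df["participant_id"]).fit()
```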
There was no statistical difference between the three comparative groups at baseline for all the demographic variables.\n\nBODY.RESULTS.BLINDING:\nThe effectiveness of blinding the evaluator to treatment allocation was assessed through a questionnaire where she was asked to identify which group each participant belonged to. The blinding rate of the evaluator was high (98%).\n\nBODY.RESULTS.ADHERENCE AND ATTRITION:\nFrom baseline to 12 months, the adherence rates were expressed as percentages of adherence among walkers who did not drop out and were reported as means (+/− standard deviation) for each time period between groups (Table \n4). Statistically significant results for short-term adherence rates favoured participants in the WB group (80.2%) compared to the self-directed (C) group (65.2%) (p=0.012) after 3 months. For long-term adherence (6 to 12 months), the WB group demonstrated superior adherence rates throughout each period compared to the other two groups, but results did not reach statistical significance. No statistical significance (p>0.05) was observed for the total adherence rates between WB vs. W, W vs. C, and WB vs. C. During the 12-month intervention period, the dropout rates were 43.1% for the W group, 40.6% for the WB group and 49.3% for the self-directed (C) group. From baseline to 18 months, the dropout rates were 44.3% for the W group, 40.6% for the WB group and 52.1% for the self-directed (C) group (Figure \n2 and Additional files \n3 & 4). The dropout rates were lower for the WB group at 12 months (36.2%) and 18 months (39.1%) compared to the W group (40.5% and 44.3%) and the self-directed (C) group (40.5% and 51.3%). The W and self-directed (C) groups had the highest dropout rates (40.5%) at 12 months, while the self-directed (C) group had the highest at 18 months (52.1%).\n\nBODY.RESULTS.BEHAVIOUR CHANGE:\nAfter the 12-month intervention phase, no differences were found between the 3 comparative groups for any of the items on the Stanford scale (Table \n5). Moreover, scores for coping with symptoms and confidence were higher among the WB group compared to the other two groups. At 18 months, statistically significant differences in scores between the WB and W groups revealed that the W group demonstrated an improved mean for the \"confidence about doing things\" variable (p=0.041). The self-directed group (C) demonstrated the highest mean \"confidence about doing things\" score of 8.015 ± 1.476 (p=0.040), as it increased by 0.157 (p=0.048) when compared to baseline.\n\nBODY.DISCUSSION.THE IMPACT OF THE MULTIFACETED KT INTERVENTIONS ON KNOWLEDGE (CPG) UPTAKE.ADHERENCE AND ATTRITION:\nThe additional behavioural approach of the multifaceted KT intervention (WB group) improved adherence to the SCAWP when compared to the other two groups over a short-term period. In addition, the WB group demonstrated reduced dropout rates compared to the W and self-directed (C) groups. Over a long-term period, the three groups were equivalent in improving behaviour change while adopting the Ottawa Panel recommendations on walking \n[33-35]. Adherence rates were higher among the WB group for each 3-month period compared to the other two groups. Improved adherence rates over the short-term period can be explained by the fact that the PACEex program used in the behavioural intervention lasted for 20 consecutive weeks at the beginning of the study. 
The PACEex program \n[40] used in this study, allowed our participants to set general goals, which may have motivated participants to adhere to the walking program. Similar to a previous study \n[43], another reason may be that participants in the W and WB group lost motivation after the supervised phase, no longer having regular professional support from the PA specialist. The drop-out rate results are comparable to other long-term studies which have used an aerobic PA intervention with a behavioural intervention component \n[38,44-54]. Higher adherence rates were demonstrated at 2 to 4 months (85% - 90%) and at 10 to 18 months (50% - 90%). An initial 6-month center-based exercise program enhanced retention over short and the long-term periods while promoting short-term adherence and energy expenditure \n[55]. The moderate exercise intervention retained more subjects, but had little influence on adherence over a long-term period \n[56]. Home-based PA programs, such as self-directed walking, can achieve improved adherence rates compared to facility-based programs over a long period \n[55].\n\nBODY.DISCUSSION.BEHAVIOUR CHANGE:\nThere were no statistically significant results among the three groups at 12 months. These results concur with a similar study \n[53] and are normal in the context of a KT study which implemented an already proven effective walking program \n[33-35]. However, at 18-months, there were statistically significant results which favoured the self-directed group (C). One reason explaining why participants in the self-directed group (C) had higher confidence and self-efficacy scores may be due to the fact that since these individuals were not supervised, they may have been more independent, self-controlled, self-confident, and self-motivated. Participants in this group may have also had the opportunity, over the 12-month period, to develop confidence and self-efficacy skills compared to the two supervised groups during the unsupervised follow-up phase. The variable \"confidence about doing things\" decreased after 12 months for the WB group. Given the high amount of PA demonstrated by the WB group at 12 months, we were surprised with this result. On the other hand, at 18 months, statistically significant results favoured the W group compared to WB group for the variables \"confidence about doing things\". Again, these results about confidence were unexpected given that the WB group performed higher amounts of PA. Exercise therapists, pedometers, logbooks and social support from walking club members (Table \n1) assisted participants who had difficulties with activity adherence and maintenance \n[57]. Regular professional contact was recognised as the significant motivational factor for adherence to the supported walking program during the RCT \n[43]. Social support has been reported to be more successful in engaging participants in PA programs compared to programs which only provided written educational material \n[58,59]. The combination of using pedometers and goal-setting has been recognised as an effective KT tool for increasing PA \n[31,60-62]. However, despite the short-term effectiveness of pedometers and goal-setting, the success of these tools diminished over time. Other KT strategies are recommended to facilitate sustainability \n[62].\n\nBODY.DISCUSSION.LIMITATIONS:\nBarriers related to the implementation of SCAWP were identified and addressed to improve adherence for older individuals \n[57]. 
Acceptability (preferences, tolerance and accessibility) should be considered to promote PA adherence and to overcome associated barriers \n[63]. The most common barriers identified by participants with OA are perceptions about illness and recovery, transportation difficulties, family commitments and inconvenient timing \n[64]. Time commitment is also recognised as a significant barrier \n[55]. Additional barriers such as weather conditions including snow storms, freezing rain and humidity, a 3-month bus strike, and health problems of participants' partners may have contributed to study adherence. Adherence is influenced by participant's preference of intervention \n[65-70]. Unfortunately, with this study being a RCT, taking participants' preference into consideration was not possible due to the randomization procedure. This being said, randomisation may have contributed to higher dropout rates. The purpose of this KT study was to implement a scientifically proven effective walking program for OA \n[33-35]. Short term research suggests that self-management interventions \n[16,18], telephone counselling \n[22,46,71-73], peer support \n[13,74-76] and PA education \n[18,22,27,41,77] are effective behavioural strategies for improving self-efficacy. The long-term multifaceted behavioural strategy used in this study did not concur with the previous studies, as participants in the self-directed group (C group) were provided information and education on the benefits of PA. Methods of measurement for adherence for the W and WB groups were different than the self-directed (C) group. Observed attendance was recorded on site for each walking session for the W and WB groups by exercise therapists while adherence could only be measured in the self-directed group (C) using completed logbooks. The self-reporting of PA, which is commonly overstated, may have been subject to potential information bias. Selection bias may have occurred as our study population included individuals with mild to moderate OA, with a confirmed X-ray report. In addition, some participants were retired while some were still employed. Since participants were provided monetary compensation for their participation in the study, one may argue the generalizability of the results.\n\nBODY.DISCUSSION.IMPLICATIONS:\nThis KT RCT aimed to identify the best multifaceted strategy to implement a proven effective walking program for OA of the knee among older individuals. The epidemic public health problem of OA can be managed through the implementation of a proven effective walking program in existing community-based walking clubs as well as at home with minimal support. More studies are needed to improve the long-term walking adherence or knowledge uptake on SCAWP among participants with OA. Particular attention should focus on patients' characteristics and preferences.\n\nBODY.CONCLUSIONS:\nThis study partially supported our initial hypothesis regarding the impact of a multifaceted KT intervention as the combined walking and behavioural approach had greater benefits over the comparative groups resulting in improved long-term adherence to PA only over a short period (3-months). The WB group demonstrated to be the best KT strategy for reducing dropouts compared to the W group and self-directed (C) group. All three KT strategies were equivalent over the long-term period (up to 18 months) for improving behavioural change. However, the self-directed walking program (C group) was the least expensive to implement over a long-period (18 months). 
This RCT was a long-term adherence study as well as a KT study which addressed questions of clinical and scientific importance aimed at improving the understanding of effective KT strategies to promote the adoption and maintenance of a community-based walking program for OA.\n\nBODY.ABBREVIATIONS:\nOA: Osteoarthritis; QoL: Quality of Life; SCAWP: Supervised Community-based Aerobic Walking Program; CPG: Clinical Practice Guideline; KT: Knowledge Translation; KTAC: Knowledge-To-Action Cycle; EBCPG: Evidence-Based Clinical Practice Guidelines; CONSORT: Consolidated Standards of Reporting Trials; WB (group): Walking and Behavioural intervention; W (group): Walking intervention; C (group): Self-directed control intervention; RCT: Randomized Controlled Trial; PA: Physical Activity; PACE-ex: Program for Arthritis Control through Education and Exercise; AIMS2: Arthritis Impact Measurement Scale 2; SF-36: Short-Form 36 Health Survey; WOMAC: Western Ontario and McMaster Universities Osteoarthritis Index; MET: Multiples of basal metabolic rate; MAR: Missing At Random; CIHR: Canadian Institutes of Health Research.\n\nBODY.COMPETING INTERESTS:\nThe authors declare that they have no competing interests.\n\nBODY.AUTHORS’ CONTRIBUTIONS:\nLB is a Full Professor of Rehabilitation, an epidemiologist and the principal investigator of this study and primary author of this manuscript. GAW the co-principal investigator of this study, is senior biostatistician and director, Cardiovascular Research Methods Centre at the University of Ottawa Heart Institute, and is a leading expert in the design and analysis of clinical trials. He provided assistance with the methodology and statistical analysis of the study. GPK is a Full Professor of Physiology at the University of Ottawa, and director of the university's Human Performance and Environmental Medicine Research Laboratory and Professional Fitness and Lifestyle Consultant Certification Training program. He assisted with the methodology of the study. RR is a senior researcher at the University of Ottawa Heart Institute and provided experience in applying innovative behavioural approaches aimed at increasing PA in healthy and chronic diseased populations. AM is a health economist and assisted with the economic evaluation concept in the original proposal. PT is a rheumatologist, an epidemiologist and chief of the Cochrane Musculoskeletal Group. He has experience in conducting RCTs and meta-analyses. He provided assistance with OA outcome measures in the study. MH & CM developed the PACEex program and was in charge of training the PA specialist. GDA was the research coordinator of this study and assisted with the writing of this manuscript. LC is a biostatistician and performed the analyses of this study. All authors read and approved the final manuscript.\n\nBODY.PRE-PUBLICATION HISTORY:\nThe pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2458/12/871/prepub\n\nBODY.SUPPLEMENTARY MATERIAL:\nAdditional file 1The knowledge –to-action framework and KT interventions. This table demonstrates how each step of the KTAC framework is applied to the RCT.Click here for file Additional file 2Post-Randomization Walking Interventions. The supervised phase of the walking intervention lasted 52 weeks following randomization. The unsupervised (follow-up) phase of the walking intervention lasted 26 additional weeks. 
For Intervention (one year): 52 weeks × 3 sessions = 156 sessions attended; Follow-up (six months): 26 weeks × 3 sessions = 78 sessions attended. N.B. This structured and supervised walking program is based on Ottawa Panel Grade A recommendations \n[33-35] and was implemented in groups W and WB (Implementation groups). Individuals with OA in the self-directed group (C) received a pamphlet on OA and walking (which recommends regular walking in an unsupervised/self-directed way) (Dissemination group) \n[25]. Additional file 3: Reasons for dropouts. This table demonstrates the reasons as to why participants decided to withdraw from the study. Additional file 4: Dropout rates and corresponding retention rates at 12- and 18-month time periods. This table demonstrates the drop-out rates and retention rates at the end of the intervention (12 months) and at follow-up (18 months).\n\n**Question:** Compared to Self-directed control (C) what was the result of Walking and Behavioural intervention (WB) on Total adherence to the program?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
976
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Treatment costs associated with interventional cancer clinical trials conducted at a single UK institution over 2 years (2009–2010)\n\n ABSTRACT.BACKGROUND:\nThe conduct of clinical trials should be an integral part of routine patient care. Treating patients in trials incurs additional costs over and above standard of care (SOC), but the extent of the cost burden is not known. We undertook a retrospective cost attribution analysis to quantitate the treatment costs associated with cancer clinical trial protocols conducted over a 2 year period.\n\nABSTRACT.METHODS:\nAll patients entered into oncology (non-haematology) clinical trials involving investigational medicinal products in 2009 and 2010 in a single UK institution were identified. The trial protocols on which they were treated were analysed to identify the treatment costs for the experimental arm(s) of the trial and the equivalent SOC had the patient not been entered in the trial. The treatment cost difference was calculated by subtracting the experimental treatment cost from SOC cost. For randomised trials, an average treatment cost was estimated by taking into account the number of arms and randomisation ratio. An estimate of the annual treatment costs was calculated.\n\nABSTRACT.RESULTS:\nA total of 357 adult oncology patients were treated on 53 different trial protocols: 40 phase III, 2 randomised II/III and 11 phase II design. A total of 27 trials were academic, non-commercial sponsored trials and 26 were commercial sponsored trials. When compared with SOC, the average treatment cost per patient was an excess of £431 for a non-commercial trial (range £6393 excess to £6005 saving) and a saving of £9294 for a commercial trial (range £0 to £71 480). There was an overall treatment cost saving of £388 719 in 2009 and £496 556 in 2010, largely attributable to pharmaceutical company provision of free drug supplies.\n\nABSTRACT.CONCLUSION:\nOn an average, non-commercial trial protocols were associated with a small per patient excess treatment cost, whereas commercial trials were associated with a substantially higher cost saving. Taking into account the total number of patients recruited annually, treatment of patients on clinical trial protocols was associated with considerable cost savings across both the non-commercial and commercial portfolio.\n\nBODY:\nThe drug development process is well established and randomised clinical trials remain the gold standard for testing new treatments against standard of care in order to advance clinical practice (Vogelzang et al, 2012). Even so, the proportion of patients entering clinical trials worldwide is remarkably low. 
Since 2001, when under 4% of cancer patients took part in a clinical trial, recruitment increased six-fold over the following 10 years in the UK due to significant investment in a unique national research infrastructure (Department of Health, 2006; NCRN, 2011), whereas recruitment in the USA has remained static (Califf et al, 2012). It is undisputed that conducting research incurs additional costs over and above standard of care (Emanuel et al, 2003), yet there is an international commitment to research being a core health service function (Zerhouni, 2003; Califf et al, 2012; The NHS Constitution, 2013). Attribution of research costs is complex and recovery of those costs, particularly for non-commercial clinical trials is very challenging (Snowdon et al, 2006; Seow et al, 2012). Costs incurred by conducting a clinical trial can broadly be attributed to one of the three elements (Attributing the costs of health and social care Research & Development (AcoRD), 2012): (1) research costs, that is, the costs of the trial itself, which end when the research ends and relate to activities being undertaken to answer the research question; (2) treatment costs, that is, the patient care costs, which would continue to be incurred if the patient care intervention continued to be provided after the trial had stopped; and (3) service support costs, that is, the additional patient care costs associated with the research, which would end once the trial had stopped, even if the patient care involved continued to be provided. For commercially sponsored trials, all three elements would normally be met by the sponsor. In the UK, pharmaceutical companies have, for many years, been able to access service support from the national health service by formal adoption into a national clinical trials portfolio. For non-commercial studies, research costs are generally met by grant funders. Service support costs are met from local R&D budgets (via comprehensive local research networks), whereas treatment costs are met through the normal commissioning process. The latter has been the cause of significant disharmony between commissioners and providers of health care. In the USA, private insurance companies are not obliged to cover non-routine patient care costs (Anonymous, 2007). Cancer drugs are generally expensive and it is widely perceived that cancer clinical trials generate an excess treatment cost, despite a lack of evidence to support this view. A single retrospective review of 932 cancer patients recruited to 35 trials in the USA between 1998–1999 suggested that the incremental costs of direct care in clinical trials was 6.5% higher than what they would be outside of a trial (Goldman et al, 2003). The investigators concluded that the additional treatment costs of an open reimbursement policy for government-sponsored cancer clinical trials seem minimal. Even so, risk of excess treatment costs remains a barrier to academic clinical trials being performed in many countries today. We undertook a review of cancer clinical trials conducted in a single, research-active institution in the UK over a 2-year period (2009–2010), to identify the extent of treatment costs involved.\n\nBODY.MATERIALS AND METHODS:\nCambridge University Hospitals NHS Foundation Trust (Addenbrooke's Hospital, Cambridge) is a major university teaching hospital and cancer centre in the UK. 
It also hosts the West Anglia Cancer Research Network (WACRN), one of the highest recruiting regional research networks in the UK, where 45.1% of patients newly diagnosed with cancer entered a research study in 2011/12. Although WACRN supports research across seven hospitals in the region, 64% of all patients were recruited at Addenbrooke's hospital. For this retrospective cost attribution analysis, all patients recruited to interventional, research ethics committee approved, oncology drug trials between 1 January 2009 and 31 December 2010 at Addenbrooke's Hospital were identified from the Cambridge Cancer Trials Centre patient database. Haematology trials were excluded. The clinical trial protocols for each identified patient were reviewed to determine the treatment costs associated with each arm of the trial. The following activities were attributed to being treatment costs (as described in the AcoRD guidance): (1) supplying an investigation medicinal product and pharmacy costs associated with administering oral agents under investigation; (2) supplying and administering any active comparators, but not placebo; (3) nursing time to administer the drug, time taken to deliver the treatment either as an outpatient ('day unit chair time') or as an inpatient ('bed days'); (4) investigations, including imaging and laboratory tests, which would continue to be incurred if the patient care service in question continued to be provided after the trial stopped; (5) additional clinic visits. Costs were calculated for the active treatment phase of the trial only and did not include follow-up once treatment had finished. For all protocols, a standard of care (SOC) was identified as the standard treatment that would have been offered locally to the patient had the trial not been available. For randomised trials, if the control arm differed to the local standard treatment, the local standard treatment was used to cost SOC. The treatment costs of the SOC were determined as for the experimental arm. Costs were determined for the duration of treatment defined in each trial protocol and did not include costs of follow-up once treatment stopped. For adjuvant, neo-adjuvant and some palliative protocols which specified the treatment programme duration, costs were determined for the entire prespecified treatment programme. For palliative protocols where treatment duration was until disease progression, the actual number of cycles and/or months of treatment delivered to the patients recruited to each protocol was obtained from the patient records and an average treatment duration was determined for each protocol. The same treatment duration was used for estimating SOC costs, unless the SOC treatment maximum duration was specified, in which case the shorter duration was used. Cancer drug costs were obtained from the British National Formulary (2010) and excluded additional taxes or local negotiated discounts. Supportive drugs such as antiemetics, antidiarrhoeals, steroids, intravenous hydration fluids were excluded. For dose calculations, an average body surface area of 1.75 m2 was used. Staff and facility direct costs without overheads were obtained from local reference costs. Imaging and laboratory tests were costed locally or using the NIHR costing template. Where costs differed, local costs were used. The data was cross-checked with the lead clinical trials nurse responsible for each protocol. 
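The per-protocol cost arithmetic set out above, and formalised in the next paragraph, can be summarised in a short hypothetical sketch. Every component name and figure below is invented for illustration and is not taken from any of the 53 study protocols; only the general idea of summing drug, delivery, investigation and visit costs and weighting arms by randomisation ratio comes from the text.

```python
# Hypothetical illustration of the costing approach described in the Methods.
# All names and figures are invented; none correspond to a real study protocol.
def arm_cost(drug=0.0, delivery=0.0, investigations=0.0, extra_visits=0.0) -> float:
    """Total treatment cost of one trial arm over the protocol-defined treatment period."""
    return drug + delivery + investigations + extra_visits

def weighted_tc_difference(arm_costs, ratio, soc_cost) -> float:
    """Average excess (positive) or saving (negative) per recruited patient:
    each arm's cost is weighted by its randomisation ratio, then SOC is subtracted."""
    average = sum(c * r for c, r in zip(arm_costs, ratio)) / sum(ratio)
    return average - soc_cost

# Example: a 1:1 randomised trial in which the experimental drug is supplied
# free of charge but needs more day-unit delivery time than standard of care.
soc = arm_cost(drug=6000, delivery=1500, investigations=400)
experimental = arm_cost(drug=0, delivery=2500, investigations=400)
print(weighted_tc_difference([soc, experimental], ratio=[1, 1], soc_cost=soc))
# -> -2500.0, i.e. an average treatment cost saving per recruited patient
```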
For each protocol, the treatment cost (TC) difference for the experimental arm(s) was the difference between the experimental arm treatment cost and the SOC cost. For randomised trials, an average treatment cost was attributed to the protocol, by taking into account the number of arms of the trial and the randomisation ratio (see Table 1 as an example of how the TC difference was calculated). Finally, using the total number of patients recruited to each trial protocol, the annual overall TC excess or saving in 2009 and 2010 was estimated.\n\nBODY.RESULTS:\nDuring the 2-year review period, 357 cancer patients were recruited to 53 different interventional clinical trials involving IMPs (Tables 2 and 3): 159 patients in 2009 and 198 patients in 2010. A total of 27 trials were academic, non-commercial sponsored trials with variable levels of industry support. A total of 26 trials were commercial sponsored trials. Twenty-six of the non-commercial trials and 9 of the commercial trials were adopted in the NIHR portfolio. Thirty-nine of the 53 trials were of randomised phase III design. The remainder were 1 non-randomised phase III, 2 randomised phase II/III, 9 randomised phase II and 2 non-randomised phase II trials. Patients received IMPs that were either unlicensed (22), licensed for the indicated disease (10) or licensed drugs being used in an unlicensed condition (21). Compared with 2009, in 2010, 24 new trials had opened to recruitment and 12 had closed. The average TC per protocol across the portfolio of 53 trials ranged from an excess of £6393 to a saving of £71 480 (Figure 1). For the 27 non-commercial trials (Table 2), the average TC per protocol ranged from an excess of £6393 to a saving of £6005 (mean excess TC was £431, median £126 per protocol). The main source of excess TC was extra hospital day unit delivery time associated with 9 protocols evaluating IMPs given in addition to, or in place of, SOC (7 protocols) or where the SOC was observation (2 protocols). For example, in the AVAST-M clinical trial comparing 12 months of 3-weekly adjuvant bevacizumab as treatment for patients with resected melanoma versus standard observation, the additional cost of treatment delivery accounted for £4250 (83%) of the £5144 excess TC. In this case, the investigational medicinal product cost did not contribute to the excess TC, as all bevacizumab was supplied free of charge by the manufacturer. A total of 7 non-commercial trials were associated with a TC saving. Two of these trials, SCOT (comparing 3 months versus 6 months of standard adjuvant chemotherapy in colorectal cancer) and PERSEPHONE (comparing 6 months versus 12 months of standard adjuvant chemotherapy in breast cancer), were evaluating whether shorter durations of adjuvant treatment were as effective as SOC, in which case the TCs were, in effect, halved. In 2 trials, the source of saving was free drug supplies in the experimental arms from manufacturers (SoFEA and BR13). As each trial randomised one or more treatments against a standard treatment, taking into account the number of arms and randomisation ratio, the average cost of a patient recruited to the protocol was less than SOC. In one trial (REAL-3), an industry grant provided a per-patient payment of £1000 and in another trial (Telovac), one of the experimental arms was associated with fewer cycles of SOC chemotherapy given in association with the investigational medicinal product.
In one trial (QUASAR2) comparing adjuvant capecitabine with capecitabine plus bevacizumab, the manufacturer supplied both the investigational medicinal product (bevacizumab) and the SOC (capecitabine) free of charge for patients recruited to either arm of the trial. With the exception of REAL-3, treatment of patients on the other six non-commercial trial protocols associated with an overall TC saving offered financial benefits that would have been passed on to commissioners, primarily by the provision of free drug that would have had to be paid for if the patient was not on trial. Considering the actual numbers of patients recruited to all of the non-commercial trials annually, the net TC saving was estimated at £67 962 in 2009 and £29 002 in 2010. For the 26 commercial trials (Table 3), the average TC per protocol ranged from £0 to a saving of £71 480 (mean saving was £9294, median £4370 per protocol). Commercial trials were not associated with excess TCs, as income from the conduct of the trial covered all costs associated with treatment. The main source of cost saving was the provision of free drug supplies from the sponsor. In two renal cancer trials (Bevlin and Rad001), the sponsors provided the experimental drug free of charge, and in a third renal trial (Torisel404) both the experimental and SOC drugs were provided. With the exception of four randomised trials where SOC was observation or best supportive care, treating patients on the remaining 22 protocols would have passed on significant drug cost savings to commissioners, by avoiding the SOC costs the commissioners would have incurred if the patients were not on trial. Using the actual numbers of patients recruited to all of the commercial trials, there was an estimated net treatment cost saving of £320 758 in 2009 and £467 554 in 2010. Figure 2 summarises the distribution of average TCs for all 27 non-commercial and 26 commercial trial protocols. In general, it can be seen that both commercial and non-commercial trials saved drug costs, whereas the extent of drug and dispensing cost savings was markedly higher for commercial compared with non-commercial trials. The main source of excess TC was the cost of delivering treatment, most often generated by trials evaluating IMPs given in addition to SOC. The contribution of extra clinic visits and investigations, in particular CT scans used for response assessment, was significantly higher for commercial compared with non-commercial trials. By taking into account the number of patients entered on each protocol, the overall annual TCs incurred were estimated. On average, treating a patient on a clinical trial protocol was associated with a TC saving of £4340. The overall TCs incurred annually amounted to a saving of £388 719 in 2009 and £496 556 in 2010 (Figure 3), with around 90% of the savings coming from commercial sponsored trials.
We have focused on quantitating the TCs associated with treating patients in cancer clinical trials, as uncertainty surrounding the size of cost difference between treatment in or outside of trials, particularly non-commercial trials, is a barrier to their efficient conduct. We have previously sought to estimate the cost of the treatment administered to patients recruited to cancer trials using individual patient data from clinical research forms and pharmacy prescriptions (Corrie et al, 2011). However, TCs extend beyond the drug costs alone. So, to better inform service delivery, we developed a methodology designed to include all elements of TCs as described in the AcoRD guidelines. The method provides a unit TC difference associated with an individual trial protocol, taking into account the number of arms in a trial and, where appropriate, the randomisation ratio. The unit TC difference can only be an estimate, as the duration of treatment may vary for an individual patient, whereas for randomised trials, one might expect the estimate to be more accurate the more patients recruited to any protocol. In addition, we tailored SOC to local treatment guidelines, which may differ between regions. There are several limitations associated with this methodology. First, we did not take into account changes to the treatment plan associated with adverse drug reactions and did not include any associated non-elective hospital admissions, as individual patient data was not analysed in depth other than to estimate duration of palliative treatment in some instances. In addition, we did not include follow-up of patients after completion of anticancer therapy, assuming that this would account for a proportionally small element of treatment costs: commercial trials would be expected to fully reimburse all costs above SOC, whereas in general, frequency of follow-up in non-commercial trials tends to reflect routine follow-up arrangements. Finally, costing of commercial trials was based on direct costs only and did not include overheads charged to industry. Despite these limitations, we believe this methodology has the potential to be applied on a wider scale to inform providers and commissioners regarding the impact of conducting cancer clinical trials on health service resources. Review of patients recruited to interventional cancer trials in a single research active institution over a 2-year period identified significant variation in associated TCs between protocols. The average TC for 27 non-commercial trials was an excess of £431 (range £6393 excess to £6005 saving) and for 26 commercial trials was an average saving of £9294 (range £0 to £71 480). Breaking down the TCs by the main categories, by far the largest contribution to cost savings was provision or reimbursement of IMPs by pharmaceutical companies, primarily as part of the conduct of commercial sponsored studies, but also, and to a lesser extent, in partnership with academic groups conducting non-commercial NIHR portfolio studies. The distribution of TCs was rather different for commercial and non-commercial trials. For non-commercial trials, the main excess TC was due to administering IMPs. For commercial trials, considerable extra costs were incurred by additional clinic visits and imaging investigations. 
This might suggest that pharmaceutical companies drive a more intensive patient management plan, and/or that, as many of the commercial protocols were for international multi-centre trials, they were designed to reflect practice in countries outside of the UK. Even so, because commercial trials were fully funded, the potential excess TCs associated with these protocols should not be passed on to commissioners. Our findings confirmed that the pathway for patients entered into academic trials did, in general, reflect routine clinical practice. By assigning the 357 patients to the protocols in which they were enroled, the overall treatment cost saving to the NHS was estimated to be approaching £0.5 million per annum. In our view, this data provides overwhelming evidence to refute any concern that clinical research generates a cost pressure for the health service. On the contrary, we have demonstrated significant financial gains. The largest gains were from commercial sponsored trials. There is currently a growing emphasis on expanding the commercial trial portfolio across all disease areas. Although this may bring financial benefits, the main reason for our undertaking this review was to address concerns that academic clinical trials were a financial liability. Taken individually, this may be the case in some instances, but viewed across a whole portfolio, there are clear gains even for non-commercial clinical trials. Academic trials represent an important part of the NIHR portfolio, affording the opportunity to address questions that would not be a priority for industry sponsors. For example, two non-commercial trials testing whether shorter durations of adjuvant therapy might be as effective as longer durations were associated with significant TC savings during their conduct, whereas positive outcomes would have much wider implications for health service savings. In summary, this unique analysis of TCs associated with patients recruited to cancer clinical trials indicates a wide range of per patient TC differences associated with individual trials compared with SOC, whereas the risk of there being an excess TC (as opposed to a saving) primarily lies with non-commercial studies. However, over the course of a year, clinical trials—both commercial and non-commercial—offer an opportunity to realise considerable TC savings. These findings support the Department of Health's directive for commissioning of clinical trials as an integral part of service delivery. A balanced portfolio of both commercial and non-commercial research should offer the greatest benefits to patients and the overall health economy.\n\n**Question:** Compared to Commercial sponsored trials what was the result of Academic, non-commercial sponsored trials on Extra clinic visits and CT scans assessments?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly decreased\n" } ]
significantly decreased
979
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: The effect of aerobic exercise in the maintenance treatment of depression\n\n ABSTRACT:\nWe investigated the efficacy of aerobic exercise alongside antidepressant medication as an adjuvant maintenance treatment for depression. Fifty patients in remission were randomly assigned to either medication only or medication plus exercise. Assessment of psychopathology was made at 6-weekly intervals (for 24 weeks) using the Hamilton Rating Scale for Depression. The medication-plus-exercise group showed significantly more improvement at 12 and 24 weeks than the medication-only group. This study adds to a growing evidence base that suggests aerobic exercise is worthy of further development in the treatment of depressive disorders.\n\nBODY:\nThe role of exercise in treating depression has been studied extensively in recent years. For example, Rethorst et al (2009) carried out a meta-analysis of 58 randomised controlled trials, and found that groups whose treatment incorporated exercise had significantly lower depression scores than controls. De Zeeuw et al (2010) showed that exercise could have preventive potential, as it was found to reduce the likelihood of depression in high-risk employees with sedentary jobs and an inactive lifestyle. Exercise has also been reported to reduce relapse rates. For example, Babyak et al (2000) found that participants with major depressive disorder who used exercise therapy alone had significantly lower relapse rates than those who received medication. Despite these promising findings and its low cost, the effectiveness of exercise is yet to be fully addressed in the psychosocial treatment literature or incorporated into mainstream treatment practice. For example, the National Institute for Health and Clinical Excellence (NICE, 2009) recommended 'structured group physical activity' only as a low-intensity psychosocial intervention for subthreshold depressive symptoms, or depression with chronic physical health problems. This may be because many studies have been criticised for their methodological limitations and incomplete literature searches (Rethorst et al, 2009); indeed, the classification and duration of exercise are not specified in the majority of reports. Of further note is that the efficacy of exercise in a maintenance capacity remains largely unexplored. Thus, our rationale was to investigate the role of exercise alongside antidepressant medication as a post-remission maintenance therapy, in terms of reducing both the severity of psychopathology and relapse rates.\n\nBODY.METHOD:\nThe 50 patients recruited were attending the psychiatric out-patient department of Sir Sunderland Hospital, Banaras Hindu University, India. Selection was made (via referrals from two senior psychiatrists at the hospital) by the researcher who carried out the post-treatment evaluations. 
The following selection criteria were used:\n(1) age 15–45 years, in order to cope with the physical exercise regime; (2) a diagnosis of major depressive or dysthymic disorder, according to DSM-IV-TR criteria (American Psychiatric Association, 2000); (3) absence of any comorbid psychiatric illness; (4) no major cardiac or orthopaedic illness that would contraindicate exercise. Ethical approval was granted by the University's research ethics committee. After inclusion in the study, patients were prescribed sertraline (200 mg/day – the maximum permitted daily dose) and alprazolam (1 mg/day) for 3–4 weeks in order to achieve remission of clinical depression. Once significant remission – defined as scoring less than 16 on the 21-item Hamilton Rating Scale for Depression (HRSD; Hamilton, 1960) – was obtained, patients were randomly allocated to groups; blinding was not possible, however, as participants were necessarily aware of the treatment group they were in and the researcher who administered exercise therapy also did the follow-up assessment of psychopathology.\n\nBODY.METHOD.PRE-TREATMENT EVALUATION:\nThe researcher assessed the following using a semi-structured pro forma:\n(1) sociodemographic and clinical data; (2) severity of depressive psychopathology (using the HRSD, plus a structured interview guide); (3) stressful life events (lifetime and past 3 months), using the 51-item Presumptive Stressful Life Events Scale (PSLES; Singh et al, 1984) (a culturally appropriate instrument); (4) melancholic and atypical features of depression (as per DSM-IV-TR); (5) baseline heart rate (mean taken from three 1-minute time slots); (6) fitness levels using the Harvard Step Test, to monitor compliance with the exercise programme (Brouha, 1943).\n\nBODY.METHOD.TREATMENT:\nBoth groups continued to receive sertraline (200 mg/day) and alprazolam (1 mg/day) and did not take part in any form of counselling or psychotherapy. For the medication-plus-exercise group, participants also received training in daily exercise therapy. That therapy consisted of a 5-minute warm-up, 15 minutes of stationary running, and a 10-minute cool-down, this being convenient and economical for all participants regardless of gender or location and within guidelines on exercise (Whaley et al, 2006). Participants were also advised not to perform any other exercise beyond usual daily activities. Heart rate recordings were made in order to adjust the intensity of exercise to keep within the target heart rate zone, this being 60–84% of the maximum achievable heart rate (Whaley et al, 2006); exercise sessions at home were logged and monitored by carers, and the logs were reviewed by the researcher at every follow-up. None of the participants or their carers reported missing any exercise sessions or cutting sessions short. Follow-up was at 6-weekly intervals for 24 weeks for all participants, and HRSD and PSLES scores were assessed on each occasion. The medication-plus-exercise group also received refresher exercise training and the Harvard Step Test was performed at every follow-up.\n\nBODY.METHOD.ANALYSIS:\nStatistical tests used were the chi-square and Student t-test. Epi Info 6 software, version 6.04a, was used to analyse the data.\n\nBODY.RESULTS:\nAlthough all 50 patients initially agreed to participate, seven did not complete the study (three from the medication-plus-exercise group and four from the medication-only group). Table 1 shows the sample characteristics and features of depression, which were evenly distributed between the groups. 
Table 2 shows the inter-group comparisons at all 6-weekly follow-ups. The medication-plus-exercise group showed a greater improvement in HRSD scores (change in score from baseline) than the medication-only group from the first follow-up, at 6 weeks, and this difference became statistically significant at weeks 12 (P = 0.04, t = 2.10) and 24 (P = 0.02, t = 2.42); there was no significant difference between the groups' PSLES scores at any follow-up point. Table 1 Baseline characteristics of the sample Variable Medication-only group ( n = 25) Medication-and-exercise group ( n = 25) Number (%) female 9 (60) 6 (40) Number (%) male 16 (46) 19 (54) Usual degree of physical activity: number (%)  sedentary  light  moderate 15 (65) 9 (41) 1 (20) 8 (35) 13 (59) 4 (80) Mean (s.d.) age (years) 29.6 (8.2) 30.5 (8.9) Mean (s.d.) PSLES score for lifetime 216.2 (124.9) 188.0 (86.3) Mean (s.d.) PSLES score for last 3 months 105.0 (72.6) 107.2 (65.3) Mean (s.d.) HRSD score * 23.2 (4.2) 22.8 (3.7) Mean (s.d.) number of melancholic features * 1.2 (1.6) 0.7 (1.1) Mean (s.d.) number of atypical features * 0.1 (0.33) 0.0 (0.3) HRSD, Hamilton Rating Scale for Depression; PSLES, Presumptive Stressful Life Events Scale. * Between group comparisons were non-significant. Table 2 Mean (s.d.) improvement (difference from baseline) in score on the Hamilton Rating Scale for Depression Follow-up Medication-only group Medication-and-exercise group t d.f. P 6 weeks 2.6 (5.3) 4.1 (3.4) 1.17 46 >0.05 12 weeks 2.3 (5.2) 5.0 (3.4) 2.10 45 0.04 18 weeks 3.0 (6.5) 5.6 (5.1) 1.45 42 >0.05 24 weeks 1.0 (7.0) 5.6 (5.4) 2.42 41 0.02 Relapse rates were also recorded via subjective reports and as a measured increase of >50% from baseline HRSD scores (Neumeister et al, 1997). In total, nine patients relapsed, two from the medication-plus-exercise group (both in week 18) and seven from the medication-only group (one relapse in week 6, two in week 12 and four in week 24). This difference in relapse rates did not reach significance (Fisher's exact test: one-tailed, P = 0.07; two-tailed, P = 0.14).\n\nBODY.DISCUSSION:\nWe found that the maintenance treatment of medication-plus-exercise resulted in significantly lower HRSD scores than medication alone at 12-week and 24-week assessments, as well as a promising reduction of relapse rates. These differences were not considered to be due to extraneous life events, as there was no significant difference between the two groups' PSLES scores. These findings are in line with Rethorst et al's (2009) meta-analysis in terms of the potential for exercise to lower depression scores, and with the finding from Babyak et al (2000) that exercise is effective in reducing relapse. We found no studies, however, on the combined role of exercise and antidepressant medication as a maintenance therapy for depressive disorder. The present findings suggest a potential role for exercise as a protective agent during remission, and one which might reduce long-term cost. The study was limited in several ways. There is potential for bias via use of a clinician-rated tool such as the HRSD, particularly in an unblinded study. However, to reduce this, the same experienced assessor was used for the pre-intervention and follow-up assessments and a second assessor made random cross-checks at least once for every participant. Assessors completed the rating scales with the participants through discussion. The sample size was limited to 50, which precluded analysis of potential confounds. 
For example, the medication-only group was randomly allocated a greater proportion of sedentary members from the outset. There was also an unequal proportion of male/female participants; however, this reflected the population of patients within the hospital. A further limitation was the lack of comparison with an exercise-only group. It is possible that the exercise group may have derived social gain from the increased contact with carers and researchers; interestingly, Babyak et al (2000) raise the point that psychological factors such as proactive agency may be engaged via choosing to exercise. A further issue for the current study was the drop in the rate of improvement for the medication-only group at week 24. The reasons for this are unclear; however, such a change dictates that studies with longer periods of assessment are needed to make stronger interpretations. Thus, future studies could benefit from: larger and more representative samples; extended follow-up periods; additional outcome measures (including patient-rated scales); and the inclusion of further control groups, such as exercise only, social contact only, and group versus solitary exercise. In conclusion, this study adds to the growing evi dence base that suggests exercise is valuable in the treatment of depressive disorders. In particular, we highlight its potential role as a supplementary maintenance treatment during remission. It is worth highlighting that, despite the relatively short time for which the participants exercised, there were measurable benefits. We believe this area of research warrants further attention, particularly as exercise provides a low-cost, flexible and easily implemented treatment with minimal adverse effects, and especially where costly delays in receiving mainstream psychological interventions are commonplace.\n\n**Question:** Compared to Medication only what was the result of Medication plus exercise on Improvement in Presumptive Stressful Life Events Scale?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
980
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: The effect of aerobic exercise in the maintenance treatment of depression\n\n ABSTRACT:\nWe investigated the efficacy of aerobic exercise alongside antidepressant medication as an adjuvant maintenance treatment for depression. Fifty patients in remission were randomly assigned to either medication only or medication plus exercise. Assessment of psychopathology was made at 6-weekly intervals (for 24 weeks) using the Hamilton Rating Scale for Depression. The medication-plus-exercise group showed significantly more improvement at 12 and 24 weeks than the medication-only group. This study adds to a growing evidence base that suggests aerobic exercise is worthy of further development in the treatment of depressive disorders.\n\nBODY:\nThe role of exercise in treating depression has been studied extensively in recent years. For example, Rethorst et al (2009) carried out a meta-analysis of 58 randomised controlled trials, and found that groups whose treatment incorporated exercise had significantly lower depression scores than controls. De Zeeuw et al (2010) showed that exercise could have preventive potential, as it was found to reduce the likelihood of depression in high-risk employees with sedentary jobs and an inactive lifestyle. Exercise has also been reported to reduce relapse rates. For example, Babyak et al (2000) found that participants with major depressive disorder who used exercise therapy alone had significantly lower relapse rates than those who received medication. Despite these promising findings and its low cost, the effectiveness of exercise is yet to be fully addressed in the psychosocial treatment literature or incorporated into mainstream treatment practice. For example, the National Institute for Health and Clinical Excellence (NICE, 2009) recommended 'structured group physical activity' only as a low-intensity psychosocial intervention for subthreshold depressive symptoms, or depression with chronic physical health problems. This may be because many studies have been criticised for their methodological limitations and incomplete literature searches (Rethorst et al, 2009); indeed, the classification and duration of exercise are not specified in the majority of reports. Of further note is that the efficacy of exercise in a maintenance capacity remains largely unexplored. Thus, our rationale was to investigate the role of exercise alongside antidepressant medication as a post-remission maintenance therapy, in terms of reducing both the severity of psychopathology and relapse rates.\n\nBODY.METHOD:\nThe 50 patients recruited were attending the psychiatric out-patient department of Sir Sunderland Hospital, Banaras Hindu University, India. Selection was made (via referrals from two senior psychiatrists at the hospital) by the researcher who carried out the post-treatment evaluations. 
The following selection criteria were used:\nage 15–45 years, in order to cope with the physical exercise regime\na diagnosis of major depressive or dysthymic disorder, according to DSM-IV-TR criteria (American Psychiatric Association, 2000)\nabsence of any comorbid psychiatric illness\nno major cardiac or orthopaedic illness that would contraindicate exercise. Ethical approval was granted by the University's research ethics committee. After inclusion in the study, patients were prescribed sertraline (200 mg/day – the maximum permitted daily dose) and alprazolam (1 mg/day) for 3–4 weeks in order to achieve remission of clinical depression. Once significant remission – defined as scoring less than 16 on the 21-item Hamilton Rating Scale for Depression (HRSD; Hamilton, 1960) – was obtained, patients were randomly allocated to groups; blinding was not possible, however, as participants were necessarily aware of the treatment group they were in and the researcher who administered exercise therapy also did the follow-up assessment of psychopathology.\n\nBODY.METHOD.PRE-TREATMENT EVALUATION:\nThe researcher assessed the following using a semi-structured pro forma:\nsociodemographic and clinical data\nseverity of depressive psychopathology (using the HRSD, plus a structured interview guide)\nstressful life events (lifetime and past 3 months), using the 51-item Presumptive Stressful Life Events Scale (PSLES; Singh et al, 1984) (a culturally appropriate instrument)\nmelancholic and atypical features of depression (as per DSM-IV-TR)\nbaseline heart rate (mean taken from three 1-minute time slots)\nfitness levels using the Harvard Step Test, to monitor compliance with the exercise programme (Brouha, 1943).\n\nBODY.METHOD.TREATMENT:\nBoth groups continued to receive sertraline (200 mg/day) and alprazolam (1 mg/day) and did not take part in any form of counselling or psychotherapy. For the medication-plus-exercise group, participants also received training in daily exercise therapy. That therapy consisted of a 5-minute warm-up, 15 minutes of stationary running, and a 10-minute cool-down, this being convenient and economical for all participants regardless of gender or location and within guidelines on exercise (Whaley et al, 2006). Participants were also advised not to perform any other exercise beyond usual daily activities. Heart rate recordings were made in order to adjust the intensity of exercise to keep within the target heart rate zone, this being 60–84% of the maximum achievable heart rate (Whaley et al, 2006); exercise sessions at home were logged and monitored by carers, and the logs were reviewed by the researcher at every follow-up. None of the participants or their carers reported missing any exercise sessions or cutting sessions short. Follow-up was at 6-weekly intervals for 24 weeks for all participants, and HRSD and PSLES scores were assessed on each occasion. The medication-plus-exercise group also received refresher exercise training and the Harvard Step Test was performed at every follow-up.\n\nBODY.METHOD.ANALYSIS:\nStatistical tests used were the chi-square and Student t-test. Epi Info 6 software, version 6.04a, was used to analyse the data.\n\nBODY.RESULTS:\nAlthough all 50 patients initially agreed to participate, seven did not complete the study (three from the medication-plus-exercise group and four from the medication-only group). Table 1 shows the sample characteristics and features of depression, which were evenly distributed between the groups. 
Table 2 shows the inter-group comparisons at all 6-weekly follow-ups. The medication-plus-exercise group showed a greater improvement in HRSD scores (change in score from baseline) than the medication-only group from the first follow-up, at 6 weeks, and this difference became statistically significant at weeks 12 (P = 0.04, t = 2.10) and 24 (P = 0.02, t = 2.42); there was no significant difference between the groups' PSLES scores at any follow-up point. Table 1 Baseline characteristics of the sample Variable Medication-only group ( n = 25) Medication-and-exercise group ( n = 25) Number (%) female 9 (60) 6 (40) Number (%) male 16 (46) 19 (54) Usual degree of physical activity: number (%)  sedentary  light  moderate 15 (65) 9 (41) 1 (20) 8 (35) 13 (59) 4 (80) Mean (s.d.) age (years) 29.6 (8.2) 30.5 (8.9) Mean (s.d.) PSLES score for lifetime 216.2 (124.9) 188.0 (86.3) Mean (s.d.) PSLES score for last 3 months 105.0 (72.6) 107.2 (65.3) Mean (s.d.) HRSD score * 23.2 (4.2) 22.8 (3.7) Mean (s.d.) number of melancholic features * 1.2 (1.6) 0.7 (1.1) Mean (s.d.) number of atypical features * 0.1 (0.33) 0.0 (0.3) HRSD, Hamilton Rating Scale for Depression; PSLES, Presumptive Stressful Life Events Scale. * Between group comparisons were non-significant. Table 2 Mean (s.d.) improvement (difference from baseline) in score on the Hamilton Rating Scale for Depression Follow-up Medication-only group Medication-and-exercise group t d.f. P 6 weeks 2.6 (5.3) 4.1 (3.4) 1.17 46 >0.05 12 weeks 2.3 (5.2) 5.0 (3.4) 2.10 45 0.04 18 weeks 3.0 (6.5) 5.6 (5.1) 1.45 42 >0.05 24 weeks 1.0 (7.0) 5.6 (5.4) 2.42 41 0.02 Relapse rates were also recorded via subjective reports and as a measured increase of >50% from baseline HRSD scores (Neumeister et al, 1997). In total, nine patients relapsed, two from the medication-plus-exercise group (both in week 18) and seven from the medication-only group (one relapse in week 6, two in week 12 and four in week 24). This difference in relapse rates did not reach significance (Fisher's exact test: one-tailed, P = 0.07; two-tailed, P = 0.14).\n\nBODY.DISCUSSION:\nWe found that the maintenance treatment of medication-plus-exercise resulted in significantly lower HRSD scores than medication alone at 12-week and 24-week assessments, as well as a promising reduction of relapse rates. These differences were not considered to be due to extraneous life events, as there was no significant difference between the two groups' PSLES scores. These findings are in line with Rethorst et al's (2009) meta-analysis in terms of the potential for exercise to lower depression scores, and with the finding from Babyak et al (2000) that exercise is effective in reducing relapse. We found no studies, however, on the combined role of exercise and antidepressant medication as a maintenance therapy for depressive disorder. The present findings suggest a potential role for exercise as a protective agent during remission, and one which might reduce long-term cost. The study was limited in several ways. There is potential for bias via use of a clinician-rated tool such as the HRSD, particularly in an unblinded study. However, to reduce this, the same experienced assessor was used for the pre-intervention and follow-up assessments and a second assessor made random cross-checks at least once for every participant. Assessors completed the rating scales with the participants through discussion. The sample size was limited to 50, which precluded analysis of potential confounds. 
For example, the medication-only group was randomly allocated a greater proportion of sedentary members from the outset. There was also an unequal proportion of male/female participants; however, this reflected the population of patients within the hospital. A further limitation was the lack of comparison with an exercise-only group. It is possible that the exercise group may have derived social gain from the increased contact with carers and researchers; interestingly, Babyak et al (2000) raise the point that psychological factors such as proactive agency may be engaged via choosing to exercise. A further issue for the current study was the drop in the rate of improvement for the medication-only group at week 24. The reasons for this are unclear; however, such a change dictates that studies with longer periods of assessment are needed to make stronger interpretations. Thus, future studies could benefit from: larger and more representative samples; extended follow-up periods; additional outcome measures (including patient-rated scales); and the inclusion of further control groups, such as exercise only, social contact only, and group versus solitary exercise. In conclusion, this study adds to the growing evi dence base that suggests exercise is valuable in the treatment of depressive disorders. In particular, we highlight its potential role as a supplementary maintenance treatment during remission. It is worth highlighting that, despite the relatively short time for which the participants exercised, there were measurable benefits. We believe this area of research warrants further attention, particularly as exercise provides a low-cost, flexible and easily implemented treatment with minimal adverse effects, and especially where costly delays in receiving mainstream psychological interventions are commonplace.\n\n**Question:** Compared to Medication only what was the result of Medication plus exercise on Relapse rates?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
987
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Singing teaching as a therapy for chronic respiratory disease - a randomised controlled trial and qualitative evaluation\n\n ABSTRACT.BACKGROUND:\nDespite optimal pharmacological therapy and pulmonary rehabilitation, patients with COPD continue to be breathless. There is a need to develop additional strategies to alleviate symptoms. Learning to sing requires control of breathing and posture and might have benefits that translate into daily life.\n\nABSTRACT.METHODS:\nTo test this hypothesis we performed a randomised controlled trial, comparing a six week course of twice weekly singing classes to usual care, in 28 COPD patients. The experience of singing was assessed in a qualitative fashion, through interviews with a psychologist. In addition, we surveyed patients with chronic respiratory conditions who participated in a series of open singing workshops.\n\nABSTRACT.RESULTS:\nIn the RCT, the physical component score of the SF36 improved in the singers (n = 15) compared to the controls (n = 13); +7.5(14.6) vs. -3.8(8.4) p = 0.02. Singers also had a significant fall in HAD anxiety score; -1.1(2.7) vs. +0.8(1.7) p = 0.03. Singing did not improve single breath counting, breath hold time or shuttle walk distance. In the qualitative element, 8 patients from the singing group were interviewed. Positive effects on physical sensation, general well-being, community/social support and achievement/efficacy emerged as common themes. 150 participants in open workshops completed a questionnaire. 96% rated the workshops as \"very enjoyable\" and 98% thought the workshop had taught them something about breathing in a different way. 81% of attendees felt a \"marked physical difference\" after the workshop.\n\nABSTRACT.CONCLUSION:\nSinging classes can improve quality of life measures and anxiety and are viewed as a very positive experience by patients with respiratory disease; no adverse consequences of participation were observed.\n\nABSTRACT.TRIAL REGISTRATION:\nCurrent Controlled Trials - ISRCTN17544114.\n\nBODY.BACKGROUND:\nChronic obstructive pulmonary disease (COPD) causes breathlessness, which can occur on exertion or, in some individuals, at rest. Although therapy including bronchodilators, pulmonary rehabilitation and oxygen can improve symptoms, the underlying pathology is largely irreversible, meaning that many patients remain limited despite optimal therapy [1,2]. The pattern of breathing typically adopted in COPD is caused by the presence of expiratory flow limitation which leads patients to adopt higher operating lung volumes - dynamic hyperinflation [3]. Although this pattern allows increased expiratory flow rates, this comes at the expense of a greater inspiratory load and mechanical disadvantage for the respiratory muscles. A rapid breathing pattern reduces expiratory time and will exacerbate this. 
The disparity between respiratory work and ventilatory output that develops is an important determinant of symptoms [4]. A range of non-pharmacological measures for breathlessness have been trialled in respiratory patients [5], including pursed lip breathing [6,7], yoga [8,9], singing [10] and laughter [11], all based on the premise that changing patients' pattern of breathing will improve symptoms A limitation of some physiotherapy approaches is that by focusing attention on breathing pattern they may accentuate awareness of respiratory limitation [12]. Singing requires the development of skills in controlling posture and breath that might be transferrable to everyday life. Breathlessness is a complex sensation [13]; singing lessons might offer techniques that address both the sensory component - largely control of respiratory pattern to reduce hyperinflation, and the affective component - a conscious experience of using the breath for 'something positive'. In addition, there is some evidence that singing may have beneficial effects on wellbeing in healthy [14] and chronic disease populations [15,16]. To evaluate this further we used three approaches; firstly a randomized controlled trial of a six week course of singing training in COPD, secondly an evaluation of a program of open 'drop-in' singing lessons for patients with respiratory disease and thirdly a qualitative evaluation of patients' experience of participating in singing.\n\nBODY.METHODS.RANDOMIZED CONTROLLED TRIAL:\nPatients with COPD, diagnosed according to the GOLD guidelines, who were attending respiratory clinics at the Royal Brompton Hospital, were invited to participate. The study was approved by the Royal Brompton, Harefield and National Heart and Lung Institute Research Ethics Committee and all subjects gave written informed consent. The trial was registered at Current Controlled Trials - ISRCTN17544114. Demographics, height, weight, clinical history and spirometry (Microlab, CareFusion, Kent, UK,) were recorded. Participants completed the Hospital Anxiety and Depression (HAD) Questionnaire [17], St George's Respiratory Questionnaire [18], and the Short Form 36 Questionnaire [19]. Functional exercise capacity was assessed using the incremental shuttle walk test (ISWT). Time to recovery of oxygen saturation, Borg dyspnoea score and heart rate following the walk was documented [20]. Control of breathing was assessed using two measures. 1) Breath hold test - subjects held their breath from maximum inspiration [21,22]. 2) Single breath counting - subjects were instructed to breathe in and then count out loud in time with a metronome running at 60 beats/min[23]. Three attempts at each manoeuvre were made and the mean values recorded. Both techniques were in routine use in the physiotherapy department for the assessment of hyperventilation. All subjects received a thirty minute standard session on breathing control and techniques to manage breathlessness, delivered by one of two senior respiratory physiotherapists involved in the study. Pursed lip breathing and nose breathing were also discussed in relation to managing episodes of shortness of breath. Each subject received a standard Royal Brompton Hospital \"Help Yourself - physiotherapy for people with respiratory symptoms\" booklet (Additional file 1) and was advised to practice the techniques at home. At the end of the baseline session, patients were randomised to either the singing or control groups, using block randomization through consecutive sealed envelopes. 
The subjects in the singing group attended a hospital-based workshop, led by a singing teacher (PC), twice weekly for six weeks. Each session lasted approximately one hour and included teaching of posture, relaxation and vocal exercises. Tasks of increasing difficulty were given as the sessions progressed - described in more detail in additional file 2. Each subject was given homework and an accompanying CD of songs to practice at home. The teacher was unaware of the tests being used to assess breathing control, so that she would not 'teach to the test'. The control group had no further intervention. At seven weeks follow-up, study participants were again assessed by the same respiratory physiotherapists, who were blinded to treatment allocation. All subjects were instructed not to tell the physiotherapist which group they had been allocated to. Control of breathing, quality of life and functional exercise capacity were measured as at baseline. Statistical analysis was performed using StatView 4.0, using t-tests to compare change in outcome measures between patients and controls. A p value of < 0.5 was taken as significant.\n\nBODY.METHODS.EVALUATION OF OPEN SINGING WORKSHOPS FOR RESPIRATORY PATIENTS:\nAlongside the randomized control study, we ran open, twice-weekly, 'Singing for BreathingTM' workshops for patients with any respiratory condition who were at Royal Brompton Hospital either as inpatients, day cases or outpatients. Conditions included interstitial lung disease, cystic fibrosis, asthma, bronchiectasis and COPD. The sessions were advertised through posters in the hospital and by word of mouth to patients from the arts team, respiratory nurses, physiotherapists and other healthcare professionals. Patients involved in the randomized control study were excluded from these sessions as were those where infection control issues existed. The open sessions were led by the same singing teacher as the randomised study and encompassed relaxation and posture education, as well as phonation exercises. Patients attending these workshops for the first time were asked to complete an anonymous satisfaction questionnaire at the end (Additional file 3).\n\nBODY.METHODS.QUALITATIVE SURVEY OF PATIENT EXPERIENCE OF SINGING:\nFollowing completion of the randomized control study, a chartered counselling psychologist (AE), who had not otherwise participated in the study, interviewed a random sample of eight individuals who had been in the singing arm of the trial. The thirty minute interviews were structured and each subject was asked about 1) physical changes they had noticed. 2) Emotional changes they had noticed. 3) Behavioural changes they had noticed. 4) Any detrimental effects or negative experiences. Participants were assured that they could make negative comments if they wished and that their responses would be anonymous. The interview sheet is shown in additional file 4.\n\nBODY.RESULTS.RANDOMIZED CONTROLLED TRIAL:\nParticipation in the study is described in the CONSORT diagram (Figure 1). Ninety two patients were approached to participate in the study. Fifty four did not wish to take part, one person who was already enrolled in another trial was excluded and one lived too far away to participate. Of the remaining thirty six patients, twenty were allocated to the singing group and sixteen to the control group. Of the singing group, two subjects withdrew from the study once the workshops had begun and three did not attend the final assessment once completing the workshops. 
In the control group, one withdrew and two did not attend the final assessment. Data are therefore presented for the 15 singers and 13 controls who completed the study. Of these, 6 attended 12 sessions, 4 attended 11, 2 came to 10 and 3 to 9. Figure 1CONSORT Flow Diagram. The baseline characteristics of the singing and control groups were well matched (Table 1). Four of each group were on long term oxygen therapy. In singers, there was a significant improvement in HAD anxiety score -1.1(2.7) vs. +0.8(1.7) p = 0.03 and SF-36 physical component score +7.5(14.6) vs. -3.8(8.4) p = 0.02. Breath hold time actually increased in the control group relative to the singing group -0.3(6.9) sec vs. +5.3(5.7) sec (p = 0.03). There was no significant difference in single breath counting, functional exercise capacity or recovery times following ISWT (Table 2). On direct questioning by the physiotherapist at the end of the study all participants in the singing group reported that they had found the workshops helpful, had practiced their singing at home and that they had found the singing helped them to control their breathing. Table 1 Baseline characteristics Total n = 28 Singing n = 15 Control n = 13 P value Age (yrs) 67.3 (8.1) 66.6 (9.3) 68.1 (6.8) 0.6 FEV 1 % predicted 37.2 (18.6) 36.8 (15.4) 37.7 (22.4) 0.9 Breath hold time (s) 26.3 (11.1) 27.4 (11.7) 24.9 (10.5) 0.9 Single breath counting (n) 23.4 (6.9) 24.8 (7.7) 21.8 (5.7) 0.3 HAD - anxiety score 5.8 (2.9) 6.3 (3.1) 5.3 (2.6) 0.4 HAD -depression score 5.8 (3.1) 5.7 (2.8) 5.8 (3.6) 1.0 SGRQ Total 48.4 (14.5) 51.1 (14.8) 45.3 (14.0) 0.3 SF36 PCS 36.8 (18.4) 32.0 (14.0) 42.2 (21.8) 0.1 SF36 MCS 52.0 (17.9) 47.4 (14.8) 57.2 (20.1) 0.2 ISWT (m) 198.6 (131.3) 174.0 (137.2) 226.9 (123.3) 0.3 Subjective recovery (s) 124.2 (77.6) 115.9 (50.4) 133.8 (102.0) 0.6 O 2 sat n recovery (s) 77.7 (71.6) 73.5 (44.6) 82.6 (95.7) 0.7 HR recovery (s) 131.0 (89.5) 97.6 (40.8) 169.6 (114.3) 0.03* FEV 1 - forced expiratory volume in one second, HAD - hospital anxiety and depression score, SGRQ - St George's Respiratory Questionnaire, SF36 short form 36, PCS - physical component score, MCS - mental component score, ISWT - incremental shuttle walk test distance, HR heart rate. *p < 0.05 unpaired t test singing group vs control group. Table 2 Changes from baseline Singing group n = 15 Control group n = 13 Δ breath hold time (s) - 0.3 (6.9) + 5.3 (5.7) 0.029* Δ single breath counting + 0.3 (7.7) + 2.0 (2.7) 0.46 Δ HAD anxiety - 1.1 (2.7) + 0.8 (1.7) 0.033* Δ HAD depression - 1.1 (2.5) - 0.1 (1.7) 0.21 Δ SGRQ Total - 1.1 (10.6) - 0.4 (5.6) 0.81 Δ ISWT (m) + 26 (52.6) + 11.3 (83.0) 0.58 Δ ISWT subjective recovery (s) + 9.9 (60.7) - 7.4 (81.7) 0.53 Δ O 2 Sat n recovery (s) + 47.3 (67.6) + 32.2 (124.7) 0.69 Δ HR recovery time (s) + 29 (63.8) + 19.4 (110.0) 0.78 Δ SF36 PCS + 7.5 (14.6) - 3.8 (8.4) 0.02* Δ SF36 MCS + 2.5 (20.9) - 3.2 (10.5) 0.38 HAD - hospital anxiety and depression score, SGRQ - St George's Respiratory Questionnaire, ISWT - incremental shuttle walk test distance, HR heart rate, SF36 short form 36, PCS - physical component score, MCS - mental component score. *p < 0.05 unpaired t test singing group vs control group. \n\nBODY.RESULTS.EVALUATION OF OPEN SINGING WORKSHOPS FOR RESPIRATORY PATIENTS:\n150 patients attended the open workshops. 61% had never attended singing lessons or workshops previously. 96% rated the workshops as \"very enjoyable\" and 98% thought the workshop had taught them something about breathing in a different way. 
81% of attendees felt a marked physical difference after the workshop. 92% of the participants said that they would like to attend a further singing workshop at the hospital. The remaining patients stated that the distance to the hospital would be the main reason for not attending.\n\nBODY.RESULTS.QUALITATIVE SURVEY OF PATIENT EXPERIENCE OF THE SINGING CLASSES:\nEight patients who had participated in the singing groups were interviewed. The effect of the group was described by all in mainly positive terms and no participants reported any negative effects. Positive effects occurred in two main areas: physical and general well-being. Positive physical effects reported by the participants often related to the breathing training which was an integral part of the program, particularly \"diaphragmatic breathing\". This training brought a feeling of awareness and control which many of the participants found helped their breathing and eased symptoms of breathlessness. Singing training itself brought breathing control too; \"It has made my life easier; I would have liked this when I was first diagnosed\", \"I increased my out breath from 4 to 14 counts\", \"I started breathing much better, from the stomach\", \"The exercises, thinking about breathing and relaxing when I have (breathing) problems....this has been very useful\", \"I always felt better afterwards physically\". Other positive physical effects described were an impact on lifestyle and functional ability; \"I have better posture now\"; \"Walking better, I go out more when it's not cold\", \"Now things are less of a chore, housework is no longer a struggle\". Beneficial effects relating to general wellbeing also emerged. Relating to mood/pleasure participants reported that; \"It was very enjoyable\", \"It opened up a new lease of life\", \"Emotionally......during singing it lifts you. I feel on top on the world. I also feel like that the day after. It makes COPD a lot easier to live with\", \"Its uplifting to sing...this diagnosis is gloomy so the psychological effect of the group is good\". A feeling of community/social support was also reported; \"Nice to have human contact, we achieved something together in the group\", \"I enjoyed the social contact it was great fun\", \"In six weeks we gelled as a group\", \"It felt good to be part of a team\". Achievement/efficacy - many participants had carried aspects of the courses into daily life. Many of them continued to sing songs and perform the exercises from the group using printed sheets or a CD. It seems that learning to sing in a group brought a sense of achievement. \"I do the exercises from the CD 2-3 times a week. I am looking for a local choir\"; \"I will keep it up; it's more enjoyable than other kinds of exercise\"; \"I learnt a lot, you are never too old to learn\"; \"The group taught me how and when to breathe; I got to relive my past (of singing) and get praise\". Others reported that participating in the group had motivated them to reengage with pulmonary rehabilitation classes, or to take up exercises such as swimming and walking. One was having private singing lessons and several expressed a desire for such groups to be ongoing and made more available to patients with respiratory problems.\n\nBODY.DISCUSSION:\nThe main findings of this study are that (1) the program of singing classes improved quality of life and anxiety but did not improve the control of breathing measures, or functional exercise capacity. 
(2) In interviews, patients who had participated in the trial reported benefits in their physical performance and general well-being as well as a sense of achievement and self-efficacy. (3) Individuals who chose to participate in the open singing sessions were overwhelmingly positive about the experience.\n\nBODY.DISCUSSION.SIGNIFICANCE OF FINDINGS:\nThe idea that singing was beneficial for health goes back at least to the 19th century [24]. Two previous studies have looked specifically at singing in COPD patients. Bonilha et al randomised patients to either singing lessons, or a handicraft class once a week for 6 months [10]. Although singing practice produced an acute increase in inspiratory capacity, implying a reduction in gas trapping, it did not alter basal dyspnoea index (BDI). SGRQ improved equally in both groups and the singing group had an improvement in maximum expiratory pressure. Exercise capacity was not assessed. Engen studied 7 patients with emphysema who attended twice weekly vocal instruction classes for 6 weeks [23]. There was no change in spirometry, inspiratory muscle strength or exercise capacity, but patients had an improvement in single breath counting, which was associated with a change from a 'clavicular' to a 'diaphragmatic' pattern of breathing. In that study there was no control group and the teacher was not blind to the outcome measures. In the present study, anxiety score and physical component score of the SF36 improved and participants who were interviewed reported that they felt that the singing had been helpful in their everyday lives. HAD scores were not particularly high at baseline and patients were selected on the basis that they had symptomatic COPD and were able to attend, rather than because they were felt specifically to have a dysfunctional breathing pattern or psychological difficulties. It is possible that greater improvements might be found in particularly anxious sub-populations. The benefit in the physical rather than the mental component is consistent with the open workshop participants' reports that they felt physically different after the sessions. The immediate benefits reported by patients participating in the open workshops and from the interviews with singing patients are consistent with the previous observation that singing has an acute effect of reducing gas trapping [10]. Most reported sustained physical benefits, specifically identifying the value of learning breathing control. Several participants also mentioned that they felt that it was important to keep up the vocal exercises and singing in order to maintain any gains which may suggest that a longer period is necessary. The present study is also consistent with previous work that has found that singing training does not change, at least over the time courses explored to date, parameters such as exercise capacity [10,23]. We had expected that even if the singing class did not improve exercise tolerance it might hasten recovery, with patients adopting a more efficient breathing strategy. However, recovery time for heart rate, oxygen saturation or symptoms did not improve. Given the subjective benefits in physical sensation described by patients, the lack of change in exercise capacity is interesting. One possibility is that the improvements are entirely mediated in a psychological fashion. Since anxiety and depression are common and important co-morbidities in COPD [25,26] a novel approach to dealing with them could in any case be useful, particularly a non-drug treatment. 
An analogy would be the use of exercise prescriptions to treat depression [27]. Given the clear effects on wellbeing mentioned by the participants, one could also speculate that there might be benefits to efficacy and control in the absence of great physiological improvement. A patient's experience of their illness will be affected by a range of psychosocial factors in addition to their physical condition. All patients interviewed reported doing more singing and other pleasurable activities in their lives, suggesting that there may be long term benefits to participation in such a group. Consistent with this, singing has been used with some success in small studies to treat chronic pain where it seemed to improve coping [15] and after knee surgery [28]. A second possibility is that the 'dose' of singing was insufficient to alter conventional measures of breathing control or exercise capacity and that a longer or more intense period would be required to produce changes apparent in daily life. The singing teacher reported that she felt that after 6 weeks she was \"only just starting to make progress\" with many of the participants. It is interesting that in the current study, patients reported that what was identified to them as a 'diaphragmatic' breathing pattern was helpful. This involves facilitating outward movement of the abdomen while reducing upper ribcage movement during inspiration. It has been thought to be helpful in some patients, but although a reduction in oxygen cost of breathing has been noted [29], the approach has been associated with both an increased sensation of breathlessness and reduced mechanical efficiency in severe COPD [30,31] and is not recommended for routine use in the recent British Thoracic Society/Association for Chartered Physiotherapist in Respiratory Care(BTS/ACPRC) guidelines on \"Physiotherapy management of the adult, medical, spontaneously breathing patient\" [12]. The improvement in breath hold time observed in the control group was unexpected. A strength of the study was that the singing teacher was blind to the outcome measures used so could not \"teach to the test\". The teacher speculated that the singing group may have learnt to take a more controlled or comfortable breath and therefore smaller breath in, leading paradoxically to a reduction in breath hold time. Hyperventilation has been shown to increase breath hold time in patients with respiratory disease which might also be relevant [22]. If the benefits of participating in a singing group are largely psychological, it would suggest that attention would need to be focused on the aspects of the singing that addressed this, or even on the choice of material, rather than focussing on particular nuances of technique. Singing is, of course, likely to be a therapy that suits some people and not others, and the benefits that accrue are likely to be greatest in those who enjoy the experience. This is to some extent different from pulmonary rehabilitation where the health benefits of exercise are generally accepted. As with pulmonary rehabilitation the physiological and psychological effects could be complementary. A limitation of our study was the absence of an active control group. The previous paper of Bonilha et al. showed similar improvements of quality of life in patients submitted to singing training or handcraft artwork [10], so the observed improvements in anxiety and quality of life in the trial may have been due to regular contact with a social group rather than the singing specifically. 
However, in our study patients reported immediate positive effects on well being (probably because of a reduction in dynamic hyperinflation [10]) and this may therefore have reinforced participation in singing groups (as opposed to other 'social activities' which, although social have no physical effect), making the provision of singing classes a good strategy for reducing social isolation, a significant problem in chronic respiratory disease [32].\n\nBODY.CONCLUSIONS:\nSinging lessons improved anxiety and the physical component score of the SF36 but did not improve measures of breathing control, functional exercise capacity or recovery time. Participants reported that they found the singing beneficial and reported positive changes in their physical ability and wellbeing. It is likely that the effects of singing training will vary between individuals, but that it will be a positive experience for those who choose to take part. Further work is needed to quantify the magnitude and duration of improvement benefits, to set them in context against the resource implications of making singing groups for patients more widely available as a palliative therapy.\n\nBODY.COMPETING INTERESTS:\nThe authors declare that they have no competing interests.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nNSH, VML, VJH, PC and MIP conceived the study; VML, PC, EJF, AE and JLK performed the study measurements. VML, VJH, AE and NSH performed the data analysis. VML wrote the first draft of the paper to which all authors subsequently made contributions. All authors read and approved the final manuscript.\n\nBODY.PRE-PUBLICATION HISTORY:\nThe pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2466/10/41/prepub\n\nBODY.SUPPLEMENTARY MATERIAL:\nAdditional file 1Help Yourself - physiotherapy for people with respiratory symptoms\". Standard booklet given to all patients participating in the trial.Click here for file Additional file 2Singing for breathing - description of program. Detailed description of singing program, selection of materials and conduct of classes.Click here for file Additional file 3Open sessions satisfaction survey. Survey used for participants to assess the open singing sessions.Click here for file Additional file 4Structured interview template. Template used to guide assessments by psychologists following the singing trial.Click here for file\n\n**Question:** Compared to Usual care what was the result of Singing classes on Improvement in HAD anxiety score?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ]
significantly increased
1009
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: The Effect of Stretching Exercises on Severity of Restless Legs Syndrome in Patients on Hemodialysis\n\n ABSTRACT.BACKGROUND:\nThe restless legs syndrome is a sensorimotor disorder that is very common in patients on hemodialysis. Due to pharmacological treatments which have their own side effects, nowadays, studies have turned to non-pharmacological treatments.\n\nABSTRACT.OBJECTIVES:\nThe present study aims to assess the effect of stretching exercises on the severity of restless legs syndrome in patients on hemodialysis.\n\nABSTRACT.PATIENTS AND METHODS:\nThis clinical trial study was conducted on 33 patients who had been identified using diagnostic criteria from the hemodialysis ward of Hasheminejad Hospital in Tehran. Participants were randomly divided into the intervention group (n = 17) and control group (n = 16). Stretching exercises were performed on legs during the dialysis for half an hour, three times a week for 8 weeks in intervention group. Data were collected by using the international restless legs syndrome study group scale.\n\nABSTRACT.RESULTS:\nThe results showed that the majority of participants were suffering from moderate restless legs syndrome. The symptom severity of this syndrome meaningfully changed eight weeks after intervention in the intervention group compared to the control group (P < 0.001).\n\nABSTRACT.CONCLUSIONS:\nThe results highlighted the significance of training and performing the stretching exercises during dialysis for the purpose of improving restless legs syndrome symptoms and the quality of care of hemodialysis patients.\n\nBODY.1. BACKGROUND:\nRestless legs syndrome (RLS) is a neurological movement disorder that affects a large number of people (1). RLS prevalence studies show that 10% of the adult population is affected by this disorder (2). The outbreak of the disease is more common in end stage renal disease (ESRD) patients than general population (3). 20% to 70% of the dialysis patients report symptoms of RLS (4). Some studies also indicate that 33% of ESRD patients suffer from RLS (3). Restless legs syndrome has been shown to have a significant impact on the quality of life of the patients on hemodialysis mainly because of the poor quality of sleep and inadequate rest (2). RLS is a disorder that gives the patient unpleasant sensations in his limbs, especially legs. These sensations make an urge to move the legs to stop the sensations (5). The syndrome is a sensorimotor disorder that is characterized by restlessness symptoms that lead to an irresistible urge to move some body parts (2). This comes in the evening after sitting for a long time and the sensations are expressed in tickling, pain, itching, and stretching feelings. The symptoms of RLS may appear at any age; 12% of RLS symptoms appear before ten (6). But overall, RLS often appears in middle age (5). 
Many factors of RLS are unknown or hereditary (7), but it is usually most common in people suffering from chronic renal diseases, diabetes, iron deficiency anemia, Parkinson's, neuropathy, pregnancy, use of caffeine, calcium channel blocker drugs, lithium, and sedative drugs withdrawal (5). The special criteria for diagnosis of RLS have been introduced by the international restless legs syndrome study group (IRLSSG) and include: 1) an uncontrolled urge to frequently move the limbs due to RLS symptoms, 2) temporary relief of the unpleasant symptoms by moving; 3) worsening of the symptoms after resting or lack of movement, 4) worsening of the symptoms in the evening or at night (8). Worsening of the symptoms at night in patients with moderate to severe RLS leads to sleep disorder. As a result, RLS patients may experience anxiety and sleep deprivation or drowsiness during the day that can disrupt the daily functioning of the patient (9). Also RLS causes sleep disorder, daily fatigue, depression, disruption of the ability to work and social isolation (10). These problems lead to poor quality of life and have adverse effects on social activities and family life (9). Regarding this information, studies have shown that the reduced quality of life in hemodialysis patients with RLS is possible because of inadequate sleep quality and quantity (2). Common medications for RLS have been dopamine agonists or benzodiazepines, but the drugs have some side-effects (1). Initially, to treat RLS, non-pharmacological treatments like sleep hygiene, warm bath, massages, reduction of stimulating factors such as caffeine and alcohol should be tried (11). Recently, some research has been conducted on the effect of changing lifestyle like performing exercises (1). Since the symptoms of the syndrome appear or worsen in resting and immobility times, these symptoms may improve by moving. Therefore, moderate exercise can be helpful (2). Exercises like walking, massaging, stretching, swimming, and stationary cycle can help relieve symptoms (12). The effects of exercise on RLS symptoms are unknown. A study by Ohayon et al. (13) in 2002 showed that severe physical activities like high intensity sports just before bedtime exacerbate the symptoms of RLS. Also Aukerman and Sakkas concluded that a daily mild exercise program can alleviate the RLS symptoms (1, 2).\n\nBODY.2. OBJECTIVES:\nReview of recent studies showed that we need more research about the effects of exercise on RLS symptoms. Therefore, the present study aims to assess the effects of stretching exercises on severity of RLS symptoms in patients on hemodialysis because stretching exercises are easily applicable by the patients and are applied on legs and don't cause disruption in dialysis. \n\nBODY.3. PATIENTS AND METHODS.3.1. DESIGN AND SAMPLE:\nThis study was a randomized controlled trial conducted on hemodialysis patients from the hemodialysis ward of Hasheminejad Hospital affiliated to Tehran University of Medical Sciences in Tehran. Given the study by Aukerman et al. (1) with a confidence interval of 95% and testing power of 90%, the sample size was calculated as 13 for each group, but totally 18 people were considered for each group considering the possible dropout, so 36 people were enrolled.\n\nBODY.3. PATIENTS AND METHODS.3.2. 
INCLUSION AND EXCLUSION CRITERIA:\nThe inclusion criteria were having diagnostic criteria for RLS according to IRLSSG, age over 18, at least 6 months since the start of dialysis treatment, being on the weekly list of dialysis for 3 times a week and each time for 3 to 4 hours, having no mental or physical disability, being completely alert, having acceptable hearing and speaking ability for answering the questions, absence of any infection, injuries, and peripheral neuropathy or vascular problems in legs, absence of orthopedic problems, and confirmation of RLS by physician in research centers. The exclusion criterion was refusal to do the stretching exercises for 3 consecutive sessions and overall 6 sessions. \n\nBODY.3. PATIENTS AND METHODS.3.3. PROCEDURE AND INSTRUMENTS:\nTo identify the patients with RLS, initially, four diagnostic criteria were given to all the patients at the hemodialysis ward. Patients, who answered \"yes\" to all 4 questions, were examined by the physician for confirmation of RLS. The patients with RLS were included in the study consensually and randomly divided into intervention and control groups. Initially, the stretching exercises were taught to the intervention group during two sessions. After the training, the intervention group participated in 8 weeks of a stretching exercise program three times a week during the last two hours of hemodialysis session and each time for 30 minutes. The researcher closely monitored the exercises. Each session involved 5 minutes of initial warm-up. Then, the participants did stretching exercises on legs for 20 minutes and then cooled down for 5 minutes. The stretching exercises that were confirmed by a sports medicine specialist were applied to the legs and included hip rotation to the sides, quadriceps stretch, knee to chest stretch, hamstring stretch, gluteal stretch, straight leg raise, side lying leg lift and so on. Each muscle was stretched to three sets of ten. Duration of each stretching was 5 seconds. After training sessions these exercises were performed by the participants under the supervision of the researcher. For assessment of the effect of stretching exercises on severity of RLS we used the international RLS study group (IRLSSG) Scale. This scale was completed at baseline, at the end of the 4th week, and at the end of the 8th week by the subjects. The demographic information was collected by a questionnaire. The scale for assessing the severity of RLS involved 10 items each having 5 options. Every question had 0 to 4 points. The maximum score was 40 and a high score indicated the severity of disease. The participants with less than 10 points were mild, 11 to 20 points were moderate; 21 to 30 points were severe and above 31 was very severe. All interviewers and personnel who handled the questionnaires were blinded to study allocation of the participants. In order to determine content validity and reliability of the IRLSSG questionnaire, Cronbach's alpha coefficient was used in this study. The validity of IRLSSG questionnaire was 0.87. The data were collected in two months in 2012 and were analyzed using SPSS16 software program and Fisher's statistical test, two-sample Kolmogorov-Smirnov test, t-test and ANOVA. A P value < 0.05 was considered statistically significant. The study was introduced and approved by the Ethics Committee of the Tehran University of Medical Sciences on 09 September 2012. All participants signed written consent after full explanation of the procedure. \n\nBODY.4. 
RESULTS:\nAfter screening all the patients who were in the list of hemodialysis ward and identifying patients with RLS, 36 people were included in the study. They were divided randomly into two 18-member control and intervention groups. One patient was excluded from the intervention group because of not completing stretching exercises sessions, and two patients were excluded from the control group (one of them had died and the other one was not cooperative). In total, the study continued with 33 people (17 patients in the intervention group and 16 patients in the control group). According to the results, 51.5% of the participants were men and 30.3% were above 66-years-old. Most participants were married (63.6%) and jobless or retired (78.1%). The most common diseases among the participants were hypertension (48.5%) and diabetes (42.4%) and the most used drug was amlodipine (45.5%). The results also showed that 100% of the participants were unaware that they were suffering from RLS and they did not take any medications to relieve symptoms of RLS. An overwhelming majority of the participants (87.9%) did not do exercises and any sports regularly. Most of the participants were suffering from medium to severe of RLS and there was no meaningful difference between the intervention and control groups in the beginning of the study. Repeated ANOVA test showed that the severity of RLS scores (according to IRLSSG scale) decreased meaningfully in the intervention group by 8 weeks, but these scores didn't change significantly in the same period in the control group (P < 0.001) (Table 1). Table 1. Comparison of Severity of RLS Scores in Control and Intervention Group at Baseline, End of 4 th Week and End of 8 th Week by ANOVA Test a , b , c Variable Baseline (A) End of 4 th Week (B) End of 8 th Week (C) ANOVA Test \n Control group \n d \n 19 ± 4.78 18.43 ± 4.47 19.06 ± 4.71 0.55 \n Intervention group \n e \n 18.94 ± 5.44 17.47 ± 5.1 12.41 ± 3.79 < 0.001 a Total severity of RLS scores range from zero to 40 with higher values being indicative of severe RLS symptoms. b P < 0.05 are considered significant. c Values are expressed as Mean ± SD. d A and B, NS (P = 0.262); A and C, NS (P = 0.937); B and C, NS (P = 0.307). e A and B, SN (P = 0.001); A and C, SN (P < 0.001); B and C, SN (P < 0.001). Also the results of t-test show that the changes in the severity of RLS symptoms before intervention and after 4 weeks were not statistically meaningful between intervention and control groups. However, the changes were meaningful at the end of the 8th week (P < 0.001) (Table 2). Table 2 . Comparison of Severity of RLS Scores in Control and Intervention Group at Baseline, End of 4 th Week and End of 8 th Week by t-test a , b , c Variable Intervention Group d Control Group e t-test Value P Value \n Baseline (A) \n 18.94 ± 5.44 19 ± 4.78 -0.33 0.97 \n End of 4 \n th \n week (B) \n 17.47 ± 5.1 18.43 ± 4.47 -0.57 0.56 \n End of 8 \n th \n week (C) \n 12.4 1± 3.79 19.06 ± 4.71 -4.48 < 0.001 a Total severity of RLS scores range from zero to 40 with higher values being indicative of severe RLS symptoms. b P < 0.05 are considered significant. c Values are expressed as Mean ± SD. d A and B, SN (P = 0.001); A and C, SN (P < 0.001); B and C, SN (P < 0.001). e A and B, NS (P = 0.262); A and C, NS (P = 0.937); B and C, NS (P = 0.307). \n\nBODY.5. DISCUSSION:\nIn this study, the severity scores in the intervention group were 18.94 in the beginning and it decreased to 12.41 at the end of the 8th week. 
The changes are totally significant, but not in the control group. The present research indicated that stretching exercises for 8 weeks (24 sessions) on legs during hemodialysis improves the RLS symptoms (Figure 1). Figure 1.Comparison of RLS Severity Scores Changes in Intervention and Control Groups During the Eight Week's InterventionNo changes in the RLS severity scores were found in the Control Group while statistically significant differences were found only between week 0 and week 8 for the intervention group (P < 0.05). Since the hemodialysis patients are often disabled and they spend most of their time on dialysis beds, they are usually inactive. Therefore, the proposed stretching exercises can be very helpful because they focus only on lower body limbs and have no side effects such as disruption in the hemodialysis treatment due. The results of this research are compatible with the other studies. Sakkas et al. (2) showed that aerobic exercises (as a mild sport) during hemodialysis reduce the severity of RLS symptoms in 16 weeks. Aukerman et al. (1) showed that meaningful changes in the severity of RLS symptoms are observed from onset of first week to the end of 6th week. However, in this study no meaningful change was detected after the 6th week to the end of 12th week. Therefore, 8 weeks exercise can suffice to prove the effectiveness of stretching exercises on the severity of RLS symptoms. In contrast, some researchers like Ohayon et al. (13) showed that relatively severe physical activity like high intensity sports before bedtime is meaningfully linked to the severity of RLS symptoms. Therefore, hemodialysis patients are recommended to avoid severe and hard sport programs especially before bedtime. According to this fact, the present study examined the effects of intradialytic mild or moderate stretching exercises. The present research also showed that most participants (78.51%) were jobless or retired that leads to inactivity. Since the RLS symptoms appear or worsen in immobility, physical activity improves the symptoms (13). Therefore, having a mild, regular exercise program can help to alleviate the RLS symptoms and will enhance the health of the people suffering from the syndrome. Scientists believe despite the fact that the real cause of RLS is still unknown; the syndrome is more common among diabetic patients (5). On the other hand, diabetes is one of causes of renal failure; it is more common among hemodialysis patients. This study proved this finding too. Because in our findings, the most common diseases in patients suffering from RLS were diabetes. Regarding the positive effect of stretching exercises on hemodialysis patients, nurses in charge of diabetic patients are also recommended to train them in the exercises to improve their life quality. This study has found that the most common drug used by the participants is amlodipine. This drug blocks calcium channels and is used for heart diseases. According to different studies, people who use calcium channel blockers are more vulnerable to RLS (5). It seems that there is a meaningful relation between this drug and RLS. It is recommended that further studies are needed to assess this relationship. An absolute majority of the participants in this study did not perform any exercises regularly. Although hard exercises, particularly before sleep can worsen the RLS symptoms and may lead to sleep disorder and chronic insomnia, moderate or light exercises program can be helpful during the day to improve RLS symptoms. 
This study showed that regular stretching exercises for the legs during hemodialysis can help alleviate the severity of RLS symptoms. The exercises have no side effects, cost nothing, and can be performed without assistance. By training patients in these exercises, nurses can help relieve RLS symptoms and thereby take a positive step towards healthcare objectives.\n\n**Question:** Compared to No exercises what was the result of Stretching exercises on Symptom severity after 4 weeks ?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
1013
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Comparing the Hemodynamic Effects of Spinal Anesthesia in Preeclamptic and Healthy Parturients During Cesarean Section\n\n ABSTRACT.BACKGROUND:\nDespite controversies about the safest anesthetic technique for cesarean delivery in severely preeclamptic women, there is evidence that supports the use of spinal anesthesia in this group of patients.\n\nABSTRACT.OBJECTIVES:\nThis prospective randomized clinical trial was designed to determine the hemodynamic effects of low-dose spinal bupivacaine and the incidence of spinal anesthesia-associated hypotension in severely preeclamptic and healthy parturients undergoing cesarean sections.\n\nABSTRACT.PATIENTS AND METHODS:\nSpinal anesthesia was performed with 10 mg (= 2 mL) hyperbaric 0.5% bupivacaine plus 2.5 μg sufentanil in two groups of patients after they received 500 mL of IV lactated Ringer's solution. Heart rate and blood pressure were recorded before spinal anesthesia and at two minutes intervals for 15 minutes after the block, and then every five minutes until the end of the surgery. Hypotension was defined as more than 25% of decline in the mean arterial blood pressure compared to the baseline in both groups (or systolic blood pressure < 100 mmHg in healthy parturients) and was treated with 5 mg IV ephedrine. The total amounts of intravenous administered fluid and the total doses of ephedrine were recorded for each patient as well.\n\nABSTRACT.RESULTS:\nThe incidence rate of hypotension among the preeclamptic patients was lower than that of the healthy parturients, despite the former group receiving smaller volumes of intravenous fluids (P < 0.05). The total doses of IV ephedrine for treating hypotension were significantly lower among the preeclamptic patients (3.2 mg in preeclamptic patients versus 7.6 mg in normotensive patients) (P = 0.02). The one-minute Apgar score was significantly lower for the preeclamptic parturients (8.4 ± 0.7 versus 7.2 ± 1.5) (P = 0.001), but there was no significant difference in the five-minute Apgar scores between the two groups.\n\nABSTRACT.CONCLUSIONS:\nOur results confirm that low-dose bupivacaine spinal anesthesia is associated with a lower risk of hypotension than previously believed, and it can therefore be safely used in severe preeclamptic women undergoing cesarean delivery.\n\nBODY.1. BACKGROUND:\nPregnancy-induced hypertension is a major cause of morbidity and mortality in obstetrics, complicating 3% - 8% of pregnancies. Severe preeclampsia poses a dilemma for anesthesiologists, and there is some controversy about the best anesthetic technique for cesarean delivery in such cases (1, 2). Because of the risks related to airway edema, difficulty with the airway or failed intubation, hypertensive response to direct laryngoscopy, and aspiration pneumonitis, general anesthesia is associated with more untoward outcomes in this particular group of patients (3, 4). 
When there is no contraindication for performing regional anesthesia, risk-benefit considerations strongly favor neuraxial techniques over general anesthesia for cesarean delivery in cases of severe preeclampsia. Regional anesthesia techniques have been widely used recently, however, spinal anesthesia, once considered contraindicated due to the common belief that the sudden and extensive sympathetic blockade following the subarachnoid block would result in severe hypotension and compromise uteroplacental blood flow in this group of patients (5-8). Although controversial, some studies have shown the effectiveness of colloid loading on reducing the incidence of hypotension in spinal anesthesia (9, 10), but vasopressor agents and volume loading, which are commonly used to manage spinal anesthesia-induced hypotension, could put the preeclamptic patients at increased risk of hypertension and pulmonary edema (6). Recent evidence has challenged this view, suggesting that spinal anesthesia may in fact be an appropriate choice for preeclamptic women when cesarean delivery is planned, as long as neuraxial anesthesia is not contraindicated (e.g., coagulopathy, eclampsia with persistent neurologic deficits) (2, 5, 8). Although the relative safety of the subarachnoid block in these patients has been demonstrated, there are few studies that compare the differences in the hemodynamic changes and newborn well-being after single-shot spinal anesthesia between preeclamptic and healthy parturients\n\nBODY.2. OBJECTIVES:\nThis prospective randomized clinical trial was designed to compare the hemodynamic effects and the incidence of spinal anesthesia-associated hypotension after spinal anesthesia with bupivacaine plus sufentanil in severely preeclamptic versus healthy parturients undergoing cesarean sections.\n\nBODY.3. PATIENTS AND METHODS:\nAfter obtaining institutional ethics committee approval, 80 parturients (37 healthy and 43 severely preeclamptic parturients) that were being cared for in our unit from April 2011 to July 2012 were enrolled in the study after providing informed consent. Severe preeclampsia was defined as a systolic arterial blood pressure of 160 mmHg or higher, or a diastolic blood pressure of 110 mmHg or higher, associated with proteinuria > 5 g in 24 hours. Patients who were excluded were those with coagulopathy (including those with platelet counts < 50,000), placental abruption, severe fetal distress, a history of allergy to local anesthetics, oliguria of less than 500 mL in 24 hours or persistently < 30 mL/hour, cerebral or visual disturbances, pulmonary edema, hemodynamic instability, local infection of the spinal injection site, or refusal of a spinal block. All patients in the preeclampsia group received a 4 g loading dose of intravenous magnesium sulfate (Mg SO4) followed by a 1 g/hour infusion for 24 hours for seizure prophylaxis. Intravenous hydralazine of 5 mg was given at 20-minute intervals to decrease diastolic blood pressure to approximately 90 mmHg. Before performing spinal anesthesia on each patient, preoperative fluid administration equal to 10 mL/kg of Ringer's lactate solution was administered over the course of 15 - 20 minutes. All patients received 1500 - 2000 mL lactated Ringer's solution after spinal anesthesia and during the operation. 
The volume of administered fluid was not restricted in the preeclamptic patients because of the contracted intravascular volume in this group of patients and the high incidence of hypotension caused by spinal anesthesia-induced sympathetic blockade. Patients were monitored with standard monitoring devices including automated blood pressure cuffs, electrocardiogram, and pulse oximetry. Spinal anesthesia was performed with 10 mg hyperbaric 0.5% bupivacaine plus 2.5 - 3 μg sufentanil (2.5 mL volumes) in two groups in the sitting position in the L3-L4 or L4-L5 vertebral interspaces. Each patient was then placed in the supine position with a left lateral tilt of 15-20 degrees. The height of the sensory block was assessed using a pinprick test, and a 10°-15° head down-tilt (Trendelenburg position) was initiated if a T4 sensory level was not achieved at 10 minutes after the spinal injection. After achieving an adequate sensory block (T4 level), the patient was prepared for surgery. Heart rate and blood pressure were recorded before performing spinal anesthesia at two-minute intervals for 15 minutes after the block, and then every five minutes until the end of the surgery. Hypotension was defined as more than 25% decline in mean arterial blood pressure (MAP) compared to the baseline in both groups (or systolic blood pressure (SBP) < 100 mmHg in healthy parturients) and was treated with 5 mg IV ephedrine. The total amounts of intravenous administered fluid and the total doses of ephedrine were recorded as well. Based on the findings of previous studies, we calculated that at least 38 patients per group were required to show a 25% difference in the incidence of hypotension, with 80% power at the 5% level. Data are presented as number, median and range, mean ± SD, or percentage as appropriate. Fisher's exact test was used for intergroup comparisons of the incidence of hypotension. Mean values of most of the quantitative study variables were compared by using the unpaired Student's t-test. A P value of less than 0.05 was considered to indicate statistical significance. Data entry and analysis were performed using SPSS software version 16.\n\nBODY.4. RESULTS:\nEighty patients (severe preeclampsia = 43 and healthy = 37) were studied. Demographic data, the times from spinal injection to delivery, the median sensory blocked levels at the time of incision, the volumes of estimated blood loss, and the surgical durations were similar between the two groups (Table 1). Table 1. Maternal, Neonatal, and Anesthetic Considerations a Variable Healthy Preeclampsia P Value \n n \n 37 43 \n Age, y \n 28.1 ± 5 29.3 ± 6.6 0.12 \n Gestational age, week \n 38.8 ± 1.2 34.3 ± 3.8 0.00 \n Base line MAP \n 101.1±10.1 119.5±15.9 0.02 \n Upper Sensory Level (Median, Range) \n T4 (T2-T5) T4 (T2-T5) 0.32 \n IV fluid, mL \n 2500 ± 0.1 2400 ± 0.2 0.000 \n Spinal puncture to delivery interval, minute \n 17.5 ± 2.5 17.1 ± 2.5 0.83 \n Ephedrine dose, mg \n 7.5 ± 9.2 3.2 ± 7.8 0.04 \n APGAR score 1-min (Range) \n 8.4 ± 0.7, (5 - 10) 7.2 ± 1.5, (2 - 10) 0.001 \n APGAR score 5-min (Range) \n 9.4 ± 0.7, (7 - 10) 8.6 ± 1.1, (5 - 10) 0.35 a Values are expressed as mean ± SD unless otherwise indicated. The mean gestational age and mean one-minute Apgar scores in the patients with severe preeclampsia were significantly lower than those of the healthy parturients (Table 1). However, there was no significant difference in the five-minute APGAR score between the two groups. 
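The hypotension criterion defined above (a fall in MAP of more than 25% from baseline or, in healthy parturients only, a systolic pressure below 100 mmHg) can be written out explicitly. This is a minimal illustrative sketch; the function and parameter names are my own and do not come from the study protocol:

```python
def is_hypotensive(map_now: float, map_baseline: float,
                   sbp_now: float, healthy_parturient: bool) -> bool:
    """Trial definition of hypotension: MAP drop > 25% from baseline,
    or SBP < 100 mmHg (the SBP rule applied only to healthy parturients)."""
    map_drop_fraction = (map_baseline - map_now) / map_baseline
    if map_drop_fraction > 0.25:
        return True
    return healthy_parturient and sbp_now < 100


# Example: healthy parturient whose MAP falls from 101 to 70 mmHg (~31% drop)
print(is_hypotensive(map_now=70, map_baseline=101, sbp_now=102,
                     healthy_parturient=True))  # True
```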
The incidence rate of hypotension in the preeclamptic patients (55.8%) was less than that of the healthy parturients (89.2%) (Table 2), despite the former receiving smaller volumes of intravenous fluids (2.4 versus 2.5 lit) (Table 1) (P = 0.001). Table 2. Incidence of Hypotension and Changes in Blood Pressure After Spinal Anesthesia in the Two Groups Variable Healthy Preeclampsia P Value \n Incidence of MAP hypotension (%) \n 33 (89.2%) 24 (55.8%) 0.001 \n Lowest SBP mmHg \n 98 ± 12.9 116 ± 18.6 0.02 \n Lowest DBP mmHg \n 47.5 ± 9.5 66.3 ± 14.1 0.03 \n Lowest MAP mmHg \n 64.9 ± 10.1 83.7 ± 14.9 0.008 The SBP, DBP, and MAP measured at the baseline were higher for the patients with preeclampsia, and the mean lowest SBP and MAP measured among the preeclamptic patients were consistently higher than the corresponding values among the healthy parturients (Table 2). Furthermore, the total doses of IV ephedrine for treating hypotension were significantly lower for the preeclamptic patients than for the normotensive patients (3.2 ± 7.8 mg versus 7.5 ± 9.2 mg) (P = 0.04).\n\nBODY.5. DISCUSSION:\nSpinal anesthesia-associated hypotension may occur in up to 64% - 100% of pregnant women undergoing cesarean delivery (2). Severely preeclamptic patients have been considered to be at higher risk of severe hypotension (1, 2, 5-8), and the concern of severe hypotension caused by subarachnoid block may often deter the anesthesiologist from choosing this technique for this group of patients. Epidural anesthesia has traditionally been regarded to be safer for preeclamptic parturients as it does not produce sudden hypotension. However, some studies have shown that the two techniques produce a similar incidence and severity of hypotension in preeclamptic parturients (6, 7, 11). There is growing interest in using spinal anesthesia on preeclamptic patients because of its simplicity, faster onset, lower dose of injected local anesthetic (which decreases the probability of systemic toxicity), and less tissue trauma caused by the use of a smaller gauge spinal needle (12-14). As a result of this interest, a number of studies have been conducted to show the hemodynamic consequences of spinal anesthesia in patients with preeclampsia. A prospective study by Aya et al. found that the risk of hypotension following spinal anesthesia in preeclamptic patients was significantly lower than the risk among healthy term parturients (17% in preeclamptic parturients and 53% in healthy parturients) (2). In another study, the same author suggested that the risk of hypotension following a subarachnoid block in preeclampsia was related to preeclampsia-associated factors rather than a small uterine size (15). Similar to the studies by Aya et al., the incidence of hypotension in severely preeclamptic patients undergoing spinal anesthesia for cesarean delivery was found to be significantly lower in comparison to the rate among healthy parturients (55% versus 89%) in our study. Factors such as difference in gestational age, the carrying of a smaller fetus, less aortocaval compression, sympathetic hyperactivity, and high vascular tone might have led to this finding (1, 2, 8, 16). The injection of different doses of bupivacaine (10 mg versus 8 - 12 mg) for the induction of the subarachnoid block and the different criteria for defining hypotension (a 25% versus 30% decline to baseline MAP) might explain why the incidence of hypotension was higher in both groups in our study compared to the corresponding rates in Aya et al.'s study. 
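For reference, the headline between-group difference in Table 2 can be re-derived from the raw counts. This is a hedged illustrative sketch (the paper states that Fisher's exact test was used for the incidence comparison, but the call below is my own reconstruction, not the authors' SPSS output):

```python
from scipy.stats import fisher_exact

# Hypotension after spinal anesthesia (Table 2): (hypotension, no hypotension)
healthy = (33, 4)        # 33/37 = 89.2%
preeclampsia = (24, 19)  # 24/43 = 55.8%

odds_ratio, p_value = fisher_exact([healthy, preeclampsia])
print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}")
# Expected to land close to the reported P = 0.001
```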
According to two other studies conducted by Mendes et al. (17) and Saha et al. (8), the hemodynamic changes and newborn well-being appeared to be comparable in severely preeclamptic and healthy parturients submitted to spinal anesthesia for cesarean section, and spinal anesthesia seemed to be a safe option for patients with severe preeclampsia. The ephedrine requirement for treatment of spinal anesthesia-induced hypotension in preeclampsia has been reported to be lower than that required by healthy parturients (2, 18). Preeclamptics have been reported as requiring significantly less phenylephrine to treat hypotension as well (8). These results were comparable to our findings, in that the total doses of IV ephedrine for treating hypotension were significantly lower for the preeclamptic patients (3.2 ± 7.8 mg) than for the normotensive patients (7.5 ± 9.2 mg) (P = 0.04). There is little evidence in the current literature supporting the use of phenylephrine as the vasopressor of choice in high-risk pregnancies such as those involving preeclampsia (19), so we chose to use ephedrine for treating hypotension in our patients. More studies are needed to investigate the effects of vasopressors while considering the influence on feto-maternal physiology in patients with preeclampsia. The results of a review by Dyer et al. showed that after spinal anesthesia for cesarean section, patients with preeclampsia had a lower susceptibility to hypotension and less impairment of cardiac output than healthy parturients (20). In a prospective observational study on 15 parturients with severe preeclampsia, no clinically significant change in cardiac output was shown after the subarachnoid block (21). The focus of our study was on the blood pressure changes during spinal anesthesia in the preeclamptic patients, and therefore we did not measure the cardiac output fluctuations in our patients. Further studies with larger sample sizes evaluating cardiac output are needed for better understanding of hemodynamic changes during spinal anesthesia in this group of patients. It is believed that the incidence of spinal anesthesia-induced hypotension is related to the local anesthetic dose, so one particular strategy to minimize the hemodynamic disruption after spinal anesthesia involves using small intrathecal local anesthetic doses. In a pilot study which compared the hemodynamic consequences of two doses of spinal bupivacaine (7.5 mg versus 10 mg) for cesarean delivery in those with severe preeclampsia, pre-delivery MAP was lower and the ephedrine requirements were greater in the 10 mg group (22). In another study, Roofthoof and Van de Velde have shown that when low-dose spinal anesthesia (6.5 mg bupivacaine) was administered with sufentanil as part of a combined spinal-epidural (CSE) technique in shorter surgeries (less than 60 minutes), the need for epidural supplementation was rare (23). The ED95 of intrathecal bupivacaine coadministered with intrathecal 2.5 μg sufentanil using CSE anesthesia for cesarean section in severely preeclamptic patients was reported to be 8.82 mg in another study, and using smaller doses of intrathecal bupivacaine in the patients resulted in a decrease of incidences of maternal hypotension and vasopressor requirements (24). However, no studies have compared CSE with single-shot spinal anesthesia in severe preeclampsia, and further research is needed to elucidate the best strategy to optimize the hemodynamics and uteroplacental perfusion in this particular group of patients. 
Considering the neonatal outcomes after various anesthesia techniques in cesarean delivery among preeclamptic patients, no statistically significant difference was found in the one- and five-minute Apgar scores and the umbilical artery blood gas markers between the two groups of patients receiving spinal or general anesthesia (25). Other studies in support of subarachnoid block have also shown that transient neonatal depression and birth asphyxia are more common among preeclamptic women who have received general anesthesia (26). Comparing umbilical arterial fetal base deficit and other markers of maternal and neonatal well-being in 70 preeclamptic patients undergoing cesarean delivery who were randomized into groups receiving either spinal or general anesthesia, the spinal group had a higher mean umbilical arterial base deficit and a lower median umbilical arterial pH, but other markers of a compromised neonatal condition, including the requirement for neonatal resuscitation, an Apgar score < 7, an umbilical arterial pH < 7.2, and the need for neonatal intermittent positive pressure ventilation were the same among the two groups (27). In comparison with healthy subjects, patients with severe preeclampsia had a younger gestational age (34 weeks versus 39 weeks) in our study, which is one of the likely causes of the lower one-minute Apgar scores of the neonates among the first group. Although there was evidence as early as 1950 that preeclampsia attenuates spinal anesthesia-induced hypotension, it has taken a long time for clinical trials to demonstrate the safety of spinal and CSE anesthesia in preeclamptic parturients. Recently, after five decades of research, the relationship between spinal anesthesia, pre-eclampsia, and hypotension can be properly acknowledged and put into clinical practice (28). Because of an altered balance of vascular tone, reduced responses to endogenous pressors, and increased synthesis of vasodilator prostaglandins and nitric oxide, the normal pregnant patient is very sensitive to spinal anesthesia. These effects increase dependence on sympathetic vascular tone in normal pregnancy, and this can be the main cause of spinal anesthesia-induced hypotension in healthy parturients, while damaged vascular epithelium results in persistent vasoconstriction in preeclampsia (8, 16). There is a dramatic increase in the use of spinal anesthesia for cesarean delivery in severe preeclampsia that could be related to the documented safety of subarachnoid block in this group of patients. Therefore, single-shot subarachnoid block may be a good choice for cesarean delivery in patients with severe preeclampsia, since it has been shown to be safe for both the mother and the neonate (28).\n\nBODY.5. DISCUSSION.5.1. CONCLUSION:\nOur results have also confirmed that single-shot low-dose bupivacaine spinal anesthesia is associated with a lower risk of hypotension and vasopressor requirements in comparison to the rates of healthy subjects, and could be safely used in patients with severe preeclampsia undergoing cesarean delivery. However, more studies with the CSE technique using smaller doses of local anesthetics and larger sample sizes are suggested. 
Further research is needed to find the best strategies to optimize hemodynamics and uteroplacental perfusion in severely preeclamptic parturients during spinal anesthesia for cesarean delivery.\n\n**Question:** Compared to Low-dose bupivacaine spinal anesthesia in healthy parturients what was the result of Low-dose bupivacaine spinal anesthesia in pre-eclamptic parturients on Five-minute Apgar score?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
1015
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: The Effect of Desmopressin on the Amount of Bleeding in Patients Undergoing Coronary Artery Bypass Graft Surgery with a Cardiopulmonary Bypass Pump After Taking Anti-Platelet Medicine\n\n ABSTRACT.BACKGROUND:\nCoronary artery bypass grafting (CABG) is a common surgical intervention at the end-stages of coronary artery occlusion disease. Despite the effectiveness of CABG, it may have particular complications, such as bleeding during and after surgery. So far, there have been many drugs used to reduce bleeding.\n\nABSTRACT.OBJECTIVES:\nThis study aimed at investigating the effects of desmopressin on the amount of bleeding in patients undergoing CABG with a cardiopulmonary bypass pump (CPBP) who were taking anti-platelet medicine.\n\nABSTRACT.METHODS:\nOne hundred patients scheduled for elective CABG with a CPBP were included in a prospective, placebo-controlled, double-blinded clinical trial study. They were randomly divided into two groups. One group received desmopressin (40 μg) and the other group received a placebo (isotonic saline). Seven patients were excluded from the study, and 47 and 46 patients participated in the desmopressin and control groups, respectively. The methods of monitoring and the anesthetic techniques were similar in both groups, and all surgeries were performed by one surgeon. Variables including age, gender, pump time, aortic clamp time, duration of surgery, complications (e.g., nausea and vomiting, blood pressure changes), the necessity to receive blood products, and coagulation tests (prothrombin time, partial thromboplastin time, international normalized ratio, and bleeding time) were assessed. Data were statistically analyzed with SPSS software version 17.\n\nABSTRACT.RESULTS:\nThere was no significant difference between the groups regarding age, gender, pump time, clamp time, duration of surgery, complications, and the changes in hemoglobin and coagulation test measurements (P > 0.05). No significant difference was noted between the groups regarding the rate of bleeding after surgery (359.3 ± 266.2 in group D vs. 406.3 ± 341.6 in group P (control group); P = 0.208). However, the platelet changes after surgery in both groups were significantly different. The analysis revealed that the rate of thrombocytopenia after surgery was higher in the control group (P = 0.012).\n\nABSTRACT.CONCLUSIONS:\nOur study showed that desmopressin could not reduce the amount of blood loss after CABG. Also, desmopressin did not have a significant effect on coagulation status. Therefore, based on the results of our study, it seems that the use of this medication cannot be a helpful for patients with any indication for CABG.\n\nBODY.1. BACKGROUND:\nCoronary artery disease (CAD) is one of the leading causes of death worldwide. Patients with CAD require coronary artery revascularization as well as other medical treatments (1). 
Coronary artery bypass grafting (CABG) is a common interventional surgery in order to treat end-stage coronary artery occlusion disease. Although CABG is an effective method, it may also come with particular complications during and after the surgery (1, 2). CABG may lead to disorders in platelet count, coagulation, and fibrinolysis. Excessive amounts of bleeding after surgery is a common problem that sometimes requires blood transfusions or even reoperation (3-6). Desmopressin is an artificial form of vasopressin which is commonly used to prevent bleeding in hemophilia and Von Willebrand disease (7, 8). Previous investigations have shown that desmopressin can decrease the amount of blood products needed in transfusion for cardiac surgery (9). However, some studies have indicated that desmopressin was not effective when used as a prophylaxis (10, 11). Still, some other studies have shown that using desmopressin could reduce bleeding in post-CABG patients, patients with long cardiopulmonary bypass pump (CPBP) times, and high-risk patients who had taken anti-platelet medication (9, 12-16).\n\nBODY.2. OBJECTIVES:\nAccording to the general importance of the issue and the conflicting results on the effects of desmopressin on bleeding after cardiac surgery, and also concerning the lack of previous studies which have assessed the effects of intranasal desmopressin on bleeding after cardiac surgery, we aimed to investigate the effects of desmopressin on the bleeding rates in patients who had undergone CABG with a CPBP after taking anti-platelet medicine.\n\nBODY.3. METHODS:\nWe conducted a randomized double-blind clinical trial in the anesthesiology research center of Guilan University of Medical Sciences (GUMS) in Rasht, Iran. This study was approved by the ethics committee of GUMS with the reference number of 1930252810 and registered in the Iranian registry of clinical trials (IRCT) under the number IRCT201409304345N3. Inclusion criteria were Class II-III status according to the American society of anesthesiologists' physical status classification system, 3-vessel coronary artery disease, ejection fraction of ≥ 40%, reception of anti-platelet medicines including aspirin of 80 mg/day and clopidogrel of 75 mg/day before surgery, no history of coagulopathy or renal impairment, and having been scheduled for CABG with a CPBP. Exclusion criteria were a pump time > 120 minutes, concomitant valve procedure and CABG, inotrope administration for separating the patient from the cardiopulmonary pump, need for anticoagulant therapy during and/or in the post-operative period, and coronary endarterectomy. The sample size was determined according to the following formula derived from the study by Gratz et al. (17): α = 0.05 Z1-α/2 = 1.96 (95% confidence interval was noted) β = 0.20 Z1-β = 0.85 μ1 = 1176, S1 = 674 μ2 = 833, S2 = 311 Equation 1.n=(Z1-α2+Z1\n- β)2(S12+S22)(μ1\n-μ2)2 n = 50 Patients in each group. Using the method of random blocking, 100 patients were divided into two groups (desmopressin: group D and placebo: group P) with 50 patients in each group. At the time of hospitalization, the type of surgery and anesthesia method were explained for patients, and written informed consent was obtained. In the preoperative period, blood levels of hemoglobin (Hb), platelets, prothrombin time (PT), partial thromboplastin time (PTT), international normalized ratio (INR), and fibrinogen were measured and recorded. The anesthesiologist advised patients to fast the night before surgery. 
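The sample-size expression printed as Equation 1 above has lost its formatting in this copy. Assuming it is the standard formula for detecting a difference between two means, which matches the parameters listed just before it, a cleaned-up rendering is:

```latex
n = \frac{\left(Z_{1-\alpha/2} + Z_{1-\beta}\right)^{2}\left(S_{1}^{2} + S_{2}^{2}\right)}{\left(\mu_{1} - \mu_{2}\right)^{2}},
\qquad
Z_{1-\alpha/2} = 1.96,\quad Z_{1-\beta} = 0.85,\quad
\mu_{1} = 1176,\; S_{1} = 674,\quad \mu_{2} = 833,\; S_{2} = 311,
```

with which the authors report enrolling 50 patients per group.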
For premedication, oral lorazepam of 1 mg was administered the night before and one hour prior to surgery. An hour before induction of anesthesia, intramuscular morphine sulfate of 0.1 mg/kg was injected into all patients. Thirty minutes before anesthesia, the desmopressin or placebo was administered. In group D, two puffs of intranasal desmopressin (Sina Vista Daroo manufacturing, Tehran, Iran) were sprayed. Each puff contained 10 μg of desmopressin. In group P, 2 puffs of intranasal saline (normal saline in a desmopressin container) were sprayed. The patients and the anesthesiologist had no awareness about the drug type administered during and after surgery. Also, the laboratory technician was unaware of the drug type. The researcher was the only one who was aware of the medication so that he could take any necessary actions if complications occurred. Patients were monitored under standard protocols including 7-lead electrocardiography (EKG), pulse oximetry, and bispectral indexing (BIS) in the operating room. Before induction of anesthesia, the cardiac anesthesiologist inserted one peripheral venous catheter, a left radial artery catheter, and a central venous catheter under local anesthesia with lidocaine of 1%. Patients received 5 - 7 ml/kg isotonic saline before induction of anesthesia. After preoxygenation, anesthesia was induced by slow injection of midazolame of 0.05 mg/kg and sufentanil of 1 μg/kg (over the course of 10 minutes) intravenously, followed by infusion of 0.2 mg/kg cisatracurium for muscle relaxation. After intubation, anesthesia was maintained by propofol of 50 - 75 μg/kg/min, sufentanil of 0.2 μg/kg/h, and atracurium of 0.6 μg/kg/h. Then, a nasopharyngeal thermometer was inserted, and in order to maintain the depth of anesthesia, BIS during surgery was maintained between 40 and 60. Then, patients underwent median sternotomy, a standard technique was used to establish a CPBP. Activated clotting time (ACT) was also measured following the administration of 300 μg/kg heparin. Cardiopulmonary bypass was initiated if ACT ≥ 480 seconds. Upon completion of the grafts, if vital signs were stable, patients were separated from the pump, and the circulating heparin was antagonized by protamine sulfate. After the surgery, patients were transferred to the intensive care unit (ICU), resuming ventilation under close observation. Patients were extubated about six to eight hours after operation if they were awake, calm, had normal arterial blood gas analysis, and acceptable breathing status. If the patients had VAS > 3, intravenous morphine of 0.1 mg/kg was injected. Six hours after transferring the patients to the ICU, in group D, two puffs of intranasal desmopressin equivalent to 20 μg, and in group P, two puffs of intranasal placebo (isotonic saline), were sprayed. If the patients had nausea and vomiting (desmopressin complications), metoclopramide of 0.1 mg/kg was injected. In the ICU, the amount of blood in the chest bottle, Hb level, and platelet count were observed. If the Hb level reduced to less than 10gr/dL, or if the platelet level was less than 100,000/mm3, packed red blood cells (PRBC) or platelet concentrate were injected, respectively. Investigators measured the levels of Hb, platelets, fibrinogen, PT, PTT, and BT 24 hours after surgery.\n\nBODY.3. METHODS.3.1. STATISTICAL ANALYSIS:\nAfter collection, data were entered into SPSS software version 17. 
Data were reported by descriptive statistics (number, percent, mean, and standard deviation) and analyzed with a chi-squared test, an independent T-test, a Mann-Whitney U test, and a paired T-test. For intragroup comparison of variables after surgery, ANCOVA was used. A P value < 0.05 was considered as statistically significant, and a 95% confidence interval was noted.\n\nBODY.4. RESULTS:\nOf the 100 patients initially enrolled in this study, seven patients were excluded. Therefore, 47 and 46 patients participated in groups D and P, respectively. Two patients in group D and two patients in group P were excluded as a result of bleeding and reoperation required to control it. A prolonged cardiopulmonary pump time resulted in the exclusion of one patient in group D and two patients in group P (Figure 1). Figure 1.Enrollment and Outcomes There were no significant differences between the groups regarding age, gender, body mass index (BMI), ejection fraction (Table 1), duration of surgery, pump time, and aortic clamp time (Table 2). Also, there was no significant difference between the groups regarding the outbreak of complications such as nausea, vomiting, blood pressure alteration, and the amount of bleeding (P > 0.05). Table 1. Demographic Characteristics of the Patients Variables Group D Group P P Value \n Age, y \n 54.1 ± 7.58 56.92 ± 7.33 0.074 a \n Height, cm \n 166.3 ± 9.48 167.46 ± 3.04 0.215 a \n Weight, kg \n 71.8 ±7.63 72.4 ± 5.3 0.65 a \n BMI \n 25.93 ± 2.29 25.82 ± 1.66 0.79 a \n Ejection fraction (EF) \n 47.19 ± 4.18 47.86 ± 4.81 0.47 a \n Gender \n 0.217 b Male 35 (74.5%) 39 (84.8%) Female 12 (25.5%) 7 (15.2%) a Independent T-test. b Chi-squared test. Table 2. The Surgery Times, Pump Times, and Clamp Times of the Patients Variables Group D Group P P Value a \n Duration of surgery, h \n 2.89 ± 0.42 2.92 ± 0.44 0.73 \n Pump time, min \n 57.2 ± 9.01 56.7 ± 8.83 0.79 \n Aortic clamp time, min \n 37.34 ± 9.43 36.08 ± 9 0.51 a Independent T-test. Thirty-nine patients received PRBC, and 13 patients received platelet concentrate. The results showed no significant difference between these groups (p=0.472) (Table 3). Table 3. Frequency of Complications in the Two Groups Variables Group D Group P P Value \n Hypotension \n 0.533 a Yes 6 (12.8%) 8 (17.4%) No 41 (87.2%) 38 (82.6%) \n Nausea \n 0.109 a Yes 11 (23.4%) 5 (10.9%) No 36 (76.6%) 41 (89.1%) \n Vomiting \n 0.57 a Yes 2 (4.3%) 1 (2.2%) No 45 (95.7%) 45 (97.8%) \n Bleeding volume, mL \n 359.3 ± 266.2 406.3 ± 341.6 0.208 b \n Blood transfusion, packed cell \n 0.472 a Yes 18 (38.3%) 21 (45.7%) No 29 (61.7%) 25 (54.3%) \n Platelet transfusion \n 0.797 a Yes 7 (14.9%) 6 (13%) No 40 (85.1%) 40 (87%) \n Packed red blood cell transfusion, U \n 0.65 ± 1.06 0.82 ± 1.32 0.596 b a Chi-squared test. b Mann-Whitney U test. Table 4. 
Changes in Hemoglobin, Platelet, and Coagulation Results Before and After Surgery in the Two Groups Variables Group D Group P Estimate Within Group , P Value a Estimates Between Groups b \n Hemoglobin \n 0.134 Preoperative 13.63 ± 1.78 13.12 ± 2.18 0.221 Postoperative 11.71 ± 1.45 11.15 ± 1.31 0.058 Estimate within group, value c 0.0001 0.0001 \n Platelets \n 0.012 Preoperative 243489 ± 74577 240239 ± 46991 0.802 Postoperative 226176 ± 91502 194456 ± 46514 0.038 Estimate within group, P value c 0.042 0.0001 \n INR \n 0.665 Preoperative 1.0 ± 0.21 1.05 ± 0.14 0.86 Postoperative 1.11 ± 0.19 1.14 ± 0.22 0.964 Estimate within group, P value c 0.005 0.011 \n PT \n 0.463 Preoperative 12.85 ± 0.71 12.82 ± 0.62 0.88 Postoperative 13.28 ± 1.29 13.55 ± 2.12 0.941 Estimate within group, P value c 0.017 0.011 \n PTT \n 0.588 Preoperative 30.97 ± 5.96 30.98 ± 7.47 0.134 Postoperative 29.73 ± 8.63 28.77 ± 8.55 0.517 Estimate within group, P value c 0.029 0.085 \n BT \n 0.129 Preoperative 1.3 ± 0.39 1.42 ± 0.42 0.223 Postoperative 1.69 ± 0.56 1.55 ± 0.58 0.173 Estimate within group, P valuec 0.0001 0.285 \n Fibrinogen \n 0.265 Preoperative 302.9 ± 34.3 288.7 ± 45.6 0.091 Postoperative 261.4 ± 37.4 247.8 ± 35.1 0.075 Estimate within group, P value c 0.0001 0.0001 Abbreviation: BT, bleeding time; INR, international normalized ratio; PT, prothrombin time; PTT, partial thromboplastin time. a Independent T-test. b ANCOVA. c Paired T-test. A paired T-test revealed that the Hb levels decreased significantly in each group after surgery (P = 0.0001), but the difference was not statistically significant between the groups (0.134). The alterations of platelets in the D and P groups were statistically significant (P = 0.042, P = 0.0001, respectively). A T-test showed no significant difference in mean platelet levels between the groups before surgery (P = 0.802), but there was a significant difference between the groups in terms of the mean platelet levels after surgery (P = 0.038). The PT, PTT, INR, BT, and fibrinogen changes before and after surgery showed no significant differences between the groups (P > 0.05).\n\nBODY.5. DISCUSSION:\nSevere bleeding after cardiac surgery consistently remains a challenge for cardiac surgeons and anesthesiologists. Using a cardiopulmonary pump in cardiac surgery may induce blood coagulation disorders and platelet dysfunction, requiring transfusion of proper functioning platelets or the administration of other medicines (3-6). Up until now, diverse products including desmopressin, albumin, recombinant factor VII, and tranexamic acid have been used to reduce bleeding after cardiac surgery (12, 15, 18-20). Desmopressin is an artificial form of vasopressin or an antiduretic hormone (ADH) analogue that exerts its anti-hemorrhagic effects through increased plasma levels of factor VΙΙΙ and increased activity of the Von Willebrand factor. It also has a direct effect on the blood vessel. Our study showed no significant differences between the groups regarding the demographic characteristics, pump times, clamp times, and durations of surgery. Also, the frequency of complications such as nausea, vomiting, and hypotension was not significantly different between the groups. Furthermore, no significant difference was noted between the groups regarding the amount of blood collected in the chest bottle and the number of transfused PRBC units. Pleym et al. observed no significant difference in the amount of blood loss after prescribing 0.3 mcg/kg of desmopressin in patients undergoing CABG (21). 
They reported 606 and 601 cc of blood loss in the desmopressin and saline groups, respectively. They finally declared that taking desmopressin had no significant effect on bleeding after CABG in patients treated with aspirin the day before surgery. Their finding was consistent with our study, but with increasing doses of desmopressin, different results might be achieved. Cattaneo et al. showed that although the use of desmopressin reduced postoperative bleeding, it did not reduce the need for blood transfusions (12). In both groups, the same amount of blood was transfused. Also, in this study, researchers noted that the effectiveness of desmopressin depended on the amount of bleeding, and by an increase in bleeding of about 1.1 liters, the effect of desmopressin on reducing blood loss could be more apparent (12). It seems that low rates of bleeding could indicate the supposed ineffectiveness of desmopressin in our study (359.3 ± 266.2 mL in group D, and 406.3 ± 341.6 in Group P). In our study, it was observed that Hb levels dropped after surgery in each group, and this amount was significant. However, comparing Hb levels before and after surgery in both groups indicated no significant difference. Also, Pleym et al. noted similar results (21). Furthermore, platelet counts after surgery decreased significantly in both groups in our study. Platelet counts in group P were lower than those of group D, but this difference was not clinically considerable, and in both groups, postoperative platelet counts were enough to maintain homeostasis. In contrast to our study, Pleym et al. reported a different result (21). They indicated that despite a decrease in platelets after surgery in both groups, the rate of thrombocytopenia was higher in the desmopressin group. This different result might be due to other factors such as the cardiopulmonary pump effect. In addition, there was no significant difference between groups in BT before and after surgery (P = 0.129). In a study by Keyl et al. which examined the effects of aspirin and intravenous desmopressin of 0.4 μg/kg on platelet function in patients undergoing CABG, they observed that desmopressin did not affect platelet activity (22). In our study, 40 μg of intranasal desmopressin in the form of four puffs was administered before (two puffs) and six hours (two puffs) after surgery. Intranasal desmopressin at high dosages may produce an elevation of blood pressure, which disappears with reduction in dosage. Therefore, this drug should be used with caution in patients with coronary artery insufficiency and/or hypertensive cardiovascular disease and heart failure. For this reason, in our study, the lower dosage of desmopressin was used. In Keyl et al.'s study, similar to ours, desmopressin did not improve platelet function. However, their study focused on platelet activity more than platelet count. In our study, platelet count and BT were also investigated. Our results showed that PT and INR increased after surgery, but this difference was not significant when comparing the two groups. The only variable with a downward trend in coagulation parameters was PTT. However, a significant difference in this variable was not observed between the groups. Generally, desmopressin did not improve platelet count or function, nor did it improve coagulation status. In a review study, Wademan and Galvin (23) examined the effects of desmopressin in reducing bleeding after heart surgery. 
They announced that desmopressin could not induce beneficial effects either before or after heart surgery. In their study, desmopressin improved the hemostatic status in those patients consuming aspirin for seven days prior surgery, having a pump time of more than 140 minutes, or having preoperative platelet dysfunction. Also, in a meta-analysis, Levi et al. (24) showed that desmopressin might cause a slight reduction in the amount of bleeding after heart surgery without any clinical improvements.\n\nBODY.5. DISCUSSION.5.1. LIMITATIONS:\nIn this study, we investigated the platelet counts and BT. It is recommended that in future studies, further platelet function tests such as optical aggregometry, flow cytometry, or PFA-100 could be used. Also, it seems that measuring the Von Willebrand factor and factor VΙΙΙ levels before and after surgery could prove to be valuable.\n\nBODY.5. DISCUSSION.5.2. CONCLUSION:\nOur study showed that desmopressin could not reduce the amount of blood loss after CABG surgery. Also, desmopressin did not have a significant effect on coagulation status. Therefore, based on our study, it seems that the use of this medication cannot be considered as helpful in candidates for CABG surgery.\n\n**Question:** Compared to Placebo what was the result of Desmopressin on Bleeding rate after surgery?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
1017
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: The Effect of Desmopressin on the Amount of Bleeding in Patients Undergoing Coronary Artery Bypass Graft Surgery with a Cardiopulmonary Bypass Pump After Taking Anti-Platelet Medicine\n\n ABSTRACT.BACKGROUND:\nCoronary artery bypass grafting (CABG) is a common surgical intervention at the end-stages of coronary artery occlusion disease. Despite the effectiveness of CABG, it may have particular complications, such as bleeding during and after surgery. So far, there have been many drugs used to reduce bleeding.\n\nABSTRACT.OBJECTIVES:\nThis study aimed at investigating the effects of desmopressin on the amount of bleeding in patients undergoing CABG with a cardiopulmonary bypass pump (CPBP) who were taking anti-platelet medicine.\n\nABSTRACT.METHODS:\nOne hundred patients scheduled for elective CABG with a CPBP were included in a prospective, placebo-controlled, double-blinded clinical trial study. They were randomly divided into two groups. One group received desmopressin (40 μg) and the other group received a placebo (isotonic saline). Seven patients were excluded from the study, and 47 and 46 patients participated in the desmopressin and control groups, respectively. The methods of monitoring and the anesthetic techniques were similar in both groups, and all surgeries were performed by one surgeon. Variables including age, gender, pump time, aortic clamp time, duration of surgery, complications (e.g., nausea and vomiting, blood pressure changes), the necessity to receive blood products, and coagulation tests (prothrombin time, partial thromboplastin time, international normalized ratio, and bleeding time) were assessed. Data were statistically analyzed with SPSS software version 17.\n\nABSTRACT.RESULTS:\nThere was no significant difference between the groups regarding age, gender, pump time, clamp time, duration of surgery, complications, and the changes in hemoglobin and coagulation test measurements (P > 0.05). No significant difference was noted between the groups regarding the rate of bleeding after surgery (359.3 ± 266.2 in group D vs. 406.3 ± 341.6 in group P (control group); P = 0.208). However, the platelet changes after surgery in both groups were significantly different. The analysis revealed that the rate of thrombocytopenia after surgery was higher in the control group (P = 0.012).\n\nABSTRACT.CONCLUSIONS:\nOur study showed that desmopressin could not reduce the amount of blood loss after CABG. Also, desmopressin did not have a significant effect on coagulation status. Therefore, based on the results of our study, it seems that the use of this medication cannot be a helpful for patients with any indication for CABG.\n\nBODY.1. BACKGROUND:\nCoronary artery disease (CAD) is one of the leading causes of death worldwide. Patients with CAD require coronary artery revascularization as well as other medical treatments (1). 
Coronary artery bypass grafting (CABG) is a common interventional surgery in order to treat end-stage coronary artery occlusion disease. Although CABG is an effective method, it may also come with particular complications during and after the surgery (1, 2). CABG may lead to disorders in platelet count, coagulation, and fibrinolysis. Excessive amounts of bleeding after surgery is a common problem that sometimes requires blood transfusions or even reoperation (3-6). Desmopressin is an artificial form of vasopressin which is commonly used to prevent bleeding in hemophilia and Von Willebrand disease (7, 8). Previous investigations have shown that desmopressin can decrease the amount of blood products needed in transfusion for cardiac surgery (9). However, some studies have indicated that desmopressin was not effective when used as a prophylaxis (10, 11). Still, some other studies have shown that using desmopressin could reduce bleeding in post-CABG patients, patients with long cardiopulmonary bypass pump (CPBP) times, and high-risk patients who had taken anti-platelet medication (9, 12-16).\n\nBODY.2. OBJECTIVES:\nAccording to the general importance of the issue and the conflicting results on the effects of desmopressin on bleeding after cardiac surgery, and also concerning the lack of previous studies which have assessed the effects of intranasal desmopressin on bleeding after cardiac surgery, we aimed to investigate the effects of desmopressin on the bleeding rates in patients who had undergone CABG with a CPBP after taking anti-platelet medicine.\n\nBODY.3. METHODS:\nWe conducted a randomized double-blind clinical trial in the anesthesiology research center of Guilan University of Medical Sciences (GUMS) in Rasht, Iran. This study was approved by the ethics committee of GUMS with the reference number of 1930252810 and registered in the Iranian registry of clinical trials (IRCT) under the number IRCT201409304345N3. Inclusion criteria were Class II-III status according to the American society of anesthesiologists' physical status classification system, 3-vessel coronary artery disease, ejection fraction of ≥ 40%, reception of anti-platelet medicines including aspirin of 80 mg/day and clopidogrel of 75 mg/day before surgery, no history of coagulopathy or renal impairment, and having been scheduled for CABG with a CPBP. Exclusion criteria were a pump time > 120 minutes, concomitant valve procedure and CABG, inotrope administration for separating the patient from the cardiopulmonary pump, need for anticoagulant therapy during and/or in the post-operative period, and coronary endarterectomy. The sample size was determined according to the following formula derived from the study by Gratz et al. (17): α = 0.05 Z1-α/2 = 1.96 (95% confidence interval was noted) β = 0.20 Z1-β = 0.85 μ1 = 1176, S1 = 674 μ2 = 833, S2 = 311 Equation 1.n=(Z1-α2+Z1\n- β)2(S12+S22)(μ1\n-μ2)2 n = 50 Patients in each group. Using the method of random blocking, 100 patients were divided into two groups (desmopressin: group D and placebo: group P) with 50 patients in each group. At the time of hospitalization, the type of surgery and anesthesia method were explained for patients, and written informed consent was obtained. In the preoperative period, blood levels of hemoglobin (Hb), platelets, prothrombin time (PT), partial thromboplastin time (PTT), international normalized ratio (INR), and fibrinogen were measured and recorded. The anesthesiologist advised patients to fast the night before surgery. 
For premedication, oral lorazepam of 1 mg was administered the night before and one hour prior to surgery. An hour before induction of anesthesia, intramuscular morphine sulfate of 0.1 mg/kg was injected into all patients. Thirty minutes before anesthesia, the desmopressin or placebo was administered. In group D, two puffs of intranasal desmopressin (Sina Vista Daroo manufacturing, Tehran, Iran) were sprayed. Each puff contained 10 μg of desmopressin. In group P, 2 puffs of intranasal saline (normal saline in a desmopressin container) were sprayed. The patients and the anesthesiologist had no awareness about the drug type administered during and after surgery. Also, the laboratory technician was unaware of the drug type. The researcher was the only one who was aware of the medication so that he could take any necessary actions if complications occurred. Patients were monitored under standard protocols including 7-lead electrocardiography (EKG), pulse oximetry, and bispectral indexing (BIS) in the operating room. Before induction of anesthesia, the cardiac anesthesiologist inserted one peripheral venous catheter, a left radial artery catheter, and a central venous catheter under local anesthesia with lidocaine of 1%. Patients received 5 - 7 ml/kg isotonic saline before induction of anesthesia. After preoxygenation, anesthesia was induced by slow injection of midazolame of 0.05 mg/kg and sufentanil of 1 μg/kg (over the course of 10 minutes) intravenously, followed by infusion of 0.2 mg/kg cisatracurium for muscle relaxation. After intubation, anesthesia was maintained by propofol of 50 - 75 μg/kg/min, sufentanil of 0.2 μg/kg/h, and atracurium of 0.6 μg/kg/h. Then, a nasopharyngeal thermometer was inserted, and in order to maintain the depth of anesthesia, BIS during surgery was maintained between 40 and 60. Then, patients underwent median sternotomy, a standard technique was used to establish a CPBP. Activated clotting time (ACT) was also measured following the administration of 300 μg/kg heparin. Cardiopulmonary bypass was initiated if ACT ≥ 480 seconds. Upon completion of the grafts, if vital signs were stable, patients were separated from the pump, and the circulating heparin was antagonized by protamine sulfate. After the surgery, patients were transferred to the intensive care unit (ICU), resuming ventilation under close observation. Patients were extubated about six to eight hours after operation if they were awake, calm, had normal arterial blood gas analysis, and acceptable breathing status. If the patients had VAS > 3, intravenous morphine of 0.1 mg/kg was injected. Six hours after transferring the patients to the ICU, in group D, two puffs of intranasal desmopressin equivalent to 20 μg, and in group P, two puffs of intranasal placebo (isotonic saline), were sprayed. If the patients had nausea and vomiting (desmopressin complications), metoclopramide of 0.1 mg/kg was injected. In the ICU, the amount of blood in the chest bottle, Hb level, and platelet count were observed. If the Hb level reduced to less than 10gr/dL, or if the platelet level was less than 100,000/mm3, packed red blood cells (PRBC) or platelet concentrate were injected, respectively. Investigators measured the levels of Hb, platelets, fibrinogen, PT, PTT, and BT 24 hours after surgery.\n\nBODY.3. METHODS.3.1. STATISTICAL ANALYSIS:\nAfter collection, data were entered into SPSS software version 17. 
Data were reported by descriptive statistics (number, percent, mean, and standard deviation) and analyzed with a chi-squared test, an independent T-test, a Mann-Whitney U test, and a paired T-test. For intragroup comparison of variables after surgery, ANCOVA was used. A P value < 0.05 was considered as statistically significant, and a 95% confidence interval was noted.\n\nBODY.4. RESULTS:\nOf the 100 patients initially enrolled in this study, seven patients were excluded. Therefore, 47 and 46 patients participated in groups D and P, respectively. Two patients in group D and two patients in group P were excluded as a result of bleeding and reoperation required to control it. A prolonged cardiopulmonary pump time resulted in the exclusion of one patient in group D and two patients in group P (Figure 1). Figure 1.Enrollment and Outcomes There were no significant differences between the groups regarding age, gender, body mass index (BMI), ejection fraction (Table 1), duration of surgery, pump time, and aortic clamp time (Table 2). Also, there was no significant difference between the groups regarding the outbreak of complications such as nausea, vomiting, blood pressure alteration, and the amount of bleeding (P > 0.05). Table 1. Demographic Characteristics of the Patients Variables Group D Group P P Value \n Age, y \n 54.1 ± 7.58 56.92 ± 7.33 0.074 a \n Height, cm \n 166.3 ± 9.48 167.46 ± 3.04 0.215 a \n Weight, kg \n 71.8 ±7.63 72.4 ± 5.3 0.65 a \n BMI \n 25.93 ± 2.29 25.82 ± 1.66 0.79 a \n Ejection fraction (EF) \n 47.19 ± 4.18 47.86 ± 4.81 0.47 a \n Gender \n 0.217 b Male 35 (74.5%) 39 (84.8%) Female 12 (25.5%) 7 (15.2%) a Independent T-test. b Chi-squared test. Table 2. The Surgery Times, Pump Times, and Clamp Times of the Patients Variables Group D Group P P Value a \n Duration of surgery, h \n 2.89 ± 0.42 2.92 ± 0.44 0.73 \n Pump time, min \n 57.2 ± 9.01 56.7 ± 8.83 0.79 \n Aortic clamp time, min \n 37.34 ± 9.43 36.08 ± 9 0.51 a Independent T-test. Thirty-nine patients received PRBC, and 13 patients received platelet concentrate. The results showed no significant difference between these groups (p=0.472) (Table 3). Table 3. Frequency of Complications in the Two Groups Variables Group D Group P P Value \n Hypotension \n 0.533 a Yes 6 (12.8%) 8 (17.4%) No 41 (87.2%) 38 (82.6%) \n Nausea \n 0.109 a Yes 11 (23.4%) 5 (10.9%) No 36 (76.6%) 41 (89.1%) \n Vomiting \n 0.57 a Yes 2 (4.3%) 1 (2.2%) No 45 (95.7%) 45 (97.8%) \n Bleeding volume, mL \n 359.3 ± 266.2 406.3 ± 341.6 0.208 b \n Blood transfusion, packed cell \n 0.472 a Yes 18 (38.3%) 21 (45.7%) No 29 (61.7%) 25 (54.3%) \n Platelet transfusion \n 0.797 a Yes 7 (14.9%) 6 (13%) No 40 (85.1%) 40 (87%) \n Packed red blood cell transfusion, U \n 0.65 ± 1.06 0.82 ± 1.32 0.596 b a Chi-squared test. b Mann-Whitney U test. Table 4. 
Changes in Hemoglobin, Platelet, and Coagulation Results Before and After Surgery in the Two Groups Variables Group D Group P Estimate Within Group , P Value a Estimates Between Groups b \n Hemoglobin \n 0.134 Preoperative 13.63 ± 1.78 13.12 ± 2.18 0.221 Postoperative 11.71 ± 1.45 11.15 ± 1.31 0.058 Estimate within group, value c 0.0001 0.0001 \n Platelets \n 0.012 Preoperative 243489 ± 74577 240239 ± 46991 0.802 Postoperative 226176 ± 91502 194456 ± 46514 0.038 Estimate within group, P value c 0.042 0.0001 \n INR \n 0.665 Preoperative 1.0 ± 0.21 1.05 ± 0.14 0.86 Postoperative 1.11 ± 0.19 1.14 ± 0.22 0.964 Estimate within group, P value c 0.005 0.011 \n PT \n 0.463 Preoperative 12.85 ± 0.71 12.82 ± 0.62 0.88 Postoperative 13.28 ± 1.29 13.55 ± 2.12 0.941 Estimate within group, P value c 0.017 0.011 \n PTT \n 0.588 Preoperative 30.97 ± 5.96 30.98 ± 7.47 0.134 Postoperative 29.73 ± 8.63 28.77 ± 8.55 0.517 Estimate within group, P value c 0.029 0.085 \n BT \n 0.129 Preoperative 1.3 ± 0.39 1.42 ± 0.42 0.223 Postoperative 1.69 ± 0.56 1.55 ± 0.58 0.173 Estimate within group, P valuec 0.0001 0.285 \n Fibrinogen \n 0.265 Preoperative 302.9 ± 34.3 288.7 ± 45.6 0.091 Postoperative 261.4 ± 37.4 247.8 ± 35.1 0.075 Estimate within group, P value c 0.0001 0.0001 Abbreviation: BT, bleeding time; INR, international normalized ratio; PT, prothrombin time; PTT, partial thromboplastin time. a Independent T-test. b ANCOVA. c Paired T-test. A paired T-test revealed that the Hb levels decreased significantly in each group after surgery (P = 0.0001), but the difference was not statistically significant between the groups (0.134). The alterations of platelets in the D and P groups were statistically significant (P = 0.042, P = 0.0001, respectively). A T-test showed no significant difference in mean platelet levels between the groups before surgery (P = 0.802), but there was a significant difference between the groups in terms of the mean platelet levels after surgery (P = 0.038). The PT, PTT, INR, BT, and fibrinogen changes before and after surgery showed no significant differences between the groups (P > 0.05).\n\nBODY.5. DISCUSSION:\nSevere bleeding after cardiac surgery consistently remains a challenge for cardiac surgeons and anesthesiologists. Using a cardiopulmonary pump in cardiac surgery may induce blood coagulation disorders and platelet dysfunction, requiring transfusion of proper functioning platelets or the administration of other medicines (3-6). Up until now, diverse products including desmopressin, albumin, recombinant factor VII, and tranexamic acid have been used to reduce bleeding after cardiac surgery (12, 15, 18-20). Desmopressin is an artificial form of vasopressin or an antiduretic hormone (ADH) analogue that exerts its anti-hemorrhagic effects through increased plasma levels of factor VΙΙΙ and increased activity of the Von Willebrand factor. It also has a direct effect on the blood vessel. Our study showed no significant differences between the groups regarding the demographic characteristics, pump times, clamp times, and durations of surgery. Also, the frequency of complications such as nausea, vomiting, and hypotension was not significantly different between the groups. Furthermore, no significant difference was noted between the groups regarding the amount of blood collected in the chest bottle and the number of transfused PRBC units. Pleym et al. observed no significant difference in the amount of blood loss after prescribing 0.3 mcg/kg of desmopressin in patients undergoing CABG (21). 
They reported 606 and 601 cc of blood loss in the desmopressin and saline groups, respectively. They finally declared that taking desmopressin had no significant effect on bleeding after CABG in patients treated with aspirin the day before surgery. Their finding was consistent with our study, but with increasing doses of desmopressin, different results might be achieved. Cattaneo et al. showed that although the use of desmopressin reduced postoperative bleeding, it did not reduce the need for blood transfusions (12). In both groups, the same amount of blood was transfused. Also, in this study, researchers noted that the effectiveness of desmopressin depended on the amount of bleeding, and by an increase in bleeding of about 1.1 liters, the effect of desmopressin on reducing blood loss could be more apparent (12). It seems that low rates of bleeding could indicate the supposed ineffectiveness of desmopressin in our study (359.3 ± 266.2 mL in group D, and 406.3 ± 341.6 in Group P). In our study, it was observed that Hb levels dropped after surgery in each group, and this amount was significant. However, comparing Hb levels before and after surgery in both groups indicated no significant difference. Also, Pleym et al. noted similar results (21). Furthermore, platelet counts after surgery decreased significantly in both groups in our study. Platelet counts in group P were lower than those of group D, but this difference was not clinically considerable, and in both groups, postoperative platelet counts were enough to maintain homeostasis. In contrast to our study, Pleym et al. reported a different result (21). They indicated that despite a decrease in platelets after surgery in both groups, the rate of thrombocytopenia was higher in the desmopressin group. This different result might be due to other factors such as the cardiopulmonary pump effect. In addition, there was no significant difference between groups in BT before and after surgery (P = 0.129). In a study by Keyl et al. which examined the effects of aspirin and intravenous desmopressin of 0.4 μg/kg on platelet function in patients undergoing CABG, they observed that desmopressin did not affect platelet activity (22). In our study, 40 μg of intranasal desmopressin in the form of four puffs was administered before (two puffs) and six hours (two puffs) after surgery. Intranasal desmopressin at high dosages may produce an elevation of blood pressure, which disappears with reduction in dosage. Therefore, this drug should be used with caution in patients with coronary artery insufficiency and/or hypertensive cardiovascular disease and heart failure. For this reason, in our study, the lower dosage of desmopressin was used. In Keyl et al.'s study, similar to ours, desmopressin did not improve platelet function. However, their study focused on platelet activity more than platelet count. In our study, platelet count and BT were also investigated. Our results showed that PT and INR increased after surgery, but this difference was not significant when comparing the two groups. The only variable with a downward trend in coagulation parameters was PTT. However, a significant difference in this variable was not observed between the groups. Generally, desmopressin did not improve platelet count or function, nor did it improve coagulation status. In a review study, Wademan and Galvin (23) examined the effects of desmopressin in reducing bleeding after heart surgery. 
They announced that desmopressin could not induce beneficial effects either before or after heart surgery. In their study, desmopressin improved the hemostatic status in those patients consuming aspirin for seven days prior surgery, having a pump time of more than 140 minutes, or having preoperative platelet dysfunction. Also, in a meta-analysis, Levi et al. (24) showed that desmopressin might cause a slight reduction in the amount of bleeding after heart surgery without any clinical improvements.\n\nBODY.5. DISCUSSION.5.1. LIMITATIONS:\nIn this study, we investigated the platelet counts and BT. It is recommended that in future studies, further platelet function tests such as optical aggregometry, flow cytometry, or PFA-100 could be used. Also, it seems that measuring the Von Willebrand factor and factor VΙΙΙ levels before and after surgery could prove to be valuable.\n\nBODY.5. DISCUSSION.5.2. CONCLUSION:\nOur study showed that desmopressin could not reduce the amount of blood loss after CABG surgery. Also, desmopressin did not have a significant effect on coagulation status. Therefore, based on our study, it seems that the use of this medication cannot be considered as helpful in candidates for CABG surgery.\n\n**Question:** Compared to Placebo what was the result of Desmopressin on Hb levels?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
1022
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Management of \n\n ABSTRACT.INTRODUCTION:\nPrevalence of diabetes and its complications have been a burden to the society from the ancient times, in the present and also will be in the future unless proper measures are taken to prevent its manifestation. There have been an increasing number of death associated-amputation cases which are mainly caused by nonhealing wounds. These facts urge researchers to develop new, more effective wound treatments for diabetic patients.\n\nABSTRACT.AIM:\nTo evaluate and compare the effect of Katupila Kalka (Securinega leucopyrus [Willd.] Muell. leaf paste) and Tila Taila (Sesamum indicum oil) in Madhumehajanya Vrana (diabetic wounds/ulcers) with Betadine ointment.\n\nABSTRACT.MATERIALS AND METHODS:\nA total of 23 patients of Madhumehajanya Vrana were chosen and randomly divided into two groups (Group A and B). Patients of Group A were treated with local application of Katupila Kalka with Tila Taila, whereas, in Group B, Betadine ointment was applied on the affected parts, once a day in the morning for 30 days. The relief in signs and symptoms were assessed on scoring pattern.\n\nABSTRACT.RESULTS:\nIn Group A, diabetic wounds treated with Katupila paste got healed within 28 days with minimal scar formation without any complications, whereas in Group B, wound was healed completely only in two patients within 28 days. In both the groups, no patients reported any adverse drug reaction during the entire course of treatment as well as in follow-up period.\n\nABSTRACT.CONCLUSION:\nStudy concluded that the drug Katupila Kalka possesses Vrana Ropana (wound healing) activity with fine scaring.\n\nBODY.INTRODUCTION:\nIn 2011, it was estimated that 366 million people worldwide had diabetes, representing roughly 8.3% of the adult population (20–79 years of age group). The number is expected to reach 552 million by 2030, representing 9.9% of the adult population. Diabetes is associated with various short and long-term complications, many of which, if left untreated, can be fatal. All these also have the potential to reduce the quality of life of people with diabetes and their families. The most common long-term complications affect the heart, eyes, kidneys, and feet.[1] Foot infections in persons with diabetes are a common and complex.[23] In addition to causing severe morbidities, they now account for the largest number of diabetes-related hospital bed-days[4] and are the most common proximate, nontraumatic cause of amputations.[56] Diabetic foot infections require careful attention and coordinated management, preferably by a multidisciplinary foot-care team.[7] Ayurveda has its own principles, but it is incorporating new theories and drugs in it and is presenting them according to its principles. Number of drugs known to Ayurveda since Vedic era is increasing with the time. 
Acharya Charaka says that a physician can add new drugs to these Mahakashayas by his own intelligence that will help him to treat the patients.[8] He also states that one should get the knowledge about plants from tribal people or cowherds as they know and use the plants in a better way.[9] However, after Acharya Bhavamishra, the process of addition of new drugs to science has diminished or somewhat ceased. It is said that the wounds/ulcers of the diabetic patient are cured with difficulty.[10] Even with the latest technology and modern medicine in hand, highly trained medical team around, yet the majority of the diabetic ulcers end up with more or less amputation of the concerned major or minor part of the lower limb. Securinega leucopyrus (Willd.) Muell. is known as Humari in India, Katupila in Sri Lanka and Spinous Fluggea in English. It is a desert climatic plant used topically in paste form for the healing of chronic and nonhealing wounds. Hence, in the light of the above references from classics, Katupila Kalka (paste) was selected for this study with an aim to assess its effect in the management of Madhumehajanya Vrana (diabetic ulcer).\n\nBODY.MATERIALS AND METHODS:\nDiagnosed patients of diabetic wounds were selected from outpatient and inpatient departments of Shalya Tantra irrespective of the age, sex, religion, caste, occupation, etc. The study was started after obtaining approval from the Institutional Ethics Committee (PGT/7-A/2012-2013/1964 dated 21/09/2012) and study has been registered under Clinical Trial Registry of India (No.: CTRI/2013/09/004020). Informed written consent from each patient was obtained before starting treatment course.\n\nBODY.MATERIALS AND METHODS.INCLUSION CRITERIA:\nDiagnosed cases of Madhumehajanya Vrana with Type I and Type II diabetes mellitus of 18–70 years age of either sex.\n\nBODY.MATERIALS AND METHODS.EXCLUSION CRITERIA:\nThe patients with a history of tuberculosis, wounds other than diabetic wounds, uncontrolled hypertension, cardiac diseases, such as myocardial infarction, coronary artery disease, and ischemic heart disease were excluded in this study. Patients with human immune virus (HIV), venereal disease research laboratory (VDRL) and hepatitis-B test positive were also excluded from the study.\n\nBODY.MATERIALS AND METHODS.LABORATORY INVESTIGATIONS:\nRoutine hematological parameters such as hemoglobin percentage, total leukocyte count, differential leucocyte count, bleeding time, clotting time, erythrocyte sedimentation rate; blood biochemical parameters such as fasting blood sugar, postprandial blood sugar, hepatitis B surface antigen, VDRL, and HIV; urine examination (routine and microscopic) was done to find out any other pathological condition.\n\nBODY.MATERIALS AND METHODS.RADIOLOGICAL EXAMINATION:\nX-ray of the affected foot (dorsoventral and lateral view) was done to rule out the osteomylitis of bone.\n\nBODY.MATERIALS AND METHODS.GROUPING AND POSOLOGY:\nTotally 23 patients of the diabetic wound were allocated randomly into two groups (Group A and B). \nGroup A (n = 13): In this group, Katupila Kalka (paste of S. leucopyrus leaves) with Tila Taila (sesame oil) was applied locally on the affected part. 
The dressing was done once in the morning for up to 1 month or until healing of the wound, whichever was earlier. Group B (n = 10): In this group, patients were treated with dressing of Betadine ointment once daily in the morning and served as the standard control group.\n The leaves of Katupila were obtained from Sri Lanka and authenticated[11] at the Pharmacognosy Laboratory, IPGT and RA, Jamnagar. Patients were advised to continue their anti-diabetic and anti-hypertensive medications (Allopathic/Ayurvedic) during treatment.\n\nBODY.MATERIALS AND METHODS.ASSESSMENT OF THERAPY:\nThe assessment of results was carried out mainly on the basis of relief in the signs and symptoms of the ulcers, namely odor (Gandha), peri-wound skin (Varna), exudate level (Srava), type of exudate, pain (Vedana), wound measurement in cm (Akruti), and signs of infection [Table 1]. Table 1: Gradation for assessment of results. The effect of therapy was statistically analysed using Student's t-test.\n\nBODY.MATERIALS AND METHODS.OVERALL EFFECT OF THERAPY:\nCured: 100% improvement in subjective and objective parameters; Marked improvement: 70–99%; Moderate improvement: 50–69%; Mild improvement: 30–49%; Unchanged: <30%.\n\nBODY.OBSERVATIONS:\nThe demographic data reveal that most patients were in the age group of 46 to 65 years (57%). Male patients predominated (78.3%), possibly because of their irregular eating habits, addictions, prolonged standing at work, and lesser self-care. Most patients were Hindu (78%) and married (87%). Socioeconomically, middle-class patients were in the majority (78.2%). A history of diabetes alone was present in 78.27% of patients, and 21.73% had a history of both diabetes and hypertension. A family history of diabetes was present in 69.5% of patients. For the management of the diabetic wound, 39.1% of patients had received medical treatment in the past and 52.1% had undergone surgery, whereas 8.6% had received both medical and surgical treatment. The majority of patients (56.5%) had been suffering from wounds for 1 to 6 months, 34.7% reported a traumatic onset, and most (73.9%) presented with a single wound. Puya Srava (purulent exudate) was present in 82.6% of patients and Ghana Srava in 95.6%, with 43.4% having moderate Srava (discharge). Most patients presented without pain, whereas only 34.9% were found to have pain. Bad odor was noted in all wounds, and fever was found in 73.91% of patients. In 47.81% of patients the wound was located on the foot, as this part of the body more often sustains trauma.\n\nBODY.RESULTS:\nIn this study, infection was controlled within 3–4 days and foul smell was controlled after 3 days in patients of Group A. In Group B, foul smell from the ulcer continued until the wound was healed. The surrounding skin color became normal within 7 days in most patients (76%) of Group A, whereas only 32% of patients showed normal peri-wound skin color in Group B. In Group A, 55% of patients showed complete cessation of discharge within 7 days, whereas in Group B exudation was reduced in 34% of patients by the end of the 28th day. In all patients of Group A, pain sensation at the wound site was regained by the 7th day, due to neo-vascularization and restored pain sensation in nerve fibers. In Group A, wound size was reduced completely within 28 days in 12 patients, whereas in Group B wound size was reduced completely within 28 days in only 29% of patients.
In Group A, signs of infections relieved 61% by the 7th day and totally relieved at the end of 28th day, whereas in Group B, patients infection was relieved 41% by the end of 28th day [Tables 2 and 3]. Table 2 Effect of therapy on signs and symptoms of wound in Group A Table 3 Effect of therapy on signs and symptoms of wound in Group B In Group A, 92.3% of patients were completely cured, whereas in Group B, 20% of patients were completely cured [Figure 1]. Wound healing effect of Katupila in four cases is depicted in Figure 2. Figure 1Overall effect of therapy Figure 2Before and after treatment wound status of Katupila treated cases\n\nBODY.DISCUSSION:\nIn this study, foul smell was controlled after 3 days in Group A patients, which was found statistically significant (P < 0.001). In Group B, foul smell from ulcer continues until the wound was healed. The Krimighna, Vishaghna, Rakshoghna (antimicrobial) properties of the Katupila helped to control the local infection and ultimately the bad odor.[12] In Group A, 100% normal skin coloration was achieved which was found statistically significant (P < 0.001). In 32% of patients of Group B, peri-wound skin color became normal which was found statistically significant (P > 0.05). Katupila may have Rakta Shodhana, Pittashamana, Varnya, Twaka Prasadana properties which helped to improve the surrounding wound skin color. Complete cessation of discharge within 7 days was observed in 55% of patients of Group A, while in remaining patients discharge was reduced by 28th day which was statistically significant (P < 0.001). Krimighna property of drug helped to control the infection. Hence, it can be said that Katupila paste had a better role than Betadine to reduce exudates level. In Group A, 77% of patients presented without pain while the pain was reported in 23% of patients due to diabetic neuropathy. After treating with Katupila, pain sensation regained by 7th day due to neo-vascularization and restored pain sensation in nerve fibers. The statistically insignificant result in pain relief was seen due to the restoration of pain sensation by correcting neuropathy. The Snigdha Guna of Tila Taila helped in control the vitiation of Vata and helped to relief in pain.[13] In Group B, there was minimal neovascularization, but the pain was relieved in 25% of patients due to the healed wounds which were statistically significant (P > 0.05). Hence, it can be inferred that Katupila is better than Betadine ointment to regain pain sensation (initial stage) as well as to relieve pain (in later stage). Complete wound healing was observed within 28 days in 92% cases so wound size reduction was found statistically highly significant (P < 0.001) which may be due to Katupila which may have Lekhana (scraping), Shodhana (cleaning) and Ropana (healing) properties. Madhura Rasa, Prinana (nourishment), Dhatuvardhana (tissue strength) and Dhatuposhana (nutrition) actions of Katupila helped to reduce the wound size by promoting healing and the rate of contraction. The infection at the wound site was assessed on the basis of odor, exudate level, local temperature, and swelling in both the groups. In Group A, signs of infections relieved 61% by the 7th day and totally relieved at the end of 28th day. Local application of Katupila reduces the discharge, swelling, pain by controlling the bacterial load, so wound became Shuddha (clean or healthy). Whereas in Group B, infection was relieved in 41% by the end of 28th day. 
Both drugs have shown statistically highly significant (P < 0.001) result in controlling infection. However, on the comparison, the duration of infection control by Katupila paste (that is 1st week) was found earlier than Betadine which took longer duration in controlling infections. The strength and positive finding of this study is Katupila Kalka had the potential to cure the diabetic wounds early than Betadine ointment. Findings in previously published case reports also showed the good potential in wound healing by Katupila.[1415] None of the adverse drug reaction noted during the treatment period. A reverse pharmacological study in the suitable animal model is a further scope of this work. As this drug is being used by traditional people and of herbal origin, there is a need to generate safety and toxicity profile of Katupila. The limitation of this research work is less number of patients due to the limited time of the postgraduate study.\n\nBODY.CONCLUSION:\nThe study concluded that Katupila Kalka (paste) possesses highly qualitative efficacy in \"Vrana Ropana\" with fine scaring. Therefore, it may be recommended that this low cost, easily preparable application may be prescribed for continuous use for the healing purposes.\n\nBODY.CONCLUSION.FINANCIAL SUPPORT AND SPONSORSHIP:\nIPGT & RA, Gujarat Ayurved University, Jamnagar.\n\nBODY.CONCLUSION.CONFLICTS OF INTEREST:\nThere are no conflicts of interest.\n\n**Question:** Compared to Betadine ointment what was the result of Katupila Kalka with Tila Taila on Healing of diabetic wounds?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ]
significantly increased
1027
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A randomised controlled trial to prevent smoking relapse among recently quit smokers enrolled in employer and health plan sponsored quitlines\n\n ABSTRACT.OBJECTIVE:\nTo test adding an interactive voice response (IVR)-supported protocol to standard quitline treatment to prevent relapse among recently quit smokers.\n\nABSTRACT.DESIGN:\nParallel randomised controlled trial with three arms: standard quitline, standard plus technology enhanced quitline with 10 risk assessments (TEQ-10), standard plus 20 TEQ assessments (TEQ-20).\n\nABSTRACT.SETTING:\nQuit For Life (QFL) programme.\n\nABSTRACT.PARTICIPANTS:\n1785 QFL enrolees through 19 employers or health plans who were 24+ h quit.\n\nABSTRACT.INTERVENTIONS:\nQFL is a 5-call telephone-based cessation programme including medications and web-based support. TEQ interventions included 10 or 20 IVR-delivered relapse risk assessments over 8 weeks with automated transfer to counselling for those at risk.\n\nABSTRACT.MAIN OUTCOME MEASURES:\nSelf-reported 7-day and 30-day abstinence assessed at 6-month and 12-month post-enrolment (response rates: 61% and 59%, respectively). Missing data were imputed.\n\nABSTRACT.RESULTS:\n1785 were randomised (standard n=592, TEQ-10 n=602, TEQ-20 n=591). Multiple imputation-derived, intent-to-treat 30-day quit rates (95% CI) at 6 months were 59.4% (53.7% to 63.8%) for standard, 62.3% (57.7% to 66.9%) for TEQ-10, 59.4% (53.7% to 65.1%) for TEQ-20 and 30-day quit rates at 12 months were 61.2% (55.6% to 66.8%) for standard, 60.6% (56.0% to 65.2%) for TEQ-10, 54.9% (49.0% to 60.9%) for TEQ-20. There were no significant differences in quit rates. 73.3% of TEQ participants were identified as at-risk by IVR assessments; on average, participants completed 0.41 IVR-transferred counselling calls. Positive risk assessments identified participants less likely (OR=0.56, 95% CI 0.42 to 0.76) to be abstinent at 6 months.\n\nABSTRACT.CONCLUSIONS:\nStandard treatment was highly effective, with 61% remaining abstinent at 12 months using multiple imputation intent-to-treat (intent-to-treat missing=smoking quit rate: 38%). TEQ assessments identified quitters at risk for relapse. However, adding IVR-transferred counselling did not yield higher quit rates. 
Research is needed to determine if alternative designs can improve outcomes.\n\nABSTRACT.TRIAL REGISTRATION NUMBER:\nNCT00888992.\n\nBODY:\nStrengths and limitations of this studyFirst large randomised controlled trial to examine the use of interactive voice response (IVR) technology for prevention of smoking relapse.Large sample of participants recruited from employer and health plan quitline programmes serving tobacco users across the USA.Protocol used an innovative and potentially cost-effective technology to connect those at risk for relapse to immediate counselling.We compared two risk assessment intensities over the first 8 weeks following successful abstinence.Although participants rated the IVR system positively, fewer accepted immediate risk-focused counselling than anticipated; thus, the increase in treatment exposure was small. Smoking continues to be the leading preventable cause of premature death and disease in the USA and worldwide.1\n2 Quitting tobacco is one of the most important steps a tobacco user can take to improve their health.3 Free publicly (state and government) and privately (eg, employer and health plan) funded telephone-based tobacco cessation quitlines are available throughout the USA and increasingly around the world to help tobacco users quit. With the advent of the Affordable Care Act in the USA, telephone-based tobacco cessation services offer an important resource to ensure that tobacco users have access to the empirically supported tobacco treatments that are required components of coverage. Although extensive evidence exists regarding the efficacy and cost-effectiveness of tobacco cessation quitlines,4–6 as with other behavioural and pharmacological tobacco treatment options, little is known about how best to intervene with callers at the time of greatest risk for relapse. A recent Cochrane Review concluded that previously tested behavioural relapse prevention interventions, which primarily included skills training or extended therapeutic contact, did not provide additional benefit over no treatment or intervention without a relapse prevention component.7 Survival studies have identified that the highest relapse risk, on average, is in the first 2 weeks after a quit attempt; however, there is considerable between-individual and within-individual variability in risk-related symptom patterns.8 Relapse prevention interventions may be most effective if provided at the time of relapse risk given the multifaceted and individualised nature of withdrawal and environmentally induced relapse risks over time.8 Interactive voice response (IVR) technology offers a resource-efficient solution for connecting tobacco users with relapse prevention interventions at the time of need, especially those already enrolled in phone-based tobacco cessation treatment. IVR is increasingly incorporated into some tobacco treatment protocols,9\n10 and has been used in chronic disease management, posthospital discharge management, medication compliance monitoring, and in a small number of addiction and depression self-help interventions.11 IVR has been piloted as a smoking relapse prevention tool following a single group counselling call and 12 weeks of varenicline,12 but was insufficiently powered to draw meaningful conclusions about the effectiveness of the nine-call and nurse call-back intervention. 
The purpose of this present study is to examine the effectiveness of augmenting standard multisession tobacco quitline treatment with repeated IVR assessments to monitor recent quitters' relapse risk, and transfer those at risk to a Quit Coach for a brief, focused, intervention targeting the identified relapse trigger. We hypothesised that smokers receiving IVR risk assessments during the first 2 months after quitting would be more likely to stay quit than those receiving standard quitline treatment, and that more frequent assessments (20 vs 10) would be more beneficial.\n\nBODY.METHODS.STUDY DESIGN:\nIn this parallel randomised controlled trial, tobacco users enrolled in the Quit For Life (QFL) programme who achieved abstinence for at least 24 h after their quit date were randomised into one of three study groups: (1) standard treatment, (2) Technology Enhanced Quitline-10 (TEQ-10): standard treatment plus 10 IVR-delivered relapse risk assessments which triggered a transfer to a Quit Coach for participants exceeding risk thresholds and (3) Technology Enhanced Quitline-20 (TEQ-20): same as the TEQ-10 condition except with 20 relapse risk assessments rather than 10. Participants were randomised using blocked randomisation to ensure similar group sizes throughout the study; every nine participants within each employer or health plan contract were randomly assigned with an allocation ratio for the three groups of 3:3:3. These random allocation tables were generated by the study management team and integrated into the QFL computer system where they were not accessible to coaches or participants. Once study eligibility was confirmed, the Quit Coach clicked a 'Randomise' button and the system automatically assigned the participant to the group listed in the next available cell of the allocation table. The planned and final sample size was 1785 and was determined a priori to achieve at least 80% power to detect 30-day point prevalence quit rate differences of 8% between any pair of three arms for intent-to-treat (ITT) analyses, including the traditional penalised imputation approach in which non-responders to data collection are considered relapsed13 and the state-of-the-art multiple imputation method14 (assuming quit rates of 29.6%, 37.6% and 45.6% at 6 months and 26.6%, 34.6% and 42.6% at 12 months for standard care, TEQ-10 and TEQ-20, respectively; n=595) and of 10% for a responder analysis between standard care and either intervention at 6 or 12 months (assuming quit rates of 52.1% standard care, 62.1% TEQ-10, 62.1% TEQ-20; assuming 25% attrition, with 446 responders per arm). The primary analysis using ITT multiple imputation has slightly greater than 80% power because, like ITT using penalised imputation (missing=smoking), it uses all persons with baseline data. However, compared with penalised imputation, or responder-only analysis, multiple imputation ITT uses more information and produces results with greater power, less bias, and greater accuracy.13\n14 Participants gave informed consent prior to study enrolment and received a US$25 gift card for completed 6-month and 12-month follow-up surveys (potential for US$50 total).\n\nBODY.METHODS.PARTICIPANTS:\nTobacco users were eligible for the study if they had enrolled in the QFL programme between April 2010 and October 2012 through 1 of 19 employers or health plans who offered QFL as part of their benefits package. 
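The a priori power statement quoted under Study design above (at least 80% power for an 8-percentage-point difference between any pair of arms, assuming 6-month quit rates of 29.6%, 37.6% and 45.6%, with n = 595 per arm) can be checked with a standard normal-approximation formula for comparing two proportions. This is a sketch under that assumption; the article does not say which formula the authors actually used, and the function name below is mine.

```python
from math import sqrt
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for comparing two proportions."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p1 - p2) ** 2

# Pairwise comparisons of the assumed 6-month quit rates.
for p1, p2 in [(0.296, 0.376), (0.376, 0.456), (0.296, 0.456)]:
    print(p1, p2, round(n_per_arm(p1, p2)))
# The hardest pair (0.376 vs 0.456) needs roughly 595 per arm, so 595 per arm
# gives at least 80% power for every pairwise comparison, matching the article.
```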
Because the purpose of this study was to examine the potential benefit of IVR assessments, and their triggering of counselling at the point of need to reduce relapse, tobacco users were screened for study criteria and invited to participate after reporting they had been quit for 24 h or more after their quit date. This typically occurred during their second or third call (their quit date call or the call after) of the five-call programme. Additional eligibility criteria included being English-speaking, 18 years of age or older, and having access to a touch-tone phone. Tobacco users were excluded if they exclusively used smokeless tobacco, were actively participating in another tobacco cessation programme, had previously enrolled in the QFL programme during the past 6 months, or had limited phone access (eg, incarceration). Of the 3723 tobacco users who received the study offer, 1840 (49.4%) agreed to participate and 1785 completed the baseline survey and were randomised to one of the three study groups (figure 1). Figure 1CONSORT Diagram. *36 participants did not receive all of their relapse risk assessments due to a programming error in the interactive voice response (IVR) code.\n\nBODY.METHODS.THE QFL PROGRAMME:\nQFL, operated by Alere Wellbeing (formerly Free & Clear), has been commercially available for nearly 30 years, and its effectiveness and cost-effectiveness has been demonstrated in various studies.5\n15–19 The QFL programme includes five counselling calls (an initial assessment and planning call plus four additional proactive outbound calls from an expert Quit Coach). Quit Coaches are required to have a bachelor's degree in a related field, complete more than 200 h of training, and receive ongoing supervision and feedback. The call schedule and content is tailored to each participant's needs. The default call schedule includes the initial planning and assessment call, a quit date call (QDC), a quit date follow-up call 7 days after the QDC, and two additional follow-up calls that take place approximately every 3 weeks after the previous call. Participants are also encouraged to call in for support as needed (ie, participant-initiated inbound calls). Calls vary in length: the planning and assessment call is 25–30 min, while other calls are 15–20 min. Calls focus on creating a quit plan, developing problem-solving and coping skills, identifying social support, and using cessation medications to achieve and maintain abstinence. Participants may receive over-the-counter nicotine replacement products directly via mail (or other prescription cessation medications depending upon their cessation benefit with their employer or health plan). They also receive a printed quit guide, and access to Web Coach, an online interactive web-based programme designed to complement the phone programme.\n\nBODY.METHODS.IVR INTERVENTION:\nIn the two TEQ groups, participants were contacted for relapse risk assessments through automated IVR calls over their first 8 weeks postquit. Two intensities of IVR monitoring were examined. TEQ-10 participants were contacted twice weekly for the first 2 weeks, then weekly for 6 weeks. TEQ-20 participants were contacted daily for the first 2 weeks, then weekly for 6 weeks. An IVR service contractor programmed and delivered the risk assessments (approximately 5 min), which included questions to identify relapse risk on five factors: lapses, cravings, negative affect, self-efficacy and motivation to remain quit. 
An algorithm was used to flag participants as 'at risk' if they answered any of the screening questions over an established threshold, based on previous research.20–22 Participants who exceeded the threshold were then transferred directly to a Quit Coach for a brief intervention (approximately 15 min) specifically addressing the risk factor(s) that triggered their transfer. Intervention fidelity was monitored by study staff and investigators. Randomly sampled recordings of IVR-triggered telephone counselling sessions were rated for compliance with the study protocol (the primary criterion being whether the counsellor focused the call on identified relapse risk(s)) by two independent reviewers (a study staff member and a Quit Coach supervisor). Reviews revealed some Quit Coaches did not receive information about the risk factor that triggered a participant's transfer due to technological issues. These programming errors were corrected by the IVR vendor. In these circumstances, participants still received empirically based tobacco cessation support at the time of need. As part of quality monitoring, any protocol deviations were addressed with individual feedback from Quit Coach supervisors and ongoing training for all Quit Coaches.\n\nBODY.MEASURES.BASELINE MEASURES:\nDuring registration and the first Quit Coach call, participants completed standard assessment questions regarding demographic characteristics (age, gender, health comorbidities) and tobacco use (tobacco type, time to first use, cigarettes per day, number of previous quit attempts, social contact with smokers). After study recruitment, participants completed an additional baseline measure assessing age, race, ethnicity, education, depressive symptoms (PHQ-2; 2-item measure21), and stress (Perceived Stress Scale; 4-item measure23).\n\nBODY.MEASURES.IVR-POSITIVE SCREENS AND PROGRAM CALL COMPLETION:\nThe IVR system stored the number of relapse risk assessments participants answered, participants' responses to the relapse risk questions, and the number of IVR assessments on which a participant had a positive risk screen. Counselling calls completed as a result of a positive IVR screen and successful transfer to a Quit Coach (ie, IVR-transferred counselling calls) were tracked in addition to standard programme calls (5 planned calls plus participant-initiated inbound calls).\n\nBODY.MEASURES.OUTCOMES MEASURED AT 6 AND 12 MONTHS:\nAt 6 and 12 months after randomisation, participants completed telephone interviewsi administered by outcomes evaluators from the Indiana University Center for Survey Research who were blind to study condition to measure abstinence. All participants rated their satisfaction with the QFL service, and those in the TEQ groups were asked their opinions of the IVR system during the 6-month survey. When participants could not be reached after repeated phone attempts, we also mailed a brief paper survey with only the smoking abstinence questions. Survey response rates were 61% and 59% at 6 and 12 months, respectively. The primary study outcomes were quit rates at 6 and 12 months, defined as 7-day and 30-day self-reported point prevalence abstinence measured by the question, 'In the last [7/30] days, have you smoked a cigarette, even a puff?'. 
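Returning to the IVR intervention described above: the screening rule (flag a participant as at risk if any of the five assessment domains crosses its threshold, then transfer to a Quit Coach) might look like the sketch below. The domain names, threshold values and function names are illustrative assumptions; the article does not publish the vendor's actual cut-offs.

```python
# Hypothetical thresholds on 0-10 style responses; the real cut-offs used by
# the IVR vendor are not reported in the article.
RISK_THRESHOLDS = {
    "lapses": 1,          # any lapse counts as risk
    "cravings": 7,
    "negative_affect": 7,
    "self_efficacy": 3,   # low confidence indicates risk
    "motivation": 3,      # low motivation indicates risk
}

def at_risk(responses: dict) -> bool:
    """Return True if any screening item crosses its threshold."""
    if responses["lapses"] >= RISK_THRESHOLDS["lapses"]:
        return True
    if responses["cravings"] >= RISK_THRESHOLDS["cravings"]:
        return True
    if responses["negative_affect"] >= RISK_THRESHOLDS["negative_affect"]:
        return True
    # For self-efficacy and motivation, *low* scores indicate risk.
    if responses["self_efficacy"] <= RISK_THRESHOLDS["self_efficacy"]:
        return True
    if responses["motivation"] <= RISK_THRESHOLDS["motivation"]:
        return True
    return False

def handle_assessment(responses: dict) -> str:
    """Route a completed IVR assessment: transfer at-risk callers to a coach."""
    return "transfer_to_quit_coach" if at_risk(responses) else "end_call"

# Example: a caller reporting strong cravings is flagged and transferred.
print(handle_assessment({"lapses": 0, "cravings": 8, "negative_affect": 2,
                         "self_efficacy": 8, "motivation": 9}))
```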
Time to relapse was examined as a secondary outcome and was identified using three questions: 'Since [end of two-week grace period], have you ever smoked at least a part of a cigarette on 7 days in a row?'; 'Since [end of two-week grace period], have you ever smoked any in each of two consecutive weeks (at least 1 day for 2 weeks in a row)?', and [if yes to either of the 2 previous questions] 'What was the first day of that 7-day or two-week period that you smoked?'.16\n\nBODY.MEASURES.STATISTICAL ANALYSES:\nMissing values on all 6-month and 12-month outcomes were imputed using the state-of-the-art imputation method called 'multiple imputation'24 with sequential regression.25 Multiple imputation allows all cases (N=1785) to be included in the analysis. Two imputation models were performed: one for 7-day and one for 30-day point prevalence abstinence. With this approach, variables with no missing values served as independent variables. The variable with the least amount of missing data was imputed first, and used as the dependent variable. Once imputation was carried out, the next variable with the least amount of missing data was imputed using all variables with non-missing values as independent variables including the variable imputed on the previous step. This cycle continued until all missing values were imputed resulting in a series of regressions, hence the name sequential regression. Variables used to provide information for imputation (and to be imputed themselves if missing) included intervention group (no data missing), baseline characteristics listed in table 1 (with the exception of years of tobacco use and stress, due to high correlations with age and depression, respectively), and number of planned calls plus participant-initiated inbound counselling calls completed. These variables had a minimal amount of missing data with, at most, 3.5% missing for the motivation variable. Abstinence outcomes measured at both 6 and 12 months were included in each imputation model and had the greatest amount of missing data. Ten multiple-imputed data sets were generated and analysed. 
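The imputation-then-pooling workflow described above (ten imputed data sets, a model fitted to each, results combined) can be sketched as follows. This is a minimal illustration and not the authors' code: the column names (`quit_30d_6mo`, `group_teq10`, `group_teq20`, `age`, `calls`, `motivation`) are hypothetical, scikit-learn's IterativeImputer stands in for the sequential-regression imputation, and the combination step applies Rubin's rules.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def pooled_logit(df, outcome, predictors, m=10, seed=0):
    """Impute m complete data sets, fit a logistic model on each,
    and pool coefficients with Rubin's rules."""
    coefs, variances = [], []
    for i in range(m):
        imputer = IterativeImputer(sample_posterior=True, random_state=seed + i)
        completed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
        # Binary columns come back continuous; rounding is a simplification here.
        y = (completed[outcome] > 0.5).astype(int)
        X = sm.add_constant(completed[predictors])
        fit = sm.Logit(y, X).fit(disp=False)
        coefs.append(fit.params.values)
        variances.append(np.diag(fit.cov_params().values))
    coefs, variances = np.array(coefs), np.array(variances)
    q_bar = coefs.mean(axis=0)             # pooled coefficient estimates
    w_bar = variances.mean(axis=0)         # within-imputation variance
    b = coefs.var(axis=0, ddof=1)          # between-imputation variance
    total_var = w_bar + (1 + 1 / m) * b    # Rubin's total variance
    return pd.DataFrame({"coef": q_bar, "se": np.sqrt(total_var)},
                        index=["const"] + predictors)

# Hypothetical usage on a numeric data frame with missing outcome values:
# print(pooled_logit(df, "quit_30d_6mo",
#                    ["group_teq10", "group_teq20", "age", "calls", "motivation"]))
```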
Table 1 Baseline characteristics (N=1785) Variable Overall (N=1785) Standard (N=592) Standard plus TEQ-10 (N=602) Standard plus TEQ-20 (N=591) Demographics Age, mean (SD) 43.4 (11.9) 43.3 (12.2) 44.0 (11.5) 43.0 (12.0) Gender, N (%)  Female 967 (54.2) 321 (54.2) 326 (54.1) 320 (54.2)  Male 818 (45.8) 271 (45.8) 276 (45.9) 271 (45.8) Race, N (%)  Caucasian 1489 (83.4) 497 (84.0) 501 (83.2) 491 (83.1)  African-American 204 (11.4) 63 (10.6) 71 (11.8) 70 (11.8)  Other 51 (2.9) 16 (2.7) 19 (3.2) 16 (2.7)  Missing 43 (2.3) 16 (2.7) 11 (1.8) 14 (2.4) Education, N (%)  High school or less 667 (37.4) 229 (38.7) 225 (37.4) 213 (36.0)  More than high school 1117 (62.6) 363 (61.3) 376 (62.6) 378 (64.0) Comorbidities, N (%)  Asthma 176 (9.9) 62 (10.5) 61 (10.1) 53 (9.0)  COPD 108 (6.0) 39 (6.6) 32 (5.3) 37 (6.3)  CAD 75 (4.2) 27 (4.6) 25 (4.2) 23 (3.9)  Diabetes 138 (7.7) 53 (8.9) 46 (7.6) 39 (6.6)  Any chronic comorbidity* 395 (22.1) 136 (23.0) 136 (22.6) 123 (20.8) Tobacco use Time to first cigarette, N (%), min  5 539 (30.2) 165 (27.9) 197 (32.7) 177 (30.0)  6–30 618 (34.6) 213 (36.0) 201 (33.4) 204 (34.5)  31–60 285 (16.0) 89 (15.0) 101 (16.8) 95 (16.1)  >60 289 (16.2) 105 (17.7) 88 (14.6) 96 (16.2)  Missing 54 (3.0) 20 (3.4) 15 (2.5) 19 (3.2) Cigarettes per day, mean (SD) 16.8 (9.0) 16.8 (9.1) 17.2 (8.9) 16.4 (9.1) Number of quit attempts, N (%)  0–1 377 (21.1) 116 (19.6) 127 (21.1) 134 (22.7)  2–5 1033 (57.9) 371 (62.7) 335 (55.7) 327 (55.3)  6+ 347 (19.4) 97 (16.4) 127 (21.3) 122 (20.6)  Missing 28 (1.6) 8 (1.4) 12 (2.0) 8 (1.4) Tobacco use ≥20 years, N (%)  Yes 1098 (61.5) 361 (61.0) 383 (63.6) 354 (59.9)  No 671 (37.6) 227 (38.3) 211 (35.1) 233 (39.4)  Missing 16 (0.9) 4 (0.7) 8 (1.3) 4 (0.7) Social contact with smokers, N (%)  No 470 (26.3) 146 (24.7) 179 (29.7) 145 (24.5)  Work 572 (32.0) 192 (32.4) 188 (31.2) 192 (32.5)  Home 342 (19.2) 122 (20.6) 104 (17.3) 116 (19.6)  Both work and home 344 (19.3) 112 (18.9) 111 (18.4) 121 (20.5)  Missing 57 (3.2) 20 (3.4) 20 (3.3) 17 (2.9) Scale scores Motivation to quit, mean (SD) 9.1 (1.3) 9.0 (1.3) 9.0 (1.2) 9.1 (1.2) Depression, mean (SD) 1.0 (1.3) 1.0 (1.3) 1.0 (1.3) 1.0 (1.3) Perceived stress, mean (SD) 4.3 (2.8) 4.3 (2.9) 4.2 (2.8) 4.5 (2.8) *Defined as having asthma or COPD or CAD or diabetes. CAD, coronary artery disease; COPD, chronic obstructive pulmonary disease; TEQ-10, Technology Enhanced Quitline-10; TEQ-20, Technology Enhanced Quitline-20. Logistic regression models were used to model intervention efficacy on the 6-month and 12-month outcomes of abstinence, while adjusting for theoretically important potentially confounding baseline covariates.26 There were four main outcomes (7-day and 30-day point prevalence quit rates assessed at 6 and 12 months) and each was analysed with a separate logistic regression model. The multiple imputation quit rate results were our primary outcome metric. However, outcomes were reported using three methods: (1) for all survey respondents (responder analysis), (2) using penalised imputation reporting (assumes non-respondents are treatment failures) which is the traditional ITT analysis used in the smoking literature and (3) with results imputed for participants with missing data using the state-of-the-art multiple imputation method. 
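As a small illustration of how the first two reporting methods differ, the sketch below computes responder-only and penalised-imputation ("missing = smoking") quit rates from synthetic follow-up data; the response and quit proportions are assumptions chosen only to mirror the magnitudes in this study. The multiple-imputation estimate would instead come from a pooled model such as the one sketched earlier.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 600
responded = rng.random(n) < 0.61                           # ~61% complete the survey
quit = np.where(responded, rng.random(n) < 0.60, np.nan)   # outcome known only for respondents

responder_rate = np.nanmean(quit)        # denominator = respondents only
penalised_rate = np.nansum(quit) / n     # non-respondents counted as smoking

print(f"responder-only quit rate:       {responder_rate:.1%}")
print(f"penalised-imputation quit rate: {penalised_rate:.1%}")
```

The gap between the two figures mirrors why the penalised-imputation rates in Table 3 sit well below the responder and multiple-imputation rates.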
Although ITT outcomes using penalised imputation are commonly reported in cessation trials, methodological research, including simulation studies, have shown that multiple imputation ITT uses more information and produces greater power, less bias, and more accuracy compared with simpler ad hoc methods for handling missing data such as traditional penalised imputation ITT, or responder-only analysis.13\n14 Missing values for relapse date were imputed using the procedures described above. Using this relapse date and the baseline date, proportional hazards regression was used to examine factors associated with time to relapse. The variables representing number of calls (table 2) were generally skewed to the right with a large number of zeroes. These data were complete and did not require imputation. To compare study groups on these count variables, a generalised linear model with log link function and negative binomial distribution was used. Six-month satisfaction data were also analysed (without imputation) to determine levels of satisfaction with the QFL programme and IVR protocol. The satisfaction variable was not imputed since it was not a primary outcome. Satisfaction comparisons between intervention groups were made using a χ2 test. Mplus software for Windows (V.7.2) was used to generate imputed data and PROC MIANALYZE from SAS/STAT software (V.9.3) was used to pool results and analyse the imputed data sets. All other statistical comparisons were performed using SAS/BASE and SAS/STAT software for Windows (V.9.3). Table 2 Descriptives for number of phone calls (N=1785) Variable Overall (N=1785) Standard (N=592) Standard plus TEQ-10 (N=602) Standard plus TEQ-20 (N=591) p Value* (1) Planned programme counselling calls (1–5), mean (SD); range 3.29 (1.09); 1–5 3.29 (1.09); 1–5 3.32 (1.09); 1–5 3.25 (1.08); 1–5 0.8085 (2) Participant-initiated inbound calls, mean (SD); range 0.38 (1.05); 0–14 0.37 (0.92); 0–8 0.40 (1.13); 0–14 0.37 (1.08); 0–12 0.9082 ≥1 vs 0, N (%) 284 (15.9) 99 (16.7) 96 (16.0) 89 (15.1) 0.7361† (1+2) Sum of planned and participant-initiated inbound calls, mean (SD); range 3.67 (1.53); 2–17 3.66 (1.48); 2–13 3.72 (1.62); 2–17 3.62 (1.48); 2–13 0.6851 (3) IVR-transferred counselling calls, mean (SD); range 0.41 (0.75); 0–6 N/A 0.39 (0.72); 0–6 0.43 (0.79); 0–6 0.2791‡ ≥1 vs 0, N (%) 361 (30.3) N/A 175 (29.1) 186 (31.5) 0.3665†‡ (1+2+3) Sum of all calls, mean (SD); range 3.94 (1.71); 2–19 3.66 (1.48); 2–13 4.11 (1.82); 2–19 4.06 (1.77); 2–17 <0.0001 *p Value obtained from generalised linear model for a count outcome except where indicated. †p Value obtained from χ 2 test. ‡p Value is for comparison between TEQ-10 and TEQ-20 groups. IVR, interactive voice response; TEQ-10, Technology Enhanced Quitline-10. \n\nBODY.RESULTS.PARTICIPANTS:\nBaseline characteristics across the three intervention groups are shown in table 1. Overall, the average age of study participants was 43 years (range 18–85). A little over half (54%) were women. A majority of the sample was Caucasian (83%) and reported having a high school education or more (62%). Just under a quarter of the sample reported having ≥1 chronic comorbidity (22%). The average number of reported cigarettes used per day was 17 (range 0–60) with about two-thirds of the sample reporting having had a cigarette within 30 min from the time they wake in the morning. About 20% of the sample reported 0 or one quit attempts, with a similar percentage reporting six or more quit attempts. 
A little over 60% reported using tobacco for 20 or more years, and about three-quarters reported having contact with smokers at work, home, or both. Overall, participants reported feeling motivated to quit smoking, with an average rating of 9 on a 1–10 scale (1=low, 10=high). The average PHQ-2 depression score was 1.0 (possible range 0–6, with high scores indicating higher levels of depression,21 and the average perceived stress score was 4.3 (possible range 0–16, with higher scores indicating higher perceived stress.23 At the time of study offer, 56.5% of participants had been quit for 1–6 days, 40% had been quit for 7–29 days, and 3.5% had been quit for 30 days or more.\n\nBODY.RESULTS.PROGRAMME ENGAGEMENT AND POSITIVE RISK SCREENS:\nProgramme engagement was measured as the number of counselling calls completed by participants:1 planned programme counselling calls,2 participant-initiated inbound calls, or3 IVR-transferred counselling calls (table 2). Overall, the average number of total calls completed was 3.9 (SD=1.7; range 2–19). There was a statistical difference between groups; the TEQ-10 and TEQ-20 groups completed slightly more calls than the standard study group due to the additional IVR-transferred counselling calls they received after a positive screen (p<0.001). There were no statistical differences between participant-initiated (p=0.91) or planned programme calls completed (p=0.81). Also, there was no statistical difference in IVR-transferred counselling calls between the TEQ-10 and TEQ-20 groups (p=0.28). There was a statistical difference in the number of positive screens between the two intervention groups, with TEQ-20 having slightly more positive screens than TEQ-10 (on average, 1.52 (SD=1.66) for TEQ-10 and 1.82 (SD=1.99) for TEQ-20, p=0.003). The percentage of one or more positive screens was also larger in the TEQ-20 group (76% vs 70%; p=0.03).\n\nBODY.RESULTS.SIX-MONTH AND 12-MONTH OUTCOMES:\nSeven-day and 30-day point prevalence abstinence rates at 6-month and 12-month follow-up are shown in table 3. As aforementioned, outcomes are reported using three methods: (1) for all survey respondents (responder analysis), (2) using a crude ITT analysis based on penalised imputation reporting (assumes non-respondents are treatment failures) and (3) using state-of-the-art ITT based on multiple imputation for participants with missing data. Since we had missing data, we used the multiple imputation quit rate as our primary outcome. 
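For the call-count comparison reported under Programme engagement above, the statistical analysis section specifies a generalised linear model with a log link and a negative binomial distribution. A minimal statsmodels sketch follows; the data frame, its column names (`group`, `total_calls`) and the simulated counts are stand-in assumptions, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Tiny synthetic stand-in: one row per participant with the completed-call
# count and the randomisation group.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": rng.choice(["standard", "TEQ-10", "TEQ-20"], size=300),
    "total_calls": rng.negative_binomial(n=4, p=0.5, size=300) + 2,
})

model = smf.glm(
    "total_calls ~ C(group, Treatment(reference='standard'))",
    data=df,
    family=sm.families.NegativeBinomial(),  # log link is the family default
)
print(model.fit().summary())
```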
Table 3 Estimates of abstinence at 6 and 12 months, N=1785 Intervention group 6-Month survey outcomes 12-Month survey outcomes N Did not smoke in last 7 days N Did not smoke in the last 30 days N Did not smoke in last 7 days N Did not smoke in the last 30 days Responders*  Standard 368 66.0 (61.2 to 70.9) 368 60.6 (55.6 to 65.6) 363 65.3 (60.4 to 70.2) 362 61.6 (56.6 to 66.6)  Standard plus TEQ-10 368 69.6 (64.9 to 74.3) 368 65.2 (60.4 to 70.1) 370 67.0 (62.2 to 71.8) 371 63.1 (58.2 to 68.0)  Difference compared to standard 3.5 (−3.2 to 10.3) p=0.3051 4.6 (−2.4 to 11.6) p=0.1946 1.7 (−5.1 to 8.6) p=0.6191 1.5 (−5.5 to 8.5) p=0.6812  Standard plus TEQ-20 352 67.3 (62.4 to 72.2) 352 61.1 (56.0 to 66.2) 310 62.6 (57.2 to 68.0) 311 56.6 (51.1 to 62.1)  Difference compared to standard 1.3 (−5.6 to 8.2) p=0.7121 0.5 (−6.6 to 7.6) p=0.8947 −2.7 (−10.0 to 4.6) p=0.4655 −5.0 (−12.5 to 2.4) p=0.1871 Intent-to-treat penalised imputation (missing=smoking)†  Standard 592 41.1 (37.1 to 45.0) 592 37.6 (33.8 to 41.6) 592 40.0 (36.1 to 44.0) 592 37.7 (33.8 to 41.6)  Standard plus TEQ-10 602 42.5 (38.6 to 46.5) 602 39.9 (36.0 to 43.8) 602 41.2 (37.3 to 45.1) 602 38.9 (35.0 to 42.8)  Difference compared to standard 1.5 (−4.1 to 7.1) p=0.6047 2.2 (−3.3 to 7.7) p=0.4357 1.2 (−4.4 to 6.7) p=0.6827 1.2 (−4.3 to 6.7) p=0.6693  Standard plus TEQ-20 591 40.1 (36.2 to 44.1) 591 36.4 (32.5 to 40.3) 591 32.8 (29.0 to 36.6) 591 29.8 (26.1 to 33.5)  Difference compared to standard −1.0 (−6.5 to 4.7) p=0.7405 −1.3 (−6.8 to 4.2) p=0.6459 −7.2 (−12.7 to −1.7) p=0.0100 −7.9 (−13.3 to −2.5) p=0.0041 Multiple imputation‡  Standard 592 64.2 (59.5 to 68.9) 592 59.4 (53.7 to 63.8) 592 63.3 (58.4 to 68.4) 592 61.2 (55.6 to 66.8)  Standard plus TEQ-10 602 67.8 (63.0 to 72.7) 602 62.3 (57.7 to 66.9) 602 65.1 (60.3 to 69.9) 602 60.6 (56.0 to 65.2)  Difference compared to standard 3.6 (−3.3 to 10.5) p=0.2980 3.6 (−3.2 to 10.4) p=0.2969 1.7 (−4.4 to 7.7) p=0.5831 −0.6 (−7.6 to 6.4) p=0.8647  Standard plus TEQ-20 591 64.7 (59.2 to 70.2) 591 59.4 (53.7 to 65.1) 591 60.9 (56.4 to 65.5) 591 54.9 (49.0 to 60.9)  Difference compared to standard 0.5 (−7.0 to 8.0) p=0.8982 0.7 (−6.4 to 7.8) p=0.8468 −2.5 (−9.4 to 4.5) p=0.4808 −6.3 (−14.9 to 2.3) p=0.1471 Percentage of participants reporting abstinence is shown in table above with 95% CIs in parentheses. p Values obtained from χ 2 test for difference between proportions. *Estimates were obtained among participants having completed a 6-month or 12-month survey. †Estimates were obtained assuming participants missing a 6-month or 12-month survey were not abstinent. ‡Estimates were obtained by pooling results across 10 multiply imputed data sets. TEQ-10, Technology Enhanced Quitline-10. ITT-penalised imputation quit rates at 12 months were significantly lower for the TEQ-20 group compared with the standard and TEQ-10 groups. However, there were no significant differences in responder or imputed quit rates at 6 or 12 months, or rates of relapse. The quit rates based on multiple imputation at 6 and 12 months ranged from 55% to 68% depending on intervention group, follow-up time point, and outcome measure. Controlling for baseline characteristics, logistic regression models further illustrated no statistical difference between intervention groups for either primary outcome and either time point (table 4). A few baseline characteristics were statistically associated with the outcomes. 
Across the four models shown in table 4, a higher number of counselling calls, and being motivated to quit, were consistently associated with higher odds of being abstinent (all p<0.05). Depression also played a role in three of the models, with higher depression scores being associated with lower odds of being abstinent (all p<0.05). Older age resulted in higher odds of abstinence for 6-month outcomes only (both p<0.01). Having at least one comorbid chronic condition was associated with lower odds of abstinence in three of the models (all p<0.05). Table 4 ORs from adjusted logistic models of multiple imputation quit rates at 6 and 12 months, N=1785 6-Month survey outcomes 12-Month survey outcomes Variable Did not smoke in the last 7 days Did not smoke in the last 30 days Did not smoke in the last 7 days Did not smoke in the last 30 days Randomisation groups  Standard plus TEQ-10 vs standard 1.15 (0.83 to 1.59) 1.14 (0.85 to 1.54) 1.07 (0.82 to 1.41) 0.96 (0.71 to 1.30)  Standard plus TEQ-20 vs standard 1.00 (0.71 to 1.42) 1.01 (0.74 to 1.37) 0.87 (0.64 to 1.19) 0.75 (0.52 to 1.09) Number of programme counselling calls  Planned+participant−initiated 1.10 (1.02 to 1.18)* 1.10 (1.02 to 1.19)* 1.12 (1.02 to 1.22)* 1.10 (1.02 to 1.19)* Demographics Gender  Female vs male 0.92 (0.70 to 1.21) 1.00 (0.80 to 1.25) 1.17 (0.91 to 1.49) 1.16 (0.92 to 1.46)  Age 1.02 (1.01 to 1.03)*** 1.02 (1.00 to 1.03)** 1.01 (0.99 to 1.02) 1.01 (0.99 to 1.02) Race  African–American vs Caucasian 1.32 (0.89 to 1.96) 1.41 (0.91 to 2.21) 1.16 (0.75 to 1.80) 1.12 (0.72 to 1.76)  Other vs Caucasian 0.73 (0.34 to 1.56) 0.72 (0.32 to 1.66) 1.09 (0.55 to 2.16) 0.98 (0.53 to 1.81) Education  High school or less vs more than high school 1.12 (0.84 to 1.50) 1.05 (0.77 to 1.43) 0.86 (0.65 to 1.13) 1.01 (0.76 to 1.33) Comorbidities  Any chronic comorbidity 0.69 (0.51 to 0.94)* 0.65 (0.46 to 0.92)* 0.72 (0.53 to 0.98)* 0.76 (0.56 to 1.03) Tobacco use  Cigarettes per day 0.98 (0.97 to 0.99)* 0.99 (0.97 to 1.00) 0.98 (0.97 to 1.00) 0.99 (0.98 to 1.00) Time to first cigarette  5 min vs all others 0.92 (0.68 to 1.24) 1.10 (0.82 to 1.48) 0.88 (0.64 to 1.20) 0.85 (0.65 to 1.12) Number of quit attempts  2–5 vs 0–1 0.96 (0.69 to 1.34) 0.98 (0.75 to 1.29) 0.80 (0.58 to 1.10) 0.93 (0.69 to 1.24)  6+ vs 0–1 0.88 (0.57 to 1.35) 0.85 (0.56 to 1.28) 0.68 (0.47 to 0.99)* 0.79 (0.55 to 1.13) Social contact with smokers  Work or home vs all others 0.74 (0.56 to 0.97)* 0.95 (0.73 to 1.24) 0.98 (0.74 to 1.30) 0.89 (0.68 to 1.17) Scale scores  Motivation to quit 1.24 (1.12 to 1.36)*** 1.24 (1.11 to 1.39)*** 1.22 (1.11 to 1.34)*** 1.23 (1.12 to 1.35)***  Depression 0.87 (0.78 to 0.96)** 0.88 (0.80 to 0.97)** 0.94 (0.83 to 1.06) 0.89 (0.80 to 0.99)* All variables shown were included in each model. Estimates for each model were obtained by combining results across 10 multiply imputed data sets; 95% CIs are shown in parentheses. *p<0.05; **p<0.01; ***p<0.001. TEQ-10, Technology Enhanced Quitline-10. \n\nBODY.RESULTS.TIME TO RELAPSE:\nThere was no significant association between time to relapse and randomisation group (table 5). Other factors related to time to relapse included number of counselling calls, having comorbid health condition, number of quit attempts, motivation to quit and depression. Higher number of counselling calls and higher levels of motivation to quit were associated with being less likely to relapse sooner (HR=0.87, p<0.001; HR=0.84, p<0.001, respectively). 
By contrast, participants reporting higher levels of depression were more likely to relapse sooner (HR=1.07, p=0.02). Compared to those with no chronic comorbidities, participants reporting one or more chronic comorbidities were more likely to relapse sooner (HR=1.29, p=0.04). Those reporting six or more quit attempts were also more likely to relapse sooner compared to those with 0–1 attempts (HR=1.32, p=0.04). Table 5 HRs from adjusted proportional hazard models of time to relapse, N=1785 Variable HR (95% CI) p Value Randomisation groups  Standard plus TEQ-10 vs standard 0.85 (0.69 to 1.04) 0.1171  Standard plus TEQ-20 vs standard 1.05 (0.85 to 1.31) 0.6282 Number of programme counselling calls  Planned+participant−initiated 0.87 (0.82 to 0.92)*** <0.0001 Demographics Gender  Female vs male 0.90 (0.75 to 1.09) 0.2838  Age 1.00 (0.99 to 1.01) 0.5278 Race  African–American vs Caucasian 0.97 (0.75 to 1.26) 0.8400  Other vs White 1.11 (0.67 to 1.84) 0.6762 Education  High school or less vs more than high school 1.06 (0.87 to 1.30) 0.5368 Comorbidities  Any chronic comorbidity 1.29 (1.01 to 1.64)* 0.0430 Tobacco use  Cigarettes per day 1.01 (0.99 to 1.02) 0.0639 Time to first cigarette  5 min vs all others 1.08 (0.88 to 1.34) 0.4430 Number of quit attempts  2–5 vs 0–1 1.14 (0.90 to 1.43) 0.2657  6+ vs 0–1 1.32 (1.02 to 1.72)* 0.0345 Social contact with smokers  Work or home vs all others 0.99 (0.80 to 1.24) 0.9713 Scale scores  Motivation to quit 0.84 (0.79 to 0.90)*** <0.0001  Depression 1.07 (1.01 to 1.13)* 0.0198 All variables shown were included in each model. Estimates for each model were obtained by combining results across 10 multiply imputed data sets; 95% CIs are shown in parentheses. *p<0.05; **p<0.01; ***p<0.001. TEQ-10, Technology Enhanced Quitline-10. \n\nBODY.RESULTS.USING IVR TO IDENTIFY RISK:\nFigure 2 presents quit outcomes for TEQ-10 and TEQ-20 groups (separately in panel A and combined in panel B) according to their positive screen status (≥1 positive screen vs no positive screen); 70.4% in TEQ-10 and 76.1% in TEQ-20 had ≥1 positive screen during the study. Compared with participants who screened positive in either TEQ group, participants who did not screen positive were more likely to be abstinent at 6 (OR=1.69, p=0.01 for 7-day point prevalence; OR=1.77, p≤0.001 for 30-day point prevalence) and 12 months (OR=1.47, p=0.03 for 7-day point prevalence; OR=1.50, p=0.01 for 30-day point prevalence). Figure 2Abstinence rates for intervention and positive screen groups, N=1785. Estimates for each model were obtained by combining results across 10 multiple-imputed data sets. Error bars indicate 95% CIs. (A) Displays results for the TEQ-10 and TEQ-20 groups separately. (B) Displays data for the two TEQ-groups combined. '1+ positive screen' is defined as exceeding the predetermined risk threshold on any of the IVR assessment questions (topics: lapses, cravings, negative effect, self-efficacy, and motivation to remain quit) on any IVR assessment call. IVR, interactive voice response; TEQ-10, Technology Enhanced Quitline-10; TEQ-20, Technology Enhanced Quitline-20.\n\nBODY.RESULTS.SATISFACTION WITH QFL AND THE IVR PROTOCOL:\nSatisfaction was measured at 6 months after enrolment. Participants completing the 6-month survey (n=1091, 61.1%) were generally satisfied with the overall QFL programme and the counselling they received (96% and 98%, respectively, were somewhat satisfied, satisfied, or very satisfied at 6 months). A similar percentage (98%) would recommend the programme to others. 
There were no differences in QFL programme satisfaction between the three study groups. Compared to TEQ-10 participants, those receiving TEQ-20 were more likely to report the number of relapse risk assessments were too many (36% vs 23%, p≤0.001) and also were marginally more likely to agree that the risk assessments took too much time (28% vs 23%, p=0.09). The majority of participants reported it was easy to answer questions through the IVR system (95%), that the system was helpful (87%), and that they would recommend the IVR system to others trying to stay quit (86%), with no significant differences between the TEQ-10 and TEQ-20 groups.\n\nBODY.DISCUSSION:\nIn this randomised controlled trial investigating the use of an automated, IVR-supported protocol to identify risk and prevent smoking relapse among recently quit tobacco users enrolled in the QFL programme, there were no differences in tobacco abstinence rates between the control (standard QFL) and technology-enhanced (IVR-delivered risk assessments and transfer for counselling) groups at 6-month and 12-month follow-up. Although the risk assessment and transfer protocols did not yield better abstinence rates, the IVR-delivered screenings did successfully identify participants who were less likely to be quit at follow-up. Participants who did not screen positive during the relapse risk assessments were 77% more likely to be abstinent at 6-month follow-up. We tested two risk screening and transfer protocols of varying intensities. The more intensive screening protocol with 20 IVR-delivered assessments did not demonstrate any benefit over the 10-assessment group, and the TEQ-20 respondents were more likely to indicate there were too many IVR assessments. In addition, we explored whether the TEQ intervention was more effective for certain populations in this study with regard to quit outcomes, by testing interaction effects, but did not identify any significant moderating relationships. There are several possible explanations for why the intervention did not reduce relapse rates. The TEQ interventions were designed to immediately transfer participants who met risk thresholds for tobacco counselling to target the identified relapse risk. About three-quarters (73.3%) of the TEQ participants were identified as being at risk at least one time over the 8-week intervention period. Having a positive screen was predictive of relapse; however, it is possible that other risk factors are more predictive of risk than those included in the assessment. Also, participants identified as at risk by the IVR assessments and automatically transferred to a coach failed to remain on the line to receive cessation counselling at the anticipated rate. That is, less than one third of those in the TEQ arms completed an IVR-transferred counselling call and 41.3% of those with a positive risk screen completed one or more IVR-transferred counselling calls, resulting in participants in the TEQ conditions completing only 0.41 calls, on average, more than the planned programme calls from the standard programme. Thus, it is possible that we failed to observe an effect because we did not increase the actual treatment exposure much above the base rate of calls. It remains unclear why some callers did not accept these additional calls when the satisfaction data showed positive perceptions of the IVR automated assessment system (87% felt it was helpful when they need to talk to a Quit Coach; 86% would recommend). 
Most callers were still receiving planned standard programme calls at the time of their risk assessments; perhaps callers who had recently spoken to a coach did not feel they needed another call at that time. Future studies should assess reasons for lack of counselling uptake and consider an automated system that would allow participants to schedule a call in the near future or would initiate outbound calls to participants identified as at risk who do not immediately transfer to a counsellor. The study had high quit outcomes, in spite of the fact that the TEQ intervention did not significantly improve outcomes. The 61% respondent and 38% ITT penalised imputation (missing=smoking) 30-day point prevalence quit rates at 6 months in the standard treatment group indicated that tobacco users who receive a multiple-call phone counselling programme, have access to cessation medication benefits, and successfully achieve at least a 24 h abstinence period early in treatment do quite well. These quit rates are high compared to studies focusing on all commercial quitline enrolees (eg, 34.3–41.1% responder quit rate at 6 months27\n28). The relatively strong performance and engagement in standard treatment for this study may have made achieving improvements with a low intensity relapse prevention intervention even more challenging. This finding also aligns with previous recommendations that increasing quit attempts may be one of the most important factors for increasing cessation at the population level,29 and demonstrates the importance of achieving at least 24 h of abstinence. More counselling calls completed, higher motivation to quit, lower depression scores, and not having chronic health condition were consistently related to greater likelihood of being abstinent at follow-up. These variables were also associated with time to relapse; additionally, callers with a greater number of previous quit attempts (6+) were more likely to relapse sooner. These findings are consistent with previous quitline studies which have demonstrated the importance of programme engagement (ie, number of calls completed15\n30\n31). Callers in this study had a high motivation to quit (9 out of 10, on average); a previous study has linked high motivation to quit with making a quit attempt but not with likelihood of remaining quit.32 Moreover, previous work has identified that callers with depressive symptoms33 and chronic health conditions31 may have more difficulty quitting. Dependence level (time to first use and cigarettes per day) was not consistently related to outcomes in this study, which differs from previous literature that has often linked baseline dependence and outcomes.31\n32 Compared with other behavioural relapse prevention research, strengths of this study included a relatively large sample size, use of an innovative and potentially cost-effective technology that could be disseminated to large populations if shown to be effective, and attempting to connect those at risk to immediate counselling. Most previous behavioural relapse prevention studies have focused on skills training and have not yielded significant effects.7 As noted by Hajek et al,7 future relapse prevention research efforts may be best applied in examining new and alternative approaches to those previously studied. 
One recent small pilot study examined extended IVR assessments following varenicline treatment, and also did not find significant effects for the extended IVR group.12 That article did not report on whether those who screened positive during IVR assessments were more likely to have returned to smoking at follow-up, and likely had too small of sample size to meaningfully do so. Additionally, the McNaughton et al's12 study included an intensive IVR contact schedule with 80 contacts over 40 weeks; low IVR assessment completion rates (37% of IVR calls and 16% of assessments were completed) in that study align with our findings that more contact is not necessarily better. Identifying the most effective and cost-effective number of contacts should continue to be a focus in future studies using IVR technology. The response rates to our 6-month and 12-month outcome surveys were 61% and 59%, respectively, which may limit the validity of our findings. Multiple imputation, a preferred method for ITT analysis, was used to reduce potential non-response bias, and the findings from these analyses agreed closely with results from a sensitivity analysis using respondent-only data. Another potential limitation to our study design is the lack of biochemical verification of abstinence, particularly in light of the high reported abstinence rates. However, our study procedures were consistent with recommendations of the SRNT Subcommittee on Biochemical Verification,34 and the results should be considered with respect to this. Additionally, we have no reason to believe there was a systematic bias in reporting abstinence across groups. Third, the protocol aimed for IVR-triggered counselling calls to focus on the specific risk factor(s) identified by the IVR assessment; all participants received empirically based support at the time of need, but not all calls were focused on the identified risk factor(s). Finally, as with other research on prevention of smoking relapse, the time frame for intervention and data collection poses limitations. The IVR-delivered risk assessments in this study were limited to the first 8 weeks following successful abstinence. Although previous research has identified the first weeks as being the greatest risk for relapse, callers also relapse at later dates. Ongoing plans for prevention of relapse, or risk intervention, may be needed; future research may be able to identify those in need of ongoing relapse risk monitoring and intervention. Augmenting tobacco quitline treatment with IVR-delivered relapse risk screening assessments was acceptable to participants, and successfully identified recently quit tobacco users who may be in need of more intensive or different intervention. More research is needed to identify how to engage at-risk participants in additional treatment. Using a technology-based screening for relapse risk to connect recent tobacco users to cessation counselling at the time of risk could offer a scalable, cost-effective approach for delivering tailored services to those in need, if future research can identify the appropriate services to deliver at that time of need.\n\n**Question:** Compared to Standard quitline treatment what was the result of Technology Enhanced Quitline-10 and Technology Enhanced Quitline-20 on Time to relapse?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
1030
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Tranexamic acid for patients with traumatic brain injury: a randomized, double-blinded, placebo-controlled trial\n\n ABSTRACT.BACKGROUND:\nTraumatic brain injury (TBI) is commonly accompanied by intracranial bleeding which can worsen after hospital admission. Tranexamic acid (TXA) has been shown to reduce bleeding in elective surgery and there is evidence that short courses of TXA can reduce rebleeding in spontaneous intracranial haemorrhage. We aimed to determine the effectiveness and safety of TXA in preventing progressive intracranial haemorrhage in TBI.\n\nABSTRACT.METHODS:\nThis is a double blinded, placebo controlled randomized trial. We enrolled 238 patients older than 16 years with moderate to severe TBI (post-resuscitation Glasgow Coma Scale (GCS) 4 to 12) who had a computerized tomography (CT) brain scan within eight hours of injury and in whom there was no immediate indication for surgery. We excluded patients if they had a coagulopathy or a serum creatinine over than 2.0 milligrams%. The treatment was a single dose of 2 grams of TXA in addition to other standard treatments. The primary outcome was progressive intracranial haemorrhage (PIH) which was defined as an intracranial haemorrhage seen on the second CT scan that was not seen on the first CT scan, or an intracranial haemorrhage seen on the first scan that had expanded by 25% or more on any dimension (height, length, or width) on the second scan.\n\nABSTRACT.RESULTS:\nProgressive intracranial haemorrhage was present in 21 (18%) of 120 patients allocated to TXA and in 32 (27%) of 118 patients allocated to placebo. The difference was not statistically significant [RR = 0.65 (95% CI 0.40 to 1.05)]. There were no significant difference in the risk of death from all causes in patients allocated to TXA compared with placebo [RR = 0.69 (95% CI 0.35 to 1.39)] and the risk of unfavourable outcome on the Glasgow Outcome Scale [RR = 0.76 (95% CI 0.46 to 1.27)]. There was no evidence of increased risk of thromboembolic events in those patients allocated to TXA.\n\nABSTRACT.CONCLUSIONS:\nTXA may reduce PIH in patients with TBI; however, the difference was not statistically significant in this trial. Large clinical trials are needed to confirm and to assess the effect of TXA on death or disability after TBI.\n\nBODY.BACKGROUND:\nEach year, world-wide more than 1.5 million people die and about 10 million people are hospitalized following traumatic brain injury (TBI) [1]. Many survivors experienced long term disability [2]. In Thailand, each year around 8,000 people died following acute TBI and tens of thousands were hospitalized. A follow up study of 418 patients who were admitted to Khon Kaen regional hospital following TBI found that 29% of patients were either dead or severely disabled six months following the injury [3]. 
TBI is commonly accompanied by intracranial bleeding which occurs in 25% to 45%, 3% to 12% and 0.2% of severe, moderate and mild TBI cases respectively [4]. There is evidence indicating that this bleeding can develop or continue after hospital admission and that larger bleeds have a worse prognosis [5]–[12]. Delayed enlargement of traumatic intraparenchymal contusions and hematomas is the most common cause of clinical deterioration and death in patients who had a lucid interval after TBI [8,13]. A systematic review of randomized controlled trials (RCTs) shows that antifibrinolytic agents are effective in reducing bleeding in elective surgery. The review, which summarized data from 211 RCTs involving 20,781 participants, showed that both Aprotinin and Tranexamic acid (TXA) reduced the need for blood transfusion in elective surgery by about one third [RR = 0.61 (95% CI 0.54 to 0.69)] with no evidence of increased adverse effects [14]. If antifibrinolytics reduced intracranial bleeding in patients with TBI, this would have important clinical implications. However, a systematic review of haemostatic drugs in patients with TBI found no RCTs [15]. There is also some evidence on the use of TXA in patients with spontaneous aneurysmal intracranial bleeding. A systematic review of RCTs of TXA in aneurysmal haemorrhage found that although rebleeding was reduced by 45% [odds ratio = 0.55 (95% CI 0.42 to 0.71)], this benefit was offset by cerebral ischaemia such that there was no overall patient benefit [16]. However, these trials used high-dose TXA for about six weeks and it has been suggested that a shorter and lower dose might prevent rebleeding whilst avoiding the risk of ischaemia. Indeed, since the review was published, a trial of an early short course (3 days) of TXA showed that it reduced rebleeding from 10.8% to 2.4% without ischaemic adverse effects [17]. We conducted a double-blind randomized controlled trial to evaluate the effect of early short-course TXA on the occurrence of progressive intracranial haemorrhage (PIH) in patients with TBI treated at a large regional trauma centre in rural Thailand.\n\nBODY.METHODS:\nA randomized, double-blind, placebo-controlled, parallel group trial was conducted. The trial protocol was approved by the Khon Kaen hospital's ethics committee and the Khon Kaen University's ethics committee. The trial protocol was registered with http://www.clinicaltrials.gov with the trial identifier NCT00755209. Written informed consent was obtained from a legally acceptable representative of each comatose participant.\n\nBODY.METHODS.PARTICIPANTS:\nAll patients, older than 16 years, with moderate to severe TBI (post-resuscitation Glasgow Coma Scale (GCS) 4 to 12) who had a computerized tomography (CT) brain scan performed within eight hours of injury, and in whom there was no immediate indication for surgery, were eligible for inclusion. Both isolated TBI patients and polytrauma patients for whom TBI management was the critical concern during the admission period were included. Patients were excluded if they were pregnant, had evidence of coagulopathy, were known to be receiving a medication which affects haemostasis, or had a serum creatinine over 2 mg/decilitre.
Coagulopathy was considered present if any of the following hematological parameters were observed: (1) platelet count less than 100,000 cells/mm3; (2) Prothrombin time (PT) or international normalized ratio (INR) prolonged more than 1.5 times normal value; (3) activated partial thromboplastin time (aPTT) more than 10 seconds greater than normal value. Coagulopathy was a risk factor for developing PIH and mortality in previous studies [10,11,18]–[22] or recent reports [23,24]. Hence it was confounding factor to be controlled by exclusion.\n\nBODY.METHODS.INTERVENTION:\nPatients were randomly allocated to receive TXA (loading dose of 1.0 gram over 30 minutes followed by a maintenance dose of 1.0 gram infused over eight hours) or matching placebo. The placebo was sterile water and was purchased on the open market in Thailand.\n\nBODY.METHODS.STUDY OUTCOMES:\nThe primary outcome was progressive intracranial haemorrhage. It would have more association to therapeutic effect of given tranexamic acid than other outcomes in this study [25]. Moreover it was a significant source or link to morbidity and mortality in TBI [8,10,18]–[20,26]. PIH was defined as an intracranial haemorrhage seen on the second CT scan that was not seen on the first CT scan, or an intracranial haemorrhage seen on the first scan that had expanded by 25% or more on any dimension (height, length, or width) on the second scan. Progressive pressure effect was defined as either an increase in midline shift of greater than 1 mm or an increase in basal cistern between the first and second CT scan. The second CT scan was to be taken 24 hours ± 8 hours after the first CT scan. Improved Glasgow Coma Scale (GCS) motor score was also recorded to see if there is compatible change between clinical and radiological progression at 24 hours. The presence or absence of PIH was assessed by two independent readers. Both were neurosurgeons at KKH with experiences in reading posttraumatic CT scans. When there was disagreement about the presence or absence of PIH this was resolved by a third neurosurgeon reader. Inter-rater reliability was assessed by kappa statistic. Secondary outcomes were death, functional status assessed using the Glasgow Outcome Scale (GOS) at hospital discharge, blood transfusion, neurosurgical operation and any in-hospital thromboembolic events (myocardial infarction, pulmonary embolism, deep vein thrombosis, and stroke).\n\nBODY.METHODS.SAMPLE SIZE:\nWe planned to randomize approximately 240 patients, 120 to each group. We estimated that the proportion of patients with PIH would be 30% in the placebo group and that TXA would reduce to be 15%. A trial with 240 patients would have about 80% power at the 0.05 level of significance (two-sided test) to detect a treatment effect of this magnitude.\n\nBODY.METHODS.RANDOMISATION AND BLINDING:\nThe randomisation sequence (with a randomly varied block size) was generated from a computer by a person who was not involved with the trial and this sequence was used to prepare the sequentially numbered treatment packs. Whenever an eligible patient was recruited, the recruiting clinician asked that the next sequentially numbered sealed opaque treatment pack be opened and that the trial loading dose and maintenance infusion be prepared and sent to the relevant ward. This preparation was done out of sight of the recruiting clinician and research participants by nurses who were not involved in the trial. Each treatment pack contained unlabelled vials of either drug or placebo. 
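As a rough check on the sample-size estimate described above (PIH of 30% on placebo vs 15% on TXA, 80% power, two-sided alpha of 0.05), the sketch below applies the standard two-proportion sample-size formula. It is an assumed illustration, not the calculation used in the trial protocol; the function name is hypothetical.

```python
# Illustrative sketch (assumed, not the authors' code): classical sample-size
# formula for comparing two independent proportions, applied to the figures
# quoted in the Methods (30% vs 15% PIH, alpha = 0.05 two-sided, 80% power).
from math import sqrt, ceil
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Sample size per arm for detecting a difference between two proportions."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for a two-sided 5% test
    z_beta = z.inv_cdf(power)            # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

if __name__ == "__main__":
    n = n_per_group(0.30, 0.15)
    # Prints ~121 per group (about 240 in total), consistent with the
    # "approximately 240 patients" planned in the trial.
    print(n, "per group,", 2 * n, "in total")
```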
Although drug and placebo vials contained an identical amount of colorless solution, there was a small size discrepancy between the drug and placebo vials. It was for this reason that the vials were enclosed within sequentially numbered sealed opaque envelopes that were opened by nurses who were not involved in the trial. This approach ensured good allocation concealment and also ensured that those caring for the patient and those conducting the trial did not know the assigned treatment. The allocation scheme was kept confidential from all research participants until the end of the study.\n\nBODY.METHODS.STATISTICAL METHODS:\nThe primary analysis was on an intention-to-treat (ITT) basis with complete case analysis for other outcomes and was done using STATA software version 10.0. The presence or absence of PIH was analyzed as a dichotomous variable. The Glasgow Outcome Scale was also dichotomized such that death, persistent vegetative state, and severe disability constituted an unfavorable outcome while favorable outcome included moderate disability and good recovery. We calculated relative risks, risk difference with their 95% confidence intervals and hypothesis testing between the two treatment groups.\n\nBODY.RESULTS.PATIENT RECRUITMENT:\nFigure 1 shows the participant flow into the trial. The first patient was recruited on the 23rd October 2008 and the last patient on 14th August 2009 by which time a total of 238 patients had been included in the trial. All patients received the allocated trial treatments (TXA or placebo) and there were no protocol violations. There were nine patients for whom a second CT scan could not be obtained: seven patients died before the second CT scan, one patient could not be scanned because of agitation, and one patient refused the second scan. There were two consents withdrawal in the placebo group after randomization because they were signed by the unauthorized relatives. The related ethic committees were informed with an agreement for this exclusion. The inter-rater reliability of the assessment of the presence or absence of PIH was high with a kappa statistic of 0.95. The patients were enrolled with comparable profile including about mean age (40 years), male gender (80%), injury onset (within 7 hours), associated organ injury with injury severity score 24 (range from 9–43) and initial haematocrit level (38 volumes%) with moderately severe GCS severity. There were similar pressures effects finding of the first CT scan in both groups. Treatment and control groups were approximately balanced with respect to baseline characteristics (Table 1). Figure 1Participant flow. Table 1 Baseline characteristics Characteristics TXA (n =120) Placebo (n = 118) Age at randomization (years) 34.8 (16.0) 34.1 (15.3) Male 103 (86%) 107 (91%) Glasgow coma scale (GCS) severity: Moderate (9 – 12) Severe (4 – 8) 52 (53%) 47 (47%) 68 (49%) 71 (51%) Baseline haematocrit (volume %) 38 (7.4) 38 (6.7) Time since injury (hours) 6.6 (1.69) 7.1 1.29) Isolated TBI injury 20 (17%) 16 (14%) Polytrauma with TBI injury 100 (83%) 102 (86%) Mean injury severity score (range) 23.3 (9–43) 24.8 (9–43) Midline shift (>3 mm) on first CT (mm) 2 (0.02%) 3 (0.03%) Basal cistern compression on first CT 54 (45%) 53 (45%) Table 2 shows the effect of TXA on the study outcomes by the intention to treat analysis with assuming poor outcome which is used to represent the effectiveness of treatment effect. 
Progressive intracranial haemorrhage was present in 21 (18%) of patients allocated to TXA and in 32 (27%) of patients allocated to placebo. The difference was not statistically significant [RR = 0.65 (95% CI 0.40 to 1.05)]. The relative risk of death from all causes in patients allocated to TXA compared with placebo was 0.69 (95% CI 0.35 to 1.39) and the relative risk for unfavourable outcome on the Glasgow Outcome Scale was 0.76 (95% CI 0.46 to 1.27). The relative risk of blood transfusion need in patients allocated to TXA compared with placebo was 0.92 (95% CI 0.61 to 1.40). Although we had informed the clinical condition and proposed for emergency neurosurgical operation in all patients with PIH to their relatives. The neurosurgical interventions were not done in placebo group because the patients' relatives did not allow us to perform the operation hence the relative risk for neurosurgical interventions could not be calculated. There were very few adverse effects in both groups. There was no patient in TXA group who developed vascular occlusion event in this study. Table 2 Study outcomes Outcomes TXA Placebo RD RR (n = 120) (n = 118) [95% CI] [95% CI] Progressive intracranial haemorrhage (PIH) 21 (18%) 32 (27%) −0.10 0.65 moderate TBI (n = 24) 7 (6%) 17 (14%) [(−0.20)- [0.40-1.05] severe TBI (n = 20) 9 (8%) 11 (9%) (−0.01)]   indicated neurosurgery 6 (5%) 6 (5%)     Increase in pressure effect* *11 (10%) 12 (11%) −0.01 0.91 [(−0.09) – 0.07] [0.42-1.97] Improved GCS motor score at 24 hours 37 (31%) 37 (31%) −0.01 0.98 [(−0.12) – 0.11] [0.67-1.44] Neurosurgical intervention 3 (3%) 0 0.03 --- [(−0.00) – 0.05] --- Blood products transfusion 31 (26%) 33 (28%) −0.02 0.92 [(−0.13) – 0.09] [0.61-1.40] Death 12 (10%) 17 (14%) −0.04 0.69 [(−0.13) – 0.04] [0.35-1.39] Unfavorable (GOS) outcome 21 (18%) 27 (23%) −0.05 0.76 [(−0.16) – 0.05] [0.46-1.27] Adverse events     Stroke 0 3 --- --- Pulmonary embolus 0 0 --- --- Deep vein thrombosis 0 0 --- --- Gastrointestinal bleeding 0 1 --- --- *Denominator for outcomes is 114 for TXA group and 115 for placebo group. This is an analysis based on complete case analysis that is not assumed the missing outcome. Table 3 shows the effect of various methods of handling missing response in the primary outcome. The different analyses have different assumptions on existing PIH. A complete case analysis may be undertaken for primary outcome to avoid the assumption about PIH without imputation for the unavailable outcome. However different analyses give similar trend of potential benefit for treatment with tranexamic acid. Table 3 Effect of various methods of handling missing response in trial Methods of analysis PIH in TXA and placebo group Group n/N Rate RD RR (person) (%) [95% CI] [95% CI] 1. Complete case analysis TXA 16/115 14% −0.11 0.57   Placebo 28/114 25% [(−0.21)-(−0.01)] [0.32-0.99] 2. Assuming poor outcome TXA 21/120 18% −0.10 0.65   Placebo 32/118 27% [(−0.20)-(−0.01)] [0.40-1.05] 3. Assuming good outcome TXA 16/120 13% −0.10 0.56   Placebo 28/118 24% [(−0.20)-(−0.01)] [0.32-0.98] 4. Extreme case favoring placebo TXA 21/120 18% −0.06 0.74   Placebo 28/118 24% [(−0.16)-0.04] [0.44-1.22] 5. Extreme case favoring TXA TXA 16/120 13% −0.14 0.49   Placebo 32/118 27% [(−0.24)-(−0.04)] [0.29-0.85] \n\nBODY.DISCUSSION.PRINCIPAL FINDINGS:\nWe found a statistically insignificant difference but with a trend towards significant reduction in the risk of PIH in patients allocated to TXA without evidence of increased risk of thrombotic adverse events. 
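The relative risks and confidence intervals reported above can be reproduced with a simple log-scale calculation. The following sketch is an assumed illustration rather than the authors' analysis code (the article states the analysis was done in STATA); the helper function name is hypothetical.

```python
# Illustrative sketch (assumed helper, not the authors' analysis code):
# relative risk with a 95% CI computed on the log scale, reproducing the
# figures reported above for the primary outcome and for death.
from math import sqrt, log, exp
from statistics import NormalDist

def relative_risk(events_txa: int, n_txa: int, events_placebo: int, n_placebo: int):
    """Relative risk (TXA vs placebo) with a 95% CI based on the log-RR standard error."""
    rr = (events_txa / n_txa) / (events_placebo / n_placebo)
    se_log_rr = sqrt(1 / events_txa - 1 / n_txa + 1 / events_placebo - 1 / n_placebo)
    z = NormalDist().inv_cdf(0.975)
    lower = exp(log(rr) - z * se_log_rr)
    upper = exp(log(rr) + z * se_log_rr)
    return rr, lower, upper

if __name__ == "__main__":
    # Progressive intracranial haemorrhage: 21/120 (TXA) vs 32/118 (placebo)
    print("PIH RR = %.2f (95%% CI %.2f to %.2f)" % relative_risk(21, 120, 32, 118))
    # Death from all causes: 12/120 (TXA) vs 17/118 (placebo)
    print("Death RR = %.2f (95%% CI %.2f to %.2f)" % relative_risk(12, 120, 17, 118))
    # Expected: 0.65 (0.40 to 1.05) and 0.69 (0.35 to 1.39), matching the article
```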
The risks of death and of unfavorable outcome on the Glasgow Outcome Score were lower for patients allocated to TXA. The safety of early short-course treatment with TXA in our TBI patients was consistent with the CRASH2 trial, in which early short-course TXA did not increase the risk of non-fatal vascular occlusive events in patients with traumatic bleeding [27].\n\nBODY.DISCUSSION.STRENGTHS AND WEAKNESSES:\nThis is the leading randomized trial to examine the effectiveness of the early administration of a short course of TXA in patients with acute TBI. We are aware of the recent publication of the CRASH2 trial and we anticipate that the results of all relevant trials will ultimately be combined in the Cochrane systematic review of haemostatic drugs for TBI [15]. Our trial was properly randomized with good allocation concealment. The timing of the second CT scan was pre-specified in the protocol and all outcome measurements were made without knowledge of treatment allocation. The main weakness of our study is the low power to estimate the effect of TXA on clinical outcomes. In particular, although our study suggests that TXA may reduce PIH, the difference was not statistically significant, and because of the low power to examine the effect of TXA on death and on disability, the clinical implications of our findings are still limited. Another limitation is that we did not explore the effect of associated injuries or injury severity score in multiple trauma patients, which has been a concern in some studies [19,21]. Our limited sample size precluded exploring this association with the treatment effect. Most polytrauma subjects were patients for whom the trauma care team agreed, from the outset, to manage the TBI with the same priority as in isolated TBI subjects. Therefore, in the injury severity score assessment, we assigned 'serious', 'critical' and 'serious' injury grades to moderate TBI, severe TBI and other anatomical organ injuries, respectively. The common associated injuries were extremity fracture, facial fracture, rib or clavicular fracture and a few cases of other chest injury or minor abdominal trauma. Finally, the exclusion of coagulopathy is also a limitation of our study and this subject could be a point to explore in further research.\n\nBODY.DISCUSSION.PIH IN TBI:\nVarious terms have been used to describe the development or enlargement of intracranial bleeding after TBI. Terms include delayed traumatic intracranial haemorrhage (DTICH) [7], expanding hematoma [19] and progressive hemorrhagic injury [10]. In this paper we combined these concepts such that we included both new haemorrhage and expanding haemorrhage (progressive intracranial haemorrhage). Our rationale was that both lesions may exacerbate intracranial hypertension and the occurrence of both may be affected by the administration of TXA. Although patients were excluded if they had evidence of coagulopathy at baseline, the proportion of patients with PIH was surprisingly high. Had we not excluded coagulopathic patients, it is likely that the proportion of patients with PIH would have been even higher. Previous estimates of the occurrence of PIH vary widely from 7% to 60% [20]. These differences are likely to reflect differences in the timing of the CT scan, clinical setting and diagnostic criteria.
Nevertheless, given the relatively high prevalence of PIH in patients with TBI observed in this study the potential for TXA to improve clinical outcomes could be high.\n\nBODY.DISCUSSION.METHODS OF HANDLING MISSING RESPONSE IN TRIAL:\nWe summarize the result of handling missing outcome for the primary outcome in Table 3.\n\nBODY.CONCLUSIONS:\nWe have not shown that TXA improves clinical outcomes and this information would be required in order to make any recommendation about the use of TXA in clinical practice. Our results have important implications for research. If an early short course of TXA could be demonstrated to improve clinical outcomes after TBI without any important adverse effects, then this treatment, because it is cheap and widely practicable, could contribute importantly to reducing mortality and disability after TBI.\n\nBODY.COMPETING INTERESTS:\nThe authors declare that they have no competing interests.\n\nBODY.AUTHORS’ CONTRIBUTIONS:\nSY, BT, NP and PL designed, analysed and wrote the paper. WK and PP suggested the design, collected the data and conducted analyses. SY wrote the paper with input from all other authors. All authors had full access to all the data in the study and had final responsibility for the decision to submit for publication. All authors read and approved the final manuscript.\n\nBODY.PRE-PUBLICATION HISTORY:\nThe pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-227X/13/20/prepub\n\n**Question:** Compared to Placebo what was the result of Tranexamic acid on Risk of death?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
1031
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Tranexamic acid for patients with traumatic brain injury: a randomized, double-blinded, placebo-controlled trial\n\n ABSTRACT.BACKGROUND:\nTraumatic brain injury (TBI) is commonly accompanied by intracranial bleeding which can worsen after hospital admission. Tranexamic acid (TXA) has been shown to reduce bleeding in elective surgery and there is evidence that short courses of TXA can reduce rebleeding in spontaneous intracranial haemorrhage. We aimed to determine the effectiveness and safety of TXA in preventing progressive intracranial haemorrhage in TBI.\n\nABSTRACT.METHODS:\nThis is a double blinded, placebo controlled randomized trial. We enrolled 238 patients older than 16 years with moderate to severe TBI (post-resuscitation Glasgow Coma Scale (GCS) 4 to 12) who had a computerized tomography (CT) brain scan within eight hours of injury and in whom there was no immediate indication for surgery. We excluded patients if they had a coagulopathy or a serum creatinine over than 2.0 milligrams%. The treatment was a single dose of 2 grams of TXA in addition to other standard treatments. The primary outcome was progressive intracranial haemorrhage (PIH) which was defined as an intracranial haemorrhage seen on the second CT scan that was not seen on the first CT scan, or an intracranial haemorrhage seen on the first scan that had expanded by 25% or more on any dimension (height, length, or width) on the second scan.\n\nABSTRACT.RESULTS:\nProgressive intracranial haemorrhage was present in 21 (18%) of 120 patients allocated to TXA and in 32 (27%) of 118 patients allocated to placebo. The difference was not statistically significant [RR = 0.65 (95% CI 0.40 to 1.05)]. There were no significant difference in the risk of death from all causes in patients allocated to TXA compared with placebo [RR = 0.69 (95% CI 0.35 to 1.39)] and the risk of unfavourable outcome on the Glasgow Outcome Scale [RR = 0.76 (95% CI 0.46 to 1.27)]. There was no evidence of increased risk of thromboembolic events in those patients allocated to TXA.\n\nABSTRACT.CONCLUSIONS:\nTXA may reduce PIH in patients with TBI; however, the difference was not statistically significant in this trial. Large clinical trials are needed to confirm and to assess the effect of TXA on death or disability after TBI.\n\nBODY.BACKGROUND:\nEach year, world-wide more than 1.5 million people die and about 10 million people are hospitalized following traumatic brain injury (TBI) [1]. Many survivors experienced long term disability [2]. In Thailand, each year around 8,000 people died following acute TBI and tens of thousands were hospitalized. A follow up study of 418 patients who were admitted to Khon Kaen regional hospital following TBI found that 29% of patients were either dead or severely disabled six months following the injury [3]. 
TBI is commonly accompanied by intracranial bleeding which occurs in 25% to 45%, 3% to 12% and 0.2% of severe, moderate and mild TBI cases respectively [4]. There is evidence indicating that this bleeding can develop or continue after hospital admission and that larger bleeds have a worse prognosis [5]–[12]. Delayed enlargement of traumatic intraparenchymal contusions and hematomas is the most common cause of clinical deterioration and death in patients who had a lucid interval after TBI [8,13]. A systematic review of randomized controlled trials (RCTs) shows that antifibrinolytic agents are effective in reducing bleeding in elective surgery. The review, which summarized data from 211 RCTs involving 20,781 participants, showed that both Aprotinin and Tranexamic acid (TXA) reduced the need for blood transfusion in elective surgery by about one third [RR = 0.61 (95% CI 0.54 to 0.69)] with no evidence of increased adverse effects [14]. If antifibrinolytics reduced intracranial bleeding in patients with TBI, this would have important clinical implications. However, a systematic review of haemostatic drugs in patients with TBI found no RCTs [15]. There is also some evidence on the use of TXA in patients with spontaneous aneurysmal intracranial bleeding. A systematic review of RCTs of TXA in aneurysmal haemorrhage found that although rebleeding was reduced by 45% [odds ratio = 0.55 (95% CI 0.42 to 0.71)], this benefit was offset by cerebral ischaemia such that there was no overall patient benefit [16]. However, these trials used high-dose TXA for about six weeks and it has been suggested that a shorter and lower dose might prevent rebleeding whilst avoiding the risk of ischaemia. Indeed, since the review was published, a trial of an early short course (3 days) of TXA showed that it reduced rebleeding from 10.8% to 2.4% without ischaemic adverse effects [17]. We conducted a double-blind randomized controlled trial to evaluate the effect of early short-course TXA on the occurrence of progressive intracranial haemorrhage (PIH) in patients with TBI treated at a large regional trauma centre in rural Thailand.\n\nBODY.METHODS:\nA randomized, double-blind, placebo-controlled, parallel group trial was conducted. The trial protocol was approved by the Khon Kaen hospital's ethics committee and the Khon Kaen University's ethics committee. The trial protocol was registered with http://www.clinicaltrials.gov with the trial identifier NCT00755209. Written informed consent was obtained from a legally acceptable representative of each comatose participant.\n\nBODY.METHODS.PARTICIPANTS:\nAll patients, older than 16 years, with moderate to severe TBI (post-resuscitation Glasgow Coma Scale (GCS) 4 to 12) who had a computerized tomography (CT) brain scan performed within eight hours of injury, and in whom there was no immediate indication for surgery, were eligible for inclusion. Both isolated TBI patients and polytrauma patients for whom TBI management was the critical concern during the admission period were included. Patients were excluded if they were pregnant, had evidence of coagulopathy, were known to be receiving a medication which affects haemostasis, or had a serum creatinine over 2 mg/decilitre.
Coagulopathy was considered present if any of the following hematological parameters were observed: (1) platelet count less than 100,000 cells/mm3; (2) Prothrombin time (PT) or international normalized ratio (INR) prolonged more than 1.5 times normal value; (3) activated partial thromboplastin time (aPTT) more than 10 seconds greater than normal value. Coagulopathy was a risk factor for developing PIH and mortality in previous studies [10,11,18]–[22] or recent reports [23,24]. Hence it was confounding factor to be controlled by exclusion.\n\nBODY.METHODS.INTERVENTION:\nPatients were randomly allocated to receive TXA (loading dose of 1.0 gram over 30 minutes followed by a maintenance dose of 1.0 gram infused over eight hours) or matching placebo. The placebo was sterile water and was purchased on the open market in Thailand.\n\nBODY.METHODS.STUDY OUTCOMES:\nThe primary outcome was progressive intracranial haemorrhage. It would have more association to therapeutic effect of given tranexamic acid than other outcomes in this study [25]. Moreover it was a significant source or link to morbidity and mortality in TBI [8,10,18]–[20,26]. PIH was defined as an intracranial haemorrhage seen on the second CT scan that was not seen on the first CT scan, or an intracranial haemorrhage seen on the first scan that had expanded by 25% or more on any dimension (height, length, or width) on the second scan. Progressive pressure effect was defined as either an increase in midline shift of greater than 1 mm or an increase in basal cistern between the first and second CT scan. The second CT scan was to be taken 24 hours ± 8 hours after the first CT scan. Improved Glasgow Coma Scale (GCS) motor score was also recorded to see if there is compatible change between clinical and radiological progression at 24 hours. The presence or absence of PIH was assessed by two independent readers. Both were neurosurgeons at KKH with experiences in reading posttraumatic CT scans. When there was disagreement about the presence or absence of PIH this was resolved by a third neurosurgeon reader. Inter-rater reliability was assessed by kappa statistic. Secondary outcomes were death, functional status assessed using the Glasgow Outcome Scale (GOS) at hospital discharge, blood transfusion, neurosurgical operation and any in-hospital thromboembolic events (myocardial infarction, pulmonary embolism, deep vein thrombosis, and stroke).\n\nBODY.METHODS.SAMPLE SIZE:\nWe planned to randomize approximately 240 patients, 120 to each group. We estimated that the proportion of patients with PIH would be 30% in the placebo group and that TXA would reduce to be 15%. A trial with 240 patients would have about 80% power at the 0.05 level of significance (two-sided test) to detect a treatment effect of this magnitude.\n\nBODY.METHODS.RANDOMISATION AND BLINDING:\nThe randomisation sequence (with a randomly varied block size) was generated from a computer by a person who was not involved with the trial and this sequence was used to prepare the sequentially numbered treatment packs. Whenever an eligible patient was recruited, the recruiting clinician asked that the next sequentially numbered sealed opaque treatment pack be opened and that the trial loading dose and maintenance infusion be prepared and sent to the relevant ward. This preparation was done out of sight of the recruiting clinician and research participants by nurses who were not involved in the trial. Each treatment pack contained unlabelled vials of either drug or placebo. 
Although drug and placebo vials contained an identical amount of colorless solution, there was a small size discrepancy between the drug and placebo vials. It was for this reason that the vials were enclosed within sequentially numbered sealed opaque envelopes that were opened by nurses who were not involved in the trial. This approach ensured good allocation concealment and also ensured that those caring for the patient and those conducting the trial did not know the assigned treatment. The allocation scheme was kept confidential from all research participants until the end of the study.\n\nBODY.METHODS.STATISTICAL METHODS:\nThe primary analysis was on an intention-to-treat (ITT) basis with complete case analysis for other outcomes and was done using STATA software version 10.0. The presence or absence of PIH was analyzed as a dichotomous variable. The Glasgow Outcome Scale was also dichotomized such that death, persistent vegetative state, and severe disability constituted an unfavorable outcome while favorable outcome included moderate disability and good recovery. We calculated relative risks, risk difference with their 95% confidence intervals and hypothesis testing between the two treatment groups.\n\nBODY.RESULTS.PATIENT RECRUITMENT:\nFigure 1 shows the participant flow into the trial. The first patient was recruited on the 23rd October 2008 and the last patient on 14th August 2009 by which time a total of 238 patients had been included in the trial. All patients received the allocated trial treatments (TXA or placebo) and there were no protocol violations. There were nine patients for whom a second CT scan could not be obtained: seven patients died before the second CT scan, one patient could not be scanned because of agitation, and one patient refused the second scan. There were two consents withdrawal in the placebo group after randomization because they were signed by the unauthorized relatives. The related ethic committees were informed with an agreement for this exclusion. The inter-rater reliability of the assessment of the presence or absence of PIH was high with a kappa statistic of 0.95. The patients were enrolled with comparable profile including about mean age (40 years), male gender (80%), injury onset (within 7 hours), associated organ injury with injury severity score 24 (range from 9–43) and initial haematocrit level (38 volumes%) with moderately severe GCS severity. There were similar pressures effects finding of the first CT scan in both groups. Treatment and control groups were approximately balanced with respect to baseline characteristics (Table 1). Figure 1Participant flow. Table 1 Baseline characteristics Characteristics TXA (n =120) Placebo (n = 118) Age at randomization (years) 34.8 (16.0) 34.1 (15.3) Male 103 (86%) 107 (91%) Glasgow coma scale (GCS) severity: Moderate (9 – 12) Severe (4 – 8) 52 (53%) 47 (47%) 68 (49%) 71 (51%) Baseline haematocrit (volume %) 38 (7.4) 38 (6.7) Time since injury (hours) 6.6 (1.69) 7.1 1.29) Isolated TBI injury 20 (17%) 16 (14%) Polytrauma with TBI injury 100 (83%) 102 (86%) Mean injury severity score (range) 23.3 (9–43) 24.8 (9–43) Midline shift (>3 mm) on first CT (mm) 2 (0.02%) 3 (0.03%) Basal cistern compression on first CT 54 (45%) 53 (45%) Table 2 shows the effect of TXA on the study outcomes by the intention to treat analysis with assuming poor outcome which is used to represent the effectiveness of treatment effect. 
Progressive intracranial haemorrhage was present in 21 (18%) of patients allocated to TXA and in 32 (27%) of patients allocated to placebo. The difference was not statistically significant [RR = 0.65 (95% CI 0.40 to 1.05)]. The relative risk of death from all causes in patients allocated to TXA compared with placebo was 0.69 (95% CI 0.35 to 1.39) and the relative risk for unfavourable outcome on the Glasgow Outcome Scale was 0.76 (95% CI 0.46 to 1.27). The relative risk of blood transfusion need in patients allocated to TXA compared with placebo was 0.92 (95% CI 0.61 to 1.40). Although we had informed the clinical condition and proposed for emergency neurosurgical operation in all patients with PIH to their relatives. The neurosurgical interventions were not done in placebo group because the patients' relatives did not allow us to perform the operation hence the relative risk for neurosurgical interventions could not be calculated. There were very few adverse effects in both groups. There was no patient in TXA group who developed vascular occlusion event in this study. Table 2 Study outcomes Outcomes TXA Placebo RD RR (n = 120) (n = 118) [95% CI] [95% CI] Progressive intracranial haemorrhage (PIH) 21 (18%) 32 (27%) −0.10 0.65 moderate TBI (n = 24) 7 (6%) 17 (14%) [(−0.20)- [0.40-1.05] severe TBI (n = 20) 9 (8%) 11 (9%) (−0.01)]   indicated neurosurgery 6 (5%) 6 (5%)     Increase in pressure effect* *11 (10%) 12 (11%) −0.01 0.91 [(−0.09) – 0.07] [0.42-1.97] Improved GCS motor score at 24 hours 37 (31%) 37 (31%) −0.01 0.98 [(−0.12) – 0.11] [0.67-1.44] Neurosurgical intervention 3 (3%) 0 0.03 --- [(−0.00) – 0.05] --- Blood products transfusion 31 (26%) 33 (28%) −0.02 0.92 [(−0.13) – 0.09] [0.61-1.40] Death 12 (10%) 17 (14%) −0.04 0.69 [(−0.13) – 0.04] [0.35-1.39] Unfavorable (GOS) outcome 21 (18%) 27 (23%) −0.05 0.76 [(−0.16) – 0.05] [0.46-1.27] Adverse events     Stroke 0 3 --- --- Pulmonary embolus 0 0 --- --- Deep vein thrombosis 0 0 --- --- Gastrointestinal bleeding 0 1 --- --- *Denominator for outcomes is 114 for TXA group and 115 for placebo group. This is an analysis based on complete case analysis that is not assumed the missing outcome. Table 3 shows the effect of various methods of handling missing response in the primary outcome. The different analyses have different assumptions on existing PIH. A complete case analysis may be undertaken for primary outcome to avoid the assumption about PIH without imputation for the unavailable outcome. However different analyses give similar trend of potential benefit for treatment with tranexamic acid. Table 3 Effect of various methods of handling missing response in trial Methods of analysis PIH in TXA and placebo group Group n/N Rate RD RR (person) (%) [95% CI] [95% CI] 1. Complete case analysis TXA 16/115 14% −0.11 0.57   Placebo 28/114 25% [(−0.21)-(−0.01)] [0.32-0.99] 2. Assuming poor outcome TXA 21/120 18% −0.10 0.65   Placebo 32/118 27% [(−0.20)-(−0.01)] [0.40-1.05] 3. Assuming good outcome TXA 16/120 13% −0.10 0.56   Placebo 28/118 24% [(−0.20)-(−0.01)] [0.32-0.98] 4. Extreme case favoring placebo TXA 21/120 18% −0.06 0.74   Placebo 28/118 24% [(−0.16)-0.04] [0.44-1.22] 5. Extreme case favoring TXA TXA 16/120 13% −0.14 0.49   Placebo 32/118 27% [(−0.24)-(−0.04)] [0.29-0.85] \n\nBODY.DISCUSSION.PRINCIPAL FINDINGS:\nWe found a statistically insignificant difference but with a trend towards significant reduction in the risk of PIH in patients allocated to TXA without evidence of increased risk of thrombotic adverse events. 
The risks of death and of unfavorable outcome on the Glasgow Outcome Score were lower for patients allocated to TXA. The safety of an early short course of TXA in our TBI patients was consistent with the CRASH-2 trial, in which an early short course of TXA did not increase the risk of non-fatal vascular occlusive events in bleeding trauma patients [27].

BODY.DISCUSSION.STRENGTHS AND WEAKNESSES:
This is the leading randomized trial to examine the effectiveness of the early administration of a short course of TXA in patients with acute TBI. We are aware of the recent publication of the CRASH-2 trial, and we anticipate that the results of all relevant trials will ultimately be combined in the Cochrane systematic review of haemostatic drugs for TBI [15]. Our trial was properly randomized with good allocation concealment. The timing of the second CT scan was pre-specified in the protocol, and all outcome measurements were made without knowledge of treatment allocation. The main weakness of our study is the low power to estimate the effect of TXA on clinical outcomes. In particular, although our study showed a statistically non-significant difference suggesting that TXA may reduce PIH, it was underpowered to examine the effect of TXA on death and on disability, so the clinical implications of our findings are still limited. Another limitation is that we did not explore the effect of associated injuries or of the injury severity score in multiple trauma patients, which has been a concern in some studies [19,21]; our limited sample size precluded exploring this association with the treatment effect. Most polytrauma subjects were patients for whom the trauma care team had agreed, from the first moment, to manage the TBI in the same way as for isolated TBI subjects. We therefore assigned injury degrees of serious, critical and serious to moderate TBI, severe TBI and other anatomical organ injuries, respectively, in the injury severity score assessment. The common associated injuries were extremity fracture, facial fracture, rib or clavicular fracture and a few cases of other chest injury or minor abdominal trauma. Finally, the exclusion of patients with coagulopathy is also a limitation of our study, and this could be a point to explore in further research.

BODY.DISCUSSION.PIH IN TBI:
Various terms have been used to describe the development or enlargement of intracranial bleeding after TBI. Terms include delayed traumatic intracranial haemorrhage (DTICH) [7], expanding hematoma [19] and progressive hemorrhagic injury [10]. In this paper we combined these concepts such that we included both new haemorrhage and expanding haemorrhage (progressive intracranial haemorrhage). Our rationale was that both lesions may exacerbate intracranial hypertension and the occurrence of both may be affected by the administration of TXA. Although patients were excluded if they had evidence of coagulopathy at baseline, the proportion of patients with PIH was surprisingly high. Had we not excluded coagulopathic patients, it is likely that the proportion of patients with PIH would have been even higher. Previous estimates of the occurrence of PIH vary widely, from 7% to 60% [20]. These differences are likely to reflect differences in the timing of the CT scan, clinical setting and diagnostic criteria.
Nevertheless, given the relatively high prevalence of PIH in patients with TBI observed in this study the potential for TXA to improve clinical outcomes could be high.\n\nBODY.DISCUSSION.METHODS OF HANDLING MISSING RESPONSE IN TRIAL:\nWe summarize the result of handling missing outcome for the primary outcome in Table 3.\n\nBODY.CONCLUSIONS:\nWe have not shown that TXA improves clinical outcomes and this information would be required in order to make any recommendation about the use of TXA in clinical practice. Our results have important implications for research. If an early short course of TXA could be demonstrated to improve clinical outcomes after TBI without any important adverse effects, then this treatment, because it is cheap and widely practicable, could contribute importantly to reducing mortality and disability after TBI.\n\nBODY.COMPETING INTERESTS:\nThe authors declare that they have no competing interests.\n\nBODY.AUTHORS’ CONTRIBUTIONS:\nSY, BT, NP and PL designed, analysed and wrote the paper. WK and PP suggested the design, collected the data and conducted analyses. SY wrote the paper with input from all other authors. All authors had full access to all the data in the study and had final responsibility for the decision to submit for publication. All authors read and approved the final manuscript.\n\nBODY.PRE-PUBLICATION HISTORY:\nThe pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-227X/13/20/prepub\n\n**Question:** Compared to Placebo what was the result of Tranexamic acid on Risk of thromboembolic events?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
1033
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: How do patients with severe mental diagnosis cope in everyday life - a qualitative study comparing patients’ experiences of self-referral inpatient treatment with treatment as usual?\n\n ABSTRACT.BACKGROUND:\nSeveral hospitals in Norway provide short self-referral inpatient treatment to patients with severe mental diagnosis. No studies have compared the experiences of patients who have had the opportunity to self-refer to inpatient treatment with patients who have received treatment as usual. This qualitative study was nested within a randomised controlled trial investigating the effect of self-referral to inpatient treatment. The aim was to explore how patients with severe mental diagnosis coped four months after signing a contract for self-referral, as compared to patients receiving treatment as usual.\n\nABSTRACT.METHODS:\nData was collected using qualitative individual interviews with patients with severe mental diagnosis, conducted four months after being randomised either to a contract for self-referral (intervention group) or to treatment as usual (control group).\n\nABSTRACT.RESULTS:\nTwenty-five patients participated in interviews - 11 from the intervention group and 14 from the control group. Results four months after randomisation showed that patients with a contract for self-referral appeared to have more confidence in strategies to cope with mental illness and to apply more active cognitive strategies. Patients with a contract also expressed less resignation, hopelessness and powerlessness than patients without a contract. In addition, patients with a contract seemed to be closer to the ideal of living a \"normal\" life and being a \"normal\" person.\n\nABSTRACT.CONCLUSION:\nThe results indicate that the patients who had a contract for self-referral had come further in the recovery process and should possibly be better off during treatment.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nThe online version of this article (doi:10.1186/1472-6963-14-347) contains supplementary material, which is available to authorized users.\n\nBODY.BACKGROUND:\nMental health services have changed considerably during the last decades. In Norway, an increasing number of mental health service users live outside institutions and receive health care from community health services, ambulatory service teams or outpatient treatment in community mental health centers\n[1]. The focus has also shifted from perceiving severe mental illness as a permanent diagnosis towards focusing on what facilitates full or partial recovery\n[2]. Recovery is described as a process where the illness's negative impact is minimized by focusing on the patient's strengths and interests\n[3]. This shift has also moved the delivery of mental health services towards less institutional treatment and more integration in the patient's community\n[4]. 
Ambulatory health services and outpatient treatment are considered to be important and cost-efficient means of strengthening the service user's independence and responsibility for the treatment\n[4]. Outreach mental health services via community teams are widespread in the US and in Europe, especially for severe and chronic mental problems such as psychosis and schizophrenia\n[5], and are also provided in crisis\n[6]. It has been argued that institutional hospitalization should be reduced, and that when inpatient treatment is necessary, restraints should be minimized and patient control and self-determination strengthened\n[4]. Health service users have described that services at home, for instance from ambulatory acute treatment teams, have increased feelings of being in control and being seen and heard, making the users feel more responsible for the treatment\n[7]. Several qualitative studies have shown that the experience of control, autonomy and power to make choices is connected to improved quality of life\n[8], thereby underlining the importance of such experiences. Although crisis resolution is increasingly provided in the service user's home\n[6], periodically inpatient treatment is still necessary for patients with severe mental illness\n[9]. Mental health service users have emphasized that services should be flexible and provide safe and predictable care in phases with strong symptoms, as well as facilitating the user's responsibility and ability to cope in phases with fewer symptoms\n[10]. This emphasis suggests a combination of services such as home-based care to strengthen coping in good phases, and, periodically, inpatient care in phases with more symptoms. Even when inpatient treatment is necessary, it is essential to encourage and maintain the patient's control and responsibility. One way to achieve this is to give patients the opportunity to self-refer to inpatient treatment. The intention behind self-referral inpatient treatment is, in addition to reduce health service costs, to empower the service user and thereby improve symptoms as well as increase responsibility, coping, and quality of life. Giving service users the opportunity to self-refer provides the users with decision-making power during severe mental illness, and should therefore transfer responsibility and power to the service users. Several mental health hospitals in Norway provide self-referral to short inpatient treatment for some service users\n[11–14]. Patients can refer themselves to short inpatient treatment when they feel that this is necessary. Such hospitalization is meant to be in line with the service user's wishes and needs, and the service user is considered the best judge of when inpatient treatment is necessary. The proposed effects of self-referral inpatient treatment include reduced numbers of inpatient hospitalizations, fewer acute hospitalizations in crisis, more appropriate use of inpatient treatment, improved quality of life, and improved coping\n[11–14]. So far, there is no robust evidence for these effects. Internal evaluations have shown that the total amount of inpatient treatment can be reduced when service users have the opportunity to self-refer. One of these evaluations (published in Norway) indicated that the opportunity to self-refer for up to five days of inpatient treatment increased the total frequency of hospitalizations but decreased the number of hospitalization days by over 30%. 
In addition, the total amount of coerced hospitalizations was reduced by 50% six months after signing the contract\n[13]. Service users have also reported satisfaction with the opportunity to self-refer and have emphasized that services became more predictable, increased the feeling of safety and reduced symptoms\n[11]. Moreover, patients have reported that self-referral gave them more choices and a stronger feeling of control and autonomy\n[11, 12] A randomised controlled study is currently investigating the effect of self-referral inpatient treatment in a community health center in Mid-Norway. This paper describes a qualitative study nested into this randomised controlled trial. The aim was to explore the patients' experiences with the intervention. It is reasonable to believe that self-referral to inpatient treatment will influence the patients' experience of control and autonomy. Exploring the patients' experiences would provide valuable knowledge about a new and growing treatment approach in mental health services. No studies thus far have compared the experiences of patients who have the opportunity to self-refer with patients who receive treatment as usual.\n\nBODY.BACKGROUND.AIM:\nThe aim of the study was to compare the experiences of service users who had signed a contract for self-referral to inpatient treatment with the experiences of users who received treatment as usual, four months after randomization.\n\nBODY.METHODS.DESIGN AND SETTING:\nThis was a qualitative interview study involving patients at a community mental health center in Mid-Norway. The study was nested into a randomised controlled trial (RCT) investigating the effect of a contract for self-referral inpatient treatment on the numbers of hospitalizations and hospitalization days, mental health symptoms and functioning, quality of life and coping (trial registration no. NCT01133587). Patients randomised to the intervention group in the RCT signed a contract with the hospital. This contract gave the patients authority to refer themselves to up to five days of inpatient hospitalization when they felt that such treatment was necessary. In addition to any hospitalizations initiated by health personnel, patients with a contract could thus admit themselves without contacting their doctor, duty doctor or emergency department. The contract was an internal written agreement between the patient and the mental health center and had legal status as such. The contract was an addition to the regular treatment offered and did not replace the patients' regular treatment plan. Patients randomised to the control group received treatment as usual and were offered a contract for self-referral after taking part in the RCT.\n\nBODY.METHODS.PARTICIPANTS AND RECRUITMENT:\nInclusion criteria for the RCT were a diagnosis of psychosis or bipolar disorders with or without abuse problems. All participants were in need of long-term treatment in the hospital unit and the psychiatric community service. Health professionals with treatment responsibilities suggested participants for the randomized trial. All participants in the RCT were asked to participate in individual qualitative interviews.\n\nBODY.METHODS.DATA COLLECTION:\nTwenty-five patients were interviewed four months after randomisation - 11 in the intervention group and 14 in the control group. The interviews were conducted between November 2010 and December 2012, either in the hospital or in the patient's home. Three of the authors (MR, IEOM and GHE) conducted the interviews. 
All are psychiatric nurses and have extensive clinical experience. A research associate with extensive user experience participated in some of the interviews as well. The interviews lasted between 20 and 60 minutes. Themes in the interviews were taken from a pre-defined, semi-structured interview guide with open-ended questions (Additional file\n1). The main questions in the interviews related to how the patients experienced their current mental health problems and everyday life. Typical questions were: \"How is your life at present?\" and \"How is it for you when you are feeling ill?\" All participants were invited to elaborate on what they did to prevent symptoms and maintain stability, on their experiences of previous and current symptoms and on current or previous experiences with mental health services. To avoid influencing the participants during the RCT and to ask the same questions of patients in both groups, no participants were asked directly about their experiences with the interventions. The interview questions were therefore general and focused on the participants' overall situation at the time. The interventions thus became topics in the interviews only if the participants brought it up themselves. All interviews were transcribed verbatim, although redundant words and pauses were deleted and local dialect was translated to written language. The quotes used in the result section were translated into English by the first author (MBR) and verified by the rest of the author group.\n\nBODY.METHODS.ANALYSIS:\nThe analysis was conducted by a group consisting of a clinical psychologist, a researcher with extensive user experience, three psychiatric nurses, and an academic researcher on public health. The aim of this study emerged during an initial reading of the transcripts with a different research question in mind and with the intention to analyze the interviews across group allocation. Since there seemed to be several differences between the descriptions of the participants in the two groups, we chose to pursue these differences further. The authors were at that point already aware of the participants' group allocation, and we did not add new or \"blinded\" authors to the group. The interviews with patients from the control group (treatment as usual) were read and coded first, from which a list of codes (meaning units) and main themes were made. The coding process was conducted during several meetings in the author group. Then, the interviews with the patients from the intervention group (contract for self-referral) were coded. The code list and main themes from these interviews were compared with the codes and themes from the control patients. The main themes common for both groups were: Main themes from interviews: Experiences of disease and symptoms Strategies of copingShieldingStructure (food, sleep, medication)DistractionsSocial lifeThinking strategies Identity as a person with an illness Feeling safe Relationships (family, friends) Work and studies Dignity and power Values in life A normal life Four differences between the intervention group and the control group were found, which were then outlined and discussed in the author group. After the group analysis and discussion the first author (MBR) recoded all interviews and cross-checked the results with the transcripts to challenge the findings and look for different interpretations. The process confirmed the findings from the group's analysis. Quotes from the interviews were chosen to illustrate and substantiate the results. 
Quotes used in the result presentation are marked with the speaker's gender and study group (intervention or control). The final results were discussed and approved by the author group during the writing process.

BODY.METHODS.ETHICS:
The regional committee for medical ethics in Central Norway approved of the study, which was registered with the Norwegian Data Inspectorate. All participants were given verbal and written information about the study, and signed a consent form before taking part in interviews. All participants were given the opportunity to make contact with the staff at the department after the interview.

BODY.RESULTS:
Twenty-five patients participated in this study, 11 had a contract for self-referral to treatment (intervention group) and 14 received treatment as usual (control group). Nine of the participants were female. The mean age was 41 years (range 21-60 years). The participants had one or several diagnoses, including psychosis, bipolar disorders and substance use. Eight of the participants had a job or were studying. The sample is described in Table 1.

Table 1. Characteristics of participants

| Variables | Total sample | Intervention (contract for self-referral) | Control (treatment as usual) |
| --- | --- | --- | --- |
| No. of participants | 25 | 11 | 14 |
| Gender: females | 9 | 6 | 3 |
| Age (mean (range)) | 41 (21-60) | 44 (31-58) | 33 (21-60) |
| Diagnosis (some have more than one): | | | |
| Psychosis (ICD-10; F 20-29) | 19 | 7 | 12 |
| Bipolar disorders (ICD-10; F 30-39) | 6 | 4 | 2 |
| Substance use (ICD-10; F 10-19) | 6 | 3 | 3 |
| Others (ICD-10; F 60-61) | 2 | 1 | 1 |
| No. of years since receiving diagnosis (mean (range)) | 9 (2-23) | 10 (4-19) | 8 (2-23) |
| Employment: | | | |
| Paid work | 3 | 2 | 1 |
| Disability benefits | 14 | 7 | 7 |
| Sick leave | 3 | - | 3 |
| Studies: studies or serious plans to study | 5 | 2 | 3 |

The results present four aspects where the experiences of patients with a contract for self-referral appeared different than the experiences of patients who received treatment as usual. Patients with a contract expressed stronger confidence in strategies to cope with mental illness and seemed to apply more active cognitive strategies. Patients with a contract also had fewer expressions of resignation, hopelessness and powerlessness than patients who received treatment as usual. In addition, patients with a contract appeared to be closer to the ideal of living a "normal" life and being a "normal" person. The differences between the groups are described in light of what were common for all participants.

BODY.RESULTS.STRONGER CONFIDENCE IN COPING STRATEGIES:
All participants described several strategies to prevent and cope with symptom aggravation. They emphasized the importance of living a structured life with regular meals, sleep and medication, as well as avoiding smoking, excessive alcohol and drugs. Participants also talked about strategies to shield and distract themselves. Common shielding strategies were to be quiet, to be alone, to relax, and to avoid an excess of impressions, such as too many people or TV programs. Common distraction strategies were to watch TV, play computer games, read books, exercise and work, all of which could help curb anxiety, voices and visions. The participants also described that being with people or animals could both act as distraction, and provide meaningful relationships and experiences. These interactions could help to prevent and diminish symptoms. Overall, the participants described that a balance between activity and rest was necessary to cope with the mental illness. P: I have discovered that the more I talk to other people... the more stable I am. It is a sort of self-treatment or self-medication.
To actively seek contact with other people every day. Some days, when I can't get in touch with anybody I walk down to the city and look at people. I do that because I don't want to be alone. That's a bad idea. Other days I have a strong need to be alone. Then I withdraw, turn off the phone and don't meet a single person during the whole day. I sit all day and watch TV-series or DVDs or play Playstation games or read a book. And these days come and go. There is no pattern – some days are like this and some days are like that.[...] I: So books help you?P: Yes, it is good to have something where things happen. [...] When I read I disappear a little. I disappear a little from this world. I just dream myself into the book. It is very relaxing. And it is a good way to ignore the things I don't want to think about... things that are stressful to think about.(Male, Control group) Although all participants identified coping strategies, there were some differences between the ways patients with and without a contract for self-referral described them. Patients with a contract voiced stronger self-confidence in applying coping strategies, both to prevent symptoms and to handle symptom aggravation. These patients also expressed greater confidence in keeping a good balance between activity and rest. Their descriptions appeared to include more experiences in applying and mastering various coping strategies. While patients without a contract described fairly common strategies to cope and prevent symptoms, patients with a contract appeared to have stronger and more independent creativity when describing how they prevented and coped with poorer phases. Some of the patients with a contract also described new and different ways to cope with their illness. Some of these strategies contradicted the advice usually given by professionals, such as preferring to be alone instead of having an extended social life. One patient, for example, expressed a deep satisfaction with having a certain distance from other people: P: I like to eat alone. I do. I have discovered that I like to do some things on my own. [...]I have two friends at present and we have a good distance... a good distance. We text each other and meet maybe once a week, and I really appreciate that.I: The friendship?P: Yes, the distance when we meet so rarely. We are in touch on the phone... to state that we are fine. [...] I read somewhere that friendship is what you think about the other person [...] having a presence even though they are not present.(Male, Intervention group)\n\nBODY.RESULTS.COGNITIVE STRATEGIES TO INFLUENCE THINKING PATTERNS:\nAll participants reported that they experienced anxiety, mood swings and negative thoughts. Almost all participants also talked about disturbing voices and visions, describing cognitive techniques to cope with these symptoms. In addition, all patients relayed that their thinking was associated with control over their situation. Despite these commonalities, comparison showed that the participants used these techniques differently. Patients without a contract talked mostly about cognitive strategies as a means of adapting to the illness and helping them to do their best. They used words such as \"observing\", \"understanding\", and \"recognizing\" symptoms and emphasized accepting the illness. Their main focus centered on being conscious about their thoughts and views and on achieving more knowledge about the nature of the symptoms in order to decrease anxiety when those symptoms rose. 
One participant described cognitive strategies such as observing and recording the physical surroundings during strong anxiety and claustrophobia: P: I am supposed to observe and notice details in rooms and such... when these things happen. I don't know if it has helped so much. It has become worse and worse. And the feeling is very strong when I am in the midst of it. I almost have to vomit. It affects me physically.(Female, Control group) Meanwhile, patients with a contract talked less about recognizing, understanding and accepting. Instead, they used words such as \"power\", \"control\" and \"strength\". They also displayed stronger ability to influence their thinking patterns through the use of cognitive strategies. Furthermore, these patients talked more about being active and working with themselves in order to grow, to break barriers and to take opportunities. Some of them questioned if the voices and views were true, or whether there were other ways to interpret and understand them: P: [...] I used to think that everybody knew more than me about certain things. I don't anymore. [...] It's just confusing... and that's a difficult state. You become unsure of yourself and that's heavy. It's not that I know so much, but what do other people know? [...] I read for instance in a book about what happens after death. And I said to myself: How can he [the author] know more about that than me? Has he been there - or is he sitting on this side and imagining what happens afterwards? These simple things... I use tactics like these. Then it is not so bad. [...]It still annoys me when someone comes and tells me I can never get rid of it [the illness]. It will probably never be completely over, but... [...] There are limits to what I don't know [laughs]. It is important to believe in the experiences you have. Others might have similar or opposite experiences, but they are not necessarily stronger or more correct.(Male, Intervention group) The results also indicated that patients with a contract had more experience and more control over the cognitive techniques, as well as a deeper understanding of the relationship between thoughts and feelings. They seemed to be more focused on the possibility of choosing cognitive strategies to influence their thinking. While patients without a contract described some efforts to manage being in a painful situation, patients with a contract presented a wider variety of techniques to influence their mental state. This enabled them to change their ways of thinking and thus influenced their overall situation. P: Nowadays, I read a lot of self-help books. I used to be very skeptical... they had very simple solutions, but now I find them developing.I: What do you get from them?P: I guess they speak to my own problems. You work with yourself while you read. For example to shut out the past... that is described in all self-help books. To have bulkheads forwards and backwards... To think less about worries for the future and self-reproach about the past. That resonates with me... and then the brain works with it while I read.(Male, Intervention group)\n\nBODY.RESULTS.RESIGNATION AND POWERLESSNESS:\nAll participants talked about the experience of being ill. Most of them described experiences of psychosis, explaining how emotions were experienced as bodily phenomena and how the symptoms were placed in their lives. Most of the participants also talked about negative and painful experiences, both with previous and current illness and with previous and current treatment. 
There seemed, however, to be a difference in the way that participants in the two groups talked about these experiences. Patients in the control group expressed more resignation and passivity, giving the impression that they were stuck in these experiences. They used phrases such as \"yielding\", \"enduring\", \"resigning\", \"giving in\" and \"getting used to\". Their descriptions included being powerless when facing their lives and problems, a stronger need for care and being more dependent on health services. These patients gave several examples of experiencing hopelessness and powerlessness. Although there were examples of interviews where feelings of helplessness and resignation were less prominent, the following examples represent very common descriptions for the control group participants: P: I have accepted the situation I am in. I think I have resigned a little. I have withdrawn from the things that I will never experience.I: Like what?P: Well... [sighs]... my life situation, for instance. I have for instance not had a partner in years... and it is quite stressful because you get very lonely. And after some time you come to a place where you realize that the hopelessness has come to stay.(Male, Control group) Patients who had a contract for self-referral, on the other hand, discussed courage and strength and talked about conscious choices and opinions. Their descriptions reflected being in a better place in their lives, and they seemed more active and less dependent on others. These patients expressed that they had influence over their overall situation. P: Well... how do you move on? It is to work... work with yourself. To locate the problems. Some would say to accept the problems, but that sounds a bit defensive to me. Maybe you get there... when you work with your problems they become more distant. You have to learn to like yourself until you get so sure that you are not put off by any little comment... like I used to be. It was... I had a word that emerged in my head during the last hospitalization: \"Show off\". That I was a show off. Like I knew it all, and wanted to have a piece of everyone else. But then I asked a friend of mine who was admitted during the same period: Do you think I am a show off? And she laughed a little. That is a way to get your problems more distant. Is it really like this? Is it possible? To be confident... to be sure... that you throw it around instead.(Male, Intervention group) There were also a few examples of patients with a contract who seemed to have achieved some distance from their previous bad experiences, as if they had success in moving on with their lives. P: And time heals all wounds. That's how it is. That's hard to say when you are ill, but when you are recovering you see that it is a one way street. Even though you might be temporarily dumped or feel bad and upset.I: Like a crisis?P: Yes, the crisis lasts around two hours. It's like that. I don't know how other people feel, but when I am ill I feel it could last forever. I am scared that it will be a permanent death row, but it doesn't. It takes shorter and shorter time before I am well again. And that is reassuring.[...] The problems I have aren't smaller or steeper, only shorter. It feels like I can put them behind me. Previously I could have days and weeks with the same problems and feel thrown around like a ball. 
Now it takes three or four hours.(Male, Intervention group)\n\nBODY.RESULTS.BEING NORMAL AND LIVING A NORMAL LIFE:\nParticipants in both groups talked about the importance of having a normal life, doing normal things, and feeling like everyone else. A common theme was being identified with the illness, and many emphasized that being ill was not their sole identity. The patients reflected on their own status as persons with mental illness, expressing a desire for an identity beyond the role of a patient. P: I have a diagnosis, and that makes me ill – in medical terms. But it is a difference between that and perceiving yourself as an ill person. Cause I am only a guy. I'm just me.[...] I would like to build a different fundament to base the rest of my life on. To build my whole life on the foundation that I am an ill person in a treatment process... that I have an illness that defines me... I feel that it will be a very unstable life. It will fall apart. So I have decided to find the most stable foundation possible... that's what I'm trying to do.(Male, Control group) An important difference between the patient groups was that patients with a contract for self-referral had several descriptions which seemed much closer to the ideal of living a normal life and being a normal person. They used active phrases when describing their everyday life, such as \"I know\", \"I will\", and \"I can\". They also acknowledged \"the good life\" and seemed to experience more closeness to a normal everyday life. Patients who had a contract talked about the normal life with more confidence and seemed more independent of social expectations and acceptable values. I: What is the point with having an ambulatory supervisor?P: You press an extra button and get feedback on things that might... that you do well or that people notice... that there are some positive things too. They emphasize it to strengthen your self-confidence. The self-confidence isn't very strong. You get set back from being admitted. I was away for nine months. I was admitted for nine months. Afterwards you are handicapped. While I was admitted I received my meals every day... some places they helped me get up in the morning. I was helped and supported all the time. And that was difficult to give up. But it is very good to have control over your own life. Cooking and having a nice time... to light candles and... to enjoy living.(Female, Intervention group) As part of a normal life, most participants wanted to study or work, and work was associated with identity and a good life. Many of the patients who received treatment as usual highlighted studies and education as a goal which might be difficult to reach, and some of them expressed a great deal of uncertainty regarding the possibility of coping with work on an everyday basis. P: I had my job until I realized that it was enough and then I quit.I: You quit?P: Yes. My contract expired... and at the same time I told them I had to quit.I: How was that for you?P: I stayed on the sofa for a week doing nothing. And then I woke up again. It felt like... like I recharged my strength.(Female, Control group) In contrast, many of the patients who had a contract for self-referral seemed to have progressed beyond having education or work as a distant goal. Several had taken the initiative to commence studies and had concrete plans for completion, and they did not mention having to interrupt the progress of their studies due to illness as a problem. 
They focused instead on the courage and strength necessary to continue and complete. P: I found out that the deadline for applications was expired, but I called today and they told me it was fine after all. I have a week to make an application for a [...] degree to finish the last two years... [...] If I can manage to work with art in some way, or with doing odd jobs. It is very exciting and I look forward to writing things this week and to look through my previous work.(Female, Intervention group)\n\nBODY.RESULTS.EXPERIENCES WITH A CONTRACT FOR SELF-REFERRAL OR TREATMENT AS USUAL:\nAlthough the patients were not asked directly about their experiences with the intervention (contract for self-referral or treatment as usual) some of the patients raised the subject on their own accord. Patients who had a contract for self-referral expressed a great deal of authority and strength when talking about the contract, their contact with the health services and what they needed in the future. Some said that the contract gave them more influence and that this gave them a sense of control. Some also described that they felt safe since they had the option to self-refer at any time: P: I believe in the project with self-referral. It is a pillar... where you know that if something happens which is really bad... You know that you can come here and get a break. It feels safe.(Female, Intervention group) Meanwhile, a few of the patients who did not have a contract for self-referral were very dissatisfied: P: I am very disappointed to be in the control-group [receiving treatment as usual]. I am done with the ordinary mental health services. I just want the self-referral inpatient contract.(Male, Control group)\n\nBODY.DISCUSSION.STRENGTHS AND LIMITATIONS:\nTwo researchers with extensive user experience in mental health services participated in different parts of this study. One participated in the data collection and took part in the initial analysis, while the other (DB) participated in the analysis process and in writing the manuscript. User participation during the research process strengthens the authenticity of the results and helps ensure that the users' voices are heard. Since this study was nested within a randomised controlled trial the interviews had to avoid direct questions about the intervention and focus instead on the participants' general perceptions of their current situation. This made the data material rich and focused on everyday life, which strengthens the quality of the findings. Although the interviewers were advised to avoid direct questions about the intervention, several of the participants talked about having or not having a contract. Some participants who had been randomised to treatment as usual were disappointed, and the interviewers had to respond to this. This feeling of disappointment may have led to negative expressions from those participants. Effort was made to not lead the participants to overstate the positive experiences with the intervention so as to please the interviewers, but the effectiveness of this effort is unclear. Although the sample is fair-sized for a qualitative interview study, the results might not be valid for other samples in other areas of health services. The interviews were mostly conducted by researchers who have worked as psychiatric nurses at the hospital. The nurses were chosen as interviewers because they were very competent health professionals and familiar to the patients. 
They were thus able to ensure that patients were taken good care of if they had adverse reactions during or after the interview. In addition, some participants stated that they would not have participated in interviews with unfamiliar interviewers. However, this familiarity might also have led to more favourable evaluations of the hospital's services and the intervention than if the interviews had been conducted by independent researchers. Moreover, since the participants in the two different groups were treated in the same health service organisation, they had the possibility to interact and discuss their experiences. Making comparisons based on qualitative interview data is a challenging undertaking. Since the aim of the study emerged during an initial reading of the transcripts, across group allocation, the authors were not \"blind\" before the more \"comparative\" analysis. We consider it a strength to pursue the initial surprising findings in the data material. However, at that point we should have included new and \"blind\" authors in the following analysis, as that would have strengthened the analysis process and the results. We analysed the interviews with patients in the control group before comparing the main themes from the control patients with the interviews with those of the intervention group. The differences between the groups were cross-checked with the data material and discussed in the analysis group to reduce the risk of overstating the differences. The differences presented in the results section are based on general impressions when reading and analysing the interview transcripts and must be interpreted with caution. Considering that several specific words and phrases were more present in either of the two groups, we could have used a computer-based system to compare the occurrences, which could have in turn supported the findings. Not all participants in the RCT participated in interviews after four months. Despite the initial randomisation, those agreeing to participate in interviews were self-selected. Direct comparison is therefore difficult. In addition, it is not clear whether the differences were due to the intervention (self-referral to treatment) or to other factors. A comparison of the groups on demographic variables (Table \n1) also shows that there are some differences between the groups: the participants in the control group were younger, had a higher rate of psychosis diagnosis, and had a higher proportion of males. The differences between the groups in terms of age, diagnosis and gender may have influenced the results of this study.\n\nBODY.DISCUSSION.DISCUSSION:\nThe results showed that many of the patients with a contract for self-referral expressed more confidence in strategies to cope with mental illness, and they seemed to apply more active cognitive strategies. Patients with a contract also expressed less resignation, hopelessness and powerlessness than patients without a contract, appearing to be closer to the ideal of living a \"normal\" life and being a \"normal\" person. Coping strategies are important in the management of mental illnesses\n[15, 16]. Such strategies are used to cope with life stressors, to ameliorate symptoms and to prevent periods of symptom aggravation. Most persons who suffer from psychosis apply one or several strategies to cope with mental symptoms\n[17]. 
Although there is little knowledge about the usefulness of specific coping strategies, research shows that having several strategies at hand is more effective than relying on just one, although there is not one specific coping strategy which is more effective than the others\n[17]. In the present study, the patients who had a contract for self-referral described coping strategies which appeared more advanced, especially the cognitive strategies they used to influence their thinking patterns. Strategies to manage mental health symptoms are often taught by professionals during treatment\n[18]. In addition to these, most persons suffering from mental health illnesses develop personal coping strategies\n[18]. In the present study, all patients described coping strategies that they had learned, and some of the patients described self-developed strategies. Compared to patients who received treatment as usual, patients with a contract for self-referral described new and different coping mechanisms. Previous evaluations of self-referral have shown that although the total frequency of hospitalizations increased, the number of hospitalization days in fact decreased by more than 30%. Furthermore, the patients reported that a contract for self-referral lessened the burden of having to \"convince\" professionals that hospitalization was necessary and to be \"sick enough\" to need inpatient treatment\n[11, 13]. In other evaluations, patients have also expressed that the opportunity to self-refer gave them safety and confidence to make new efforts and to challenge old behavioral patterns\n[12]. This might help account for the differences in coping strategies we found in the present study. Using coping strategies is in line with the recovery model. According to this model, a person with long-term mental illness takes an increasingly active role in managing the impact of the mental illness as they move towards recovery\n[19]. Coping strategies can be divided into reactive, anticipatory, preventive and proactive strategies\n[16], and can be placed on a recovery continuum, where proactive strategies are used by persons who have come farthest in the recovery process. One of the findings in the present study was that patients with a contract for self-referral described more confidence in applying coping strategies than patients without a contract. This indicates that the patients with a contract had more experience in successfully using coping strategies and thus had come further in the recovery process. They talked more about being active, growing, breaking barriers and taking advantage of opportunities. This finding is in line with previous research, where empowerment has been described as the most important factor for recovery\n[18]. The patients who had a contract for self-referral also described some coping strategies which sometimes ran counter to the advice typically given by professionals. This indicates that patients who have had the opportunity to self-refer to treatment felt more in control and more able to contradict or neglect advice from the mental health services. This might be due to the opportunity to be in charge of decisions regarding treatment which could increase autonomy and the feeling of being in control. In the present study, the participants contrasted the identity of being an ill person with that of being a normal person. The term \"illness identity\" has been described as a set of roles and attitudes that persons with a mental illness have developed\n[2]. 
The participants in the present study related the perception of being a person with an illness to the possibility of being a normal person capable of leading a normal life. Perceiving oneself as an ill person is closely linked to feelings of stigmatization, which is connected to low self-esteem and low self-efficacy\n[20, 21]. The perception of an illness identity has been described as an important part of the recovery process\n[2]. Some have linked illness identity to hope and self-esteem, which is in turn linked to coping, engagement, and social interaction\n[2]. Coping, engagement and social interaction subsequently have an impact on vocational outcomes and symptom severity. Authors have argued that changing illness identity is an important step in improving mental health illness and functioning\n[2]. The results from the present study indicate that patients with a contract for self-referral might have a more clarified relationship towards their illness identity than patients without a contract, therefore suggesting that the former group may have come further in the recovery process. That said, it is not self-evident that having a contract for self-referral to treatment necessarily impacts illness identity, especially since the present study is based on interviews only four months after signing a contract for self-referral. It would be important to further investigate whether an opportunity to self-refer leads to increased autonomy, sense of control and decision-making power, all of which might subsequently impact the perception of illness identity. Another important finding in the present study was that the patients who had a contract for self-referral tended to express less powerlessness, resignation and hopelessness than those who received treatment as usual. The patients who received treatment as usual referred to their problems as something that they were stuck with, and they described themselves as being dependent on care from the health services. This difference in focus might be related to the finding that patients in the intervention group had stronger confidence in using coping strategies. Some have argued that stronger hope and self-esteem might lead to less avoidance, more problem-oriented strategies and more social interaction\n[2]. Previous research has shown that persons with more hope and insight in their illness use more problem-oriented and less avoidant coping strategies\n[22] and that poor confidence is connected to using avoidant strategies\n[23]. These findings are in line with the present study, which also illustrates Bandura's idea of self-efficacy\n[24]. Self-efficacy is explained as a personal capability and distinguishes one's belief in capability from self-esteem and whether you like yourself\n[25]. In the present study, we found that patients with a contract for self-referral talked more about living a normal life with work and studies. They seemed to have a closer relationship to what they called a \"normal life\" and a somewhat more active attitude towards coping, thereby possibly exhibiting a greater sense of control. Davidson and Roe argue that recovery has to be pursued actively by the patient, and that not everybody does this\n[3]. The findings in the present study indicate that patients who have had the opportunity to decide whether and when they needed referral to inpatient treatment are more actively pursuing recovery, both from and in mental illness. 
In this context signing a contract for self-referral seems to be a rehabilitation approach that encourages and supports activity, self-determination, control, and participation.\n\nBODY.CONCLUSION:\nPatients with a contract for self-referral to treatment appeared to have a wider range of coping mechanisms, more self-developed and advanced strategies, and stronger confidence in applying the strategies. The findings indicate that having the opportunity to self-refer might help strengthen autonomy, sense of control, and self-determination. According to research on coping strategies, these patients could be better off both during treatment and in the recovery process. In a similar vein, patients with a contract voiced less resignation, hopelessness and powerlessness and appeared to be closer to experiencing a \"normal\" life.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nAdditional file 1:\nInterview guide - Self-referral inpatient treatment.\n(DOCX 29 KB)\n\n**Question:** Compared to Inpatient treatment what was the result of Self-referral treatment on Negative feelings?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly decreased\n" } ]
significantly decreased
1036
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Nicotine delivery, tolerability and reduction of smoking urge in smokers following short-term use of one brand of electronic cigarettes\n\n ABSTRACT.BACKGROUND:\nThis randomized, partially single-blinded, 6-period crossover clinical study of adult smokers compared the nicotine pharmacokinetics, impacts on smoking urge and tolerability of various formulations of one brand of e-cigarettes with that of a tobacco cigarette.\n\nABSTRACT.METHODS:\nFive e-cigarettes with different e-liquid formulations containing 1.6 % and 2.4 % nicotine and a conventional tobacco cigarette were randomized among 24 subjects under two exposure sessions consisting of a 30-min controlled and a one-hour ad lib use period to assess plasma nicotine levels, impacts on smoking urge and adverse events. The 30-min controlled use session comprised an intensive use of the e-cigarettes with a total of 50 puffs taken every 30 s for comparison to a single conventional cigarette having a typical machine-measured nicotine yield (~0.8 mg). Ad lib product use conditions provided insight into more naturalistic product use behaviors and their accompanying smoking urge reductions. Adverse events (AEs) were assessed by the Principal Investigator.\n\nABSTRACT.RESULTS:\nSignificant (p < 0.05) increases in plasma nicotine concentrations occurred within 10 min of controlled e-cigarette use and significant (p < 0.001) reductions from baseline smoking urge were observed within 5 min. E-cigarette and cigarette nicotine plasma levels were comparable for up to one hour of use. After both sessions (90 min), nicotine exposure was the highest for the cigarette, with all e-cigarettes showing 23 % to 53 % lower plasma concentrations. During controlled use, peak reduction in smoking urge for e-cigs occurred later than for the cigarette. After completion of both sessions, significant smoking urge reduction persisted for most of the tested e-cigarettes, albeit at levels lower than that provided by the tobacco cigarette. Nicotine content, vehicle differences, and the presence of menthol did not significantly affect smoking urge reduction by the e-cigarettes. No subjects were discontinued due to AEs. The most frequently reported AEs events included cough, throat irritation, headache, and dizziness.\n\nABSTRACT.CONCLUSIONS:\nBlood plasma nicotine levels obtained from short-term use of e-cigarettes containing 1.6 % and 2.4 % nicotine were significant, but lower than those of conventional tobacco cigarettes, yet the reduction in craving symptoms were broadly comparable. The types of AEs were consistent with other research studies of longer duration that have reported that use of e-cigarettes by adult smokers is well-tolerated.\n\nABSTRACT.TRIAL REGISTRATION:\nhttp://ClinicalTrials.gov identifier: NCT02210754. 
Registered 8 August 2014.\n\nBODY.BACKGROUND:\nSmoking remains the leading preventable cause of premature death in the United States [1]. Electronic cigarettes (e-cigarettes) are rapidly becoming a popular alternative to cigarette smoking worldwide and are garnering significant attention as potentially reduced-exposure replacements for conventional cigarettes (cigarettes) [2–6]. E-cigarettes consist of a battery, heating component, and a cartridge containing tobacco-derived nicotine in a solution composed of glycerin and/or propylene glycol (PG), and flavorings. Upon activation, the heating element heats the solution to generate an aerosol that is inhaled by the consumer in a manner that mimics smoking. Pharmacokinetic studies with early-generation e-cigarettes found that they delivered markedly lower levels of plasma nicotine than conventional cigarettes [7, 8]. More recent studies have reported that experienced subjects using later-generation e-cigarettes containing 9 to 24 mg/mL of nicotine attain significant increases in plasma nicotine concentrations over baseline values that can be similar to those obtained from conventional cigarette smoking [9–11]. These results suggest that an acclimation to the product may be necessary for naïve smokers to become familiar with the characteristics of the specific e-cigarettes in order to effectively use them in a subjectively enjoyable manner. Moreover, these studies also demonstrated a reduction in smoking urge or abstinence symptoms following e-cigarette use, even for products that deliver less nicotine than conventional tobacco cigarettes. This is quite consistent with a perspective that the substantial sensory and behavioral aspects of cigarette smoking that are mimicked by e-cigarettes may provide meaningful relief of the cigarette cravings commonly reported by abstinent smokers. Surveys and clinical studies evaluating the impacts, tolerability and adverse events (AEs) associated with e-cigarette use suggest that they are generally well-tolerated following short-term use [1, 9, 12–16]. Commonly reported AEs include symptoms such as mouth and throat irritation, light-headedness, dizziness and dry cough. The primary objective of this study was to examine the nicotine blood plasma levels and smoking urge impacts of various formulations of one brand of e-cigarette with that of a tobacco cigarette. The tested products contained different flavorings, aerosol forming excipients (i.e., propylene glycol, glycerin) and nicotine levels. The secondary objectives were to assess the tolerability and any adverse events associated with the study products following short-term use under intensive controlled use conditions, as well as under more natural ad-lib use conditions.\n\nBODY.METHODS.PARTICIPANTS:\nThe study protocol and the informed consent forms were approved by Chesapeake IRB, Columbia, MD. A total of 107 potential subjects were recruited from the Lincoln, NE area using standard advertising methods (i.e., print and radio advertisements) and from a database of subjects who had previously participated a clinical research study or who had expressed interest in participating in a study. All potential subjects were provided details regarding the study and written informed consent was obtained prior to completion of any study procedures. 
Sixty-one subjects were excluded for not meeting the predefined inclusion/exclusion criteria, while five subjects declined to participate prior to enrollment and three subjects were excluded because the study had reached the recruitment target of 38 eligible subjects. The 38 subjects meeting the eligibility criteria participated in a 7-day at-home lead-in period during which they used and became familiar with two of the e-cigarette study products (non-menthol and menthol, 2.4 % nicotine with glycerin). All subjects who participated in the at-home lead-in period were allowed to check in at the start of clinical conduct. Two subjects chose not to participate further after the lead-in period and one subject failed further screening requirements at check-in. A final study population of 24 subjects was enrolled into the study and randomized into one of six product usage sequences with four subjects per sequence. One subject withdrew consent after completion of the first product use on Day 1 of the trial due to a personal reason (family emergency). All subjects participating in the study from the time of the lead-in period were paid for their participation. The main criteria for inclusion were as follows: healthy adult male and female smokers, 21 to 65 years of age, inclusive; smoker for at least 12 months and currently smoked an average of 10 or more manufactured cigarettes per day (no restriction on brand-style); positive urine cotinine at screening (≥ 500 ng/mL); and exhaled carbon monoxide CO > 10 ppm at screening. Exclusion criteria included: history or presence of clinically significant mental or physical health conditions; females who were pregnant or breastfeeding; high blood pressure; body mass index < 18 kg/m2 or > 40 kg/m2; acute illnesses (e.g., upper respiratory infection, viral infection) requiring treatment within 2 weeks prior to check-in; use of prescription smoking cessation treatments, anti-diabetic or insulin drugs or medications known to interact with cytochrome P450 2A6; positive urine screen for alcohol or drugs of abuse; and self-reported puffers (i.e., smokers who draw smoke from the cigarette into the mouth and throat but do not inhale). Subjects who had used any tobacco- or nicotine-containing products other than manufactured cigarettes or e-cigarettes within 28 days of in-clinic product use were also excluded.\n\nBODY.METHODS.PRODUCTS TESTED:\nA rechargeable version of an e-cigarette that is currently sold in retail outlets throughout the United States was used in this study. The rechargeable e-cigarette consists of a battery segment and a cartomizer segment comprising the heating unit and a liquid reservoir which can be separated from the battery for recharging or replacement when the e-liquid is depleted. The battery operates at a voltage of 3.7 volts (nominal) and the resistance of the heating element is approximately 3 ohms. The maximum operating temperature is dependent on both the state of reservoir fluid fill and on the manner of use and was not recorded in this study. Two commercial e-cigarette products that contained 16 mg/mL (1.6 %) USP grade nicotine were used in this study. As well, three non-commercial products that contained 24 mg/mL (2.4 %) USP grade nicotine were used in this study to evaluate various product characteristics considered important to further product development. 
In addition to nicotine, the e-cigarettes used in this study contained USP grade glycerin and/or propylene glycol (as described below), distilled water (<20 %), citric acid (<1.0 %) and natural or artificial flavors (<10 %). The nicotine yield of the conventional cigarette used in the study was approximately 0.8 mg per cigarette [17]. The study products included:Product A: Classic Tobacco flavored e-cigarette (2.4 % nicotine, ~75 % glycerin)Product B: Classic Tobacco flavored e-cigarette (2.4 % nicotine, ~50 % glycerin/~20 % propylene glycol)Product C: Menthol flavored e-cigarette (2.4 % nicotine, ~75 % glycerin)Product D: Classic Tobacco flavored e-cigarette (1.6 % nicotine, ~75 % glycerin)Product E: Classic Tobacco flavored e-cigarette (1.6 % nicotine, ~50 % glycerin/~20 % propylene glycol)Product F: Tobacco Cigarette\n\nBODY.METHODS.STUDY DESIGN:\nThis was a randomized, partially single-blinded, six-period crossover study conducted at a single independent research center (Celerion, Lincoln, NE). Twenty-four subjects were randomly selected from the pool of 38 subjects who participated in the at-home lead-in period were enrolled into the testing phase of the trial and randomized to assigned product sequences. The subjects checked into the clinic on Day −2 and abstained from using of nicotine-containing products until product use on Day 1. Days 1, 3, 5, 7, 9, and 11 were designated product use days while Days −1, 2, 4, 6, 8, and 10 were designated wash-out days. The subjects were housed at the test site from the time of check-in through completion of study events on Day 11. The clinic staff monitored the subjects during the confinement period to ensure that no illicit nicotine or tobacco products were used. Study investigators were not blinded, but the subjects were as the all the e-cigarette products appeared the same. The menthol product, however, was easily discernible due to taste. Previous studies have demonstrated that a lack of familiarity with e-cigarette products may result in low nicotine intake with their use [7, 18]. In order to allow study participants to become familiar with the e-cigarette devices, subjects were instructed on the appropriate use of the products and were required to demonstrate the appropriate use to clinic staff. Each subject was then provided with two units of the menthol and non-menthol 2.4 % nicotine products containing glycerin for at-home use prior to the start of the study and instructed to use the products each day during the 7-day lead-in period. Two types of exposures were utilized in the study on each product use day: a controlled use period followed by an ad lib use period. Similar 2-stage designs have been informative in prior studies evaluating the nicotine delivery and subjective effects of e-cigarettes [5–9, 19]. 
Enrolled subjects were randomized into one of six product usage sequences (Table 1) with four subjects per sequence.Table 1Summary of study demographics and FTCD scores by study product sequence and overallTrait/TestCategory/StatisticStudy product sequenceABFCED (N = 4)BCADFE (N = 4)CDBEAF (N = 4)DECFBA (N = 4)EFDACB (N = 4)FAEBDC (N = 4)Overall (N = 24)SexFemale2 (50 %)2 (50 %)2 (50 %)2 (50 %)2 (50 %)2 (50 %)12 (50 %)Male2 (50 %)2 (50 %)2 (50 %)2 (50 %)2 (50 %)2 (50 %)12 (50 %)RaceAmerican Indian/Alaska Native1 (25 %)0 (0 %)0 (0 %)0 (0 %)0 (0 %)0 (0 %)1 (4 %)Black or African American0 (0 %)1 (25 %)0 (0 %)2 (50 %)0 (0 %)0 (0 %)3 (13 %)White3 (75 %)3 (75 %)4 (100 %)2 (50 %)4 (100 %)4 (100 %)20 (83 %)EthnicityHispanic or Latino1 (25 %)0 (0 %)0 (0 %)0 (0 %)0 (0 %)0 (0 %)1 (4 %)Not Hispanic or Latino3 (75 %)4 (100 %)4 (100 %)4 (100 %)4 (100 %)4 (100 %)23 (96 %)Age (yrs)Mean38.340.538.335.334.045.838.7SD12.0415.7015.444.039.496.1310.77Height (cm)Mean167.25175.75168.75172.75167.00171.75170.54SD3.40310.2436.1313.7758.2069.8117.331Weight (kg)Mean84.179.270.874.780.083.978.8SD16.5225.896.1513.1312.9314.1414.91BMI (kg/m2)Mean30.05025.19824.79825.01528.94828.37527.064SD6.04395.76220.68543.84646.38104.32554.8512Fagerström Test for Cigarette Dependence ScoreMean4.05.35.06.04.07.85.3SD2.160.51.412.162.162.632.18Median3.55.04.55.54.58.55.0Minimum2544141Maximum767961010 Given the wide variability in smoking behaviors, the controlled use period of the study was intended to provide some degree of standardization of the nicotine \"dose\" associated with each of the study products to better understand potential influences of different vehicles, flavor characteristics and nicotine content on measured parameters. As such, the controlled use period of the study consisted of 50 puffs of the assigned e-cigarette product (5-s puffs at 30-s intervals, approximately 24.5 min of use) and smoking one conventional tobacco cigarette (30-s intervals with the subjects' normal puff duration, approximately 4.5 min of use). Fifty e-cigarette puffs was selected as an optimal controlled \"dose\" as it approximated the dose of nicotine delivered by the cigarette used in the study (~0.8 mg) based on machine yields of the e-cigarettes determined previously using a standardized Canadian Intense puffing protocol [20, 21]. Thus, this controlled \"dose\" period was intended to reflect intensive usage of the e-cigarettes for comparison to the tobacco cigarette used in the study. During the controlled use period, subject puff counts were monitored by the clinical staff and all e-cigarettes were weighed before and after use in order determine the amount of solution consumed. Further, evaluation under the ad lib use conditions provided plasma nicotine levels under uncontrolled, natural use conditions and insights into product use behaviors accompanied by subjective self-assessments of smoking urge. During ad lib use, subjects assigned to an e-cigarette or tobacco cigarette product were allowed to use the product as desired for the entire hour (use of the assigned e-cigarette products in an unrestricted manner or smoking as many tobacco cigarettes as they chose) and subjects were responsible for maintaining their own puff counts. All e-cigarette products were weighed before and after use. The ad lib use period was conducted immediately following the end of the controlled product use session. 
Blood samples for plasma nicotine determinations, smoking urge assessments, blood pressure, pulse rate, and exhaled CO measurements were also obtained at scheduled time points on each product use day.\n\nBODY.METHODS.PHARMACOKINETICS (PK) - PLASMA NICOTINE:\nBlood samples for the measurement of plasma nicotine concentrations were taken by direct venipuncture at 10 min prior to, and at 5, 10, 15, 20, 25, 30, 45, 60, 75, and 90 min following the start of the controlled product usage on Days 1, 3, 5, 7, 9, and 11. Plasma nicotine was analyzed by LC-MS/MS using validated analytical methods with appropriate quality controls in accordance with applicable FDA Good Laboratory Practice regulations (Title 21 CFR Part 58). The limit of quantification for nicotine was 0.200 ng/mL. The following PK parameters were calculated:Cmax0–30 - maximum observed concentration from time zero to 30 min.AUC0–30 - area under the concentration-time curve from time zero to 30 min.tmax0–30 - time of the maximum concentration from time zero to 30 min.AUC30–90 - area under the concentration-time curve from 30 to 90 min.C90 - maximum observed concentration at 90 min.\n\nBODY.METHODS.PHARMACODYNAMICS – SMOKING URGE:\nSmoking urge was assessed using a simple and subjective 100 mm visual analog scale (VAS). Various forms of VAS have been used in e-cigarette studies as a tool for obtaining various subjective effects data associated with e-cigarette use and measuring nicotine and smoking abstinence symptom suppression [6–10]. Participants were asked to rate \"how strong is your urge to smoke right now?\" by placing a line through a 100 mm line where far left indicated: 'not at all' and far right indicated: 'extremely'. Assessments occurred within 1 min prior to the −10 (pre-product use), 5, 15, 25, 30, 60, and 90-min PK blood draws. The smoking urge change-from-baseline was calculated as the difference between the pre-use and post-use smoking urge result for each tested product. The following pharmacodynamic (PD) parameters were calculated:Emax0–30 - Maximum smoking urge change-from-baseline from time zero to 30 min.Emaxreduction0–30 - Maximum smoking urge reduction from baseline from time zero to 30 min.AUEC0–30 - Area under the effect curve (smoking urge change-from-baseline) from time zero to 30 min.tEmax0–30 - Time of the maximum smoking urge change-from-baseline from time zero to 30 min.AUEC30–90 - Area under the effect curve (smoking urge change-from-baseline) from 30 to 90 min.E90 - Observed smoking urge change-from-baseline at 90 min.\n\nBODY.METHODS.TOLERABILITY AND ADVERSE EVENTS (AES):\nTolerability evaluations included assessments of AEs, vital signs and concomitant medications. AEs reported by the subjects or observed by the clinic staff were assessed for severity (mild, moderate, or severe), as serious or not serious, and relationship to the study products (unrelated, unlikely, possible, probable, or definite) by the Principal Investigator. A study product use-emergent AE was defined as an AE that started or intensified at the time of or after study product usage. An AE that occurred during the washout period between study products was considered study product use-emergent for the last study product given. All reported AEs were coded with Medical Dictionary for Regulatory Activities (MedDRA®), Version 17.0. 
AEs were recorded by frequency by study product and the number of subjects experiencing product use-emergent AEs.\n\nBODY.METHODS.DATA ANALYSES:\nNon-compartmental PK and smoking urge parameters were calculated using Phoenix® WinNonlin® Version 6.3 (Certara, Princeton, NJ) from the individual nicotine concentration and smoking urge change-from-baseline data. All statistical summarizations and comparisons were calculated using SAS® Version 9.3 (SAS, Cary, NC). Analyses of variance were performed on the plasma nicotine AUC0–30, AUC30–90, Cmax0–30, and C90 parameters, and a Wilcoxon Signed Rank Test was used for the tmax0–30 parameter to assess differences between the e-cigarettes and the tobacco cigarette. Usual brand cigarette flavor (menthol or non-menthol) was included in each of the analyses as a covariate to account for any impact that flavor preference might have had on the results. Repeated measures ANOVA with simple contrasts were used to compare the maximum observed concentrations of plasma nicotine within the first 30 min and at 90 min (Cmax0–30 and C90) to the pre-product use concentration. The smoking urge parameters AUEC0–30, AUEC30–90, Emax0–30, E90, and tmax0–30 were compared using methods similar to those used in the PK analyses. Differences were considered statistically significant at an alpha level of 5 %.\n\nBODY.RESULTS AND DISCUSSION.PARTICIPANT CHARACTERISTICS:\nOf the 24 subjects who were enrolled into the testing phase of the study and randomized to study product sequences, 23 subjects completed the study and one subject withdrew due to a family emergency. A summary of the subjects' demographics together with the results of the Fagerström Test for Cigarette Dependence (FTCD) scores [22, 23] for all study participants by study product sequence and overall is presented in Table 1.\n\nBODY.RESULTS AND DISCUSSION.PRODUCT USE:\nThe mean pre-to-post use product weight differences for the e-cigarettes and the mean puffs taken from all products during each period of the study are provided in Table 2. During controlled product administration, all subjects were to take the same number of puffs (50) from the e-cigarette products with a defined inhalation pattern (5-s puffs every 30 s) while monitored by the clinical staff in an attempt to standardize nicotine intake. The resulting mean weight differences were comparable across study products (0.2238 - 0.2570 g), with the highest and lowest estimated nicotine delivery from the 1.6 % nicotine products. During the ad lib product use period, subjects were free to puff as often as they chose with no limitation on puff duration. As a result, the number of puffs from each of the e-cigarette products varied widely across subjects during the ad lib use period. However, the mean puff counts were fairly comparable across products, averaging between 49.5 and 60.3, with the highest and lowest mean puff counts again noted with the 1.6 % nicotine products. Product weight differences followed the same pattern. 
Hence, there was no clear indication in either period that subjects compensated during use of the lower nicotine-containing products by taking more or deeper puffs in an attempt to self-administer more nicotine.Table 2Product use summaryControlled Use Period\nAd Lib Use PeriodE-CigarettePuffsPre-to-Post Use Weight Difference (g)PuffsPre-to-Post Use Weight Difference (g)2.4 % nic + Gly50.0 (50–51)0.2251 ± 0.040852.3 (5–128)0.1467 ± 0.0843\nN = 232.4 % nic + Gly/PG50.0 (50–51)0.2349 ± 0.063355.4 (4–136)0.1445 ± 0.0880\nN = 232.4 % nic + Gly + menthol50.0 (50–51)0.2421 ± 0.047758.0 (8–140)0.1604 ± 0.0827\nN = 231.6 % nic + Gly50.0 (50–51)0.2238 ± 0.039960.3 (8–112)0.1738 ± 0.0937\nN = 231.6 % nic + Gly/PG50.0 (50–51)0.2570 ± 0.043349.5 (3–118)0.1194 ± 0.0675\nN = 23PuffsCigarettes SmokedPuffsCigarettes SmokedTobacco Cigarette10.5 (9–13)1.0 (NA)38.6 (16–103)3.6 (2–7)\nN = 24Values for puff count and cigarettes smoked are presented as mean (range). Values for the pre-to-post weight differences are presented as mean ± SD\nnic nicotine, Gly glycerine, PG propylene glycol The mean differences in product weights were observed to be smaller for each e-cigarette (~22 % - 54 %) following ad lib product use compared to the controlled product administration despite the subjects having taken similar numbers of puffs, or more, during ad lib product use. This was likely due to shorter puff durations during ad lib product use. For the tobacco cigarettes, subjects smoked only a single cigarette during the controlled product administration, with an average of 10.5 puffs per cigarette taken. During the ad lib phase subjects smoked an average of 3.6 cigarettes, with approximately the same number of puffs taken per cigarette.\n\nBODY.RESULTS AND DISCUSSION.NICOTINE PHARMACOKINETICS AND BLOOD PLASMA LEVELS:\nThe mean plasma nicotine concentration-time profiles are presented in Fig. 1. Baseline nicotine concentrations were below the limit of quantification in the majority of subjects and mean baseline nicotine concentrations were comparable across study products, all representing less than half the limit of quantification of 0.200 ng/mL. During the controlled use period, mean concentrations increased peaked immediately after use of the single tobacco cigarette (p < 0.05 compared to baseline) and then steadily decreased until reaching a minimum at the end of the period. For all e-cigarettes, nicotine plasma absorption was slower, with mean concentrations steadily increasing with continued product use and peaking at the end of the period. Usage of the study e-cigarettes resulted in statistically significant (p < 0.05) increases from baseline in nicotine concentration as soon as 5 min following the start of product use with the exception of the 1.6 % nicotine product with glycerin, which also reached statistical significance after 10 min of use. Plasma nicotine concentrations steadily increased during the one-hour ad lib use of the tobacco cigarettes, whereas ad lib use of the e-cigarettes resulted in plasma nicotine concentrations that increased during the first 45 min and then plateaued during the last 15 min.Fig. 1Mean Plasma Nicotine Concentration Versus Time The summary of nicotine plasma PK parameters is presented in Table 3. 
The controlled e-cigarette puffing regimen provided peak (Cmax30) and overall (AUC0-30) nicotine exposures comparable to the tobacco cigarette for the two non-menthol 2.4 % nicotine products and the 1.6 % nicotine product containing glycerin and PG (all differences within 15 % of the tobacco cigarette). However, use of 1.6 % nicotine product containing only glycerin provided significantly lower peak (35 %) and overall (39 %) exposures despite having the highest mean puff counts of any of the e-cigarette products. Use of the menthol product (2.4 % nicotine with glycerin) also yielded a significantly lower overall exposure (16 %) compared to the tobacco cigarette, but a comparable peak concentration. Not surprisingly, the use pattern of the e-cigarette during the controlled use period resulted in a significantly longer time to peak nicotine concentration (Tmax0–30) than for the single tobacco cigarette.Table 3Summary of plasma nicotine pharmacokinetic parameters and statistical comparisons to the tobacco cigaretteE-CigarettesPharmacokinetic Parameters2.4 % nic + Gly2.4 % nic + Gly/PG2.4 % nic + Gly + menthol1.6 % nic + Gly1.6 % nic + Gly/PGTobacco Cigarette\nN = 23\nN = 23\nN = 23\nN = 23\nN = 23\nN = 24Cmax0–30 (ng/mL)17.4 ± 5.9718.1 ± 6.4715.3 ± 5.1610.3 ± 3.70*15.1 ± 4.6115.8 ± 8.64\np-value0.12420.05080.88580.00020.7776Tmax0–30 (hr)0.50 (0.33, 0.55)0.50 (0.33, 0.52)0.50 (0.33, 0.52)0.50 (0.33, 0.61)0.50 (0.33, 0.62)0.09 (0.08, 0.42)\np-value<0.0001<0.0001<0.0001<0.0001<0.0001AUC0–30 (ng*hr/mL)4.7 ± 1.844.9 ± 1.754.1 ± 1.633.0 ± 1.184.3 ± 1.394.9 ± 1.79\np-value0.76360.84430.0406<0.00010.0928AUC30–90 (ng*hr/mL)22.4 ± 5.6124.6 ± 7.9921.6 ± 5.0814.7 ± 5.1519.8 ± 4.7222.1 ± 5.98\np-value0.84630.06020.6319<0.00010.0756C90 (ng/mL)19.7 ± 5.7222.4 ± 7.6519.4 ± 5.8013.7 ± 5.9816.8 ± 4.4429.2 ± 10.86\np-value<0.0001<0.0001<0.0001<0.0001<0.0001Tmax0–30 is presented as median (minimum, maximum), all other values are presented as arithmetic mean ± SD. Statistical significance is based on the differences in least squares means between groups. P-values represent the comparison to the tobacco cigarette\nnic nicotine, Gly glycerine, PG propylene glycol During the ad lib use period, overall nicotine exposure (AUC30–90) was comparable (differences within 11 %) between the tobacco cigarette and all e-cigarette products with the exception of the 1.6 % nicotine product with only glycerin, which provided a 33 % lower overall exposure. However, the nicotine concentration at the end of the ad lib use period was significantly higher following use of the tobacco cigarette compared to the e-cigarettes, with concentrations ranging from 23 % to 53 % lower after 60 min of ad lib use. Overall, the exposure parameters among the e-cigarette products tended to be higher for the 2.4 % nicotine products, higher for the products containing both glycerin and PG, and lower for the product containing menthol compared to the other 2.4 % nicotine products.\n\nBODY.RESULTS AND DISCUSSION.PHARMACODYNAMICS: EFFECTS ON URGE TO SMOKE:\nThe mean smoking urge change from baseline-time profiles are presented in Fig. 2. Mean baseline smoking urge values across all test products were comparable, with the mean visual assessment scale (VAS) responses ranging from 62 to 68 out of 100 (mm).Fig. 
2Mean Smoking Urge Change from Baseline versus Time (Note: More Negative Values Indicate a Stronger Urge Reduction) An immediate statistically significant (p < 0.001) reduction in smoking urge was observed at 5 min following the start of controlled use of each study product. Following use of a single tobacco cigarette, the maximum reduction in smoking urge was observed at 5 min, approximately corresponding to the end of use of that study product, and then increased during the remainder of the controlled use period. Continuous use of the e-cigarette products throughout the controlled use period resulted in a maximal reduction in smoking urge also approximately corresponding to the end of product use. During the ad lib use period, use of the tobacco cigarette resulted in a steady decrease in smoking urge, peaking at the final time point, while the response with the e-cigarettes peaked 30 min into ad lib use and stabilized during the final 30 min. Table 4 summarizes the change in smoking urge PD parameters. No statistically significant differences were found in the maximal smoking urge reduction parameters Emax0–30 and Emaxreduction0–30 (all differences within 11 %) or the overall smoking urge reduction parameter AUEC0-30 (all differences within 21 %) between the use of a single tobacco cigarette and each of the e-cigarettes during the controlled use period. The mean time to the maximal reduction in smoking urge was shorter for the tobacco cigarette (15 min) than each of the e-cigarettes (25 – 30 min), but only significantly so for both of the 1.6 % nicotine products and the 2.4 % nicotine product with menthol.Table 4Summary of change in smoking urge parameters and statistical comparisons to the tobacco cigaretteE-CigarettesSmoking Urge Parameters2.4 % nic + Gly2.4 % nic + Gly/PG2.4 % nic + Gly + menthol1.6 % nic + Gly1.6 % nic + Gly/PGTobacco Cigarette\nN = 23\nN = 23\nN = 23\nN = 23\nN = 23\nN = 24Emax0–30 (mm)−41.9 ± 26.56−44.7 ± 26.98−37.0 ± 30.25−40.2 ± 32.47−40.7 ± 24.25−41.5 ± 27.43\np-value0.92850.64270.30230.69720.7450Emaxreduction0–30 (mm)−42.2 ± 26.01−44.7 ± 26.98−39.1 ± 25.86−41.4 ± 30.40−40.7 ± 24.25−41.7 ± 27.08\np-value0.94890.65050.50560.84800.7079AUEC0–30 (hr*mm)−14.1 ± 10.56−15.5 ± 10.39−12.2 ± 10.75−13.9 ± 12.09−13.3 ± 8.29−15.4 ± 12.67\np-value0.40540.89120.08820.35900.2225TEmax0–30 (hr)0.41 (0.07, 0.50)0.41 (0.07, 0.50)0.41 (0.08, 0.50)0.42 (0.08, 0.50)0.49 (0.24, 0.50)0.25 (0.07, 0.51)\np-value0.17640.09480.00140.00120.0009AUEC30–90 (hr*mm)−44.3 ± 35.39−50.9 ± 33.64−42.7 ± 38.24−46.2 ± 40.23−47.7 ± 34.16−46.9 ± 30.14\np-value0.66790.52750.50460.89190.9395E90 (mm)−37.0 ± 30.69−42.5 ± 29.85−34.3 ± 34.03−37.4 ± 35.08−38.5 ± 30.12−52.2 ± 26.33\np-value0.00600.09210.00100.00660.0122TEmax0–30 is presented as median (minimum, maximum), all other values are presented as arithmetic mean ± SD. Statistical significance is based on the differences in least squares means between groups. P-values represent the comparison to the tobacco cigarette\nnic nicotine, Gly glycerine, PG propylene glycol During the ad lib use period, the overall smoking urge reduction (AUEC30-90) achieved with use of the e-cigarettes were comparable to that of the tobacco cigarette, with all differences less than 10 %. However, at the end of the ad lib period, use of the tobacco cigarette resulted in a 19 % to 34 % greater relief of smoking urge (E90) when compared to the e-cigarettes, with statistically significant differences for all except the 2.4 % nicotine product containing both glycerin and PG. 
The smoking urge reduction appeared comparable among the e-cigarette products with the exception of the 2.4 % nicotine product with menthol, which tended to provide a somewhat lower level of relief compared to the other four products.\n\nBODY.RESULTS AND DISCUSSION.TOLERABILITY AND REPORTED ADVERSE EVENTS:\nDuring the course of the study, a total of 38 subjects were exposed to one or more of the study products. There were no serious adverse events reported and no subjects were discontinued due to AEs. Mild product-use-emergent AEs were reported by 18 of the 38 subjects provided with a study product (including the lead-in period). The number of subjects reporting AEs was similar among products, ranging from 3 to 10 subjects each, with slightly fewer subjects experiencing AEs following use of the menthol-flavored product. The most frequent AE was cough, reported 20 times by 11 subjects (more commonly with use of an e-cigarette product than the tobacco cigarette), followed by throat irritation (8 reports by 5 subjects), headache (6 reports by 5 subjects), and dizziness (5 reports by 4 subjects). All AEs resolved without sequelae. The observed acute effects of the study products on blood pressure, heart rate and CO levels were previously reported by the authors in a separate publication [24], where it was noted that heart rate and systolic and diastolic blood pressure were significantly elevated after the use of the tobacco cigarette, but the elevation was less after use of most of the e-cigarettes. Furthermore, it was also observed that the use of the e-cigarettes produced no increase in the exhaled CO levels, whereas the cigarette significantly increased the exhaled CO more than eight (8) times above the baseline.\n\nBODY.CONCLUSIONS:\nThe key objectives of this study were to examine the nicotine blood plasma levels and smoking urge impacts of various formulations of one brand of e-cigarette for comparison to a conventional tobacco cigarette, and to assess the tolerability and adverse events associated with the study products following short-term use under intensive and naturalistic use conditions. The study design and results are not intended to support the potential for e-cigarettes as harm-reduction products. While not all puffing parameters can be controlled across all subjects, a product use that includes the same number of puffs, puff duration, and puff interval as utilized in this study allows for standardization of nicotine intake to the extent possible in order to accurately characterize uptake and reduction in smoking urge. The small difference in pre-to-post use weight differences across the e-cigarette products supports this. When compared to a tobacco cigarette, the nicotine PK following a controlled, 50-puff use of the e-cigarettes was characterized by slower absorption than from a single tobacco cigarette, but comparable maximal and overall nicotine exposures for all but the 1.6 % nicotine product with glycerin. Analysis of the mean maximum plasma concentrations attained during the controlled use period showed that the non-menthol e-cigarettes containing 2.4 % nicotine exceeded that of the tobacco cigarette (15.8 ng/mL at approximately 5 min) with approximately 25 min of use (17.8 to 18.1 ng/mL). This observation is most likely attributable to the rather intensive e-cigarette puffing regimen (one puff every 30 s) imposed during the controlled use interval when compared to the single tobacco cigarette regimen (average 10.5 puffs total). 
However, it also suggests that under an intensive puffing scenario, e-cigarettes are capable of delivering similar amounts of nicotine as a tobacco cigarette. The continuous rise in nicotine concentrations achieved with use of the e-cigarettes during the controlled use period appeared to be matched by a consistent decrease in smoking urge, though at the end of the 30-min controlled use period the urge reduction was comparable across those products. Smoking a single cigarette during the controlled use period produced a predictable nicotine concentration-time curve, with a rapid peak nicotine concentration followed by a gradual elimination. The rise and fall of nicotine concentration was matched by the smoking urge response, with a rapid decrease in urge followed by a gradual return toward baseline. While the time to maximum urge reductions was significantly shorter for the tobacco cigarette during the controlled use period compared to each of the e-cigarettes, there were no significant differences in the maximal or overall urge reduction achieved between the two types of products. Ad lib use of the study products can provide insight into use patterns and nicotine concentrations that will allow consumers to satisfy their smoking urge and provide an overall satisfying product use experience. As should be expected, individual use of the products as measured by puff counts varied widely and followed the amount of nicotine solution that was consumed during use as measured by pre-to-post use product weight difference. Use of the e-cigarettes during the ad lib use period resulted in a relatively small increase in nicotine concentration during the first 45 min, followed by a plateau during the last 15 min. In contrast, use of the cigarette during the ad lib use period resulted in a continuous increase in nicotine concentration and continued reduction in smoking urge through the final time point. Following ad lib use of all products for 60 min, the tobacco cigarette yielded a significantly higher nicotine concentration (C90) compared to each of the e-cigarettes. The higher nicotine concentration coincided with a greater reduction in smoking urge at the end of the ad lib use period (E90) for the tobacco cigarette compared to all e-cigarettes, and significantly greater than all except the 2.4 % nicotine product containing glycerin and PG. However, there was no overall impact on urge as assessed by AUEC30–90. This was likely due to the relatively short duration of the ad lib use. Indeed, as the smoking urge response appeared to have reached a plateau with the e-cigarettes and continued to increase with the cigarette following 60 min of ad lib use, a longer evaluation period may have resulted in a difference in the AUEC parameter as well. Among the e-cigarettes, use of the e-cigarettes containing 1.6 % nicotine resulted in lower nicotine exposure compared to the e-cigarettes containing 2.4 % nicotine. However, this did not translate into significant differences in the smoking urge response. Further, while the suppression of urge to smoke appeared comparable between the 1.6 % and the 2.4 % test formulations in the present study, evaluation of the mean puff counts and the amount of solution consumed did not suggest that the subjects compensated by puffing on the lower-nicotine products consciously or subconsciously more so than the higher-nicotine products to achieve a similar level of satisfaction. 
In addition, higher nicotine content, the presence of PG in the vehicle, and the absence of menthol in the e-cigarettes were found to increase plasma nicotine levels during both controlled and ad lib use. Such factors, however, did not appear to significantly affect smoking urge. However, consistent with prior research documenting the prominent role of conditioned behavior and nicotine addiction, in addition to the concentration of nicotine, the respiratory tract sensory cues and manipulation of smoking materials that are associated with smoking, and mimicked by e-cigarettes, may have had a role in relieving smoking urges [25]. Thus, the present findings further suggest that some of the substantial sensory and behavioral aspects of smoking that are mirrored by e-cigarettes, may be essential elements in the reduction of craving or abstinence symptoms. To the extent that e-cigarettes may provide some measure of reduction in smoking urges, they may serve as cigarette substitutes for smokers who would otherwise seek to smoke a conventional tobacco cigarette. Despite the aggressive puff regime employed in the controlled use period, there were no SAEs in this study and no subjects were discontinued due to AEs. The incidence of AEs was similar among products, with slightly fewer subjects experiencing AEs following use of the menthol-flavored product. The most frequently reported minor AEs were cough, followed by throat irritation, headache, and dizziness. These findings are consistent with other research studies of longer duration that have reported that use of e-cigarettes by adult smokers is well-tolerated as compared to tobacco cigarette use [2, 26]. This study was not without limitations. It was performed at a single site with a small number of subjects, which might lead to the conclusion that the data is not generalizable to a broader population. However, many initial PK evaluations of tobacco and pharmaceutical products are performed using a similar approach. Further, the comparison to the e-cigarettes was made based on a single product use and a short-term ad lib product use to a single brand of tobacco cigarette. Indeed, the trend we noted at the end of the ad lib use period did show that nicotine concentrations appeared to be continuing to rise and smoking urge appeared to be continuing to decrease. Hence, a longer term comparison in future studies would be beneficial. Finally, while the product use data did not provide a clear indication that subjects compensated by using the products with less nicotine more intensively, use of topography measurements providing insight into puff volume and inhalation rate coupled with product evaluation questionnaires in future studies might lend additional information regarding product use behaviors leading to PK and smoking urge responses. Overall, the findings of this study indicate that nicotine uptake from short-term use of e-cigarettes containing 1.6 % and 2.4 % nicotine are significant, but lower than those of tobacco cigarettes, yet the reduction in urge-to-smoke or craving symptoms are broadly comparable. Moreover, it was also found that the short-term use of e-cigarettes by adult smokers is well-tolerated as compared to tobacco cigarette use.\n\n**Question:** Compared to Conventional tobacco cigarette what was the result of E-cigarettes on Baseline nicotine concentrations?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. 
Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
1063
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Physiological effects of invasive ventilation with neurally adjusted ventilatory assist (NAVA) in a crossover study\n\n ABSTRACT.BACKGROUND:\nNeurally Adjusted Ventilatory Assist (NAVA) is a mode of assisted mechanical ventilation that delivers inspiratory pressure proportionally to the electrical activity of the diaphragm. To date, no pediatric study has focused on the effects of NAVA on hemodynamic parameters. This physiologic study with a randomized cross-over design compared hemodynamic parameters when NAVA or conventional ventilation (CV) was applied.\n\nABSTRACT.METHODS:\nAfter a baseline period, infants received NAVA and CV in a randomized order during two consecutive 30-min periods. During the last 10 min of each period, respiratory and hemodynamic parameters were collected. No changes in PEEP, FiO2, sedation or inotropic doses were allowed during these two periods. The challenge was to keep minute volumes constant, with no changes in blood CO2 levels and in pH that may affect the results.\n\nABSTRACT.RESULTS:\nSix infants who had undergone cardiac surgery (mean age 7.8 ± 4.1 months) were studied after parental consent. Four of them had low central venous oxygen saturation (ScvO2 < 65 %). The ventilatory settings resulted in similar minute volumes (1.7 ± 0.4 vs. 1.6 ± 0.6 ml/kg, P = 0.67) and in similar tidal volumes respectively with NAVA and with CV. There were no statistically significant differences on blood pH levels between the two modes of ventilation (7.32 ± 0.02 vs. 7.32 ± 0.04, P = 0.34). Ventilation with NAVA delivered lower peak inspiratory pressures than with CV: -32.7 % (95 % CI: -48.2 to –17.1 %, P = 0.04). With regard to hemodynamics, systolic arterial pressures were higher using NAVA: +8.4 % (95 % CI: +3.3 to +13.6 %, P = 0.03). There were no statistically significant differences on cardiac index between the two modes of ventilation. However, all children with a low baseline ScvO2 (<65 %) tended to increase their cardiac index with NAVA compared to CV: 2.03 ± 0.30 vs. 1.91 ± 0.39 L/min.m2 (median ± interquartile, P = 0.07).\n\nABSTRACT.CONCLUSIONS:\nThis pilot study raises the hypothesis that NAVA could have beneficial effects on hemodynamics in children when compared to a conventional ventilatory mode that delivered identical PEEP and similar minute volumes.\n\nABSTRACT.TRIAL REGISTRATION:\nClinicalTrials.gov Identifier: NCT01490710. Date of registration: December 7, 2011.\n\nBODY.BACKGROUND:\nTo better match the level of ventilator assistance to the patient's needs, manufacturers have developed new modes that deliver a level of assistance proportional either to the patient's inspiratory muscle effort, with proportional assist ventilation (PAV); or to the diaphragmatic electrical activity (EAdi), with neurally adjusted ventilatory assist (NAVA) [1]. For now, only NAVA is usable for infants. 
With this mode of mechanical ventilation, the collected electrical signal allows synchronization of ventilation to spontaneous breathing efforts of the child, as well as permitting pressure assistance proportional to the electrical signal. NAVA provides both fine synchronization of respiratory support and pressure assistance varying with the needs of the child. To our knowledge, no data have been published on the impact of NAVA on the hemodynamics in children mechanically ventilated. The impact of mechanical ventilation on the circulatory system remains a concern in some infants following surgery for repair of congenital cardiac defects [2]. After the completion of the operation, when there is consensus regarding a good surgical result, many children nowadays are rapidly weaned and successfully separated from mechanical ventilation shortly after the procedure. However, duration of mechanical ventilation is inversely proportional to the patient's age and directly related to the duration of cardiopulmonary bypass and complexity of the surgical procedure [3]. This physiologic study with a randomized cross-over design compared hemodynamic parameters when neurally adjusted ventilatory assist (NAVA) or conventional ventilation (CV) was applied.\n\nBODY.METHODS:\nThis study was performed in the 12-bed PICU of the university hospital center of Nantes (France), between June 2012 and March 2013. Last year, 712 children were admitted in this unit. Among them, 182 underwent a cardiac surgery with cardiopulmonary bypass (55 % were extubated within 8 h). This trial was registered (ClinicalTrials.gov Identifier: NCT01490710). Research has been performed in accordance with the Declaration of Helsinki and has been approved by an appropriate ethics committee (Comité de Protection des Personnes Ouest III de POITIERS, Réference comité: Protocole n°11.07.20). The parents of all the children gave their written consent.\n\nBODY.METHODS.STUDY DESIGN: PROSPECTIVE RANDOMIZED CROSS-OVER STUDY:\nAfter a baseline period, children received conventional ventilation (CV) and NAVA during two consecutive periods of 30 min, in random order. This randomization was performed by contacting a server to define the order in which the two modes of ventilation were administered. There was no washout time between these two periods. During the last 10 min of each period at a steady state, hemodynamic parameters, respiratory parameters, blood gas and cardiac index were collected. No changes in PEEP, FiO2, sedation or inotropic doses were allowed during these two periods. The study protocol is depicted in Fig. 1.Fig. 1Study design. Baseline measures were collected when patient met stability conditions. No changes in PEEP, FiO2, sedation nor inotropic doses were allowed during the two 30-min study periods. NAVA = Neurally Adjusted Ventilatory Assist; CV = Conventional Ventilation\n\n\nBODY.METHODS.PARTICIPANTS:\nChildren hospitalized for postoperative care after cardiac surgery were considered for inclusion. Inclusion criteria were: admission weight >5 kg; invasive ventilatory support ongoing; NAVA ventilation available; sedation status allowing the NAVA ventilation functioning. Non inclusion criteria were: parental refusal; oesophageal disease prohibiting gastric tube; brain damage conflicting with spontaneous ventilation; clinical instability requiring treatment and/or management inconsistent with research.\n\nBODY.METHODS.INTERVENTION:\nChildren were ventilated with a Servo-i ventilator (Maquet Critical Care, Solna, Sweden). 
During a baseline period, conventional ventilation was applied and adjusted according to blood gases. After the baseline period, in which the NAVA probe and Doppler monitoring were introduced, all children received conventional ventilation (CV) and NAVA during two consecutive periods of 30 min, in random order. The ventilation settings were supposed to provide the same minute volumes.\n\nBODY.METHODS.INTERVENTION.CONVENTIONAL VENTILATION (CV):\nWith the goal of providing the lowest peak pressures, we used Volume Control Intermittent Mandatory Ventilation (VC-IMV). Ventilatory parameters were as follows: tidal volume (between 5 and 8 ml/kg), respiratory rate (between 20 and 30 cycles/min). Pressure Support was set at 12 cmH2O above the PEEP. A flow pneumatic trigger was used to synchronize the support to spontaneous breaths. The flow pattern during the volume control ventilation was constant flow. The inspiratory time was initially set at 0.5 s and adjusted according to the inspiratory flow waveforms if needed. The respiratory rate of VC-IMV was set at a high level (between 20 and 30 cycles/min), not allowing many spontaneous cycles. The CV settings were adjusted during the baseline period with no changes allowed during the study period.\n\nBODY.METHODS.INTERVENTION.NEURALLY ADJUSTED VENTILATORY ASSIST (NAVA):\nDuring NAVA, the ventilator delivers a pressure proportional to the electrical activity of the diaphragm (EAdi). The settings in NAVA mode included PEEP, FiO2 and level of NAVA assistance. The NAVA level was set to 1 cmH2O/μV. We had previously observed that children ventilated with NAVA usually adjust their EAdi to maintain a normal blood pH by preserving physiologic tidal volumes and minute volumes. This is no longer true when children are highly sedated. Backup ventilation was Pressure Control set at 12 cmH2O above PEEP in case of failure of EAdi signal detection. During NAVA, the positive pressure is triggered when the deflection in the EAdi curve exceeds 0.5 μV, and is cycled-off when the EAdi drops to 70 % of its peak value.\n\nBODY.METHODS.OUTCOMES MEASUREMENTS.RESPIRATORY MEASUREMENTS:\nPeak inspiratory pressures (PIP), PEEP, FiO2, Tidal volume (VT), Minute volume, EAdi max and Respiratory rate were recorded for all infants during the last 10 min of each period. Measurements of each parameter were repeated at steady state a total of six times to guarantee reliability and reproducibility. We collected the most representative values rather than gathering data at constant intervals because of the variability of the respiratory measurements in NAVA. Hemodynamic and respiratory parameters were always collected at the same time. Respiratory system compliance (Crs) according to the ventilation mode was estimated by the ratio VT/(PIP-PEEP) in ml/kg.cmH2O.\n\nBODY.METHODS.OUTCOMES MEASUREMENTS.HEMODYNAMIC MEASUREMENTS:\nCardiac index (CI, L/min.m2) was assessed by transesophageal Doppler ultrasonography (CardioQP, Gamida, France) [4]. Other hemodynamic parameters included heart rate, arterial pressures (by means of an arterial catheter placed before the surgery), and central venous pressure (by means of a central venous catheter inserted into the right internal jugular vein). Cerebral Near InfraRed Spectroscopy (NIRS) was measured with a non-invasive optical monitor of regional cerebral oxygen saturation (Invos Oximeter, Covidien, Boulder, CO). The pulse oximeter perfusion index was provided by the monitors (Intellivue MP70 monitor, Philips Medical Systems) [5]. 
Hemodynamic parameters were collected always in the same time than respiratory parameters. Left cardiac work index [6] was calculated as: LCWI (kg.min/m2) = CI × MAP × 0.0136, with CI for cardiac index and MAP for mean arterial pressure.\n\nBODY.METHODS.OUTCOMES MEASUREMENTS.BIOLOGICAL MEASUREMENTS:\nAt the end of each period (CV and NAVA), a blood sample was taken through the internal jugular catheter to measure pH, PCO2, and central venous oxygen saturation (ScvO2). These measurements were performed in the PICU by a blood gases analyser (GEM Premiere 4000, Instrumentation Laboratory UK Ltd).\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nEach parameter collected in the last 10 min of each study period was averaged, and compared by a Wilcoxon test for paired samples. Measurements in NAVA and CV are reported as median (interquartile). Changes in parameters during NAVA compared to CV are expressed as mean percentage (95 % CI, P). The P-value taken to indicate significance was P < 0.05. For this pilot study no sample size was calculated.\n\nBODY.RESULTS:\nThe study was concluded after one year. Technical problems were greater than expected: a measure of cardiac output sufficiently accurate, and especially stable in time, was very difficult to reach in children who already had a stomach tube for the NAVA ventilation. We asked the research office of our institution for continuing recruiting more patients. In view of the technical difficulties we had, this request was not accepted. Nine children were randomized. The quality of the surgical repair was always checked by a cardiologist prior to inclusion in the study. Analysis was performed on only six of them. We were unable to start NAVA on one child due to a lack of EAdi signal capture. Thereafter, when the NAVA signal was not stable enough, children were not included. In two other children we were unable to record a Doppler signal likely due to interference between the CardioQP probe and the EAdi probe both intra oesophageal placed. Patient characteristics are provided in Table 1. The mean (± SD) age was 7.8 ± 4.1 months, and weight was 6.7 ± 1.2 kg. Four children had low central venous saturation (ScvO2 < 65 %) at the baseline period: the children 1, 2, 3, 6 in the Table 1.Table 1Children characteristicsChild123456GenderMMMFFFAge (mo)313410611Weight (kg)5.78.97.06.05.87.0CardiopathyAVSDVSD with pulmonary stenosistetralogy of Fallotatrial septal defectcoarctation of the aortapulmonary atresia with VSDCardiopulmonary bypass duration (min)9683130480130Time between PICU entry and study (hours)20720457Baseline cardiac index (L/min.m2)2.002.851.932.602.751.85Milrinone (mcg/kg/min)0.90.50.30.500.5Adrenaline (mcg/kg/min)000000.05ScvO2 (%)646352826852Perfusion index (%)0.313.650.603.621.900.87Random orderCV then NAVANAVA then CVCV then NAVACV then NAVANAVA then CVNAVA then CVAll children were receiving morphine 0.5 mg per kilo per day. All, except the 2 first, were receiving midazolam 40 micrograms per kilo per hour\nAVSD atrioventricular septal defect, VSD ventricular septal defect, NAVA neurally adjusted ventilatory assist, CV conventional ventilation, ScvO\n2: central venous oxygen saturation\n Respiratory parameters, hemodynamic parameters, and blood gases are provided in Table 2. The ventilatory settings resulted in similar minute volumes and in similar tidal volumes in NAVA compared to CV. These similar minute volumes in each periods of the study were supposed when the study was designed. 
Nevertheless, significant lower peak inspiratory pressures (PIP) were observed during NAVA compared to CV. Apparent respiratory system compliance improved with NAVA in all children. We observed higher systolic arterial pressures during NAVA compared to CV. Pulse oximeter perfusion index were also higher during NAVA.Table 2Respiratory parameters, hemodynamics and biological dataRespiratory parametersCVNAVANAVA versus CV\nP\nRespiratory rate (/min)30 (7)35 (13)+26.4 (-3.6, +56.4)0.17VT (ml/kg)6.6 (0.7)6.9 (0.3)+5.3 (-5.5, +16.1)0.46Minute volume (L/min)1.7 (0.4)1.6 (0.6)+5.4 (-4.3, +15.1)0.67PIP (cm H2O)21 (6)11 (4)-32.7 (-48.2, -17.1)0.04PEEP (cm H2O)4 (2)4 (2)-(a)EAdi max (microVolt)3.9 (3.8)6.3 (1.4)+61.2 (-4.6, +126.9)0.34SpO2 (%)98 (3)96 (3)-1.2 (-2.8, +0.5)0.14FiO2\n30 (4)30 (4)-(a)Crs (ml/kg.cm H2O)0.37 (0.19)0.87 (0.32)+98.4 (+43.8, +153.0)0.04Hemodynamic parametersHeart rate156 (15)156 (22)+2.1 (-0.7, +4.5)0.17Systolic arterial pressure (mmHg)93 (6)99 (13)+8.4 (+3.3, +13.6)0.03Diastolic arterial pressure (mmHg)54 (12)57 (6)+3.6 (-3.0, +10.1)0.46Central venous pressure (mmHg)11 (5)10 (5)+3.9 (-5.3, +13.1)0.92Cerebral NIRS (%)62 (5)61 (3)+1.6 (-2.6, +5.8)0.34Cardiac index (L/min.m2)2.33 (0.84)2.26 (0.70)+1.4 (-3.4, +6.2)0.17Pulse oximeter perfusion index (%)1.50 (2.45)1.78 (2.29)+18.8 (+3.0, +34.7)0.04Venous blood gasespH7.32 (0.04)7.32 (0.02)-0.1 (-0.4, +0.2)0.34PCO2 (mm Hg)47.3 (5.1)45.8 (8.1)+0.6 (-5.1, +6.3)0.50ScvO2 (%)60.1 (20.9)58.4 (15.4)+3.3 (-5.7, +12.2)0.60Measurements in NAVA and CV are reported as median (interquartile). Variations between NAVA versus CV are reported as mean percentage (95 % CI). Statistical analyses between NAVA versus CV were performed by a Wilcoxon test for paired samples(a) No changes in PEEP, FiO2, neither in sedation or inotropic doses were allowed during these study periods\nNAVA Neurally adjusted Ventilatory Assist, CV Conventional Ventilation, V\nT tidal volume, PIP peak inspiratory pressure, Crs respiratory system compliance, EAdi electrical activity of the diaphragm, ScvO\n2 central venous oxygen saturation\n Cardiac index during NAVA compared to CV did not statistically differ. However, all four children with ScvO2 < 65 % tended to have higher cardiac index after 30 min of ventilation with NAVA compared to CV: 2.03 ± 0.30 vs. 1.91 ± 0.39 L/min.m2 (median ± interquartile, P = 0.07). Individual values of cardiac index and ScvO2 of children with low baseline ScvO2 are shown in Fig. 2. Upper values of pulse oximeter perfusion index were also observed. Turning now to left cardiac work index, the increase is +12.4 % (95 % CI: +3.8 % to +20.9 %, P = 0.07).Fig. 2Cardiac index and ScvO2 of the low baseline ScvO2 infants, according to the ventilation mode. White bars are cardiac index when CV is applied; back bars are cardiac index when NAVA is applied. The triangles above the bars indicate the values of the corresponding ScvO2 (%). ScvO2 = central venous oxygen saturation; NAVA = Neurally adjusted Ventilatory Assist; CV = Conventional Ventilation\n\n\nBODY.DISCUSSION:\nNAVA can be used in infants receiving postoperative mechanical ventilation after congenital heart surgery, as previously reported [7–10]. Infants included in this study all weighed less than 10 kg and had been operated less than 24 h before. To deliver a same tidal volume, NAVA required lower inspiratory pressures than conventional ventilation and had some beneficial effects on hemodynamics. 
Several data validate that the children were not over-assisted when ventilated in CV: the delivered volumes, EAdi max, PCO2 and pH are similar in CV compared to NAVA. We suggest that lower EAdi peaks observed in CV are related to the normalization of the blood pH by CV ventilation. If children were over-assisted in CV, EAdi would have been missing, and minute volume would have been high, with a blood alkalosis. In a large randomized controlled trial where NAVA was used as a primary mode of ventilation, lower peak inspiratory pressures were found in the NAVA group [11]. It looks as if gas volume delivery, and probably alveolar expansion, required less pressure. This could be enhanced by an improvement in patient–ventilator interactions with a greater respiratory variability as reported in previous studies [12–16]. The effectiveness of this mode of mechanical ventilation was associated with some beneficial effects on the hemodynamics. Systolic arterial pressures and pulse oximeter perfusion index were significantly higher in NAVA. This pulse oximeter perfusion index provides monitoring of illness severity in neonates [5]. In fact, we observed that the children with a low cardiac index also had a low perfusion index. This index was significantly higher when the children were ventilated with NAVA compared to CV. The overall values of the cardiac index and of the central venous oxygen saturation during NAVA compared to CV were not significantly higher, but there are possibly no reasons to observe an increase in these two parameters when tissue perfusion is already effective. And actually, the four children with a low ScvO2 tended to have a higher cardiac index after 30 min of ventilation with NAVA (Fig. 2). Moreover, this increase in cardiac index is associated with higher systemic pressures and thus a higher left cardiac work index. Lower oxygen content in venous blood usually reflects a disrupted balance between oxygen supply and demand. Ventilation with NAVA could allow the body to increase the cardiac output to ensure adequate oxygen availability. These children had congenital heart disease affecting the right heart: VSD with pulmonary stenosis, tetralogy of Fallot, and pulmonary atresia with VSD. Positive pressure ventilation and PEEP often result in increased right ventricular afterload due to capillary compression [3, 17]. Lower intrathoracic pressures in NAVA than in conventional ventilation could improve the right ventricular function during inspiration by reducing the right ventricular afterload. An improvement in apparent respiratory system compliance was observed when NAVA was applied. Cyclic intrathoracic pressure changes, characteristic of spontaneous breathing, could be preserved. Both fine synchronization of respiratory support and pressure assistance varying with the needs and the spontaneous breathing of the child could improve the pulmonary compliance. Nonetheless, it is important to note that we did not record the transpulmonary pressure. It is likely that, while the airway pressure decreased during NAVA, an increase in esophageal pressure swings occurred because of the patient's respiratory efforts. This could have participated in the decrease in apparent compliance improvement [18]. This study has some limitations. Firstly, few children were studied. Many children dropped out due to early extubation in the operation room or immediately on arrival in the intensive care unit. 
Technical problems were greater than expected: obtaining a cardiac output measurement that was sufficiently accurate, and especially stable over time, was very difficult in children who already had a gastric tube in place for NAVA ventilation. Nevertheless, the cross-over design of this study has two advantages over both a parallel study and a non-crossover longitudinal study [19]. The influence of confounding covariates is reduced because each cross-over patient serves as his or her own control. Cross-over designs are also statistically efficient and require fewer subjects than do non-crossover designs (even other repeated measures designs). Secondly, the agreement of the cardiac output measurement between the transoesophageal Doppler probe using CardioQP and the thermodilution technique during heart catheterisation in paediatric patients was described as weak [20]. However, the CardioQP seems to be capable of detecting slight changes in cardiac output in critically ill children [21, 22]. This bedside device for cardiac output measurement is minimally invasive and provides better results when monitoring slight variations than when measuring absolute values. As we were performing a cross-over study, we were interested primarily in changes. Finally, NAVA was compared only with CV-SIMV at a relatively high preset respiratory rate that prevented many spontaneous cycles, not with other ventilation strategies.\n\nBODY.CONCLUSION:\nIn conclusion, this cross-over study provides new data on NAVA ventilation. This pilot study raises the hypothesis that ventilatory assistance with NAVA could provide improvements in hemodynamics when compared to a conventional ventilatory mode that delivered identical PEEP and similar minute volumes. Thus, because minute volumes were not different between the two modes of ventilation, the hemodynamic effects are linked to the NAVA mode itself and not to incorrect settings in conventional ventilation. Further studies with a larger population are needed to confirm these promising results.\n\n**Question:** Compared to Conventional Ventilation what was the result of Neurally Adjusted Ventilatory Assist on Peak inspiratory pressures?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly decreased\n" } ]
significantly decreased
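The NAVA article in the row above reports its CV-versus-NAVA comparisons (e.g., PIP 21 vs. 11 cm H2O, P = 0.04) as Wilcoxon tests for paired samples, with variations summarised as mean percentages. As an illustration of how such a paired comparison could be run, here is a minimal Python sketch; the per-patient PIP values are hypothetical, since the article publishes only medians with interquartile ranges.

```python
# Illustrative sketch only: a paired Wilcoxon signed-rank comparison of the kind
# reported in the NAVA article (e.g., PIP during CV vs. NAVA). The per-patient
# values below are hypothetical; the article reports only medians (IQR).
from scipy.stats import wilcoxon

pip_cv   = [27, 21, 18, 24, 20, 16]   # hypothetical PIP (cm H2O) during conventional ventilation
pip_nava = [14, 11,  9, 15, 10,  8]   # hypothetical PIP (cm H2O) during NAVA, same patients

stat, p = wilcoxon(pip_cv, pip_nava)  # non-parametric test for paired samples
print(f"Wilcoxon W = {stat:.1f}, p = {p:.3f}")

# Per-patient percentage change, summarised as a mean (the table reports
# "variations" as mean percentage with a 95% CI)
changes = [(n - c) / c * 100 for c, n in zip(pip_cv, pip_nava)]
print(f"mean change = {sum(changes) / len(changes):.1f}%")
```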
1069
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Core Decompression and Autologous Bone Marrow Concentrate for Treatment of Femoral Head Osteonecrosis: A Randomized Prospective Study\n\n ABSTRACT:\nThe aim of this study was to investigate the safety of injection of bone marrow aspirate concentrate during core decompression and to study its clinical (visual analogue scale; Harris-Hip-score) and radiological outcomes (magnetic resonance imaging). In this prospective and randomized clinical trial we evaluated 24 consecutive patients with non-traumatic femoral head necrosis (FHN) during a period of two years after intervention. In vitro analysis of mesenchymal stem cells was performed by evaluating the fibroblast colony forming units (CFU-Fs). Postoperatively, significant decrease in pain associated with a functional benefit lasting was observed. However, there was no difference in the clinical outcome between the two study groups. Over the period of two years there was no significant difference between the head survival rate between both groups. In contrast to that, we could not perceive any significant change in the volume of FHN in both treatment groups related to the longitudinal course after treating. The number of CFU showed a significant increase after centrifugation. This trial could not detect a benefit from the additional injection of bone marrow concentrate with regard to bone regeneration and clinical outcome in the short term.\n\nBODY.INTRODUCTION:\nAseptic non-traumatic avascular osteonecrosis of the femoral head is a multi-factorial disease, the etiology of which is not entirely clear. Femoral head necrosis (FHN) most commonly affects young patients, often leading to femoral head collapse and resulting secondary osteoarthritis.1 Around 10% of all THA are performed due to FHN.2 In the early stages of the disease [Association Research Circulation Osseous (ARCO) stage I - II] the treatment aims to preserve the joint and to prevent the collapse of the femoral head. One of the most widely used treatment options is the core decompression by retrograde drilling into the necrotic zone. Systematic reviews verified the significantly better head survival rates compared to non-operative treatment options.2 Based on the data by Mont et al., core decompression leads to clinical success in 53-71% of treated patients leaving room for further therapy improvement. Mesenchymal stem cells and osteoblasts that could potentially induce bone formation have been shown to be decreased in both number and activity in afflicted bone. Therefore the local application of autologous mesenchymal stem cells (MSC) into the necrotic region could stimulate the regeneration of the affected bone. 
MSCs can be isolated from the mononuclear cell fraction of bone marrow and expanded in-vitro to high-cell numbers using laboratory equipment and then injected into the necrotic zone.3 Bone marrow expansion is subject to restricted regulatory laws in many countries.Introduction of a single-use closed system would allow the surgical team to carry out the concentration procedure in the operating room.4 Hernigou et al. introduced the technique of intra-osseous injection of autologous concentrated bone marrow for the treatment of femoral osteonecrosis and pseudarthrosis.5 They reported on the results of core decompression with additional bone marrow grafting in 145 hips with early stage osteonecrosis (ARCO I - II). At 5 years after surgery, the head survival rate in their study was 93%, which would appear to be a considerable improvement over previous studies. Although this prospective study included an impressive number of patients, a control group and a randomized study protocol were missing. The aim of this study was i) to investigate the safety of additional injection of bone marrow aspirate concentrate (BMAC) during core decompression and ii) to study the clinical (visual analogue scale, VAS; Harris-Hip-Score, HHS) and radiological outcome (magnetic resonance tomography, MRI) in comparison to core decompression only. In addition, in vitro analysis of MSCs was performed by evaluating the fibroblast colony forming units (CFU-F's).\n\nBODY.MATERIALS AND METHODS.STUDY DESIGN:\nIn this prospective and randomized clinical trial we evaluated 24 consecutive patients (25 hips) with non-traumatic FHN during a period of two years after intervention. All patients signed the informed consent, the institutional review board of the hospital approved the study; all procedures were performed in accordance with the Declaration of Helsinki. The inclusion criteria for this study were age over 18 years and the presence of stage II femoral head osteonecrosis according to the Association of Research Circulation Osseous (ARCO) classification. Before inclusion into the study all patients were evaluated with two plane radiographs and an MRI of the affected hip. At the day of the procedure patients were randomized in two groups using a simple randomization method based upon sequential patient allocation: core decompression only vs. core decompression with the application of BMAC. For clinical outcome the VAS and HHS were measured. For radiological outcome the volume of the necrotic zone and the stage according to ARCO classification was evaluated with an MRI. The subsequent course of osteonecrosis was evaluated at 12 and 24 months postoperatively. This study was approved by the ethics committee of University of Heidelberg, Germany.\n\nBODY.MATERIALS AND METHODS.CORE DECOMPRESSION PROCEDURE:\nIn all patients a core decompression procedure was performed under general anesthesia. Under the guidance of an image intensifier in two planes, three 2.0 mm K-wires (Kirschner wires) were drilled through the major trochanter and along the femoral neck axis into the femoral head, reaching the subchondral necrotic area (2-3 mm from the articular cartilage). 
The most centrally placed K-wire in the necrotic zone was over-drilled using a 5 mm trephine, as has been previously described in the literature.\n\nBODY.MATERIALS AND METHODS.BONE MARROW ASPIRATION, PROCESSING AND INSTILLATION:\nThe harvesting of autologous bone marrow was performed by percutaneous aspiration from the ventral iliac crest (both sides) using a bone marrow biopsy device. The initial volume of harvested marrow was 200 to 220 mL. Intra-operative processing and concentration of stem cells was performed using a Sepax centrifugation device from Biosafe SA (Eysins, Switzerland) with a volume reduction protocol that isolates the nucleated cell (NC) fraction in the buffy coat. After cell segregation, the erythrocytes, nucleated cells and plasma were automatically decanted. 12 mL of bone marrow concentrate suspension was isolated. 2 mL of the final cell concentrate volume (BMAC) was collected for further experimental investigations. 10 mL of BMAC was instilled into the necrotic zone through the cannulated trephine drill after removal of the central K-wire.\n\nBODY.MATERIALS AND METHODS:\nThe number of MSCs within the native bone marrow and the bone marrow concentrate was analyzed using the CFU-F assay. Assuming that each adherent MSC is able to form a fibroblast colony, the number of MSCs within a cell culture could be estimated. Per donor, 2×10⁶ mononucleated cells (MNC) out of the native bone marrow and the BMAC were seeded into three T25 tissue culture flasks each and cultured in standard medium consisting of high glucose DMEM, 20% fetal calf serum (FCS), 1% penicillin/streptomycin, 10 ng/mL recombinant human epidermal growth factor (EGF) and 10 ng/mL recombinant human platelet-derived growth factor (PDGF). After seven days of culturing, cells were washed with phosphate buffered saline (PBS), fixed with 70% ethanol and stained with toluidine blue. Fibroblast colonies were counted manually under a microscope at 25× magnification. (Modified protocol from Castro-Malaspina et al., 1980).6\n\nBODY.STATISTICS:\nData were expressed as mean or mean ± SEM. The nominal data of patients' characteristics were tested using the χ2 test and Fisher's exact test. The distribution of continuous data was determined using the Kolmogorov-Smirnov test and the appropriate parametric or non-parametric test was selected. Differences within the groups for radiological and clinical outcome data were compared with the Wilcoxon test, and differences between the groups with the Mann-Whitney test. A P<0.05 was considered statistically significant. For radiological evaluation, MRI scans of 25 hips before surgery, 20 hips at 1 year and 15 hips at 2 years after operation were available due to the presence of endoprosthesis and other reasons. For longitudinal and transverse comparison of the volume of the osteonecrosis, only data from patients with full radiological observation were analyzed. Calculation of FHN volume was performed using GE Healthcare RIS/PACS software (Milan, Italy). SPSS (SPSS Inc., Chicago, IL, USA) and Graph Pad Prism software (Graph Pad, La Jolla, CA, USA) were used for statistical analysis.\n\nBODY.RESULTS:\nTwenty-five hips of 24 patients with femoral head osteonecrosis were included in this study between 2008 and 2010 and assigned at random to the treatment method (14 in the control group, 11 in the verum group). 
All 25 hips were fully evaluated clinically (VAS: preoperatively, 3 months postoperatively, 1 and 2 years after treatment; HHP: preoperatively, 3 months, 1 and 2 years after treatment) and if feasible evaluated radiographically (MRI preoperatively, 1 and 2 years after treatment). Both study populations did not significantly differ in age, gender, side of FHN and risks factors for osteonecrosis (steroid therapy in the patients' history) as shown in Table 1. According to the inclusion criteria all patients suffered from an ARCO II FHN and had no prior trauma. No side effects (hematoma, infection, nerve injury and others) either from the bone marrow aspiration from the pelvic rim or from the injection into the femoral head were observed during the entire study period.\n\nBODY.RESULTS.CLINICAL AND FUNCTIONAL OUTCOME:\nThe clinical and functional outcome was measured using standardized scores (VAS; HHS). At the time of inclusion into the study there was no significant difference between the two groups regarding pain and function. Postoperatively, significant decrease in pain associated with a functional benefit lasting the entire observation period was observed. However, there was no difference in the clinical outcome between the two study groups (Figures 1 and 2).\n\nBODY.RESULTS.RADIOLOGICAL OUTCOME:\nBefore treatment there was no significant difference between the mean volumes of the osteonecrosis in both groups. In contrast to the clinical outcome, we could not perceive any significant change in the volume of FHN in both treatment groups related to the longitudinal course after treating (Figure 3). Again no statistically significant difference was detected between the groups. FHN progressing to ARCO III or IV was defined as failure of the treatment. This includes all patients with joint replacement (total hip arthroplasty) due to the FHN within two years in both groups. Over the period of two years there was no significant difference between the head survival rate of 8/14 (57%) in the control group, and 7/11 (64%) in the verum group. There was no difference between the two groups with regard to the interval between core decompression with or without BMAC application and THA.\n\nBODY.RESULTS.BONE MARROW:\nThe bone marrow was analyzed before and after the centrifugation procedure using the Sepax centrifugation device. Table 2 shows the significant increase in the number of nucleated cells due to the centrifugation step. Additionally, the number of CFU that best represents the number of MSC shows a significant increase (Table 2).\n\nBODY.DISCUSSION:\nFemoral head necrosis is a painful disease usually afflicting young male patients and the natural course of this disease tends to be progressive and ends in secondary osteoarthritis.7 In late stage osteonecrosis (ARCO III and IV), total hip arthroplasty (THA) currently seems to be the best treatment with good functional restoration. However, the young patient age and subsequent expected life span is substantially longer than the currently expected longevity of primary THA. Early recognition of FHN and lastly established less invasive treatment modalities open new possibilities to treat this disease for preservation of the spherical femoral head shape indicating a better outcome and finally,8 avoiding THA. 
One of these operations is the core decompression procedure9-11 and the literature indicates that it is superior to non-operative therapy.12,13 It has been shown that core decompression leads to significant post-operative pain reduction in early stage FHN.2,14 Nevertheless, in postoperative MRI and patho-morphologic trials it could be shown that no significant repair process in the necrotic area of FHN could be achieved with core decompression alone,15 making it a controversial procedure and calling for additional alternative treatment options. The long term results of this procedure are mainly influenced from the initial stage of the osteonecrosis with best prognosis for ARCO I and worst for ARCO III and IV. The outcome of the non-reversible early stage ARCO II is still under discussion. Therefore we included ARCO II hips in this study only. The injection of mesenchymal progenitor cells into the osteonecrosis seems to improve the outcome of FHN as shown by Hernigou et al.16 The application of ex vivo cultivated stem cells for treatment of diseases of the musculoskeletal system is possible, but currently requires advanced laboratory and technical effort. In addition, this two-step procedure is associated with a higher risk of complications and higher costs. On the other hand, cell therapy systems using centrifugation steps were recently developed which permit intra-operative enrichment of mesenchymal progenitor cells from BMAC simultaneously at the time of core decompression followed by application of the BMAC into the necrotic area of FHN at the end of the same procedure. Simultaneous application of BMAC is praised as an encouraging approach. Hernigou et al. (2002, 2005) inaugurated this treatment option and published a large patient series, that demonstrates an obvious improvement in the head survival rate compared to other literature data with core decompression only.5,16 Lastly, Civinini et al. showed good results of treating early stage femoral head necrosis using injectable calcium and sulphate/calcium bioceramic with bone marrow concentrate.17 The first aim of our investigation was to evaluate the safety of the additional aspiration and injection of autologous bone marrow. In our study we did not encounter any complications resulting from this method of bone marrow transplantation. In particular, there were no infections, excessive new bone formation or tumor induction and there were no local complications at the harvesting side. These findings are in line with published data showing the very low risk of this one step procedure.18 The second aim of our study is the effect of the additional injection of BMAC into the femoral head on the clinical and radiological outcome. We found that significant pain relief and functional improvement could be achieved in both groups at the 2 year follow up independent of the operative procedure (core decompression or core decompression with BMAC application). There are a limited number of studies evaluating pain and functional gain after core decompression, and a direct comparison with these studies is difficult. Rajagopa et al. performed a systematic review of four studies evaluating core decompression of femoral head necrosis and its effect on pain and function of the hip over a minimum of two years postoperatively.19 The results of their analysis showed a variable gain of function after core decompression procedure. 
One of the articles quoted demonstrated a significant improvement in hip function,20 a second one just moderate improvement21 and two other studies only minor, or no improvement.22,23 Our study supports the latter. We did not note any significant differences relating to pain or functional state of patients between both groups, supporting the hypothesis, that additional BMAC application after core decompression provides no benefit to patients with FHN at this time. This finding seems to be in contrast to the studies of Hernigou et al. However, the study presented here was set up as a randomized controlled trial with restrictive inclusion criteria (ARCO II only). The rejection of ARCO I patients with a better prognosis might, at least in part, explain differences to the results of Hernigou et al. and other studies. Table 3 shows a comparison of head survival rate of this study and recently published studies using BMAC or similar bone marrow treatment options in femoral head necrosis. Because of different study protocols, meaningful comparison of these research findings is limited. A study with a protocol similar to ours was published by Gangji et al.24 They presented the results of a randomized and prospective clinical trial of ARCO I-II patients treated with core decompression only vs. core decompression with autologous bone marrow application. Femoral head survival rate of this study is also mentioned in Table 3. Gangji et al. also describe an increased head survival rate in the BMAC group. Although this is a randomized controlled prospective trial, there are some principal limitations to this study: there is a short follow-up period (two years) and there are a small number of patients in both study groups. The small study population is a result of the prospective design and very restrictive inclusion criteria of FHN patients (ARCO II only). In comparison to the study presented by Hernigou et al.,16 the major advantage of this study is the comparison of a BMAC group with a control group, thereby increasing explanatory power.\n\nBODY.CONCLUSIONS:\nFemoral head necrosis with a spherical head and irreversible necrosis of the bone (ARCO II) profits from core decompression. In contrast to previous series the current study excluded ARCO I stages of FHN which can regenerate on its own, and inclusion of ARCO I makes interpretation of drilling of this stage difficult. This trial of 25 hips could not detect a benefit from the additional injection of bone marrow concentrate with regard to bone regeneration and clinical outcome in the short term. Further studies of BMAC properties with a larger sample size and longer follow-up are needed to better validate our results and possibly modify our procedure.\n\n**Question:** Compared to core decompression only what was the result of core decompression with the application of bone marrow aspirate concentrate on clinical outcome?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
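The core decompression article in the row above compares groups with the Mann-Whitney test for continuous outcomes and chi-square/Fisher-type tests for counts (e.g., two-year head survival of 8/14 versus 7/11). The sketch below is only an illustration of such comparisons: the Harris Hip Score values are hypothetical, and only the survival counts are taken from the article's results.

```python
# Illustrative sketch only: between-group comparisons of the kind described in the
# core decompression article (Mann-Whitney U for scores, Fisher's exact test for
# head survival). HHS values are hypothetical; survival counts come from the article.
from scipy.stats import mannwhitneyu, fisher_exact

hhs_control = [68, 72, 75, 80, 66, 90, 85, 77, 70, 74, 79, 81, 73, 69]  # hypothetical, n = 14
hhs_bmac    = [70, 74, 76, 82, 67, 88, 84, 78, 71, 75, 80]              # hypothetical, n = 11

u, p_hhs = mannwhitneyu(hhs_control, hhs_bmac, alternative="two-sided")
print(f"Mann-Whitney U = {u:.0f}, p = {p_hhs:.2f}")

# Head survival over two years: 8/14 (control) vs. 7/11 (BMAC group)
table = [[8, 14 - 8], [7, 11 - 7]]
odds, p_surv = fisher_exact(table)
print(f"Fisher exact p = {p_surv:.2f}")  # non-significant, consistent with the article
```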
1077
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Attenuating effect of \n\n ABSTRACT.BACKGROUND/OBJECTIVES:\nLactobacillus brevis G101 suppresses the absorption of monosodium glutamate (MSG) from the intestine into the blood in mice. Therefore, the attenuating effect of orally administered G101 on monosodium glutamate (MSG) symptom complex was investigated in humans.\n\nABSTRACT.MATERIALS/METHODS:\nCapsules (300 mg) containing Lactobacillus brevis G101 (1×1010 CFU/individual) or maltodextrin (placebo) was orally administered in 30 respondents with self-recognized monosodium glutamate (MSG) symptom complex for 5 days and the rice with black soybean sauce containing 6 g MSG (RBSM) was ingested 30 min after the final administration. Thereafter, the MSG symptom complex (rated on a 5-point scale: 1, none; 5, strong) was investigated in a double blind placebo controlled study. The intensity of the MSG symptom complex was significantly reduced in respondents of the G101 intake group (2.87 ± 0.73) compared to that in those treated with the placebo (3.63 ± 1.03) (P = 0.0016). Respondents in the placebo group exhibited more of the various major conditions of the MSG symptom complex than in the G101 intake group. Although there was no significant difference in the appearance time of the MSG symptom complex between subjects orally administered G101 and those administered the placebo, its disappearance in < 3 h was observed in 69.9% of subjects in the G101 treatment group and in 38.0% of subjects in the placebo group (P = 0.0841).\n\nABSTRACT.CONCLUSIONS:\nOral administration of Lactobacillus brevis G101 may be able to reduce the intensity of the MSG symptom complex.\n\nBODY.INTRODUCTION:\nMonosodium glutamate (MSG), the sodium salt of glutamate, is used worldwide as a flavor enhancer in various processed foods [123]. MSG was extracted from seafood and has been as the source of the umami taste [4]. The Food and Drug Administration (FDA) has classified MSG as a generally recognized as safe (GRAS) substance and no limitation is placed on its use as a food additive. Nevertheless, Kwok first described a complex of symptoms, including numbness at the back of the neck and arms, weakness, and palpitation, after ingestion of a Chinese meal and suggested that MSG was one of the possible causes [5]. These symptoms were referred to collectively as Chinese Restaurant Syndrome (CRS). In 1995, the Federation of American Societies for Experimental Biology (FASEB) proposed the term MSG symptom complex to denote the reactions that occur following ingestion of MSG [6]. When > 5 g may be ingested, it is characterized by unpleasant reactions such as numbness, tingling, headaches, muscle tightness, general weakness, and flushing [7]. Lactic acid bacteria are found in yogurt, cheese, kimchi, and in the gut microbiota [8]. Upon ingestion, lactic acid bacteria confer beneficial effects on the health of the host [9]. 
Of these lactic acid bacteria, some convert MSG to gamma-aminobutyric acid (GABA), which has been reported to possess various physiological functions that act via the decarboxylation of glutamic acid by glutamate decarboxylase [1011]. Therefore, lactic acid bacteria have been used as initiators to produce GABA-rich foods [12]. We previously isolated Lactobacillus brevis G-101, a kimchi-derived lactic acid bacterium that potently converted MSG to GABA [13]. Lactobacillus brevis G101 reduces the concentration of MSG in the blood of mice fed oral administration in mice: G101 suppresses the absorption of MSG from the intestine into the blood rather than mediating the biotransformation of MSG to GABA [14]. In the present study, to investigate the ability of orally administered Lactobacillus brevis G101 to attenuate the MSG symptom complex, we orally administered G101 or a placebo (maltodextrin) to respondents presenting with self-recognized MSG symptom complex, following ingestion of rice with black soybean sauce (Jajangbap) containing 6 g MSG (RBSM), and then investigated the intensity of the MSG complex in them. This experiment was performed as a double-blind, placebo-controlled study.\n\nBODY.SUBJECTS AND METHODS.SUBJECTS:\nThe study subjects consisted of 30 healthy Korean individuals (average age, 23.57 ± 2.36 yrs; 10 male subjects, 23.00 ± 3.46 yrs; 20 female subjects, 23.85 ± 1.60 yrs) who self-identified as MSG-sensitive. Exclusion criteria included consumption of any current medication. The recruitment of subjects was approved by the Committee for the Care and Use of Clinical Study at Kyung Hee University (KHSIRB-14-004) and performed according to CONSORT 2010 guidelines.\n\nBODY.SUBJECTS AND METHODS.EXPERIMENTAL PLAN:\nAll 30 subjects participated in an orientation session for this study. The subjects were evenly divided into 2 groups (A and B). In the first week, group A subjects were orally administered one capsule (Lactobacillus brevis G101) at 11:00 am for 5 days and group B subjects took the placebo capsule every day (Table 1). RBSM was ingested 30 min after the final administration of G101, and the intensity and variety of the MSG symptom complex over 6 h were described. In the second week, group A switched with group B and the test was performed for 5 days. The intensity of the MSG symptom complex was rated according to a 5-point scale (1 = none, 5 = strong). Capsules (300 mg) containing 1 × 1010 CFU Lactobacillus brevis G101 or maltodextrin (placebo) were prepared by CTCBIO Inc. RBSM was prepared by mixing steamed rice (180 g) and MSG-contained black soybean sauce. The ingredients of the MSG-contained black soybean sauce (one serving) were as follows: 40 g pork, 40 g onion, 40 g zucchini, 30 g carrot, 50 g cabbage, 1 Tsp black soybean sauce without MSG, 1/2 Tsp soybean oil, 6 g MSG, 1 tsp starch, and 1 2/3 Tsp water. MSG was purchased from a local market.\n\nBODY.SUBJECTS AND METHODS.STATISTICAL ANALYSIS:\nThe results obtained through this experiment were analyzed using the statistical analysis system (SAS) 9.3 program. 
The significance of each sample was established using the χ2-test and t-test, as appropriate.\n\nBODY.RESULTS.EFFECT OF LACTOBACILLUS BREVIS G101 ON THE INTENSITY OF RBSM-INDUCED MSG SYMPTOM COMPLEX:\nTo evaluate the attenuating effect of Lactobacillus brevis G101 against the MSG symptom complex, we orally administered G101 to volunteers who then ingested RBSM, after which we evaluated the MSG symptom complex for 6 h in a double-blind, placebo-controlled study (Table 2). The intensity of the MSG symptom complex was measured using a 5-point scale (1 = none, 5 = strong). Oral administration of G101 significantly attenuated the intensity of the MSG symptom complex (P = 0.0016). When treated with the placebo, 43.3% of respondents presented with more than a slightly strong MSG symptom complex, but when treated with G101, only 20.0% of respondents exhibited more than a slightly strong response. A significant difference was observed among female subjects, but not among male subjects, indicating that female respondents were more sensitive to the MSG symptom complex than males (Table 3).\n\nBODY.RESULTS.EFFECT OF LACTOBACILLUS BREVIS G101 ON RBSS-INDUCED MSG SYMPTOM VARIETY:\nThe effects of orally administered G101 on the variety of RBSM-induced self-recognized MSG symptoms experienced by the respondents were investigated (Table 4). After ingestion of RBSM, the most frequently observed self-recognized symptoms associated with the MSG symptom complex were thirstiness (81.7%) and drowsiness (70.0%), followed by weakness (26.7%), tightness (11.7%), headache (10.0%), nausea (10.0%), dizziness (8.3%), palpitation (1.7%), flushing (1.7%), and indigestion (1.7%). No significant difference was observed between the varieties of self-recognized symptoms experienced by respondents who were orally administered G101 and those administered the placebo.\n\nBODY.RESULTS.EFFECT OF LACTOBACILLUS BREVIS G101 ON THE APPEARANCE AND DISAPPEARANCE TIME OF RBSS-INDUCED MSG SYMPTOM COMPLEX:\nThe effect of orally administered G101 on the time at which the RBSM-induced MSG symptom complex appeared was also evaluated (Table 5). There was no significant difference in the appearance time of the MSG symptom complex observed between subjects orally administered G101 and those administered the placebo. When subjects were treated with G101, the appearance time of the MSG symptom complex was less than 1 h for 70.0% of the respondents in the group, and in subjects treated with the placebo, it was less than 1 h for 76.6% of the group. Next, we investigated the effect of orally administered G101 on the disappearance time of the RBSM-induced MSG symptom complex (Table 6). There was no significant difference in the disappearance time of the MSG symptom complex between subjects orally administered G101 and those administered the placebo. Disappearance of the MSG symptom complex in less than 3 h was observed in 69.9% of subjects in the G101 treatment group and in 38.0% of subjects in the placebo group (P = 0.0841).\n\nBODY.DISCUSSION:\nMSG is frequently used in various canned foods and in food from Chinese restaurants. MSG has been associated with various side effects, including numbness, headaches, migraines, palpitations, tightness, weakness, aching, flushing, sweating, fasciculation, lacrimation, syncope, dizziness, shudder attacks, paresthesia, arrhythmia and tachycardia [1]. 
Although the oral threshold range for minimal MSG-induced symptoms in humans is considerably variable, in the previous study, the oral threshold range for minimal MSG-induced symptoms in 36 subjects was found to be 1.5 to 12 g, and increased intensity of the MSG symptom response was related to an increase in the oral dose [15]. In this study, we found that oral administration of RBSM, which is containing 6 g of MSG, showed various side effects including thirstiness (56.7%), drowsiness (23.3%), weakness (6.7%), indigestion (6.7%), headache (3.3%), and nausea (3.3%). It was previously shown that the elimination of all food sources containing MSG resulted in a decreased frequency of headaches [3]. These results suggest that the side effects of MSG are dependent on the daily intake amount of MSG, and the oral intake of more than 6 g of MSG may cause its side effect. Therefore, Xiong et al. suggested possible ways to reduce the side effects of MSG, such as taking vitamin C or pre-exposure to a low dose of MSG that may either prevent or reduce the side effects of MSG [3]. Nevertheless, it is not sufficient to reduce only the side effects of MSG. Therefore, we investigated the ability of Lactobacillus brevis G101 to reduce the side effects of MSG. Oral administration of Lactobacillus brevis G101 significantly attenuated the MSG symptom complex. Although G101 did not reduce the variety of side effects, it shortened the disappearance time of the MSG symptom complex. In conclusion, this study suggests that orally administration of Lactobacillus brevis G101 may be able to reduce the intensity of the MSG symptom complex and shorten its disappearance time.\n\n**Question:** Compared to maltodextrin (placebo) what was the result of Capsules (300 mg) containing Lactobacillus brevis G101 (1×1010 CFU/individual) on The intensity of the MSG symptom complex (characterized by unpleasant reactions such as numbness, tingling, headaches, muscle tightness, general weakness, and flushing )?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly decreased\n" } ]
significantly decreased
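The G101 trial in the row above reports the key intensity comparison as 2.87 ± 0.73 versus 3.63 ± 1.03 with P = 0.0016 from a t-test. The sketch below recomputes an unpaired t-test from those published summary statistics; treating the arms as independent with n = 30 each is a simplifying assumption, since the trial actually used a cross-over design.

```python
# Illustrative sketch only: reproducing the reported intensity comparison
# (G101: 2.87 +/- 0.73 vs. placebo: 3.63 +/- 1.03, P = 0.0016) from summary
# statistics with an unpaired t-test. Independence and n = 30 per arm are
# simplifying assumptions (the study was a cross-over in 30 respondents).
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(mean1=2.87, std1=0.73, nobs1=30,
                            mean2=3.63, std2=1.03, nobs2=30)
print(f"t = {t:.2f}, p = {p:.4f}")  # ~0.0017, close to the published 0.0016
```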
1085
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Laser Peripheral Iridotomy versus Trabeculectomy as an Initial Treatment for Primary Angle-Closure Glaucoma\n\n ABSTRACT.PURPOSE:\n To compare laser peripheral iridotomy (LPI) with trabeculectomy as an initial treatment for primary angle-closure glaucoma (PACG) with peripheral anterior synechiae (PAS) ≥ 6 clock hours. \n\nABSTRACT.METHODS:\n Patients were drawn from two randomized controlled trials. 38 eyes of 38 patients (PAS ≥ 6 clock hours) were treated with LPI (group 1) while 111 eyes of 111 PACG patients (PAS ≥ 6 clock hours) underwent primary trabeculectomy (group 2). All patients underwent a comprehensive ophthalmic examination at baseline and at postoperative visits and were followed up for a minimum of one year. \n\nABSTRACT.RESULTS:\n Group 2 had higher baseline IOP (45.7 ± 14.8 mmHg versus 34.3 ± 14.3 mmHg) than group 1 and more clock hours of PAS (10.4 ± 1.9 versus 9.0 ± 2.2). IOPs at all postoperative visits were significantly lower in group 2 than in group 1 (p = 0.000). Five eyes in group 1 required trabeculectomy. 17 of the 38 eyes in group 1 (44.7%) required IOP-lowering medications as compared to seven of the 111 eyes in group 2 (6.3%). Cataract progression was documented in 2 eyes (5.3%) in group 1 and 16 eyes (14.4%) in group 2. \n\nABSTRACT.CONCLUSIONS:\n Primary trabeculectomy for PACG (PAS ≥ 6 clock hours) is more effective than LPI in lowering IOP.\n\nBODY.1. INTRODUCTION:\nPrimary angle-closure glaucoma (PACG) is the major type of primary glaucoma in China [1–4]. The management of PACG in China is different from the recommendations of the American Academy of Ophthalmology Preferred Practice Patterns (PPP) and other published guidelines [5–7]. For primary angle closure (PAC) and PACG, the PPP recommends laser peripheral iridotomy (LPI) to eliminate pupillary block followed by a treatment strategy \"similar to that for POAG\" [8]. Accordingly, LPI is used as the first-line treatment for all patients with PAC or PACG, medication is added as needed, and surgery is considered when the intraocular pressure (IOP) cannot be controlled with maximum tolerated medications. In China, however, trabeculectomy is considered a primary option for PACG and is generally undertaken if peripheral anterior synechiae (PAS) are greater than 6 clock hours [9, 10]. To the best of our knowledge, there is no comparative study of LPI versus primary trabeculectomy in PACG to support such an approach. Using data from patients enrolled in two separate randomized controlled trials [11, 12], we compared the IOP-lowering efficacy and safety of LPI versus trabeculectomy as an initial treatment for PACG with PAS ≥ 6 clock hours.\n\nBODY.2. METHODS:\nPatients included in this study had participated in 2 randomized clinical trials (RCTs) for PACG. 
Both trials were conducted in accordance with the tenets of the Declaration of Helsinki and approved by the ethics committee of the Tongren Eye Centre, Capital Medical University. Written informed consent was obtained from all subjects for participation in the original trials. Patients undergoing primary trabeculectomy for PACG had participated in a multicenter RCT (registration number: ChiCTR-TCR-00000218) [12]. This RCT was conducted in four clinical collaborative centers of Beijing Tongren Hospital: Handan 3rd Hospital (Hebei Province, China), Anyang Eye Hospital (Henan Province, China), Fushun Eye Hospital (Liaoning Province, China), and the Chenzhou Eye and Optometry Center (Hunan Province, China). PACG was defined as primary angle closure with glaucomatous optic neuropathy and corresponding visual field defects, and patients were recruited from the four centers between April 2006 and November 2007. The primary purpose of the trial was to report the efficacy and complications of trabeculectomy with or without releasable sutures in PACG. Patients who underwent LPI as an initial treatment were part of another RCT (registration number: ChiCTR-TRC-00000034, http://www.chictr.org.cn) conducted at the Handan 3rd Hospital [11]. The purpose of this RCT was to investigate the role of laser iridotomy (with or without iridoplasty) in patients with synechial PAC or PACG. The definition of PACG was the same as that used in the trial mentioned above; consecutive cases of PAC and PACG presenting to the hospital between October 1, 2005, and October 31, 2006, were recruited for this trial.\n\nBODY.2. METHODS.2.1. PATIENT SELECTION:\nThe inclusion criteria for the current study were as follows:\nPACG: defined as primary angle closure with glaucomatous optic neuropathy and visual field defects [13]Age 40 years or morePAS ≥ 6 clock hoursMinimum follow-up of one year. Patients with acute angle-closure glaucoma were excluded. As part of the original clinical trial, all patients had undergone a comprehensive ophthalmic examination including refraction, Goldmann applanation tonometry, static and dynamic gonioscopy (manipulation) using a one-mirror Goldmann gonioscope [14, 15], slit-lamp examination, fundus examination, and automated perimetry (Humphrey Field Analyzer 750i, Carl Zeiss Meditec; Sita Fast strategy; and 24–2 threshold test). These examinations were performed at baseline and at postoperative visits scheduled at month 1, month 3, month 6, month 12, and month 18. Postoperative visits were scheduled on day 1, day 3, week 1, week 2, month 1, month 3, month 6, month 12, and month 18 following the LPI or trabeculectomy.\n\nBODY.2. METHODS.2.2. LASER PERIPHERAL IRIDOTOMY:\nAll laser procedures were performed by one of two senior glaucoma specialists. 2% pilocarpine was applied, and iridotomy was performed under topical anesthesia using a Nd:YAG laser (YL-1600; NIDEK Co. Ltd., Japan) using an Abraham contact lens (Ocular Instruments Inc., Bellevue, USA). A treatment site was selected in the superior nasal iris or in a crypt where present. The treatment was initiated with a single 4 mJ pulse, the power was adjusted, and the treatment was continued to obtain a 0.2 mm opening; patency was determined by direct visualization of the posterior chamber. In accordance with local practice, IOP-lowering medication was initiated if the IOP was greater than 21 mmHg following laser and confirmed by a repeat reading on the same day [11].\n\nBODY.2. METHODS.2.3. 
TRABECULECTOMY:\nSurgery was performed under topical or peribulbar anesthesia using a standard surgical technique. The eye was prepared using a standard aseptic technique and draped to isolate the lashes. A lid speculum was inserted and a 7/0 superior rectus muscle traction suture was placed. A limbus-based conjunctival flap was created using a 10 mm incision through the conjunctiva and Tenon's capsule approximately 8–10 mm from the limbus. The flap was dissected forwards and hemostasis achieved with monopolar diathermy. A half-thickness 4 × 3 mm2 rectangular scleral flap was fashioned, and cellulose sponges soaked in MMC (0.3 mg/ml) were applied under the scleral flap, conjunctiva, and Tenon's capsule for a duration determined by the surgeon based on an assessment of risk factors. Irrigation with balanced salt solution was performed to wash out residual MMC solution. A paracentesis was created, a 2 × 1.5 mm trabeculectomy block excised, and an iridectomy performed. The scleral flap was sutured with 10-0 monofilament, BSS was injected into the anterior chamber to assess flow, and the conjunctiva was closed with a single running 8/0 vicryl suture [12]. Visual acuity was recorded with a decimal chart and converted to the logarithm of minimum angle of resolution (LogMAR) format. Finger counting, hand movement, and light perception were recorded as 1.5, 2.0, and 2.5 on the LogMAR scale. At each postoperative visit, a trained technician measured the IOP twice using an applanation tonometer and recorded the average. If the difference between the two measurements was more than 2 mmHg, a third measurement was performed, and the average of the two closer results was recorded. All IOP measurements were performed between 8 am and 12 am. At each site, trained observers used standard LOCS III photographs to assess the lens. Cataract progression was defined as an increase of 2 or more units greater than baseline in any LOCS III category (nuclear, cortical, or posterior capsular opacity). Progression was analyzed in all eyes with a minimum follow-up of 12 months; for those whose longer follow-up was available, it was determined at the 18-month visit. The shallow anterior chamber (AC) was categorized as grade I = peripheral iris-cornea touch; grade II = midiris-cornea touch; and grade III = central cornea-lens touch [16]. Hypotony was defined as IOP ≤ 5 mmHg [17]. At each visit, other complications and interventions, if undertaken, were recorded.\n\nBODY.2. METHODS.2.4. STATISTICAL ANALYSIS:\nOne eye of each patient was randomly selected for the analysis. Statistical analysis was carried out using SPSS 15.0 (SPSS Inc., Chicago, USA). Independent t-test was used to compare the difference between the groups, and the chi-square test was used to compare the difference in IOP-lowing medication and cataract progression between the groups. A p value < 0.05 was considered significant.\n\nBODY.3. RESULTS:\n38 eyes of the 38 patients with PACG that underwent primary iridotomy comprised group 1 while group 2 consisted of 111 eyes of the 111 PACG patients who had undergone trabeculectomy. The baseline characteristics of the two groups are summarized in Table 1. Patients in group 2 were slightly younger and had higher baseline IOP and more clock hours of PAS. \nTable 2 shows the postoperative IOP and visual acuity. At each visit, the IOP was significantly lower in group 2 (p < 0.001). Five eyes in group 1 required trabeculectomy: 1 eye at 6 months, 2 eyes at 12 months, and 2 eyes at 18 months. 
17 of the 38 eyes (44.7%, CI: 28.9%–60.5%) in group 1 required a mean of 1.8 IOP-lowering medications while seven of the 111 eyes (6.3%, CI: 1.8%–10.8%) in group 2 required a mean of 1.1 medications (Pearson's chi-square value = 30.940, p < 0.001). 19 eyes in group 2 developed a transient shallow anterior chamber (AC), but there was no instance of lens-cornea touch. 16 eyes (including 13 eyes with shallow AC) experienced transient hypotony. Hypotony lasted one day in 14 eyes and one week in 2 eyes and all recovered spontaneously. One eye developed hypotony maculopathy at one month after surgery that was resolved after IOP increased 3 weeks later. In one eye, the IOP increased to 60 mmHg at 3 months postsurgery and required cyclophotocoagulation. Best-corrected visual acuity at 18 months was 0.5 ± 0.5 in group 1 and 0.6 ± 0.7 in group 2. Four eyes (10.5%) in group 1 lost one line of vision, 3 eyes (7.9%) lost 2 lines, and 3 (7.9%) lost more than 2 lines. In group 2, thirteen eyes (11.7%) lost 1 line, 9 eyes (8.1%) lost 2 lines, and 11 eyes (9.9%) lost more than 2 lines. Six eyes in group 1 and 20 eyes in group 2 lost ≥2 lines (Pearson's chi-square value 0.098, p = 0.755). The absolute risk for ≥2 lost lines of vision in the trabeculectomy group was 18% (20 of 111) compared to 16% (6 of 38) in the LPI group. The absolute risk increased with trabeculectomy is 2% and the number needed to harm (NNH) is 50. Cataract progression as defined was documented in 2 eyes (5.3%, CI: −1.8%–12.4%) in group 1 and 16 eyes (14.4%, CI: 7.9%–20.9%) in group 2 (Pearson's chi-square value = 2.232, p = 0.135). The absolute risk increased in cataract formation with trabeculectomy is 9% and translates into a NNH of 11.\n\nBODY.4. DISCUSSION:\nThe PACG treatment strategy formulated by the glaucoma group of the Chinese Ophthalmology Society is different from that formulated by other published guidelines. As per Chinese guidelines, LPI is suggested for \"patients with PAS < 180° without optic disc and visual field damage\" (an indication endorsed without data by a recent publication), while primary trabeculectomy is recommended for PACG with PAS ≥ 180° [9, 18]. A 2005 survey of clinical practice in PACG diagnosis and treatment found that 73% of Chinese glaucoma specialists preferred trabeculectomy as the initial treatment for PACG with PAS ≥ 180° [10]. Evidence for such an approach is however lacking, and to the best of our knowledge, our study is the first study to compare primary LPI with primary trabeculectomy for PACG. LPI is considered a noninvasive, simple, and safe intervention in PACS, PAC, and PACG [19]. The effect of a LPI would depend on the amount of angle that is available for filtration, and this can be estimated clinically by the extent of PAS [18, 20]. IOP control following LPI as reported in the literature varies considerably, and the therapeutic effect seems to decrease over time [21, 22]. In a retrospective analysis of 131 cases (251 eyes) in China, 18 eyes with PACS, 98 eyes with PAC, and 129 eyes with PACG that underwent LPI were followed up for 9.2 ± 3.7 years [23]. Eyes with PAC and PACG had PAS < 6 clock hours. At the final follow up, 16/18 PACS, 38/98 PAC, and 814/129 of PACG eyes were controlled (defined as IOP less than 21 mmHg) without medications. 60% of cases required additional medications and 13% needed filtering surgery. A study from India evaluated the long-term outcome of PACG following laser iridotomy [24]. 
70 consecutive patients with PACG whose IOP remained >21 mmHg despite a patent iridotomy had their IOP controlled by medications or trabeculectomy and were followed up over a 6-year period. A trabeculectomy without antimetabolites was performed if IOP was >21 mmHg on maximal tolerable medical therapy, if there was evidence of progression of the disc/visual field, or if the patient was noncompliant. 46 (65.7%) eyes were controlled medically with 26 (57%) eyes requiring two topical medications. 24 (34.3%) eyes required trabeculectomy at various times during follow-up. The extent of PAS was not reported in this study, but reports suggest that the therapeutic effect of LPI decreases over time and many cases need filtering surgery. The results seem consistent with the biologically plausible hypothesis that LPI is more successful in those with relatively undamaged trabecular meshwork [18]. The literature seems to suggest that, despite the presence of a patent LPI, eyes with established PACG and a certain (currently undefined) extent of PAS require further treatment to control IOP and that medical therapy fails in a significant number of cases necessitating filtering surgery [18, 25]. In the current investigation, the IOP during follow-up was significantly lower in the trabeculectomy group. Furthermore, a larger number of patients who underwent LPI needed medical treatment as compared to those who underwent trabeculectomy. In this developing country, setting patients may not be able to afford medical treatment or are generally not adherent or persistent with it. To simulate a real-world condition, we did not provide free IOP-lowering medication and found that 24% of patients could not afford long-term medication. Moreover, 5 eyes (13.2%) of patients who underwent LPI with baseline PAS ≥ 6 clock hours (group 1) had to undergo trabeculectomy during the follow-up period. The poor effectiveness of IOP lowering following LPI for PACG with ≥6 hours of PAS would seem important in selecting treatment within the well-known constraints of a developing country, including a possible one shot at treatment. We therefore suggest that 6 clock hours of PAS can be considered a provisional clinical threshold for LPI in PACG, as this is likely to change with further research and will be different for individual cases. Our findings lend credence to a recently published algorithm for the management of angle-closure disease that recommended a ≥6 clock hour threshold for PAS; that algorithm was based mainly on biological plausibility [18]. Considering the problems of medical treatment in developing countries, a policy of primary trabeculectomy for PACG with ≥6 clock hours of PAS might merit consideration. In fact, even eyes with <6 clock hours of PAS may be considered for a trabeculectomy if the socioeconomic situation so dictates [23]. The choice of initial treatment must also consider the issue of complications. Consistent with most other publications, we did not encounter any significant complications in the LPI group [11]. While complications with trabeculectomy are more frequent, the incidence of severe sight-threatening complications is now less than what it was 20 years ago; we encountered mainly transient shallow chamber and hypotony. It must be kept in mind that in our study, trabeculectomy was undertaken by glaucoma specialists in the context of a clinical trial and the potential for sight-threatening complications remains. 
Importantly, even in the context of a clinical trial, 18% of eyes undergoing trabeculectomy lost 2 or more lines of vision. However, the number needed to harm for ≥2 lost lines of vision is an acceptable 50. Incision-induced astigmatism as well as progression of cataract was the major reason for visual loss, but both would be considered treatable. 19% of patients in undergoing trabeculectomy had astigmatism of >1D at 6-month visit, and 14.4% had obvious progression of lens opacity. The NNH for progression of cataract with trabeculectomy was 11. While successful intervention is available for cataracts, this low NNH that could lead to an increase in the burden of cataract is a disadvantage with trabeculectomy. The choice for the initial intervention for PACG in China and other developing countries must weigh up effectiveness versus complications as well as the need for additional medications and further interventions. While LPI is safe and easy, there is a more need for intensive medications and surgical intervention. The role of cataract extraction in primary angle-closure disease is evolving but is again more likely to be useful for those with angles less compromised by PAS [18, 26]. Trabeculectomy is better than LPI for lowering the IOP, but it does have the potential for more loss of visual acuity and for serious complications. Accordingly, it is probably best to reserve primary surgical intervention for established PACG with ≥6 clock hours as these are unlikely to do well with laser alone. While not statistically significant in this study, the higher chances for progression of cataract following trabeculectomy raise the question of combining filtration with cataract surgery, an issue not addressed in this study. In addition to clinical factors, any decision will also depend on the availability of lasers and surgeons and costs. One of the limitations in our study is the small sample size in group 1 and the relatively short follow-up time. As few patients progressed during this time, we could not analyze the relationship between IOP and the visual field progression, but IOP would usually be considered an acceptable surrogate. Also, our results are derived from patients enrolled in RCTs, and the effectiveness in the real world is likely to be different. Furthermore, as it was not part of this study, we cannot comment on the increasingly popular role of phacoemulsification or its combination with trabeculectomy in the management of PACG [18]. The low NNH for cataracts with trabeculectomy could however be used as an argument for combining the two surgical procedures. Finally, while the definition of PACG used in both trials was the same, patients for this study were drawn from two separate trials that asked different questions. This could introduce significant bias. The original LPI RCT (registration number: ChiCTR-TRC-00000034, http://www.chictr.org.cn) [11] that randomized patients into two different treatment groups (iridotomy or iridotomy plus iridoplasty) found no significant difference in IOP, medications, need for surgery, or visual function between groups at the 1-year visit. This RCT showed that in eyes with PACG, both iridotomy alone and iridotomy combined with iridoplasty provide a significant and similar reduction in IOP. For the trabeculectomy RCT (registration number: ChiCTR-TCR-00000218) [12], patients were randomly allocated to permanent or releasable sutures. This study too did not demonstrate any significant differences between the groups. 
Accordingly, we felt that analyzing patients from either arm of the two trials for the current study was acceptable but acknowledge that the possibility of unknown differences does exist and could affect interpretation. However, as the baseline IOP was higher and PAS more extensive in the trabeculectomy group, the conclusion in favor of trabeculectomy is likely to stand. In conclusion, our results support the Chinese guidelines for management of PACG. In PACG with ≥6 clock hours of PAS, primary trabeculectomy is more effective than LPI in lowering IOP and requires significantly less medication, but with more progression of cataract and more loss of visual acuity. In developing countries, such as China, trabeculectomy can be considered a primary option for PACG with this degree of PAS.\n\n**Question:** Compared to primary trabeculectomy group 2 what was the result of laser peripheral iridotomy (LPI) group 1 on Cataract progression?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
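The trabeculectomy-versus-LPI article in the row above quotes absolute risk increases and numbers needed to harm (NNH of about 11 for cataract progression and 50 for loss of ≥2 lines of vision). Below is a minimal sketch of that arithmetic using the event counts reported in the article; the gap between the published NNH of 50 and the ~45 computed here appears to come from the article rounding the risk difference to 2% before inverting it.

```python
# Illustrative sketch only: the "number needed to harm" arithmetic quoted in the
# LPI vs. trabeculectomy article (NNH = 1 / absolute risk increase). Event counts
# are taken from the article's results section.
def nnh(events_treat, n_treat, events_ctrl, n_ctrl):
    ari = events_treat / n_treat - events_ctrl / n_ctrl  # absolute risk increase
    return ari, 1 / ari

ari_cataract, nnh_cataract = nnh(16, 111, 2, 38)   # cataract progression, trabeculectomy vs. LPI
ari_vision,   nnh_vision   = nnh(20, 111, 6, 38)   # loss of >= 2 lines of vision
print(f"cataract: ARI = {ari_cataract:.1%}, NNH ~ {nnh_cataract:.0f}")  # ~9%, ~11
print(f"vision:   ARI = {ari_vision:.1%},  NNH ~ {nnh_vision:.0f}")     # ~2%, ~45
```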
1092
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A randomized, double-blind comparison of OROS\n\n ABSTRACT.BACKGROUND:\nLong-acting opioid formulations are advocated for maintaining pain control in chronic cancer pain. OROS® hydromorphone is a sustained-release formulation of hydromorphone that requires dosing once daily to maintain therapeutic concentrations. The objective of this study was to demonstrate the clinical equivalence of immediate-release and sustained-release formulations of hydromorphone and morphine for chronic cancer pain.\n\nABSTRACT.METHODS:\n200 patients with cancer pain (requiring ≤ 540 mg/d of oral morphine) participated in this double-blind, parallel-group trial. Patients were randomized to receive hydromorphone or morphine (immediate-release for 2–9 days, sustained-release for 10–15 days). Efficacy was assessed with the Brief Pain Inventory (BPI), investigator and patient global evaluations, Eastern Cooperative Oncology Group performance status, and the Mini-Mental State Examination. The primary endpoint was the 'worst pain in the past 24 hours' item of the BPI, in both the immediate-release and sustained-release study phases, with treatments deemed equivalent if the 95% confidence intervals (CI) of the between-group differences at endpoint were between -1.5 and 1.5. No equivalence limits were defined for secondary endpoints.\n\nABSTRACT.RESULTS:\nLeast-squares mean differences (95% CI) between groups were 0.2 (-0.4, 0.9) in the immediate-release phase and -0.8 (-1.6, -0.01) in the sustained-release phase (intent-to-treat population), indicating that the immediate-release formulations met the pre-specified equivalence criteria, but that the lower limit of the 95% CI (-1.6) was outside the boundary (-1.5) for the sustained-release formulations. BPI 'pain now PM' was significantly lower with OROS® hydromorphone compared with controlled-release morphine (least-squares mean difference [95% CI], -0.77 [-1.49, -0.05]; p = 0.0372). Scores for other secondary efficacy variables were similar between the two sustained-release treatments. At endpoint, > 70% of investigators and patients rated both treatments as good to excellent. The safety profiles of hydromorphone and morphine were similar and typical of opioid analgesics.\n\nABSTRACT.CONCLUSION:\nEquivalence was demonstrated for immediate-release formulations of hydromorphone and morphine, but not for the sustained-release formulations of OROS® hydromorphone and controlled-release morphine. The direction of the mean difference between the treatments (-0.8) and the out-of-range lower limit of the 95% CI (-1.6) were in favor of OROS® hydromorphone.\n\nABSTRACT.TRIAL REGISTRATION:\nClinicalTrials.gov: NCT0041054\n\nBODY.BACKGROUND:\nOpioid analgesics are highly effective for the treatment of pain, enabling 85% to 95% of patients to gain functional control of their lives [1,2]. 
Opioid therapy is typically initiated after the failure of maximum doses of non-opioid analgesics [3] and is titrated to attain the best balance between pain relief and side effects for each patient. The European Association for Palliative Care [4] and the American Pain Society [5] advocate the use of long-acting oral agents for maintaining analgesia once individual dose requirements have been established. For these reasons, long-acting opioids have become the mainstay of chronic cancer pain therapy. OROS® hydromorphone is a unique long-acting opioid formulation that utilizes Push-PullTM active osmotic technology developed by ALZA Corporation (Mountain View, CA, USA). The Push-PullTM system maintains consistent hydromorphone plasma concentrations throughout the 24-hour dosing interval, providing long-lasting analgesia [6-8]. Release of the drug from the system is actively controlled by the dosage form itself, and is not significantly influenced by environmental factors such as the surrounding pH or gastric motility [9,10]. There is a minimal effect of food on the rate and extent of absorption of hydromorphone from OROS® hydromorphone; in one study the mean geometric ratios of fed and fasted subjects for peak plasma concentration (Cmax) and area under the concentration-time curve (AUC) were within 20%; the median time to peak plasma concentration (Tmax) was lower under fed conditions (12 versus 16 hours), but mean plasma concentration profiles generally overlapped, especially up to 6 hours after dosing [11]. The pharmacokinetics of OROS® hydromorphone are also minimally affected by alcohol; one study found that plasma hydromorphone concentrations were slightly higher after alcohol (240 mL solutions of 4%, 20%, and 40% alcohol and orange juice) compared with no alcohol, but there was no clear alcohol dose-response relationship and no dose dumping of hydromorphone occurred [12]. The primary objective of the current study was to demonstrate the clinical equivalence of hydromorphone and morphine (immediate-release [IR] and sustained-release [SR] formulations) using the 'worst pain in the past 24 hours' item of the Brief Pain Inventory (BPI). Morphine was selected as the active comparator since it is the gold standard for pain control. Controlled-release (CR) morphine is available in twice-daily and once-daily formulations and is widely used to alleviate cancer pain; in this study the twice-daily formulation (dosing every 12 hours) was used.\n\nBODY.METHODS.PATIENTS:\nThis study was a multicenter, phase III, randomized, double-blind (double-dummy), active-controlled, parallel-group, equivalence trial. It was conducted at 37 centers in Belgium, Canada, France, Germany, The Netherlands, Spain, Sweden, and the United Kingdom. The study enrolled inpatients, outpatients, and day-patients ≥ 18 years of age who had moderate to severe chronic cancer pain requiring 60 to 540 mg of oral morphine (or equivalent) every 24 hours. The criteria used for patient selection are listed in Table 1. Concomitant chemotherapy or radiotherapy was permitted. All patients who entered the trial were informed of the nature of the study, and provided written informed consent for participation. The study was conducted in accordance with the principles of the Declaration of Helsinki. 
Table 1 Criteria for patient selection Inclusion criteria Exclusion criteria • Age ≥ 18 years • Pain not considered potentially responsive to opioids • Presence of chronic cancer pain: • Pain present only upon movement  ◦ currently receiving strong oral or transdermal opioid analgesics (60–540 mg oral morphine or equivalent every 24 hours) • Need for other opioid analgesics (except study medication and breakthrough pain medication) after randomization  ◦ appropriate candidate for strong oral or transdermal opioid analgesics (anticipated requirement, 60–540 mg oral morphine or equivalent every 24 hours) • Current or recent (within 6 months) history of drug and/or alcohol abuse • Pain suitable for treatment with a once-daily formulation • Women of childbearing potential who are pregnant or lactating, seeking pregnancy, or failing to take adequate contraceptive precautions • Intolerance of, or hypersensitivity to, hydromorphone or other opioids • Presence of GI disease of sufficient severity to likely interfere with oral analgesia (e.g., dysphagia, vomiting, no bowel movement or bowel obstruction due to impaction within 5 days of study entry, severe gut narrowing that may affect analgesic absorption or transit) • Use of monoamine oxidase inhibitors within 2 weeks prior to study entry • Investigational drug use within 4 weeks of study entry • Presence of conditions for which risks of opioid use outweigh potential benefits (e.g., raised intracranial pressure, hypotension, hypothyroidism, asthma, reduced respiratory reserve, prostatic hypertrophy, hepatic impairment, renal impairment, elderly and debilitated, convulsive disorders, Addison's disease) Abbreviation: GI, gastrointestinal \n\nBODY.METHODS.DRUGS AND DOSAGES:\nPatients were randomized 1:1, with a central computer-generated randomization list, to receive hydromorphone or morphine for up to 24 days (Figure 1). This study consisted of 2 phases: an initial IR phase and a subsequent SR phase. In the IR phase, patients received IR formulations of either hydromorphone [Dilaudid®, Abbott Laboratories] or morphine (morphine sulfate IR [Sevredol®, Napp Laboratories]) every 4 hours (6 times daily) for 2 to 9 days. Patients underwent conversion from previous therapy to 1 of 6 possible initial dose levels, based on individual patient characteristics and generally accepted morphine equivalent conversion factors. Doses were selected according to the available tablet strengths and a working conversion ratio of 1:5 (1 hydromorphone:5 morphine equivalents), in the ranges hydromorphone IR 12–108 mg/day and morphine IR 60–540 mg/day. The dose was titrated to the next higher dose level if the patient had more than 3 breakthrough pain episodes requiring breakthrough pain medication within the previous 24 hours. Daily doses were titrated up to the next higher dose level, with no dose levels skipped, at most once a day. Dose titration was continued until dose-stable pain control was achieved. Patients who experienced 2 consecutive days with no more than 3 breakthrough pain episodes requiring rescue medication per day were considered to have achieved dose-stable pain control and could begin the SR phase of the study. Patients who did not achieve dose stable pain control by day 9 were withdrawn from the study, and end-of-study evaluations were carried out. Figure 1Study design. BID, twice daily; CR, controlled-release; IR, immediate-release; QD, once-daily; q4h, every 4 hours. The duration of the SR phase was 10 to 15 days. 
In this phase, patients continued to receive the same study drug but in an SR formulation: OROS® hydromorphone once-daily or CR morphine (morphine sulfate SR [MST Continus®, Napp Laboratories]) twice-daily. They were started on the same dose level number on which they had achieved dose-stable pain control in the preceding IR phase, which was then adjusted as required, at most every 2 days, in single steps so that no dose level was skipped. Doses were selected according to the available tablet strengths and a working conversion ratio of approximately 1:5 (hydromorphone:morphine), in the ranges OROS® hydromorphone 16–96 mg/day and CR morphine 60–520 mg/day. The SR phase was completed after a minimum of 10 days (maximum of 15 days) on OROS® hydromorphone or CR morphine if pain was controlled and the dose was stable for at least the previous 2 days. In the IR phase, the dosing regimen was the same for both drugs: patients received a dose of study medication 6 times a day, every 4 hours, at 0600, 1000, 1400, 1800, 2200, and 0200. In this phase, all treatments were over-encapsulated resulting in identical capsules. In the SR phase, dosing was done twice each day at 1000 and 2200. To maintain the blind in this phase, matching placebo tablets and capsules were used: patients in the OROS® hydromorphone group took OROS® hydromorphone and placebo CR morphine in the morning and placebo CR morphine in the evening; patients in the CR morphine group took CR morphine and placebo OROS® hydromorphone in the morning and CR morphine in the evening. In both study phases, patients could also receive breakthrough pain medication as needed, either hydromorphone or morphine, supplied as the IR formulation. A single dose of breakthrough pain medication contained approximately one-sixth of the patient's daily dose.\n\nBODY.METHODS.ASSESSMENTS:\nBaseline efficacy assessments were done before dosing on day 1 of the IR phase and included: the BPI (12-item instrument to assess pain intensity [from 0 = no pain to 10 = pain as bad as you can imagine], pain relief [0% to 100%], and pain interference with various aspects of the patient's life [from 0 = no interference to 10 = complete interference]) [13], the Mini-Mental State Examination (MMSE; 0–30, higher scores indicate better cognitive performance) [14], and the Eastern Cooperative Oncology Group (ECOG) performance status score (0–4, higher scores indicate poorer performance) [15]. Primary equivalence of efficacy was assessed using the BPI assessment of 'worst pain in the past 24 hours' on an 11-point scale ranging from 0 (no pain) to 10 (worst pain imaginable). This was selected as the primary efficacy measure because evidence suggests that severely intense pain is what interferes most significantly with activities of daily life [13]. Secondary endpoints included other assessments of pain from the BPI (completed by the investigator in consultation with the patient); investigator and patient global assessments of treatment effectiveness (1 = poor, 2 = fair, 3 = good, 4 = very good, 5 = excellent); MMSE; ECOG; time to dose stabilization; and usage and dose of breakthrough pain medication. Patients rated BPI 'worst pain in the past 24 hours' once daily (just before taking the mid-morning dose of study medication) and 'pain now' twice daily (AM and PM; just before taking the mid-morning and evening doses) throughout the study and recorded responses in daily diaries. 
The other BPI assessments, plus investigator and patient global evaluations of treatment effectiveness, MMSE, and ECOG, were evaluated at the end of both phases. Adverse events (AEs) and concomitant medications were recorded throughout the study. Physical examinations were carried out at baseline and at the end of each study phase. Blood and urine samples were collected for laboratory analysis (hematology, serum biochemistry, and urinalysis) at baseline and at the end of the SR phase. Patients were contacted daily during the IR phase and every 3 to 4 days during the SR phase (preferably by home or clinic visit, or by telephone if appropriate). Contact was also made 3 days after the last dose of study drug to determine the patient's AE status.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nAssuming variability (standard deviation, SD) in the primary efficacy variable of 2.0 [16] and a 30% dropout/non-evaluable rate, a sample size of approximately 70 patients per treatment group was required (140 total). This would provide 90% power to detect equivalence with an equivalence limit of 1.5. A planned blinded re-estimation of the optimal sample size was done by an appointed independent person after 55 patients had completed the study, the variability for the primary efficacy measure was estimated using their data, and the sample size was increased to 170 patients. Subsequently, an unplanned, blinded re-estimation of the variability of the primary variable for both phases was done after 120 patients had completed the study, and as a result, the sample size was increased to 200 patients. All statistical analyses were pre-specified. All efficacy variables were analyzed using the intent-to-treat (ITT) population, which included all patients who took at least 1 dose of study medication and had at least 1 assessment from each study phase. A per-protocol (PP) analysis, excluding data from patients with major protocol deviations, was also done. All patients taking at least 1 dose of study medication were included in the analysis of safety. All efficacy analyses included a pre-specified adjustment for baseline value and country to control for potentially confounding variables. The primary efficacy analysis used the mean of the last 2 post-baseline recorded values (or the last value if only 1 was available) for BPI 'worst pain in the past 24 hours'. The primary measure was calculated separately for the IR and SR phases (there were therefore two primary outcomes). An analysis of covariance (ANCOVA) was used to calculate the 95% two-sided confidence interval (CI) for the difference between the adjusted means for the 2 treatments at endpoint of each phase. An equivalence test was used to compare the mean scores of the primary efficacy measure at the end of each phase [17-19]. The 2 treatments, within a phase, were considered equivalent if the 95% two-sided CI for the treatment difference was within a -1.5 and 1.5 point difference on the BPI scale. This particular CI range was selected prospectively because it was considered clinically relevant through consultation with physicians who specialize in analgesia. For secondary endpoints (including BPI 'pain at its least in past 24 hours', 'pain on average pain', 'pain now' (AM and PM), 'pain relief in past 24 hours', 'pain interference in past 24 hours', MMSE, and ECOG) an ANCOVA model was used to calculate 95% two-sided CIs for the difference between the adjusted treatment means at the end of the IR and SR phases. 
The adjusted means were estimated by the same approach used for the primary endpoint, with the baseline score designated as a covariate; however, no equivalence limits were specified for the secondary endpoints. Time to dose stabilization during both study phases was analyzed by the log-rank test. Summary statistics with 95% CIs were produced for investigator and patient global evaluations of treatment effectiveness and to assess the number of medication doses taken for breakthrough pain in the last 2 days of each phase. P values (not adjusted for multiple comparisons) are reported for secondary endpoints. Statistical significance was declared at the α = 0.05 level.\n\nBODY.RESULTS.STUDY POPULATION:\n202 patients were enrolled into the study (Belgium, n = 29; Canada, n = 10; France, n = 7; Germany, n = 15; Netherlands, n = 29; Spain, n = 41; Sweden, n = 11; UK, n = 60). 200 patients were randomized and took study medication. 163 patients (81.5%) completed the IR phase and 133 (66.5%) completed the SR phase. Figure 2 is a flow diagram of patient entry and completion and the reasons for premature discontinuation. Withdrawal from the study owing to lack of efficacy was more common in patients randomized to hydromorphone (n = 11) compared with morphine (n = 4). The mean age of participants overall was 59.8 years. 98.5% of patients were Caucasian and 51% were female. The most common types of cancer were breast (28%), lung (20%), and gastrointestinal (16%). For 133 (67%) patients, the predominant pain location was bone or soft tissue; 34 (17%) others had mixed pain, and 33 (17%) had visceral pain. No patient had neuropathic pain. The most common pain medications used at study entry were morphine (n = 124 [62%]) and tramadol (n = 38 [19%]). The baseline demographics and clinical characteristics were similar for both treatment groups (Table 2). Table 2 Demographic and baseline clinical characteristics Characteristic Hydromorphone (N = 99) Morphine (N = 101) Mean (SD) age, years 60.7 (12.50) 59.0 (11.36) Gender, % male 46.5 51.5 Race, % Caucasian/Black/Asian/Other 100/0/0/0 97/0/2/1 Mean (SD) height, cm 166.7 (9.33) 167.3 (10.47) Mean (SD) weight, kg 66.3 (15.33) 67.4 (13.33) Mean (SD) BMI, kg/m 2 23.8 (5.14) 24.2 (5.06) Cancer type, n (%)  Breast 23 (23.2) 33 (32.7)  Lung 20 (20.2) 19 (18.8)  Genitourinary 18 (18.2) 12 (11.9)  Gastrointestinal 17 (17.2) 15 (14.9)  Oral cavity 3 (3.0) 3 (3.0)  Lymphoma 3 (3.0) 0  Leukemia 1 (1.0) 2 (2.0)  Bone 1 (1.0) 1 (1.0)  Other 13 (13.1) 16 (15.8) Predominant pain type, n (%)  Bone or soft tissue 61 (61.6) 72 (71.3)  Mixed 19 (19.2) 15 (14.9)  Visceral 19 (19.2) 14 (13.9) Mean (SD) MMSE score* 28.5 (2.3) 28.8 (2.0) Mean (SD) ECOG score 1.6 (0.8) 1.6 (0.9) Abbreviations: BMI, body mass index; ECOG, Eastern Cooperative Oncology Group; MMSE, Mini-Mental State Examination; SD, standard deviation *Hydromorphone, n = 75; morphine, n = 73 Figure 2Patient disposition. IR, immediate-release; SR, sustained-release.\n\nBODY.RESULTS.EFFICACY.BPI 'WORST PAIN IN THE PAST 24 HOURS' (PRIMARY ENDPOINT):\nIn each study phase, mean values for BPI 'worst pain in the past 24 hours' decreased with both hydromorphone and morphine treatments (Table 3). The least-squares (LS) mean (95% CI) differences between the hydromorphone and morphine groups were 0.2 points (-0.4, 0.9) in the IR phase and -0.8 points (-1.6, -0.01) in the SR phase (ITT population). 
In the IR phase, the 95% CI was within the -1.5 to 1.5 equivalency range (-0.4 to 0.9), indicating that IR hydromorphone and IR morphine were equivalent in this primary efficacy outcome measure. In the SR phase, the upper limit of the 95% CI was within the 1.5 boundary (-0.01), but the lower limit was less than -1.5 (-1.6), indicating that the SR formulations were not equivalent but that the negative direction of the mean difference was in favor of OROS® hydromorphone (Figure 3). Results of the analysis of the primary efficacy variable were similar when using data from the PP population. Table 3 BPI scores at baseline, end of IR phase, and end of SR phase (ITT population) Variable Baseline* Mean (SD) End of IR phase † LS mean (SE) End of SR phase ‡ LS mean (SE) Worst pain  Hydromorphone 6.3 (2.7) 5.0 (0.3) 3.5 (0.3)  Morphine 6.2 (2.5) 4.8 (0.3) 4.3 (0.3) Least pain  Hydromorphone 2.7 (2.5) 1.8 (0.2) 1.8 (0.2)  Morphine 2.9 (2.7) 2.2 (0.2) 1.8 (0.2) Average pain  Hydromorphone 5.0 (2.1) 3.6 (0.2) 3.4 (0.3)  Morphine 5.1 (2.1) 3.6 (0.2) 3.2 (0.3) Pain now AM  Hydromorphone 4.0 (2.7) 3.3 (0.3) 2.4 (0.3)  Morphine 4.1 (2.4) 3.4 (0.3) 2.8 (0.3) Pain now PM  Hydromorphone 4.8 (3.0) 3.6 (0.3) 2.6 (0.3) §  Morphine 4.2 (2.5) 3.7 (0.3) 3.4 (0.3) Abbreviations: BPI, Brief Pain Inventory; CR, controlled-release; IR, immediate-release; LS, least-squares; SD, standard deviation; SE, standard error; SR, sustained-release *Hydromorphone, n = 99; morphine, n = 101 †IR hydromorphone, n = 99; IR morphine, n = 101 ‡OROS ® hydromorphone, n = 77; CR morphine, n = 86 §p = 0.0372 versus CR morphine Figure 3Least-squares mean differences and 95% confidence intervals between the OROS® hydromorphone and controlled-release (CR) morphine groups at end of the sustained-release (SR) phase.\n\nBODY.RESULTS.EFFICACY.SECONDARY ENDPOINTS:\nAt the end of the IR phase, mean scores for the secondary efficacy variables (other assessments of pain from the BPI [Table 3 and Figure 4], and MMSE and ECOG scores [data not shown]) were similar for IR hydromorphone and IR morphine, except for BPI 'pain interference with normal work', which favored IR hydromorphone (Figure 4; p = 0.0386). Figure 4Mean Brief Pain Inventory (BPI) interference scores at baseline, end of immediate-release (IR) phase, and end of sustained-release (SR) phase. Values at baseline represent means; values at other time points represent least-squares means. At baseline: hydromorphone, n = 99; morphine, n = 101. In IR phase: IR hydromorphone, n = 99; IR morphine, n = 101. In SR phase: OROS® hydromorphone, n = 77; controlled-release (CR) morphine, n = 86. At the end of the SR phase, scores were similar between treatment groups for BPI items 'least pain', 'average pain', and 'pain now AM'; however, 'pain now PM' was significantly lower in the OROS® hydromorphone group compared with the CR morphine group (LS mean [SE] scores, 2.6 [0.3] versus 3.4 [0.3], respectively; p = 0.0372) (Table 3). LS mean differences (95% CI) between treatment groups were 0.06 (-0.51, 0.64), 0.15 (-0.49, 0.79), -0.38 (-1.03, 0.27), and -0.77 (-1.49, -0.05) for least, average, morning, and evening pain, respectively (Figure 3). Figures 5 and 6 show the consistency of response to both OROS® hydromorphone and morphine throughout the SR phase, during both the morning and evening assessments of BPI 'pain now' scores. Figure 5Mean (standard error, SE) pain now scores in the morning during the sustained-release (SR) phase. 
Pain was rated using the 11-point Brief Pain Inventory (BPI), ranging from 0 (no pain) to 10 (worst pain imaginable). CR, controlled-release. Figure 6Mean (standard error, SE) pain now scores in the evening during the sustained-release (SR) phase. Pain was rated using the 11-point Brief Pain Inventory (BPI), ranging from 0 (no pain) to 10 (worst pain imaginable). CR, controlled-release. LS mean BPI 'pain relief' at endpoint was 69.7% for the OROS® hydromorphone group and 70.0% for the CR morphine group (95% CI for treatment difference, -8.11, 7.54). BPI 'pain interference' scores were also similar between treatment groups at endpoint, (Figure 4), as were LS mean ECOG performance status scores (1.8 versus 1.7 in the OROS® hydromorphone and CR morphine groups, respectively; 95% CI for treatment difference, -0.13, 0.26), and LS mean MMSE scores (28.9 versus 29.2; 95% CI for treatment difference, -1.19, 0.51). The majority of both investigators and patients (> 70%) rated the treatments as good, very good, or excellent at the end of each study phase (Figure 7). Mean ± SD patient global assessment scores at the end of the SR phase were 3.2 ± 1.14 for the OROS® hydromorphone group and 3.3 ± 0.98 for the CR morphine group (p = 0.6696); mean ± SD investigator global assessment scores were 3.2 ± 1.07 and 3.3 ± 0.91, respectively (p = 0.4760). Figure 7Patient (A) and investigator (B) global evaluations of treatment effectiveness at the end of the sustained-release (SR) phase. CR, controlled-release. In both treatment phases, patients in the hydromorphone group took longer than those in the morphine group to achieve stable doses (p < 0.001). The number of patients requiring dose-level changes during the SR phase was similar for both groups (19 of 77 [24.7%] OROS® hydromorphone recipients and 23 of 86 [26.7%] CR morphine recipients). Comparison of dosage levels at the end of the SR phase generally showed that similar proportions of patients received OROS® hydromorphone or CR morphine at each of the 5 (approximate 1:5 hydromorphone to morphine) lower dosage levels, although more patients in the OROS® hydromorphone group (10% versus 2% in the CR morphine group) received the highest dosage level evaluated in this study (Table 4). Table 4 Dose levels at end of SR phase OROS ® hydromorphone (N = 77) CR morphine (N = 86) Dose level Dose (mg) every 24 hours Number of subjects (%) Dose (mg) every 12 hours Number of subjects (%) 1 16 22 (28.6) 30 25 (29.1) 2 24 14 (18.2) 60 23 (26.7) 3 32 18 (23.4) 90 15 (17.4) 4 48 10 (13.0) 120 13 (15.1) 5 72 5 (6.5) 175 8 (9.3) 6 96 8 (10.4) 260 2 (2.3) Abbreviations: CR, controlled-release; SR, sustained-release In the IR phase, 61.6% of patients taking IR hydromorphone and 52.5% of those receiving IR morphine required medication for breakthrough pain on the last 2 days of the phase. In the SR phase, the corresponding percentages in the OROS® hydromorphone and CR morphine groups were 48.1% and 48.8%, respectively. The mean (SD) numbers of breakthrough pain medication doses taken on the last 2 days were as follows: IR phase, 1.8 (2.2) and 1.3 (1.8) for the IR hydromorphone and IR morphine groups, respectively; SR phase, 1.6 (2.2) and 1.4 (1.9) for the OROS® hydromorphone and CR morphine groups, respectively. The mean total doses of breakthrough pain medication as a percentage of the mean daily consumption were 39.4% and 27.1% for hydromorphone and morphine in the IR phase, and 13.8% and 10.3% for OROS® hydromorphone and CR morphine in the SR phase. 
There were no statistically significant treatment differences in the numbers of patients who took breakthrough pain medication or in the amount of medication taken in either phase.\n\nBODY.RESULTS.SAFETY:\nThe overall safety profiles of hydromorphone and morphine were comparable, with a similar number of patients in each group reporting AEs. Overall, irrespective of study phase, 81 of 99 (81.8%) hydromorphone-treated patients and 90 of 101 (89.1%) morphine-treated patients reported at least 1 AE. The most commonly reported AEs during the IR and SR phases of the study are listed in Table 5. The types of AEs were typical of those generally associated with opioid use. The incidence of constipation was higher with hydromorphone than morphine in both phases of the study (23.2% versus 10.9% in the IR phase and 39.0% versus 22.1% in the SR phase). In the SR phase, the incidences of vomiting, nausea, and somnolence were higher with CR morphine than with OROS® hydromorphone (22.1% versus 9.1%, 29.1% versus 19.5%, and 14.0% versus 10.4%, respectively). Most AEs were mild or moderate in severity and no clear treatment difference was seen in the relative severity of the most commonly reported AEs. Approximately 30% of patients with AEs reported events that were considered unlikely to be related or not related to study therapy. Table 5 Adverse events reported by at least 5% of patients in any group during the IR or SR phase IR phase SR phase Adverse event Hydromorphone (N = 99) Morphine (N = 101) OROS ® hydromorphone (N = 77) CR morphine (N = 86) Constipation 23 (23.2%) 11 (10.9%) 30 (39.0%) 19 (22.1%) Nausea 18 (18.2%) 23 (22.8%) 15 (19.5%) 25 (29.1%) Vomiting 16 (16.2%) 19 (18.8%) 7 (9.1%) 19 (22.1%) Somnolence 11 (11.1%) 11 (10.9%) 8 (10.4%) 12 (14.0%) Dizziness 8 (8.1%) 6 (5.9%) 4 (5.2%) 8 (9.3%) Headache 8 (8.1%) 6 (5.9%) 3 (3.9%) 2 (2.3%) Diarrhea 8 (8.1%) 1 (1.0%) 7 (9.1%) 2 (2.3%) Pruritus 4 (4.0%) 5 (5.0%) 3 (3.9%) 5 (5.8%) Asthenia 5 (5.1%) 1 (1.0%) 6 (7.8%) 4 (4.7%) Fatigue 3 (3.0%) 3 (3.0%) 4 (5.2%) 6 (7.0%) Confusion 2 (2.0%) 2 (2.0%) 7 (9.1%) 2 (2.3%) Anemia 2 (2.0%) 1 (1.0%) 3 (3.9%) 6 (7.0%) Anorexia 1 (1.0%) 4 (4.0%) 2 (2.6%) 5 (5.8%) Insomnia 1 (1.0%) 2 (2.0%) 5 (6.5%) 4 (4.7%) Peripheral edema 0 3 (3.0%) 1 (1.3%) 8 (9.3%) Pyrexia 2 (2.0%) 1 (1.0%) 4 (5.2%) 2 (2.3%) Anxiety 1 (1.0%) 1 (1.0%) 5 (6.5%) 1 (1.2%) Abbreviations: CR, controlled-release; IR, immediate-release; SR, sustained-release Serious AEs were reported for 24 patients during the double-blind period (by 5 patients in each treatment group in the IR phase, and by 7 patients in the OROS® hydromorphone group and 9 in the CR morphine group in the SR phase [2 patients had serious AEs in both phases]). Many of the serious AEs were associated with the underlying disease, although approximately one third were considered definitely or probably related to study therapy. Three patients in the morphine group died during the study (2 during the IR phase, 1 during the SR phase). None of the deaths was considered related to the study medication. Twenty-six patients withdrew prematurely from the study because of AEs, 16 during the IR phase (9 hydromorphone, 7 morphine) and 10 during the SR phase (6 OROS® hydromorphone, 4 CR morphine). There was no statistically significant difference between treatment groups in the time to withdrawal, irrespective of phase (IR phase, p = 0.6537; SR phase, p = 0.2827). 
The use of concomitant medications, including the use of non-opioid analgesics, was similar for both treatment groups in both the IR and SR phases.\n\nBODY.DISCUSSION:\nThis was a short-term, randomized, double-blind comparative study testing the clinical equivalence of IR and SR formulations of hydromorphone and morphine in patients with chronic cancer pain. In each study phase (IR and SR), mean values for BPI 'worst pain in the past 24 hours' (the primary efficacy endpoint) decreased with both hydromorphone and morphine. In the IR phase, the LS mean difference between the treatment groups was 0.2 points, and the 95% CI (-0.4, 0.9) was within the pre-specified equivalency range of -1.5 to 1.5. In the SR phase, the LS mean difference was -0.8 points in favor of OROS® hydromorphone. The upper limit of the 95% CI (-0.01) was within the 1.5 boundary, but the lower limit (-1.6) was less than the -1.5 limit. Therefore, according to the pre-specified criteria, equivalence was not demonstrated; the direction of the difference was in favor of OROS® hydromorphone. The BPI item 'worst pain in the past 24 hours' was selected as the primary endpoint of the trial based on previous studies showing it to be the most sensitive BPI measure in the clinical trial setting [13,20-22]. The results of the present study further suggest that OROS® hydromorphone provides consistent pain relief over 24 hours, and that pain levels in the evening were significantly lower after OROS® hydromorphone compared with CR morphine treatment. This finding reflects the 'pain now PM' measure being scored at the end of morphine dosing (i.e. trough levels) but at the midpoint of hydromorphone dosing. The comparable results between groups for the other secondary efficacy variables (including assessments of pain interference with physical activity and social function, performance status, and cognition) support similarity in the treatment efficacy. Withdrawal from the study owing to lack of efficacy was more common in patients randomized to hydromorphone (n = 11) compared with morphine (n = 4). This is a potentially negative finding for hydromorphone; however, the numbers are too low to draw any definite conclusions. Results of the dose analysis were generally consistent with those of previous reports, which indicate that most cancer pain can be controlled with ≤ 240 mg of oral morphine daily [23]. In the present study, 83% of the OROS® hydromorphone group and 88% of the CR morphine group received the equivalent of morphine 240 mg/day or less. The frequency of breakthrough medication use remained stable throughout both phases in the morphine group (52.5% in the IR phase, 48.8% in the SR phase), but decreased from the IR phase to the SR phase in the OROS® hydromorphone group (from 61.6% to 48.1%). At the end of the SR phase, there was no significant difference between the groups in use of breakthrough medication. The occurrence of 3 deaths and 23 serious AEs during the study was not unexpected, given the severity of the patients' conditions and the progressive nature of the disease. In fact, many of the serious AEs were associated with the underlying disease. The participants in this study are representative of the overall population afflicted with chronic cancer pain. The mean baseline BPI score for 'worst pain in the past 24 hours' was 6.3 on an 11-point scale, a score indicative of moderate to severe pain [16]. This is comparable to baseline values in other studies of cancer pain, for which the mean score was 6.99 [24]. 
Baseline cognition was similar to that of healthy controls (mean MMSE score, 28.3) [25], and the lack of change observed in both treatment groups probably reflects the fact that the MMSE was designed to measure cognition in Alzheimer's disease [14] and was not intended to measure cognition in non-demented subjects. The major positive aspect of this study was the robust design: it was multicenter, randomized, double-blind, and active controlled. It also used a widely utilized and well validated primary outcome measure, the BPI. On the negative side, there was a relatively poor completion rate, with only 60 out of 99 patients (60.6%) randomized to hydromorphone and 73 out of 101 patients (72.3%) randomized to morphine completing both treatment phases. However, this completion rate was reasonably good when taking into consideration that this study was carried out in patients with cancer which was advanced enough to require strong opioid analgesics. In addition, a dropout rate of at least 30% was anticipated. Furthermore, the duration of the study was relatively short (up to 24 days), so it was not possible to make evaluations of the sustained efficacy of the formulations. Two previous randomized, double-blind, crossover studies have compared CR formulations of hydromorphone and morphine in patients with cancer pain [26,27]. Moriarty et al (N = 100) found no significant differences between treatment groups for any outcome measure, and AEs were mild and infrequent [27]. In the other study (N = 87), patients treated with hydromorphone had significantly higher pain scores and required more doses of rescue analgesia than patients receiving morphine [26]. In addition, more patients in the hydromorphone group withdrew owing to AEs or inadequate analgesia (n = 16 versus n = 2 on morphine). Although it is difficult to directly compare the results of these studies with those of the current study, the observed differences may reflect, at least in part, the more consistent delivery of hydromorphone with the OROS® formulation relative to the dosing used in those studies. The consistent release of hydromorphone over 24 hours has been demonstrated in previous studies, in which steady-state plasma concentrations were achieved by 48 hours and sustained throughout the dosing interval [8,28]. Practicing clinicians are acutely aware of the need for multiple pain management alternatives, because it is clear that the response to a particular opioid agent varies among individuals. Although our understanding of the causes of individual differences remains incomplete, receptor and receptor-binding studies suggest that they may result not only from differences in the class of opioid receptors to which each agent binds (morphine binds to mu [primary] and kappa [lesser extent] [29]; hydromorphone binds to mu [primary] and delta [lesser extent] [30]), but also from inter-individual differences in opioid receptors (several receptor subtypes have been identified [mu 1, mu 2, mu 3, delta 1, delta 2, kappa 1, kappa 2, kappa 3]) and the location of these variants within the nervous system [31-33]. Therefore, clinicians may need to utilize different opioids to achieve maximum efficacy for each patient. In addition, some studies have supported the use of opioid rotation, switching from one opioid to another when treatment-limiting toxicity results in reduced responsiveness [34-37]. 
Based on numerous observations that individual response varies greatly from opioid to opioid, changing to a different drug may provide an improved balance between analgesia and adverse effects; as a result, there is a need for additional long-acting opioids on the market. Long-acting opioid formulations such OROS® hydromorphone have the potential to improve the overall management of chronic pain. Such formulations provide more consistent opioid plasma concentrations, avoiding the peaks and troughs associated with short-acting agents, and result in well-tolerated, around-the-clock pain control with fewer daily doses [38,39].\n\nBODY.CONCLUSION:\nThis study demonstrates that while IR hydromorphone is equivalent to IR morphine for relieving chronic cancer pain, equivalence was not demonstrated for OROS® hydromorphone and CR morphine in the SR phase of the study. The direction of the mean difference between the treatments, and the out-of-range lower limit of the 95% CI was in favor of OROS® hydromorphone on the primary endpoint 'worst pain'. 'Pain now PM' scores were also significantly lower for those taking OROS® hydromorphone at the end of the SR phase (although the timing of this measure coincided with a trough in the dosing levels for the twice-daily morphine compared with the once-daily OROS® hydromorphone), and BPI 'interference with normal work' scores were significantly lower for IR hydromorphone. None of the other secondary efficacy analyses showed significant treatment differences in either phase. The overall safety profiles of the 2 treatments were generally similar and most AEs were typical of opioid analgesic therapy.\n\nBODY.COMPETING INTERESTS:\nMH was a paid advisor for Janssen-Cilag.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nMH was an investigator in this study and was involved in revising this manuscript for important intellectual content. JT contributed to the analysis of the study and reviewed the manuscript.\n\nBODY.PRE-PUBLICATION HISTORY:\nThe pre-publication history for this paper can be accessed here:\n\n**Question:** Compared to morphine what was the result of hydromorphone on mean ± SD investigator global assessment scores?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
1093
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A randomized, double-blind comparison of OROS\n\n ABSTRACT.BACKGROUND:\nLong-acting opioid formulations are advocated for maintaining pain control in chronic cancer pain. OROS® hydromorphone is a sustained-release formulation of hydromorphone that requires dosing once daily to maintain therapeutic concentrations. The objective of this study was to demonstrate the clinical equivalence of immediate-release and sustained-release formulations of hydromorphone and morphine for chronic cancer pain.\n\nABSTRACT.METHODS:\n200 patients with cancer pain (requiring ≤ 540 mg/d of oral morphine) participated in this double-blind, parallel-group trial. Patients were randomized to receive hydromorphone or morphine (immediate-release for 2–9 days, sustained-release for 10–15 days). Efficacy was assessed with the Brief Pain Inventory (BPI), investigator and patient global evaluations, Eastern Cooperative Oncology Group performance status, and the Mini-Mental State Examination. The primary endpoint was the 'worst pain in the past 24 hours' item of the BPI, in both the immediate-release and sustained-release study phases, with treatments deemed equivalent if the 95% confidence intervals (CI) of the between-group differences at endpoint were between -1.5 and 1.5. No equivalence limits were defined for secondary endpoints.\n\nABSTRACT.RESULTS:\nLeast-squares mean differences (95% CI) between groups were 0.2 (-0.4, 0.9) in the immediate-release phase and -0.8 (-1.6, -0.01) in the sustained-release phase (intent-to-treat population), indicating that the immediate-release formulations met the pre-specified equivalence criteria, but that the lower limit of the 95% CI (-1.6) was outside the boundary (-1.5) for the sustained-release formulations. BPI 'pain now PM' was significantly lower with OROS® hydromorphone compared with controlled-release morphine (least-squares mean difference [95% CI], -0.77 [-1.49, -0.05]; p = 0.0372). Scores for other secondary efficacy variables were similar between the two sustained-release treatments. At endpoint, > 70% of investigators and patients rated both treatments as good to excellent. The safety profiles of hydromorphone and morphine were similar and typical of opioid analgesics.\n\nABSTRACT.CONCLUSION:\nEquivalence was demonstrated for immediate-release formulations of hydromorphone and morphine, but not for the sustained-release formulations of OROS® hydromorphone and controlled-release morphine. The direction of the mean difference between the treatments (-0.8) and the out-of-range lower limit of the 95% CI (-1.6) were in favor of OROS® hydromorphone.\n\nABSTRACT.TRIAL REGISTRATION:\nClinicalTrials.gov: NCT0041054\n\nBODY.BACKGROUND:\nOpioid analgesics are highly effective for the treatment of pain, enabling 85% to 95% of patients to gain functional control of their lives [1,2]. 
Opioid therapy is typically initiated after the failure of maximum doses of non-opioid analgesics [3] and is titrated to attain the best balance between pain relief and side effects for each patient. The European Association for Palliative Care [4] and the American Pain Society [5] advocate the use of long-acting oral agents for maintaining analgesia once individual dose requirements have been established. For these reasons, long-acting opioids have become the mainstay of chronic cancer pain therapy. OROS® hydromorphone is a unique long-acting opioid formulation that utilizes Push-PullTM active osmotic technology developed by ALZA Corporation (Mountain View, CA, USA). The Push-PullTM system maintains consistent hydromorphone plasma concentrations throughout the 24-hour dosing interval, providing long-lasting analgesia [6-8]. Release of the drug from the system is actively controlled by the dosage form itself, and is not significantly influenced by environmental factors such as the surrounding pH or gastric motility [9,10]. There is a minimal effect of food on the rate and extent of absorption of hydromorphone from OROS® hydromorphone; in one study the mean geometric ratios of fed and fasted subjects for peak plasma concentration (Cmax) and area under the concentration-time curve (AUC) were within 20%; the median time to peak plasma concentration (Tmax) was lower under fed conditions (12 versus 16 hours), but mean plasma concentration profiles generally overlapped, especially up to 6 hours after dosing [11]. The pharmacokinetics of OROS® hydromorphone are also minimally affected by alcohol; one study found that plasma hydromorphone concentrations were slightly higher after alcohol (240 mL solutions of 4%, 20%, and 40% alcohol and orange juice) compared with no alcohol, but there was no clear alcohol dose-response relationship and no dose dumping of hydromorphone occurred [12]. The primary objective of the current study was to demonstrate the clinical equivalence of hydromorphone and morphine (immediate-release [IR] and sustained-release [SR] formulations) using the 'worst pain in the past 24 hours' item of the Brief Pain Inventory (BPI). Morphine was selected as the active comparator since it is the gold standard for pain control. Controlled-release (CR) morphine is available in twice-daily and once-daily formulations and is widely used to alleviate cancer pain; in this study the twice-daily formulation (dosing every 12 hours) was used.\n\nBODY.METHODS.PATIENTS:\nThis study was a multicenter, phase III, randomized, double-blind (double-dummy), active-controlled, parallel-group, equivalence trial. It was conducted at 37 centers in Belgium, Canada, France, Germany, The Netherlands, Spain, Sweden, and the United Kingdom. The study enrolled inpatients, outpatients, and day-patients ≥ 18 years of age who had moderate to severe chronic cancer pain requiring 60 to 540 mg of oral morphine (or equivalent) every 24 hours. The criteria used for patient selection are listed in Table 1. Concomitant chemotherapy or radiotherapy was permitted. All patients who entered the trial were informed of the nature of the study, and provided written informed consent for participation. The study was conducted in accordance with the principles of the Declaration of Helsinki. 
Table 1 Criteria for patient selection Inclusion criteria Exclusion criteria • Age ≥ 18 years • Pain not considered potentially responsive to opioids • Presence of chronic cancer pain: • Pain present only upon movement  ◦ currently receiving strong oral or transdermal opioid analgesics (60–540 mg oral morphine or equivalent every 24 hours) • Need for other opioid analgesics (except study medication and breakthrough pain medication) after randomization  ◦ appropriate candidate for strong oral or transdermal opioid analgesics (anticipated requirement, 60–540 mg oral morphine or equivalent every 24 hours) • Current or recent (within 6 months) history of drug and/or alcohol abuse • Pain suitable for treatment with a once-daily formulation • Women of childbearing potential who are pregnant or lactating, seeking pregnancy, or failing to take adequate contraceptive precautions • Intolerance of, or hypersensitivity to, hydromorphone or other opioids • Presence of GI disease of sufficient severity to likely interfere with oral analgesia (e.g., dysphagia, vomiting, no bowel movement or bowel obstruction due to impaction within 5 days of study entry, severe gut narrowing that may affect analgesic absorption or transit) • Use of monoamine oxidase inhibitors within 2 weeks prior to study entry • Investigational drug use within 4 weeks of study entry • Presence of conditions for which risks of opioid use outweigh potential benefits (e.g., raised intracranial pressure, hypotension, hypothyroidism, asthma, reduced respiratory reserve, prostatic hypertrophy, hepatic impairment, renal impairment, elderly and debilitated, convulsive disorders, Addison's disease) Abbreviation: GI, gastrointestinal \n\nBODY.METHODS.DRUGS AND DOSAGES:\nPatients were randomized 1:1, with a central computer-generated randomization list, to receive hydromorphone or morphine for up to 24 days (Figure 1). This study consisted of 2 phases: an initial IR phase and a subsequent SR phase. In the IR phase, patients received IR formulations of either hydromorphone [Dilaudid®, Abbott Laboratories] or morphine (morphine sulfate IR [Sevredol®, Napp Laboratories]) every 4 hours (6 times daily) for 2 to 9 days. Patients underwent conversion from previous therapy to 1 of 6 possible initial dose levels, based on individual patient characteristics and generally accepted morphine equivalent conversion factors. Doses were selected according to the available tablet strengths and a working conversion ratio of 1:5 (1 hydromorphone:5 morphine equivalents), in the ranges hydromorphone IR 12–108 mg/day and morphine IR 60–540 mg/day. The dose was titrated to the next higher dose level if the patient had more than 3 breakthrough pain episodes requiring breakthrough pain medication within the previous 24 hours. Daily doses were titrated up to the next higher dose level, with no dose levels skipped, at most once a day. Dose titration was continued until dose-stable pain control was achieved. Patients who experienced 2 consecutive days with no more than 3 breakthrough pain episodes requiring rescue medication per day were considered to have achieved dose-stable pain control and could begin the SR phase of the study. Patients who did not achieve dose stable pain control by day 9 were withdrawn from the study, and end-of-study evaluations were carried out. Figure 1Study design. BID, twice daily; CR, controlled-release; IR, immediate-release; QD, once-daily; q4h, every 4 hours. The duration of the SR phase was 10 to 15 days. 
In this phase, patients continued to receive the same study drug but in an SR formulation: OROS® hydromorphone once-daily or CR morphine (morphine sulfate SR [MST Continus®, Napp Laboratories]) twice-daily. They were started on the same dose level number on which they had achieved dose-stable pain control in the preceding IR phase, which was then adjusted as required, at most every 2 days, in single steps so that no dose level was skipped. Doses were selected according to the available tablet strengths and a working conversion ratio of approximately 1:5 (hydromorphone:morphine), in the ranges OROS® hydromorphone 16–96 mg/day and CR morphine 60–520 mg/day. The SR phase was completed after a minimum of 10 days (maximum of 15 days) on OROS® hydromorphone or CR morphine if pain was controlled and the dose was stable for at least the previous 2 days. In the IR phase, the dosing regimen was the same for both drugs: patients received a dose of study medication 6 times a day, every 4 hours, at 0600, 1000, 1400, 1800, 2200, and 0200. In this phase, all treatments were over-encapsulated resulting in identical capsules. In the SR phase, dosing was done twice each day at 1000 and 2200. To maintain the blind in this phase, matching placebo tablets and capsules were used: patients in the OROS® hydromorphone group took OROS® hydromorphone and placebo CR morphine in the morning and placebo CR morphine in the evening; patients in the CR morphine group took CR morphine and placebo OROS® hydromorphone in the morning and CR morphine in the evening. In both study phases, patients could also receive breakthrough pain medication as needed, either hydromorphone or morphine, supplied as the IR formulation. A single dose of breakthrough pain medication contained approximately one-sixth of the patient's daily dose.\n\nBODY.METHODS.ASSESSMENTS:\nBaseline efficacy assessments were done before dosing on day 1 of the IR phase and included: the BPI (12-item instrument to assess pain intensity [from 0 = no pain to 10 = pain as bad as you can imagine], pain relief [0% to 100%], and pain interference with various aspects of the patient's life [from 0 = no interference to 10 = complete interference]) [13], the Mini-Mental State Examination (MMSE; 0–30, higher scores indicate better cognitive performance) [14], and the Eastern Cooperative Oncology Group (ECOG) performance status score (0–4, higher scores indicate poorer performance) [15]. Primary equivalence of efficacy was assessed using the BPI assessment of 'worst pain in the past 24 hours' on an 11-point scale ranging from 0 (no pain) to 10 (worst pain imaginable). This was selected as the primary efficacy measure because evidence suggests that severely intense pain is what interferes most significantly with activities of daily life [13]. Secondary endpoints included other assessments of pain from the BPI (completed by the investigator in consultation with the patient); investigator and patient global assessments of treatment effectiveness (1 = poor, 2 = fair, 3 = good, 4 = very good, 5 = excellent); MMSE; ECOG; time to dose stabilization; and usage and dose of breakthrough pain medication. Patients rated BPI 'worst pain in the past 24 hours' once daily (just before taking the mid-morning dose of study medication) and 'pain now' twice daily (AM and PM; just before taking the mid-morning and evening doses) throughout the study and recorded responses in daily diaries. 
The other BPI assessments, plus investigator and patient global evaluations of treatment effectiveness, MMSE, and ECOG, were evaluated at the end of both phases. Adverse events (AEs) and concomitant medications were recorded throughout the study. Physical examinations were carried out at baseline and at the end of each study phase. Blood and urine samples were collected for laboratory analysis (hematology, serum biochemistry, and urinalysis) at baseline and at the end of the SR phase. Patients were contacted daily during the IR phase and every 3 to 4 days during the SR phase (preferably by home or clinic visit, or by telephone if appropriate). Contact was also made 3 days after the last dose of study drug to determine the patient's AE status.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nAssuming variability (standard deviation, SD) in the primary efficacy variable of 2.0 [16] and a 30% dropout/non-evaluable rate, a sample size of approximately 70 patients per treatment group was required (140 total). This would provide 90% power to detect equivalence with an equivalence limit of 1.5. A planned blinded re-estimation of the optimal sample size was done by an appointed independent person after 55 patients had completed the study, the variability for the primary efficacy measure was estimated using their data, and the sample size was increased to 170 patients. Subsequently, an unplanned, blinded re-estimation of the variability of the primary variable for both phases was done after 120 patients had completed the study, and as a result, the sample size was increased to 200 patients. All statistical analyses were pre-specified. All efficacy variables were analyzed using the intent-to-treat (ITT) population, which included all patients who took at least 1 dose of study medication and had at least 1 assessment from each study phase. A per-protocol (PP) analysis, excluding data from patients with major protocol deviations, was also done. All patients taking at least 1 dose of study medication were included in the analysis of safety. All efficacy analyses included a pre-specified adjustment for baseline value and country to control for potentially confounding variables. The primary efficacy analysis used the mean of the last 2 post-baseline recorded values (or the last value if only 1 was available) for BPI 'worst pain in the past 24 hours'. The primary measure was calculated separately for the IR and SR phases (there were therefore two primary outcomes). An analysis of covariance (ANCOVA) was used to calculate the 95% two-sided confidence interval (CI) for the difference between the adjusted means for the 2 treatments at endpoint of each phase. An equivalence test was used to compare the mean scores of the primary efficacy measure at the end of each phase [17-19]. The 2 treatments, within a phase, were considered equivalent if the 95% two-sided CI for the treatment difference was within a -1.5 and 1.5 point difference on the BPI scale. This particular CI range was selected prospectively because it was considered clinically relevant through consultation with physicians who specialize in analgesia. For secondary endpoints (including BPI 'pain at its least in past 24 hours', 'pain on average pain', 'pain now' (AM and PM), 'pain relief in past 24 hours', 'pain interference in past 24 hours', MMSE, and ECOG) an ANCOVA model was used to calculate 95% two-sided CIs for the difference between the adjusted treatment means at the end of the IR and SR phases. 
The adjusted means were estimated by the same approach used for the primary endpoint, with the baseline score designated as a covariate; however, no equivalence limits were specified for the secondary endpoints. Time to dose stabilization during both study phases was analyzed by the log-rank test. Summary statistics with 95% CIs were produced for investigator and patient global evaluations of treatment effectiveness and to assess the number of medication doses taken for breakthrough pain in the last 2 days of each phase. P values (not adjusted for multiple comparisons) are reported for secondary endpoints. Statistical significance was declared at the α = 0.05 level.\n\nBODY.RESULTS.STUDY POPULATION:\n202 patients were enrolled into the study (Belgium, n = 29; Canada, n = 10; France, n = 7; Germany, n = 15; Netherlands, n = 29; Spain, n = 41; Sweden, n = 11; UK, n = 60). 200 patients were randomized and took study medication. 163 patients (81.5%) completed the IR phase and 133 (66.5%) completed the SR phase. Figure 2 is a flow diagram of patient entry and completion and the reasons for premature discontinuation. Withdrawal from the study owing to lack of efficacy was more common in patients randomized to hydromorphone (n = 11) compared with morphine (n = 4). The mean age of participants overall was 59.8 years. 98.5% of patients were Caucasian and 51% were female. The most common types of cancer were breast (28%), lung (20%), and gastrointestinal (16%). For 133 (67%) patients, the predominant pain location was bone or soft tissue; 34 (17%) others had mixed pain, and 33 (17%) had visceral pain. No patient had neuropathic pain. The most common pain medications used at study entry were morphine (n = 124 [62%]) and tramadol (n = 38 [19%]). The baseline demographics and clinical characteristics were similar for both treatment groups (Table 2). Table 2 Demographic and baseline clinical characteristics Characteristic Hydromorphone (N = 99) Morphine (N = 101) Mean (SD) age, years 60.7 (12.50) 59.0 (11.36) Gender, % male 46.5 51.5 Race, % Caucasian/Black/Asian/Other 100/0/0/0 97/0/2/1 Mean (SD) height, cm 166.7 (9.33) 167.3 (10.47) Mean (SD) weight, kg 66.3 (15.33) 67.4 (13.33) Mean (SD) BMI, kg/m 2 23.8 (5.14) 24.2 (5.06) Cancer type, n (%)  Breast 23 (23.2) 33 (32.7)  Lung 20 (20.2) 19 (18.8)  Genitourinary 18 (18.2) 12 (11.9)  Gastrointestinal 17 (17.2) 15 (14.9)  Oral cavity 3 (3.0) 3 (3.0)  Lymphoma 3 (3.0) 0  Leukemia 1 (1.0) 2 (2.0)  Bone 1 (1.0) 1 (1.0)  Other 13 (13.1) 16 (15.8) Predominant pain type, n (%)  Bone or soft tissue 61 (61.6) 72 (71.3)  Mixed 19 (19.2) 15 (14.9)  Visceral 19 (19.2) 14 (13.9) Mean (SD) MMSE score* 28.5 (2.3) 28.8 (2.0) Mean (SD) ECOG score 1.6 (0.8) 1.6 (0.9) Abbreviations: BMI, body mass index; ECOG, Eastern Cooperative Oncology Group; MMSE, Mini-Mental State Examination; SD, standard deviation *Hydromorphone, n = 75; morphine, n = 73 Figure 2Patient disposition. IR, immediate-release; SR, sustained-release.\n\nBODY.RESULTS.EFFICACY.BPI 'WORST PAIN IN THE PAST 24 HOURS' (PRIMARY ENDPOINT):\nIn each study phase, mean values for BPI 'worst pain in the past 24 hours' decreased with both hydromorphone and morphine treatments (Table 3). The least-squares (LS) mean (95% CI) differences between the hydromorphone and morphine groups were 0.2 points (-0.4, 0.9) in the IR phase and -0.8 points (-1.6, -0.01) in the SR phase (ITT population). 
In the IR phase, the 95% CI was within the -1.5 to 1.5 equivalency range (-0.4 to 0.9), indicating that IR hydromorphone and IR morphine were equivalent in this primary efficacy outcome measure. In the SR phase, the upper limit of the 95% CI was within the 1.5 boundary (-0.01), but the lower limit was less than -1.5 (-1.6), indicating that the SR formulations were not equivalent but that the negative direction of the mean difference was in favor of OROS® hydromorphone (Figure 3). Results of the analysis of the primary efficacy variable were similar when using data from the PP population. Table 3 BPI scores at baseline, end of IR phase, and end of SR phase (ITT population) Variable Baseline* Mean (SD) End of IR phase † LS mean (SE) End of SR phase ‡ LS mean (SE) Worst pain  Hydromorphone 6.3 (2.7) 5.0 (0.3) 3.5 (0.3)  Morphine 6.2 (2.5) 4.8 (0.3) 4.3 (0.3) Least pain  Hydromorphone 2.7 (2.5) 1.8 (0.2) 1.8 (0.2)  Morphine 2.9 (2.7) 2.2 (0.2) 1.8 (0.2) Average pain  Hydromorphone 5.0 (2.1) 3.6 (0.2) 3.4 (0.3)  Morphine 5.1 (2.1) 3.6 (0.2) 3.2 (0.3) Pain now AM  Hydromorphone 4.0 (2.7) 3.3 (0.3) 2.4 (0.3)  Morphine 4.1 (2.4) 3.4 (0.3) 2.8 (0.3) Pain now PM  Hydromorphone 4.8 (3.0) 3.6 (0.3) 2.6 (0.3) §  Morphine 4.2 (2.5) 3.7 (0.3) 3.4 (0.3) Abbreviations: BPI, Brief Pain Inventory; CR, controlled-release; IR, immediate-release; LS, least-squares; SD, standard deviation; SE, standard error; SR, sustained-release *Hydromorphone, n = 99; morphine, n = 101 †IR hydromorphone, n = 99; IR morphine, n = 101 ‡OROS ® hydromorphone, n = 77; CR morphine, n = 86 §p = 0.0372 versus CR morphine Figure 3Least-squares mean differences and 95% confidence intervals between the OROS® hydromorphone and controlled-release (CR) morphine groups at end of the sustained-release (SR) phase.\n\nBODY.RESULTS.EFFICACY.SECONDARY ENDPOINTS:\nAt the end of the IR phase, mean scores for the secondary efficacy variables (other assessments of pain from the BPI [Table 3 and Figure 4], and MMSE and ECOG scores [data not shown]) were similar for IR hydromorphone and IR morphine, except for BPI 'pain interference with normal work', which favored IR hydromorphone (Figure 4; p = 0.0386). Figure 4Mean Brief Pain Inventory (BPI) interference scores at baseline, end of immediate-release (IR) phase, and end of sustained-release (SR) phase. Values at baseline represent means; values at other time points represent least-squares means. At baseline: hydromorphone, n = 99; morphine, n = 101. In IR phase: IR hydromorphone, n = 99; IR morphine, n = 101. In SR phase: OROS® hydromorphone, n = 77; controlled-release (CR) morphine, n = 86. At the end of the SR phase, scores were similar between treatment groups for BPI items 'least pain', 'average pain', and 'pain now AM'; however, 'pain now PM' was significantly lower in the OROS® hydromorphone group compared with the CR morphine group (LS mean [SE] scores, 2.6 [0.3] versus 3.4 [0.3], respectively; p = 0.0372) (Table 3). LS mean differences (95% CI) between treatment groups were 0.06 (-0.51, 0.64), 0.15 (-0.49, 0.79), -0.38 (-1.03, 0.27), and -0.77 (-1.49, -0.05) for least, average, morning, and evening pain, respectively (Figure 3). Figures 5 and 6 show the consistency of response to both OROS® hydromorphone and morphine throughout the SR phase, during both the morning and evening assessments of BPI 'pain now' scores. Figure 5Mean (standard error, SE) pain now scores in the morning during the sustained-release (SR) phase. 
Pain was rated using the 11-point Brief Pain Inventory (BPI), ranging from 0 (no pain) to 10 (worst pain imaginable). CR, controlled-release. Figure 6Mean (standard error, SE) pain now scores in the evening during the sustained-release (SR) phase. Pain was rated using the 11-point Brief Pain Inventory (BPI), ranging from 0 (no pain) to 10 (worst pain imaginable). CR, controlled-release. LS mean BPI 'pain relief' at endpoint was 69.7% for the OROS® hydromorphone group and 70.0% for the CR morphine group (95% CI for treatment difference, -8.11, 7.54). BPI 'pain interference' scores were also similar between treatment groups at endpoint, (Figure 4), as were LS mean ECOG performance status scores (1.8 versus 1.7 in the OROS® hydromorphone and CR morphine groups, respectively; 95% CI for treatment difference, -0.13, 0.26), and LS mean MMSE scores (28.9 versus 29.2; 95% CI for treatment difference, -1.19, 0.51). The majority of both investigators and patients (> 70%) rated the treatments as good, very good, or excellent at the end of each study phase (Figure 7). Mean ± SD patient global assessment scores at the end of the SR phase were 3.2 ± 1.14 for the OROS® hydromorphone group and 3.3 ± 0.98 for the CR morphine group (p = 0.6696); mean ± SD investigator global assessment scores were 3.2 ± 1.07 and 3.3 ± 0.91, respectively (p = 0.4760). Figure 7Patient (A) and investigator (B) global evaluations of treatment effectiveness at the end of the sustained-release (SR) phase. CR, controlled-release. In both treatment phases, patients in the hydromorphone group took longer than those in the morphine group to achieve stable doses (p < 0.001). The number of patients requiring dose-level changes during the SR phase was similar for both groups (19 of 77 [24.7%] OROS® hydromorphone recipients and 23 of 86 [26.7%] CR morphine recipients). Comparison of dosage levels at the end of the SR phase generally showed that similar proportions of patients received OROS® hydromorphone or CR morphine at each of the 5 (approximate 1:5 hydromorphone to morphine) lower dosage levels, although more patients in the OROS® hydromorphone group (10% versus 2% in the CR morphine group) received the highest dosage level evaluated in this study (Table 4). Table 4 Dose levels at end of SR phase OROS ® hydromorphone (N = 77) CR morphine (N = 86) Dose level Dose (mg) every 24 hours Number of subjects (%) Dose (mg) every 12 hours Number of subjects (%) 1 16 22 (28.6) 30 25 (29.1) 2 24 14 (18.2) 60 23 (26.7) 3 32 18 (23.4) 90 15 (17.4) 4 48 10 (13.0) 120 13 (15.1) 5 72 5 (6.5) 175 8 (9.3) 6 96 8 (10.4) 260 2 (2.3) Abbreviations: CR, controlled-release; SR, sustained-release In the IR phase, 61.6% of patients taking IR hydromorphone and 52.5% of those receiving IR morphine required medication for breakthrough pain on the last 2 days of the phase. In the SR phase, the corresponding percentages in the OROS® hydromorphone and CR morphine groups were 48.1% and 48.8%, respectively. The mean (SD) numbers of breakthrough pain medication doses taken on the last 2 days were as follows: IR phase, 1.8 (2.2) and 1.3 (1.8) for the IR hydromorphone and IR morphine groups, respectively; SR phase, 1.6 (2.2) and 1.4 (1.9) for the OROS® hydromorphone and CR morphine groups, respectively. The mean total doses of breakthrough pain medication as a percentage of the mean daily consumption were 39.4% and 27.1% for hydromorphone and morphine in the IR phase, and 13.8% and 10.3% for OROS® hydromorphone and CR morphine in the SR phase. 
There were no statistically significant treatment differences in the numbers of patients who took breakthrough pain medication or in the amount of medication taken in either phase.\n\nBODY.RESULTS.SAFETY:\nThe overall safety profiles of hydromorphone and morphine were comparable, with a similar number of patients in each group reporting AEs. Overall, irrespective of study phase, 81 of 99 (81.8%) hydromorphone-treated patients and 90 of 101 (89.1%) morphine-treated patients reported at least 1 AE. The most commonly reported AEs during the IR and SR phases of the study are listed in Table 5. The types of AEs were typical of those generally associated with opioid use. The incidence of constipation was higher with hydromorphone than morphine in both phases of the study (23.2% versus 10.9% in the IR phase and 39.0% versus 22.1% in the SR phase). In the SR phase, the incidences of vomiting, nausea, and somnolence were higher with CR morphine than with OROS® hydromorphone (22.1% versus 9.1%, 29.1% versus 19.5%, and 14.0% versus 10.4%, respectively). Most AEs were mild or moderate in severity and no clear treatment difference was seen in the relative severity of the most commonly reported AEs. Approximately 30% of patients with AEs reported events that were considered unlikely to be related or not related to study therapy. Table 5 Adverse events reported by at least 5% of patients in any group during the IR or SR phase IR phase SR phase Adverse event Hydromorphone (N = 99) Morphine (N = 101) OROS ® hydromorphone (N = 77) CR morphine (N = 86) Constipation 23 (23.2%) 11 (10.9%) 30 (39.0%) 19 (22.1%) Nausea 18 (18.2%) 23 (22.8%) 15 (19.5%) 25 (29.1%) Vomiting 16 (16.2%) 19 (18.8%) 7 (9.1%) 19 (22.1%) Somnolence 11 (11.1%) 11 (10.9%) 8 (10.4%) 12 (14.0%) Dizziness 8 (8.1%) 6 (5.9%) 4 (5.2%) 8 (9.3%) Headache 8 (8.1%) 6 (5.9%) 3 (3.9%) 2 (2.3%) Diarrhea 8 (8.1%) 1 (1.0%) 7 (9.1%) 2 (2.3%) Pruritus 4 (4.0%) 5 (5.0%) 3 (3.9%) 5 (5.8%) Asthenia 5 (5.1%) 1 (1.0%) 6 (7.8%) 4 (4.7%) Fatigue 3 (3.0%) 3 (3.0%) 4 (5.2%) 6 (7.0%) Confusion 2 (2.0%) 2 (2.0%) 7 (9.1%) 2 (2.3%) Anemia 2 (2.0%) 1 (1.0%) 3 (3.9%) 6 (7.0%) Anorexia 1 (1.0%) 4 (4.0%) 2 (2.6%) 5 (5.8%) Insomnia 1 (1.0%) 2 (2.0%) 5 (6.5%) 4 (4.7%) Peripheral edema 0 3 (3.0%) 1 (1.3%) 8 (9.3%) Pyrexia 2 (2.0%) 1 (1.0%) 4 (5.2%) 2 (2.3%) Anxiety 1 (1.0%) 1 (1.0%) 5 (6.5%) 1 (1.2%) Abbreviations: CR, controlled-release; IR, immediate-release; SR, sustained-release Serious AEs were reported for 24 patients during the double-blind period (by 5 patients in each treatment group in the IR phase, and by 7 patients in the OROS® hydromorphone group and 9 in the CR morphine group in the SR phase [2 patients had serious AEs in both phases]). Many of the serious AEs were associated with the underlying disease, although approximately one third were considered definitely or probably related to study therapy. Three patients in the morphine group died during the study (2 during the IR phase, 1 during the SR phase). None of the deaths was considered related to the study medication. Twenty-six patients withdrew prematurely from the study because of AEs, 16 during the IR phase (9 hydromorphone, 7 morphine) and 10 during the SR phase (6 OROS® hydromorphone, 4 CR morphine). There was no statistically significant difference between treatment groups in the time to withdrawal, irrespective of phase (IR phase, p = 0.6537; SR phase, p = 0.2827). 
The use of concomitant medications, including the use of non-opioid analgesics, was similar for both treatment groups in both the IR and SR phases.\n\nBODY.DISCUSSION:\nThis was a short-term, randomized, double-blind comparative study testing the clinical equivalence of IR and SR formulations of hydromorphone and morphine in patients with chronic cancer pain. In each study phase (IR and SR), mean values for BPI 'worst pain in the past 24 hours' (the primary efficacy endpoint) decreased with both hydromorphone and morphine. In the IR phase, the LS mean difference between the treatment groups was 0.2 points, and the 95% CI (-0.4, 0.9) was within the pre-specified equivalency range of -1.5 to 1.5. In the SR phase, the LS mean difference was -0.8 points in favor of OROS® hydromorphone. The upper limit of the 95% CI (-0.01) was within the 1.5 boundary, but the lower limit (-1.6) was less than the -1.5 limit. Therefore, according to the pre-specified criteria, equivalence was not demonstrated; the direction of the difference was in favor of OROS® hydromorphone. The BPI item 'worst pain in the past 24 hours' was selected as the primary endpoint of the trial based on previous studies showing it to be the most sensitive BPI measure in the clinical trial setting [13,20-22]. The results of the present study further suggest that OROS® hydromorphone provides consistent pain relief over 24 hours, and that pain levels in the evening were significantly lower after OROS® hydromorphone compared with CR morphine treatment. This finding reflects the 'pain now PM' measure being scored at the end of morphine dosing (i.e. trough levels) but at the midpoint of hydromorphone dosing. The comparable results between groups for the other secondary efficacy variables (including assessments of pain interference with physical activity and social function, performance status, and cognition) support similarity in the treatment efficacy. Withdrawal from the study owing to lack of efficacy was more common in patients randomized to hydromorphone (n = 11) compared with morphine (n = 4). This is a potentially negative finding for hydromorphone; however, the numbers are too low to draw any definite conclusions. Results of the dose analysis were generally consistent with those of previous reports, which indicate that most cancer pain can be controlled with ≤ 240 mg of oral morphine daily [23]. In the present study, 83% of the OROS® hydromorphone group and 88% of the CR morphine group received the equivalent of morphine 240 mg/day or less. The frequency of breakthrough medication use remained stable throughout both phases in the morphine group (52.5% in the IR phase, 48.8% in the SR phase), but decreased from the IR phase to the SR phase in the OROS® hydromorphone group (from 61.6% to 48.1%). At the end of the SR phase, there was no significant difference between the groups in use of breakthrough medication. The occurrence of 3 deaths and 23 serious AEs during the study was not unexpected, given the severity of the patients' conditions and the progressive nature of the disease. In fact, many of the serious AEs were associated with the underlying disease. The participants in this study are representative of the overall population afflicted with chronic cancer pain. The mean baseline BPI score for 'worst pain in the past 24 hours' was 6.3 on an 11-point scale, a score indicative of moderate to severe pain [16]. This is comparable to baseline values in other studies of cancer pain, for which the mean score was 6.99 [24]. 
Baseline cognition was similar to that of healthy controls (mean MMSE score, 28.3) [25], and the lack of change observed in both treatment groups probably reflects the fact that the MMSE was designed to measure cognition in Alzheimer's disease [14] and was not intended to measure cognition in non-demented subjects. The major positive aspect of this study was the robust design: it was multicenter, randomized, double-blind, and active controlled. It also used a widely utilized and well validated primary outcome measure, the BPI. On the negative side, there was a relatively poor completion rate, with only 60 out of 99 patients (60.6%) randomized to hydromorphone and 73 out of 101 patients (72.3%) randomized to morphine completing both treatment phases. However, this completion rate was reasonably good when taking into consideration that this study was carried out in patients with cancer which was advanced enough to require strong opioid analgesics. In addition, a dropout rate of at least 30% was anticipated. Furthermore, the duration of the study was relatively short (up to 24 days), so it was not possible to make evaluations of the sustained efficacy of the formulations. Two previous randomized, double-blind, crossover studies have compared CR formulations of hydromorphone and morphine in patients with cancer pain [26,27]. Moriarty et al (N = 100) found no significant differences between treatment groups for any outcome measure, and AEs were mild and infrequent [27]. In the other study (N = 87), patients treated with hydromorphone had significantly higher pain scores and required more doses of rescue analgesia than patients receiving morphine [26]. In addition, more patients in the hydromorphone group withdrew owing to AEs or inadequate analgesia (n = 16 versus n = 2 on morphine). Although it is difficult to directly compare the results of these studies with those of the current study, the observed differences may reflect, at least in part, the more consistent delivery of hydromorphone with the OROS® formulation relative to the dosing used in those studies. The consistent release of hydromorphone over 24 hours has been demonstrated in previous studies, in which steady-state plasma concentrations were achieved by 48 hours and sustained throughout the dosing interval [8,28]. Practicing clinicians are acutely aware of the need for multiple pain management alternatives, because it is clear that the response to a particular opioid agent varies among individuals. Although our understanding of the causes of individual differences remains incomplete, receptor and receptor-binding studies suggest that they may result not only from differences in the class of opioid receptors to which each agent binds (morphine binds to mu [primary] and kappa [lesser extent] [29]; hydromorphone binds to mu [primary] and delta [lesser extent] [30]), but also from inter-individual differences in opioid receptors (several receptor subtypes have been identified [mu 1, mu 2, mu 3, delta 1, delta 2, kappa 1, kappa 2, kappa 3]) and the location of these variants within the nervous system [31-33]. Therefore, clinicians may need to utilize different opioids to achieve maximum efficacy for each patient. In addition, some studies have supported the use of opioid rotation, switching from one opioid to another when treatment-limiting toxicity results in reduced responsiveness [34-37]. 
Based on numerous observations that individual response varies greatly from opioid to opioid, changing to a different drug may provide an improved balance between analgesia and adverse effects; as a result, there is a need for additional long-acting opioids on the market. Long-acting opioid formulations such OROS® hydromorphone have the potential to improve the overall management of chronic pain. Such formulations provide more consistent opioid plasma concentrations, avoiding the peaks and troughs associated with short-acting agents, and result in well-tolerated, around-the-clock pain control with fewer daily doses [38,39].\n\nBODY.CONCLUSION:\nThis study demonstrates that while IR hydromorphone is equivalent to IR morphine for relieving chronic cancer pain, equivalence was not demonstrated for OROS® hydromorphone and CR morphine in the SR phase of the study. The direction of the mean difference between the treatments, and the out-of-range lower limit of the 95% CI was in favor of OROS® hydromorphone on the primary endpoint 'worst pain'. 'Pain now PM' scores were also significantly lower for those taking OROS® hydromorphone at the end of the SR phase (although the timing of this measure coincided with a trough in the dosing levels for the twice-daily morphine compared with the once-daily OROS® hydromorphone), and BPI 'interference with normal work' scores were significantly lower for IR hydromorphone. None of the other secondary efficacy analyses showed significant treatment differences in either phase. The overall safety profiles of the 2 treatments were generally similar and most AEs were typical of opioid analgesic therapy.\n\nBODY.COMPETING INTERESTS:\nMH was a paid advisor for Janssen-Cilag.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nMH was an investigator in this study and was involved in revising this manuscript for important intellectual content. JT contributed to the analysis of the study and reviewed the manuscript.\n\nBODY.PRE-PUBLICATION HISTORY:\nThe pre-publication history for this paper can be accessed here:\n\n**Question:** Compared to morphine what was the result of hydromorphone on mean scores for the secondary efficacy variables and MMSE and ECOG scores?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
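The statistical-analysis text quoted in the record above defines equivalence as the 95% CI of the treatment difference lying entirely within -1.5 to +1.5 BPI points. The following minimal Python sketch applies that pre-specified decision rule to the least-squares mean differences and confidence intervals reported in the quoted article (0.2 [-0.4, 0.9] for the IR phase; -0.8 [-1.6, -0.01] for the SR phase). It only illustrates the published criterion; it is not a re-analysis of the trial data, and the variable names are my own.

```python
# Illustrative check of the pre-specified equivalence rule described in the
# quoted article: two treatments are "equivalent" if the 95% CI of their
# adjusted mean difference lies entirely within [-1.5, +1.5] BPI points.

EQUIV_LOWER, EQUIV_UPPER = -1.5, 1.5

def is_equivalent(ci_lower: float, ci_upper: float) -> bool:
    """Return True if the whole confidence interval falls inside the margin."""
    return EQUIV_LOWER <= ci_lower and ci_upper <= EQUIV_UPPER

# LS mean differences (hydromorphone minus morphine) and 95% CIs as reported
# for the primary endpoint, BPI 'worst pain in the past 24 hours'.
phases = {
    "IR phase": (0.2, -0.4, 0.9),
    "SR phase": (-0.8, -1.6, -0.01),
}

for phase, (diff, lo, hi) in phases.items():
    verdict = "equivalent" if is_equivalent(lo, hi) else "not equivalent"
    print(f"{phase}: difference {diff:+.2f} (95% CI {lo:.2f} to {hi:.2f}) -> {verdict}")
```

Run as written, this reproduces the article's conclusion: the IR-phase interval falls inside the margin, while the SR-phase lower bound (-1.6) falls outside it.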
1094
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A randomized, double-blind comparison of OROS\n\n ABSTRACT.BACKGROUND:\nLong-acting opioid formulations are advocated for maintaining pain control in chronic cancer pain. OROS® hydromorphone is a sustained-release formulation of hydromorphone that requires dosing once daily to maintain therapeutic concentrations. The objective of this study was to demonstrate the clinical equivalence of immediate-release and sustained-release formulations of hydromorphone and morphine for chronic cancer pain.\n\nABSTRACT.METHODS:\n200 patients with cancer pain (requiring ≤ 540 mg/d of oral morphine) participated in this double-blind, parallel-group trial. Patients were randomized to receive hydromorphone or morphine (immediate-release for 2–9 days, sustained-release for 10–15 days). Efficacy was assessed with the Brief Pain Inventory (BPI), investigator and patient global evaluations, Eastern Cooperative Oncology Group performance status, and the Mini-Mental State Examination. The primary endpoint was the 'worst pain in the past 24 hours' item of the BPI, in both the immediate-release and sustained-release study phases, with treatments deemed equivalent if the 95% confidence intervals (CI) of the between-group differences at endpoint were between -1.5 and 1.5. No equivalence limits were defined for secondary endpoints.\n\nABSTRACT.RESULTS:\nLeast-squares mean differences (95% CI) between groups were 0.2 (-0.4, 0.9) in the immediate-release phase and -0.8 (-1.6, -0.01) in the sustained-release phase (intent-to-treat population), indicating that the immediate-release formulations met the pre-specified equivalence criteria, but that the lower limit of the 95% CI (-1.6) was outside the boundary (-1.5) for the sustained-release formulations. BPI 'pain now PM' was significantly lower with OROS® hydromorphone compared with controlled-release morphine (least-squares mean difference [95% CI], -0.77 [-1.49, -0.05]; p = 0.0372). Scores for other secondary efficacy variables were similar between the two sustained-release treatments. At endpoint, > 70% of investigators and patients rated both treatments as good to excellent. The safety profiles of hydromorphone and morphine were similar and typical of opioid analgesics.\n\nABSTRACT.CONCLUSION:\nEquivalence was demonstrated for immediate-release formulations of hydromorphone and morphine, but not for the sustained-release formulations of OROS® hydromorphone and controlled-release morphine. The direction of the mean difference between the treatments (-0.8) and the out-of-range lower limit of the 95% CI (-1.6) were in favor of OROS® hydromorphone.\n\nABSTRACT.TRIAL REGISTRATION:\nClinicalTrials.gov: NCT0041054\n\nBODY.BACKGROUND:\nOpioid analgesics are highly effective for the treatment of pain, enabling 85% to 95% of patients to gain functional control of their lives [1,2]. 
Opioid therapy is typically initiated after the failure of maximum doses of non-opioid analgesics [3] and is titrated to attain the best balance between pain relief and side effects for each patient. The European Association for Palliative Care [4] and the American Pain Society [5] advocate the use of long-acting oral agents for maintaining analgesia once individual dose requirements have been established. For these reasons, long-acting opioids have become the mainstay of chronic cancer pain therapy. OROS® hydromorphone is a unique long-acting opioid formulation that utilizes Push-PullTM active osmotic technology developed by ALZA Corporation (Mountain View, CA, USA). The Push-PullTM system maintains consistent hydromorphone plasma concentrations throughout the 24-hour dosing interval, providing long-lasting analgesia [6-8]. Release of the drug from the system is actively controlled by the dosage form itself, and is not significantly influenced by environmental factors such as the surrounding pH or gastric motility [9,10]. There is a minimal effect of food on the rate and extent of absorption of hydromorphone from OROS® hydromorphone; in one study the mean geometric ratios of fed and fasted subjects for peak plasma concentration (Cmax) and area under the concentration-time curve (AUC) were within 20%; the median time to peak plasma concentration (Tmax) was lower under fed conditions (12 versus 16 hours), but mean plasma concentration profiles generally overlapped, especially up to 6 hours after dosing [11]. The pharmacokinetics of OROS® hydromorphone are also minimally affected by alcohol; one study found that plasma hydromorphone concentrations were slightly higher after alcohol (240 mL solutions of 4%, 20%, and 40% alcohol and orange juice) compared with no alcohol, but there was no clear alcohol dose-response relationship and no dose dumping of hydromorphone occurred [12]. The primary objective of the current study was to demonstrate the clinical equivalence of hydromorphone and morphine (immediate-release [IR] and sustained-release [SR] formulations) using the 'worst pain in the past 24 hours' item of the Brief Pain Inventory (BPI). Morphine was selected as the active comparator since it is the gold standard for pain control. Controlled-release (CR) morphine is available in twice-daily and once-daily formulations and is widely used to alleviate cancer pain; in this study the twice-daily formulation (dosing every 12 hours) was used.\n\nBODY.METHODS.PATIENTS:\nThis study was a multicenter, phase III, randomized, double-blind (double-dummy), active-controlled, parallel-group, equivalence trial. It was conducted at 37 centers in Belgium, Canada, France, Germany, The Netherlands, Spain, Sweden, and the United Kingdom. The study enrolled inpatients, outpatients, and day-patients ≥ 18 years of age who had moderate to severe chronic cancer pain requiring 60 to 540 mg of oral morphine (or equivalent) every 24 hours. The criteria used for patient selection are listed in Table 1. Concomitant chemotherapy or radiotherapy was permitted. All patients who entered the trial were informed of the nature of the study, and provided written informed consent for participation. The study was conducted in accordance with the principles of the Declaration of Helsinki. 
Table 1 Criteria for patient selection Inclusion criteria Exclusion criteria • Age ≥ 18 years • Pain not considered potentially responsive to opioids • Presence of chronic cancer pain: • Pain present only upon movement  ◦ currently receiving strong oral or transdermal opioid analgesics (60–540 mg oral morphine or equivalent every 24 hours) • Need for other opioid analgesics (except study medication and breakthrough pain medication) after randomization  ◦ appropriate candidate for strong oral or transdermal opioid analgesics (anticipated requirement, 60–540 mg oral morphine or equivalent every 24 hours) • Current or recent (within 6 months) history of drug and/or alcohol abuse • Pain suitable for treatment with a once-daily formulation • Women of childbearing potential who are pregnant or lactating, seeking pregnancy, or failing to take adequate contraceptive precautions • Intolerance of, or hypersensitivity to, hydromorphone or other opioids • Presence of GI disease of sufficient severity to likely interfere with oral analgesia (e.g., dysphagia, vomiting, no bowel movement or bowel obstruction due to impaction within 5 days of study entry, severe gut narrowing that may affect analgesic absorption or transit) • Use of monoamine oxidase inhibitors within 2 weeks prior to study entry • Investigational drug use within 4 weeks of study entry • Presence of conditions for which risks of opioid use outweigh potential benefits (e.g., raised intracranial pressure, hypotension, hypothyroidism, asthma, reduced respiratory reserve, prostatic hypertrophy, hepatic impairment, renal impairment, elderly and debilitated, convulsive disorders, Addison's disease) Abbreviation: GI, gastrointestinal \n\nBODY.METHODS.DRUGS AND DOSAGES:\nPatients were randomized 1:1, with a central computer-generated randomization list, to receive hydromorphone or morphine for up to 24 days (Figure 1). This study consisted of 2 phases: an initial IR phase and a subsequent SR phase. In the IR phase, patients received IR formulations of either hydromorphone [Dilaudid®, Abbott Laboratories] or morphine (morphine sulfate IR [Sevredol®, Napp Laboratories]) every 4 hours (6 times daily) for 2 to 9 days. Patients underwent conversion from previous therapy to 1 of 6 possible initial dose levels, based on individual patient characteristics and generally accepted morphine equivalent conversion factors. Doses were selected according to the available tablet strengths and a working conversion ratio of 1:5 (1 hydromorphone:5 morphine equivalents), in the ranges hydromorphone IR 12–108 mg/day and morphine IR 60–540 mg/day. The dose was titrated to the next higher dose level if the patient had more than 3 breakthrough pain episodes requiring breakthrough pain medication within the previous 24 hours. Daily doses were titrated up to the next higher dose level, with no dose levels skipped, at most once a day. Dose titration was continued until dose-stable pain control was achieved. Patients who experienced 2 consecutive days with no more than 3 breakthrough pain episodes requiring rescue medication per day were considered to have achieved dose-stable pain control and could begin the SR phase of the study. Patients who did not achieve dose stable pain control by day 9 were withdrawn from the study, and end-of-study evaluations were carried out. Figure 1Study design. BID, twice daily; CR, controlled-release; IR, immediate-release; QD, once-daily; q4h, every 4 hours. The duration of the SR phase was 10 to 15 days. 
In this phase, patients continued to receive the same study drug but in an SR formulation: OROS® hydromorphone once-daily or CR morphine (morphine sulfate SR [MST Continus®, Napp Laboratories]) twice-daily. They were started on the same dose level number on which they had achieved dose-stable pain control in the preceding IR phase, which was then adjusted as required, at most every 2 days, in single steps so that no dose level was skipped. Doses were selected according to the available tablet strengths and a working conversion ratio of approximately 1:5 (hydromorphone:morphine), in the ranges OROS® hydromorphone 16–96 mg/day and CR morphine 60–520 mg/day. The SR phase was completed after a minimum of 10 days (maximum of 15 days) on OROS® hydromorphone or CR morphine if pain was controlled and the dose was stable for at least the previous 2 days. In the IR phase, the dosing regimen was the same for both drugs: patients received a dose of study medication 6 times a day, every 4 hours, at 0600, 1000, 1400, 1800, 2200, and 0200. In this phase, all treatments were over-encapsulated resulting in identical capsules. In the SR phase, dosing was done twice each day at 1000 and 2200. To maintain the blind in this phase, matching placebo tablets and capsules were used: patients in the OROS® hydromorphone group took OROS® hydromorphone and placebo CR morphine in the morning and placebo CR morphine in the evening; patients in the CR morphine group took CR morphine and placebo OROS® hydromorphone in the morning and CR morphine in the evening. In both study phases, patients could also receive breakthrough pain medication as needed, either hydromorphone or morphine, supplied as the IR formulation. A single dose of breakthrough pain medication contained approximately one-sixth of the patient's daily dose.\n\nBODY.METHODS.ASSESSMENTS:\nBaseline efficacy assessments were done before dosing on day 1 of the IR phase and included: the BPI (12-item instrument to assess pain intensity [from 0 = no pain to 10 = pain as bad as you can imagine], pain relief [0% to 100%], and pain interference with various aspects of the patient's life [from 0 = no interference to 10 = complete interference]) [13], the Mini-Mental State Examination (MMSE; 0–30, higher scores indicate better cognitive performance) [14], and the Eastern Cooperative Oncology Group (ECOG) performance status score (0–4, higher scores indicate poorer performance) [15]. Primary equivalence of efficacy was assessed using the BPI assessment of 'worst pain in the past 24 hours' on an 11-point scale ranging from 0 (no pain) to 10 (worst pain imaginable). This was selected as the primary efficacy measure because evidence suggests that severely intense pain is what interferes most significantly with activities of daily life [13]. Secondary endpoints included other assessments of pain from the BPI (completed by the investigator in consultation with the patient); investigator and patient global assessments of treatment effectiveness (1 = poor, 2 = fair, 3 = good, 4 = very good, 5 = excellent); MMSE; ECOG; time to dose stabilization; and usage and dose of breakthrough pain medication. Patients rated BPI 'worst pain in the past 24 hours' once daily (just before taking the mid-morning dose of study medication) and 'pain now' twice daily (AM and PM; just before taking the mid-morning and evening doses) throughout the study and recorded responses in daily diaries. 
The other BPI assessments, plus investigator and patient global evaluations of treatment effectiveness, MMSE, and ECOG, were evaluated at the end of both phases. Adverse events (AEs) and concomitant medications were recorded throughout the study. Physical examinations were carried out at baseline and at the end of each study phase. Blood and urine samples were collected for laboratory analysis (hematology, serum biochemistry, and urinalysis) at baseline and at the end of the SR phase. Patients were contacted daily during the IR phase and every 3 to 4 days during the SR phase (preferably by home or clinic visit, or by telephone if appropriate). Contact was also made 3 days after the last dose of study drug to determine the patient's AE status.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nAssuming variability (standard deviation, SD) in the primary efficacy variable of 2.0 [16] and a 30% dropout/non-evaluable rate, a sample size of approximately 70 patients per treatment group was required (140 total). This would provide 90% power to detect equivalence with an equivalence limit of 1.5. A planned blinded re-estimation of the optimal sample size was done by an appointed independent person after 55 patients had completed the study, the variability for the primary efficacy measure was estimated using their data, and the sample size was increased to 170 patients. Subsequently, an unplanned, blinded re-estimation of the variability of the primary variable for both phases was done after 120 patients had completed the study, and as a result, the sample size was increased to 200 patients. All statistical analyses were pre-specified. All efficacy variables were analyzed using the intent-to-treat (ITT) population, which included all patients who took at least 1 dose of study medication and had at least 1 assessment from each study phase. A per-protocol (PP) analysis, excluding data from patients with major protocol deviations, was also done. All patients taking at least 1 dose of study medication were included in the analysis of safety. All efficacy analyses included a pre-specified adjustment for baseline value and country to control for potentially confounding variables. The primary efficacy analysis used the mean of the last 2 post-baseline recorded values (or the last value if only 1 was available) for BPI 'worst pain in the past 24 hours'. The primary measure was calculated separately for the IR and SR phases (there were therefore two primary outcomes). An analysis of covariance (ANCOVA) was used to calculate the 95% two-sided confidence interval (CI) for the difference between the adjusted means for the 2 treatments at endpoint of each phase. An equivalence test was used to compare the mean scores of the primary efficacy measure at the end of each phase [17-19]. The 2 treatments, within a phase, were considered equivalent if the 95% two-sided CI for the treatment difference was within a -1.5 and 1.5 point difference on the BPI scale. This particular CI range was selected prospectively because it was considered clinically relevant through consultation with physicians who specialize in analgesia. For secondary endpoints (including BPI 'pain at its least in past 24 hours', 'pain on average pain', 'pain now' (AM and PM), 'pain relief in past 24 hours', 'pain interference in past 24 hours', MMSE, and ECOG) an ANCOVA model was used to calculate 95% two-sided CIs for the difference between the adjusted treatment means at the end of the IR and SR phases. 
The adjusted means were estimated by the same approach used for the primary endpoint, with the baseline score designated as a covariate; however, no equivalence limits were specified for the secondary endpoints. Time to dose stabilization during both study phases was analyzed by the log-rank test. Summary statistics with 95% CIs were produced for investigator and patient global evaluations of treatment effectiveness and to assess the number of medication doses taken for breakthrough pain in the last 2 days of each phase. P values (not adjusted for multiple comparisons) are reported for secondary endpoints. Statistical significance was declared at the α = 0.05 level.\n\nBODY.RESULTS.STUDY POPULATION:\n202 patients were enrolled into the study (Belgium, n = 29; Canada, n = 10; France, n = 7; Germany, n = 15; Netherlands, n = 29; Spain, n = 41; Sweden, n = 11; UK, n = 60). 200 patients were randomized and took study medication. 163 patients (81.5%) completed the IR phase and 133 (66.5%) completed the SR phase. Figure 2 is a flow diagram of patient entry and completion and the reasons for premature discontinuation. Withdrawal from the study owing to lack of efficacy was more common in patients randomized to hydromorphone (n = 11) compared with morphine (n = 4). The mean age of participants overall was 59.8 years. 98.5% of patients were Caucasian and 51% were female. The most common types of cancer were breast (28%), lung (20%), and gastrointestinal (16%). For 133 (67%) patients, the predominant pain location was bone or soft tissue; 34 (17%) others had mixed pain, and 33 (17%) had visceral pain. No patient had neuropathic pain. The most common pain medications used at study entry were morphine (n = 124 [62%]) and tramadol (n = 38 [19%]). The baseline demographics and clinical characteristics were similar for both treatment groups (Table 2). Table 2 Demographic and baseline clinical characteristics Characteristic Hydromorphone (N = 99) Morphine (N = 101) Mean (SD) age, years 60.7 (12.50) 59.0 (11.36) Gender, % male 46.5 51.5 Race, % Caucasian/Black/Asian/Other 100/0/0/0 97/0/2/1 Mean (SD) height, cm 166.7 (9.33) 167.3 (10.47) Mean (SD) weight, kg 66.3 (15.33) 67.4 (13.33) Mean (SD) BMI, kg/m 2 23.8 (5.14) 24.2 (5.06) Cancer type, n (%)  Breast 23 (23.2) 33 (32.7)  Lung 20 (20.2) 19 (18.8)  Genitourinary 18 (18.2) 12 (11.9)  Gastrointestinal 17 (17.2) 15 (14.9)  Oral cavity 3 (3.0) 3 (3.0)  Lymphoma 3 (3.0) 0  Leukemia 1 (1.0) 2 (2.0)  Bone 1 (1.0) 1 (1.0)  Other 13 (13.1) 16 (15.8) Predominant pain type, n (%)  Bone or soft tissue 61 (61.6) 72 (71.3)  Mixed 19 (19.2) 15 (14.9)  Visceral 19 (19.2) 14 (13.9) Mean (SD) MMSE score* 28.5 (2.3) 28.8 (2.0) Mean (SD) ECOG score 1.6 (0.8) 1.6 (0.9) Abbreviations: BMI, body mass index; ECOG, Eastern Cooperative Oncology Group; MMSE, Mini-Mental State Examination; SD, standard deviation *Hydromorphone, n = 75; morphine, n = 73 Figure 2Patient disposition. IR, immediate-release; SR, sustained-release.\n\nBODY.RESULTS.EFFICACY.BPI 'WORST PAIN IN THE PAST 24 HOURS' (PRIMARY ENDPOINT):\nIn each study phase, mean values for BPI 'worst pain in the past 24 hours' decreased with both hydromorphone and morphine treatments (Table 3). The least-squares (LS) mean (95% CI) differences between the hydromorphone and morphine groups were 0.2 points (-0.4, 0.9) in the IR phase and -0.8 points (-1.6, -0.01) in the SR phase (ITT population). 
In the IR phase, the 95% CI was within the -1.5 to 1.5 equivalency range (-0.4 to 0.9), indicating that IR hydromorphone and IR morphine were equivalent in this primary efficacy outcome measure. In the SR phase, the upper limit of the 95% CI was within the 1.5 boundary (-0.01), but the lower limit was less than -1.5 (-1.6), indicating that the SR formulations were not equivalent but that the negative direction of the mean difference was in favor of OROS® hydromorphone (Figure 3). Results of the analysis of the primary efficacy variable were similar when using data from the PP population. Table 3 BPI scores at baseline, end of IR phase, and end of SR phase (ITT population) Variable Baseline* Mean (SD) End of IR phase † LS mean (SE) End of SR phase ‡ LS mean (SE) Worst pain  Hydromorphone 6.3 (2.7) 5.0 (0.3) 3.5 (0.3)  Morphine 6.2 (2.5) 4.8 (0.3) 4.3 (0.3) Least pain  Hydromorphone 2.7 (2.5) 1.8 (0.2) 1.8 (0.2)  Morphine 2.9 (2.7) 2.2 (0.2) 1.8 (0.2) Average pain  Hydromorphone 5.0 (2.1) 3.6 (0.2) 3.4 (0.3)  Morphine 5.1 (2.1) 3.6 (0.2) 3.2 (0.3) Pain now AM  Hydromorphone 4.0 (2.7) 3.3 (0.3) 2.4 (0.3)  Morphine 4.1 (2.4) 3.4 (0.3) 2.8 (0.3) Pain now PM  Hydromorphone 4.8 (3.0) 3.6 (0.3) 2.6 (0.3) §  Morphine 4.2 (2.5) 3.7 (0.3) 3.4 (0.3) Abbreviations: BPI, Brief Pain Inventory; CR, controlled-release; IR, immediate-release; LS, least-squares; SD, standard deviation; SE, standard error; SR, sustained-release *Hydromorphone, n = 99; morphine, n = 101 †IR hydromorphone, n = 99; IR morphine, n = 101 ‡OROS ® hydromorphone, n = 77; CR morphine, n = 86 §p = 0.0372 versus CR morphine Figure 3Least-squares mean differences and 95% confidence intervals between the OROS® hydromorphone and controlled-release (CR) morphine groups at end of the sustained-release (SR) phase.\n\nBODY.RESULTS.EFFICACY.SECONDARY ENDPOINTS:\nAt the end of the IR phase, mean scores for the secondary efficacy variables (other assessments of pain from the BPI [Table 3 and Figure 4], and MMSE and ECOG scores [data not shown]) were similar for IR hydromorphone and IR morphine, except for BPI 'pain interference with normal work', which favored IR hydromorphone (Figure 4; p = 0.0386). Figure 4Mean Brief Pain Inventory (BPI) interference scores at baseline, end of immediate-release (IR) phase, and end of sustained-release (SR) phase. Values at baseline represent means; values at other time points represent least-squares means. At baseline: hydromorphone, n = 99; morphine, n = 101. In IR phase: IR hydromorphone, n = 99; IR morphine, n = 101. In SR phase: OROS® hydromorphone, n = 77; controlled-release (CR) morphine, n = 86. At the end of the SR phase, scores were similar between treatment groups for BPI items 'least pain', 'average pain', and 'pain now AM'; however, 'pain now PM' was significantly lower in the OROS® hydromorphone group compared with the CR morphine group (LS mean [SE] scores, 2.6 [0.3] versus 3.4 [0.3], respectively; p = 0.0372) (Table 3). LS mean differences (95% CI) between treatment groups were 0.06 (-0.51, 0.64), 0.15 (-0.49, 0.79), -0.38 (-1.03, 0.27), and -0.77 (-1.49, -0.05) for least, average, morning, and evening pain, respectively (Figure 3). Figures 5 and 6 show the consistency of response to both OROS® hydromorphone and morphine throughout the SR phase, during both the morning and evening assessments of BPI 'pain now' scores. Figure 5Mean (standard error, SE) pain now scores in the morning during the sustained-release (SR) phase. 
Pain was rated using the 11-point Brief Pain Inventory (BPI), ranging from 0 (no pain) to 10 (worst pain imaginable). CR, controlled-release. Figure 6Mean (standard error, SE) pain now scores in the evening during the sustained-release (SR) phase. Pain was rated using the 11-point Brief Pain Inventory (BPI), ranging from 0 (no pain) to 10 (worst pain imaginable). CR, controlled-release. LS mean BPI 'pain relief' at endpoint was 69.7% for the OROS® hydromorphone group and 70.0% for the CR morphine group (95% CI for treatment difference, -8.11, 7.54). BPI 'pain interference' scores were also similar between treatment groups at endpoint, (Figure 4), as were LS mean ECOG performance status scores (1.8 versus 1.7 in the OROS® hydromorphone and CR morphine groups, respectively; 95% CI for treatment difference, -0.13, 0.26), and LS mean MMSE scores (28.9 versus 29.2; 95% CI for treatment difference, -1.19, 0.51). The majority of both investigators and patients (> 70%) rated the treatments as good, very good, or excellent at the end of each study phase (Figure 7). Mean ± SD patient global assessment scores at the end of the SR phase were 3.2 ± 1.14 for the OROS® hydromorphone group and 3.3 ± 0.98 for the CR morphine group (p = 0.6696); mean ± SD investigator global assessment scores were 3.2 ± 1.07 and 3.3 ± 0.91, respectively (p = 0.4760). Figure 7Patient (A) and investigator (B) global evaluations of treatment effectiveness at the end of the sustained-release (SR) phase. CR, controlled-release. In both treatment phases, patients in the hydromorphone group took longer than those in the morphine group to achieve stable doses (p < 0.001). The number of patients requiring dose-level changes during the SR phase was similar for both groups (19 of 77 [24.7%] OROS® hydromorphone recipients and 23 of 86 [26.7%] CR morphine recipients). Comparison of dosage levels at the end of the SR phase generally showed that similar proportions of patients received OROS® hydromorphone or CR morphine at each of the 5 (approximate 1:5 hydromorphone to morphine) lower dosage levels, although more patients in the OROS® hydromorphone group (10% versus 2% in the CR morphine group) received the highest dosage level evaluated in this study (Table 4). Table 4 Dose levels at end of SR phase OROS ® hydromorphone (N = 77) CR morphine (N = 86) Dose level Dose (mg) every 24 hours Number of subjects (%) Dose (mg) every 12 hours Number of subjects (%) 1 16 22 (28.6) 30 25 (29.1) 2 24 14 (18.2) 60 23 (26.7) 3 32 18 (23.4) 90 15 (17.4) 4 48 10 (13.0) 120 13 (15.1) 5 72 5 (6.5) 175 8 (9.3) 6 96 8 (10.4) 260 2 (2.3) Abbreviations: CR, controlled-release; SR, sustained-release In the IR phase, 61.6% of patients taking IR hydromorphone and 52.5% of those receiving IR morphine required medication for breakthrough pain on the last 2 days of the phase. In the SR phase, the corresponding percentages in the OROS® hydromorphone and CR morphine groups were 48.1% and 48.8%, respectively. The mean (SD) numbers of breakthrough pain medication doses taken on the last 2 days were as follows: IR phase, 1.8 (2.2) and 1.3 (1.8) for the IR hydromorphone and IR morphine groups, respectively; SR phase, 1.6 (2.2) and 1.4 (1.9) for the OROS® hydromorphone and CR morphine groups, respectively. The mean total doses of breakthrough pain medication as a percentage of the mean daily consumption were 39.4% and 27.1% for hydromorphone and morphine in the IR phase, and 13.8% and 10.3% for OROS® hydromorphone and CR morphine in the SR phase. 
There were no statistically significant treatment differences in the numbers of patients who took breakthrough pain medication or in the amount of medication taken in either phase.\n\nBODY.RESULTS.SAFETY:\nThe overall safety profiles of hydromorphone and morphine were comparable, with a similar number of patients in each group reporting AEs. Overall, irrespective of study phase, 81 of 99 (81.8%) hydromorphone-treated patients and 90 of 101 (89.1%) morphine-treated patients reported at least 1 AE. The most commonly reported AEs during the IR and SR phases of the study are listed in Table 5. The types of AEs were typical of those generally associated with opioid use. The incidence of constipation was higher with hydromorphone than morphine in both phases of the study (23.2% versus 10.9% in the IR phase and 39.0% versus 22.1% in the SR phase). In the SR phase, the incidences of vomiting, nausea, and somnolence were higher with CR morphine than with OROS® hydromorphone (22.1% versus 9.1%, 29.1% versus 19.5%, and 14.0% versus 10.4%, respectively). Most AEs were mild or moderate in severity and no clear treatment difference was seen in the relative severity of the most commonly reported AEs. Approximately 30% of patients with AEs reported events that were considered unlikely to be related or not related to study therapy. Table 5 Adverse events reported by at least 5% of patients in any group during the IR or SR phase IR phase SR phase Adverse event Hydromorphone (N = 99) Morphine (N = 101) OROS ® hydromorphone (N = 77) CR morphine (N = 86) Constipation 23 (23.2%) 11 (10.9%) 30 (39.0%) 19 (22.1%) Nausea 18 (18.2%) 23 (22.8%) 15 (19.5%) 25 (29.1%) Vomiting 16 (16.2%) 19 (18.8%) 7 (9.1%) 19 (22.1%) Somnolence 11 (11.1%) 11 (10.9%) 8 (10.4%) 12 (14.0%) Dizziness 8 (8.1%) 6 (5.9%) 4 (5.2%) 8 (9.3%) Headache 8 (8.1%) 6 (5.9%) 3 (3.9%) 2 (2.3%) Diarrhea 8 (8.1%) 1 (1.0%) 7 (9.1%) 2 (2.3%) Pruritus 4 (4.0%) 5 (5.0%) 3 (3.9%) 5 (5.8%) Asthenia 5 (5.1%) 1 (1.0%) 6 (7.8%) 4 (4.7%) Fatigue 3 (3.0%) 3 (3.0%) 4 (5.2%) 6 (7.0%) Confusion 2 (2.0%) 2 (2.0%) 7 (9.1%) 2 (2.3%) Anemia 2 (2.0%) 1 (1.0%) 3 (3.9%) 6 (7.0%) Anorexia 1 (1.0%) 4 (4.0%) 2 (2.6%) 5 (5.8%) Insomnia 1 (1.0%) 2 (2.0%) 5 (6.5%) 4 (4.7%) Peripheral edema 0 3 (3.0%) 1 (1.3%) 8 (9.3%) Pyrexia 2 (2.0%) 1 (1.0%) 4 (5.2%) 2 (2.3%) Anxiety 1 (1.0%) 1 (1.0%) 5 (6.5%) 1 (1.2%) Abbreviations: CR, controlled-release; IR, immediate-release; SR, sustained-release Serious AEs were reported for 24 patients during the double-blind period (by 5 patients in each treatment group in the IR phase, and by 7 patients in the OROS® hydromorphone group and 9 in the CR morphine group in the SR phase [2 patients had serious AEs in both phases]). Many of the serious AEs were associated with the underlying disease, although approximately one third were considered definitely or probably related to study therapy. Three patients in the morphine group died during the study (2 during the IR phase, 1 during the SR phase). None of the deaths was considered related to the study medication. Twenty-six patients withdrew prematurely from the study because of AEs, 16 during the IR phase (9 hydromorphone, 7 morphine) and 10 during the SR phase (6 OROS® hydromorphone, 4 CR morphine). There was no statistically significant difference between treatment groups in the time to withdrawal, irrespective of phase (IR phase, p = 0.6537; SR phase, p = 0.2827). 
The use of concomitant medications, including the use of non-opioid analgesics, was similar for both treatment groups in both the IR and SR phases.\n\nBODY.DISCUSSION:\nThis was a short-term, randomized, double-blind comparative study testing the clinical equivalence of IR and SR formulations of hydromorphone and morphine in patients with chronic cancer pain. In each study phase (IR and SR), mean values for BPI 'worst pain in the past 24 hours' (the primary efficacy endpoint) decreased with both hydromorphone and morphine. In the IR phase, the LS mean difference between the treatment groups was 0.2 points, and the 95% CI (-0.4, 0.9) was within the pre-specified equivalency range of -1.5 to 1.5. In the SR phase, the LS mean difference was -0.8 points in favor of OROS® hydromorphone. The upper limit of the 95% CI (-0.01) was within the 1.5 boundary, but the lower limit (-1.6) was less than the -1.5 limit. Therefore, according to the pre-specified criteria, equivalence was not demonstrated; the direction of the difference was in favor of OROS® hydromorphone. The BPI item 'worst pain in the past 24 hours' was selected as the primary endpoint of the trial based on previous studies showing it to be the most sensitive BPI measure in the clinical trial setting [13,20-22]. The results of the present study further suggest that OROS® hydromorphone provides consistent pain relief over 24 hours, and that pain levels in the evening were significantly lower after OROS® hydromorphone compared with CR morphine treatment. This finding reflects the 'pain now PM' measure being scored at the end of morphine dosing (i.e. trough levels) but at the midpoint of hydromorphone dosing. The comparable results between groups for the other secondary efficacy variables (including assessments of pain interference with physical activity and social function, performance status, and cognition) support similarity in the treatment efficacy. Withdrawal from the study owing to lack of efficacy was more common in patients randomized to hydromorphone (n = 11) compared with morphine (n = 4). This is a potentially negative finding for hydromorphone; however, the numbers are too low to draw any definite conclusions. Results of the dose analysis were generally consistent with those of previous reports, which indicate that most cancer pain can be controlled with ≤ 240 mg of oral morphine daily [23]. In the present study, 83% of the OROS® hydromorphone group and 88% of the CR morphine group received the equivalent of morphine 240 mg/day or less. The frequency of breakthrough medication use remained stable throughout both phases in the morphine group (52.5% in the IR phase, 48.8% in the SR phase), but decreased from the IR phase to the SR phase in the OROS® hydromorphone group (from 61.6% to 48.1%). At the end of the SR phase, there was no significant difference between the groups in use of breakthrough medication. The occurrence of 3 deaths and 23 serious AEs during the study was not unexpected, given the severity of the patients' conditions and the progressive nature of the disease. In fact, many of the serious AEs were associated with the underlying disease. The participants in this study are representative of the overall population afflicted with chronic cancer pain. The mean baseline BPI score for 'worst pain in the past 24 hours' was 6.3 on an 11-point scale, a score indicative of moderate to severe pain [16]. This is comparable to baseline values in other studies of cancer pain, for which the mean score was 6.99 [24]. 
Baseline cognition was similar to that of healthy controls (mean MMSE score, 28.3) [25], and the lack of change observed in both treatment groups probably reflects the fact that the MMSE was designed to measure cognition in Alzheimer's disease [14] and was not intended to measure cognition in non-demented subjects. The major positive aspect of this study was the robust design: it was multicenter, randomized, double-blind, and active controlled. It also used a widely utilized and well validated primary outcome measure, the BPI. On the negative side, there was a relatively poor completion rate, with only 60 out of 99 patients (60.6%) randomized to hydromorphone and 73 out of 101 patients (72.3%) randomized to morphine completing both treatment phases. However, this completion rate was reasonably good when taking into consideration that this study was carried out in patients with cancer which was advanced enough to require strong opioid analgesics. In addition, a dropout rate of at least 30% was anticipated. Furthermore, the duration of the study was relatively short (up to 24 days), so it was not possible to make evaluations of the sustained efficacy of the formulations. Two previous randomized, double-blind, crossover studies have compared CR formulations of hydromorphone and morphine in patients with cancer pain [26,27]. Moriarty et al (N = 100) found no significant differences between treatment groups for any outcome measure, and AEs were mild and infrequent [27]. In the other study (N = 87), patients treated with hydromorphone had significantly higher pain scores and required more doses of rescue analgesia than patients receiving morphine [26]. In addition, more patients in the hydromorphone group withdrew owing to AEs or inadequate analgesia (n = 16 versus n = 2 on morphine). Although it is difficult to directly compare the results of these studies with those of the current study, the observed differences may reflect, at least in part, the more consistent delivery of hydromorphone with the OROS® formulation relative to the dosing used in those studies. The consistent release of hydromorphone over 24 hours has been demonstrated in previous studies, in which steady-state plasma concentrations were achieved by 48 hours and sustained throughout the dosing interval [8,28]. Practicing clinicians are acutely aware of the need for multiple pain management alternatives, because it is clear that the response to a particular opioid agent varies among individuals. Although our understanding of the causes of individual differences remains incomplete, receptor and receptor-binding studies suggest that they may result not only from differences in the class of opioid receptors to which each agent binds (morphine binds to mu [primary] and kappa [lesser extent] [29]; hydromorphone binds to mu [primary] and delta [lesser extent] [30]), but also from inter-individual differences in opioid receptors (several receptor subtypes have been identified [mu 1, mu 2, mu 3, delta 1, delta 2, kappa 1, kappa 2, kappa 3]) and the location of these variants within the nervous system [31-33]. Therefore, clinicians may need to utilize different opioids to achieve maximum efficacy for each patient. In addition, some studies have supported the use of opioid rotation, switching from one opioid to another when treatment-limiting toxicity results in reduced responsiveness [34-37]. 
Based on numerous observations that individual response varies greatly from opioid to opioid, changing to a different drug may provide an improved balance between analgesia and adverse effects; as a result, there is a need for additional long-acting opioids on the market. Long-acting opioid formulations such OROS® hydromorphone have the potential to improve the overall management of chronic pain. Such formulations provide more consistent opioid plasma concentrations, avoiding the peaks and troughs associated with short-acting agents, and result in well-tolerated, around-the-clock pain control with fewer daily doses [38,39].\n\nBODY.CONCLUSION:\nThis study demonstrates that while IR hydromorphone is equivalent to IR morphine for relieving chronic cancer pain, equivalence was not demonstrated for OROS® hydromorphone and CR morphine in the SR phase of the study. The direction of the mean difference between the treatments, and the out-of-range lower limit of the 95% CI was in favor of OROS® hydromorphone on the primary endpoint 'worst pain'. 'Pain now PM' scores were also significantly lower for those taking OROS® hydromorphone at the end of the SR phase (although the timing of this measure coincided with a trough in the dosing levels for the twice-daily morphine compared with the once-daily OROS® hydromorphone), and BPI 'interference with normal work' scores were significantly lower for IR hydromorphone. None of the other secondary efficacy analyses showed significant treatment differences in either phase. The overall safety profiles of the 2 treatments were generally similar and most AEs were typical of opioid analgesic therapy.\n\nBODY.COMPETING INTERESTS:\nMH was a paid advisor for Janssen-Cilag.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nMH was an investigator in this study and was involved in revising this manuscript for important intellectual content. JT contributed to the analysis of the study and reviewed the manuscript.\n\nBODY.PRE-PUBLICATION HISTORY:\nThe pre-publication history for this paper can be accessed here:\n\n**Question:** Compared to morphine what was the result of hydromorphone on 'pain now PM'?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly decreased\n" } ]
significantly decreased
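As a quick illustration of the pre-specified equivalence check described in the discussion above, the sketch below re-runs that logic on the reported numbers (IR phase: LS mean difference 0.2, 95% CI -0.4 to 0.9; SR phase: -0.8, 95% CI -1.6 to -0.01; margin -1.5 to 1.5). This is a minimal Python sketch written for this note, not the trial's analysis code and not part of the dataset; the only inputs are the values quoted above.

```python
# Minimal sketch of the equivalence check described in the discussion above.
# Assumption: equivalence is declared only when the whole 95% CI for the
# between-group difference in BPI "worst pain" lies inside the +/-1.5 margin.

EQUIV_LOW, EQUIV_HIGH = -1.5, 1.5

def is_equivalent(ci_low: float, ci_high: float) -> bool:
    """True only if the entire CI sits within the pre-specified margin."""
    return EQUIV_LOW <= ci_low and ci_high <= EQUIV_HIGH

# LS mean differences and 95% CIs as reported for the IR and SR phases.
phases = {
    "IR": {"diff": 0.2, "ci": (-0.4, 0.9)},
    "SR": {"diff": -0.8, "ci": (-1.6, -0.01)},
}

for name, res in phases.items():
    low, high = res["ci"]
    verdict = "equivalence shown" if is_equivalent(low, high) else "equivalence not shown"
    print(f"{name} phase: diff = {res['diff']}, 95% CI = ({low}, {high}) -> {verdict}")
# Expected: IR -> equivalence shown; SR -> equivalence not shown (lower limit -1.6 < -1.5).
```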
1108
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Effects of low-dose ketamine on succinylcholine-induced postoperative myalgia in outpatient surgeries: a randomized, double-blind study\n\n ABSTRACT.OBJECTIVE:\nDespite the many complications of succinylcholine, it is still widely used as a superior muscle relaxant for rapid sequence induction. One of these complications is postoperative myalgia (POM). The aim of this study was to investigate the prophylactic effect of low-dose ketamine on the incidence and severity of POM.\n\nABSTRACT.MATERIALS AND METHODS:\nIn this double-blind clinical study, a total of 148 patients scheduled for general anesthesia were randomly divided into two equal groups. Initially, in Group K, 0.5 mg/kg of ketamine was injected intravenously, whereas in Group N, the same volume (5 mL) of normal saline was injected. Thereafter, anesthesia was induced in all patients, by injecting 1.5 mg/kg of fentanyl and 2 mg/kg of propofol intravenously. Following the loss of eyelid reflex, 1.5 mg/kg of succinylcholine was injected intravenously as a muscle relaxant and then the patients were intubated. POM was defined as a pain with no surgical interferences, and its intensity was graded based on a four-point scale. The incidence and severity of myalgia were assessed by a blinded observer 24 hours after surgery.\n\nABSTRACT.RESULTS:\nIn terms of demographic data, the results of this study showed that there is no significant difference between patients in both groups (P>0.05). Overall, the incidence of POM in Group K was significantly less, when compared with Group N (P<0.05), but both groups were comparable based on the grade 2 of POM. After the induction of anesthesia, the systolic and diastolic blood pressure values were found to reduce in both groups (P<0.05). However, the changes were somehow similar, and repeated measures of variance analysis showed no significant difference in the two study groups (P>0.05).\n\nABSTRACT.CONCLUSION:\nThe addition of 0.5 mg/kg of ketamine to propofol for the induction of anesthesia can be effective in reducing the incidence of low-grade POM.\n\nBODY.INTRODUCTION:\nFor >60 years, succinylcholine is still being administered as a selective relaxant for rapid sequence intubation by anesthesiologists in many countries.1 It has been shown to possess unique features such as low cost, fast-acting, short half-life, safe metabolites, and causing excellent muscle relaxation for intubation;2 however, it has many side effects as well. 
Postoperative myalgia (POM), with an incidence rate of ~41%–92%, is one of the most common side effects of this drug and can take several days to cause significant discomfort in patients.3 However, its effect is felt more in the throat, neck, shoulder, and abdominal muscles and is common among patients with outpatient surgery.4 Due to its unknown real context of pathogenesis and in an effort to reduce the incidence and severity of succinylcholine-induced myalgia, various medications including nondepolarizing muscle relaxants, benzodiazepines, magnesium sulfate, opioids, gabapentin, and nonsteroidal anti-inflammatory drugs have been tested, with varying degrees of success.5,6 Ketamine is an N-methyl-d-aspartate (NMDA) receptor antagonist with excellent analgesic activity, and at subanesthetic doses it is capable of preventing central sensitization, hyperalgesia, drug resistance creation, and reduction of postoperative pain.7 Although, ketamine administration could produce unpleasant psychogenic complications, no disagreeable effects have been recorded in patients who received a low dose of ketamine. The effects of ketamine on postelectroconvulsive therapy (ECT) myalgia have been studied earlier, and the results showed that ketamine could not prevent this type of myalgia.8 To the best of our knowledge, the effect of ketamine on the prevention of POM is yet to be determined. Therefore, in this randomized, double-blind study, the prophylactic effect of low-dose ketamine on the incidence and severity of myalgia caused by succinylcholine injection in patients undergoing outpatient surgery was investigated.\n\nBODY.MATERIALS AND METHODS:\nThis study was approved by the Ethics Committee of Kurdistan University of Medical Sciences and was registered in the Iranian Registry of Clinical Trials (IRCT2014062412789N5). A complete description of the study was also presented to each participant, and written informed consent was obtained. The sample size was calculated based on the assumption that the incidence of POM in outpatient cases is ~70%, and intervention that can cause 25% reduction in incidence of POM will be interesting. To consider this difference and type I error equal to 5% and 90% power of the study, 72 patients were required to be in each group (α=0.05 and β=90%), but in order to avoid possible loss of samples during the study, the number of patients in each group was increased to 74. Thus, 148 patients, who were scheduled for outpatient surgery under general anesthesia, belonging to the American Society of Anesthesiologists physical status I and II within the age of 18–70 years, were included in this double-blind randomized clinical trial. Patients with a history of allergy to medications; substance abuse; malignant hyperthermia; myopathy; cardiovascular, liver, and advanced kidney diseases; and the risk of difficult intubation based on physical examination were excluded from the study. Before surgery, patients were evaluated by the anesthesiologists. The study protocol and evaluation of myalgia based on Kararmaz's criteria were described to the patients.4 Based on a computer-generated random sequence, the 148 patients were enrolled into one of the two groups (ketamine or saline). The patients did not receive premedication. In the operating room, standard monitoring of the noninvasive blood pressure, electrocardiogram, heart rate, and pulse oximetry was done, and the preliminary values were recorded. Thereafter, a venous cannula (G=18) was placed on the dorsum of a patient's hand. 
Before the induction of anesthesia in patients of Group K, 0.5 mg/kg of ketamine (a 5 mL volume was reached by adding distilled water) was injected intravenously and 5 mL of normal saline was injected intravenously slowly into patients in Group N. Drugs were prepared in 5 mL syringes by an anesthesia nurse who was unaware of the grouping. After injecting the study drugs, 1.5 mg/kg of fentanyl was injected intravenously within 60 seconds and subsequently 2 mg/kg of propofol was administered intravenously within 30 seconds for the induction of anesthesia. Following the loss of eyelid reflex, 1.5 mg/kg of succinylcholine was injected intravenously and patients were ventilated using bag and mask and with 100% oxygen. After fasciculation, the values of heart rate and blood pressure were measured and recorded, and tracheal intubation was performed. The maintenance of anesthesia continued using a mixture of oxygen, N2O (40/60), and isoflurane 1 MAC. After 5 minutes of tracheal intubation, the values of heart rate and blood pressure were obtained and recorded again. For maintenance of muscle relaxation, 0.2 mg/kg of atracurium was used as suggested by the surgeon. At the end of the surgery, muscle relaxation was reversed using neostigmine and atropine. After the desired spontaneous ventilation, the patients were extubated. The patients were transferred to the recovery room and complications, if any, were recorded during recovery. After meeting the discharge criteria, the patients were discharged to be taken home and cared for by a responsible adult. The incidence and severity of myalgia in the patients were determined 24 hours after surgery by a medical student who was unaware of the grouping. Myalgia is defined as \"a pain with no surgical interference\" and is graded based on Kararmaz et al's4 four-point scale as follows: 0= no muscle pain, 1= muscle stiffness limited to one area of the body, 2= muscle pain or stiffness noticed spontaneously by a patient who requires analgesics, and 3= incapacitating generalized, severe muscle stiffness or pain. Statistical analysis was conducted using SPSS Version 12.0 software (IBM Corporation, Armonk, NY). The comparison profile of the patients was performed using the analysis of variance test. The incidence and severity of myalgia were compared using the chi-square and independent t-tests. The value of P<0.05 was considered statistically significant.\n\nBODY.RESULTS:\nIn this study, a total of 207 patients were scheduled for outpatient surgery from July 2013 to August 2014. Of them, 29 patients could not meet the entry criteria. Twenty-seven patients had no willingness to participate in the study, and the surgery of three patients was canceled. The remaining 148 patients were randomly assigned into two groups. Two patients in Group N due to nausea, vomiting, and pain and one patient in Group K due to arrhythmia were turned from outpatient to inpatient. Two persons from Group K did not cooperate with the evaluation of POM. The data associated with the 72 patients in Group N and 71 patients in Group K were analyzed (Figure 1). There were no significant differences in terms of age, sex, and duration of surgery between both groups (Table 1). In Group K, 13 (18.1%) out of the 71 patients had myalgia, whereas 36 (50%) out of the 72 patients had myalgia in Group N (P=0.001). 
Grade 1 POM was lower in Group K when compared with Group N (nine in Group K versus 33 in Group N; P<0.001), whereas the incidence of grade 2 POM was comparable among patients of the two groups (Table 2). The baseline values of systolic and diastolic blood pressure and heart rate in both groups were similar. After the induction of anesthesia, the values of systolic and diastolic blood pressure decreased in both groups (P<0.05). However, these changes were somehow similar in the two groups, and repeated measures of variance analysis showed no significant difference in both groups (P>0.05). Changes in heart rate after induction and intubation between the two groups were compared and repeated measures of variance analysis showed no significant difference in the two groups (P>0.05; Table 3).\n\nBODY.DISCUSSION:\nThe results of this study showed that ketamine significantly reduced the incidence of succinylcholine-induced myalgia. Succinylcholine is a muscle relaxant that is commonly used due to deep neuromuscular blocks for intubation in patients undergoing ambulatory anesthesia, but associated myalgia has been shown to occur in 41%–92% of patients.9,10 Since ambulation increases the possibility of myalgia and its intensity, ambulatory patients are suitable for investigating this complication. Therefore, this group of patients was chosen for the study. Myalgia is most prevalent following the use of succinylcholine in the first day of surgery.11 As shown in a study, 92% of patients reported myalgia in the first 24 hours of study and the incidence of myalgia did not differ at 24 hours and 48 hours after surgery.8 In this study, myalgia was evaluated only in the first 24 hours after surgery. It seems that the intrafusal contraction of muscle fibers caused damage to spindles and subsequently myalgia.12 In vitro studies have shown that active, excessive, and continuous muscle contractions increase the uptake of calcium, activation of phospholipase A2, arachidonic acid, and prostaglandins production, thereby increasing the risk of muscle damage and pain.13 The results of this study showed that 50% of the patients in Group N experienced myalgia after anesthesia was induced with propofol. In a meta-analysis, the average incidence of myalgia in the first 24 hours with thiopental was 49.2% and with propofol was 65.4%.14 The incidence of myalgia in the present study was slightly lower when compared with the aforementioned meta-analysis. It appears that participants in this study belonged to low-level POM in general. Ketamine creates a complex combination of both antinociceptive and pronociceptive actions. In fact, based on the mechanisms of analgesia with a direct receptor, the analgesic effect of ketamine is associated with the plasma levels of drugs and pain reduces with subanesthetic doses of ketamine.15,16 In general, it seems that the NMDA receptor plays an important role in the pathophysiology of pain, and the Supra spinal block of the NR2B NMDA subunit by ketamine has an important antinociceptive effect.17 Recent studies have shown the role of NMDA receptors in facilitating the process of pain in the central nervous system, this receptor is also responsible for central sensitization and windup phenomenon. 
The administration of NMDA receptor antagonists prevents the development of sensitization, hyperalgesia, and drug resistance.18 Ketamine is an antagonist of this receptor and has excellent analgesic activity at doses under anesthesia.19 It strengthens the performance of mu and kappa opioid receptors and also has direct effects on the delta opioid receptor. As such, ketamine certainly modulates the response of opioid receptors and reduces tolerance to opioids.20 It also strengthens the endogenesis of antinociceptive systems, in part, by its aminergic (serotonergic and noradrenergic) activation and inhibition of reuptake.21 It inhibits the synthesis of nitric oxide, which probably contributed to the analgesic effect.22 Also, there is evidence suggesting that ketamine interferes with nicotinic, muscarinic, and monoaminergic receptors. As a result of the undesirable and unwanted side effects, ketamine usage is controversial in nonanesthetized patients, but when surgery is done under general anesthesia, these side effects are barely seen.19 In this study, surgery was performed under general anesthesia and none of the patients experienced delirium and nightmare. In our previous study,8 the effect of prophylactic ketamine on myalgia after ECT in 50 patients with major depression was demonstrated, in which 1 mg/kg of propofol and 0.3 mg/kg of ketamine were used for the induction of anesthesia, and muscle relaxation was induced using 0.5 mg/kg of succinylcholine. The results showed that the addition of ketamine to propofol had no effect on the incidence of myalgia after ECT, and only 12% of the patients had myalgia within 24 hours after ECT. However, our previous study8 is different in some aspects from the present study. In our previous study, ECT patients were investigated and the cause of myalgia was found to be different in surgical patients. Second, unlike the surgery, myalgia intensity in ECT is not related to fasciculation and motor activities.23 Third, the dose of anesthetic drugs for the induction of anesthesia and muscle relaxation was less than that of the present study. The result of our previous study, which was carried out on patients undergoing ECT, is not in line with the present study. Cardiovascular changes were similar after the induction of anesthesia in both groups. In theory, hemodynamic stability can be predicted based on the induction of anesthesia with propofol and ketamine, when compared with propofol alone. This is because ketamine increases blood pressure and heart rate; therefore, the addition of this drug to propofol during anesthesia induction can inhibit the reduction of these parameters before the stimulation of laryngoscopy and tracheal intubation. Also, following laryngoscopy and tracheal intubation, the synergistic effect of these two drugs can increase the depth of anesthesia and result in smoother cardiovascular response to airway stimulation. It appears that 0.5 mg/kg of ketamine could not produce such an impact on the hemodynamic system, thereby resulting in a significant effect.\n\nBODY.STUDY LIMITATIONS:\nOne of the limitations of this study is the insufficient time for myalgia evaluation (24 hours). Failure to assess the incidence and severity of fasciculation is another limitation of this study. 
In this study, ketamine was diluted with sterile water, although this is not a wrong practice, but further addition of water, as carried out in this study, could produce ketamine with concentration below the physiological osmotic pressure, which may interfere with the effects of ketamine.\n\nBODY.CONCLUSION:\nThe addition of 0.5 mg/kg of ketamine to propofol for the induction of anesthesia can be effective in reducing the incidence of POM, without any change in hemodynamic indices.\n\n**Question:** Compared to Group N, the same volume (5 mL) of normal saline was injected what was the result of Group K, 0.5 mg/kg of ketamine was injected intravenously, on Postoperative myalgia (POM)?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly decreased\n" } ]
significantly decreased
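The results in the record above report postoperative myalgia in 13 of 71 patients in Group K versus 36 of 72 in Group N (P=0.001), compared with a chi-square test. The snippet below is only an illustrative re-check of that 2x2 comparison: the trial itself used SPSS, so SciPy here is an assumption, and the code is a sketch rather than the original analysis.

```python
# Illustrative re-check of the POM incidence comparison reported above
# (Group K: 13/71 with myalgia; Group N: 36/72). Assumes SciPy is available;
# the authors used SPSS, so this is a sketch, not the trial's analysis.
from scipy.stats import chi2_contingency

table = [
    [13, 71 - 13],  # Group K (ketamine): myalgia, no myalgia
    [36, 72 - 36],  # Group N (saline):   myalgia, no myalgia
]

chi2, p, dof, expected = chi2_contingency(table)  # Yates' correction applied by default for 2x2
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
# A p-value well below 0.05 is consistent with the reported P = 0.001 for the
# lower incidence of POM in the ketamine group.
```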
1109
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Effects of low-dose ketamine on succinylcholine-induced postoperative myalgia in outpatient surgeries: a randomized, double-blind study\n\n ABSTRACT.OBJECTIVE:\nDespite the many complications of succinylcholine, it is still widely used as a superior muscle relaxant for rapid sequence induction. One of these complications is postoperative myalgia (POM). The aim of this study was to investigate the prophylactic effect of low-dose ketamine on the incidence and severity of POM.\n\nABSTRACT.MATERIALS AND METHODS:\nIn this double-blind clinical study, a total of 148 patients scheduled for general anesthesia were randomly divided into two equal groups. Initially, in Group K, 0.5 mg/kg of ketamine was injected intravenously, whereas in Group N, the same volume (5 mL) of normal saline was injected. Thereafter, anesthesia was induced in all patients, by injecting 1.5 mg/kg of fentanyl and 2 mg/kg of propofol intravenously. Following the loss of eyelid reflex, 1.5 mg/kg of succinylcholine was injected intravenously as a muscle relaxant and then the patients were intubated. POM was defined as a pain with no surgical interferences, and its intensity was graded based on a four-point scale. The incidence and severity of myalgia were assessed by a blinded observer 24 hours after surgery.\n\nABSTRACT.RESULTS:\nIn terms of demographic data, the results of this study showed that there is no significant difference between patients in both groups (P>0.05). Overall, the incidence of POM in Group K was significantly less, when compared with Group N (P<0.05), but both groups were comparable based on the grade 2 of POM. After the induction of anesthesia, the systolic and diastolic blood pressure values were found to reduce in both groups (P<0.05). However, the changes were somehow similar, and repeated measures of variance analysis showed no significant difference in the two study groups (P>0.05).\n\nABSTRACT.CONCLUSION:\nThe addition of 0.5 mg/kg of ketamine to propofol for the induction of anesthesia can be effective in reducing the incidence of low-grade POM.\n\nBODY.INTRODUCTION:\nFor >60 years, succinylcholine is still being administered as a selective relaxant for rapid sequence intubation by anesthesiologists in many countries.1 It has been shown to possess unique features such as low cost, fast-acting, short half-life, safe metabolites, and causing excellent muscle relaxation for intubation;2 however, it has many side effects as well. 
Postoperative myalgia (POM), with an incidence rate of ~41%–92%, is one of the most common side effects of this drug and can take several days to cause significant discomfort in patients.3 However, its effect is felt more in the throat, neck, shoulder, and abdominal muscles and is common among patients with outpatient surgery.4 Due to its unknown real context of pathogenesis and in an effort to reduce the incidence and severity of succinylcholine-induced myalgia, various medications including nondepolarizing muscle relaxants, benzodiazepines, magnesium sulfate, opioids, gabapentin, and nonsteroidal anti-inflammatory drugs have been tested, with varying degrees of success.5,6 Ketamine is an N-methyl-d-aspartate (NMDA) receptor antagonist with excellent analgesic activity, and at subanesthetic doses it is capable of preventing central sensitization, hyperalgesia, drug resistance creation, and reduction of postoperative pain.7 Although, ketamine administration could produce unpleasant psychogenic complications, no disagreeable effects have been recorded in patients who received a low dose of ketamine. The effects of ketamine on postelectroconvulsive therapy (ECT) myalgia have been studied earlier, and the results showed that ketamine could not prevent this type of myalgia.8 To the best of our knowledge, the effect of ketamine on the prevention of POM is yet to be determined. Therefore, in this randomized, double-blind study, the prophylactic effect of low-dose ketamine on the incidence and severity of myalgia caused by succinylcholine injection in patients undergoing outpatient surgery was investigated.\n\nBODY.MATERIALS AND METHODS:\nThis study was approved by the Ethics Committee of Kurdistan University of Medical Sciences and was registered in the Iranian Registry of Clinical Trials (IRCT2014062412789N5). A complete description of the study was also presented to each participant, and written informed consent was obtained. The sample size was calculated based on the assumption that the incidence of POM in outpatient cases is ~70%, and intervention that can cause 25% reduction in incidence of POM will be interesting. To consider this difference and type I error equal to 5% and 90% power of the study, 72 patients were required to be in each group (α=0.05 and β=90%), but in order to avoid possible loss of samples during the study, the number of patients in each group was increased to 74. Thus, 148 patients, who were scheduled for outpatient surgery under general anesthesia, belonging to the American Society of Anesthesiologists physical status I and II within the age of 18–70 years, were included in this double-blind randomized clinical trial. Patients with a history of allergy to medications; substance abuse; malignant hyperthermia; myopathy; cardiovascular, liver, and advanced kidney diseases; and the risk of difficult intubation based on physical examination were excluded from the study. Before surgery, patients were evaluated by the anesthesiologists. The study protocol and evaluation of myalgia based on Kararmaz's criteria were described to the patients.4 Based on a computer-generated random sequence, the 148 patients were enrolled into one of the two groups (ketamine or saline). The patients did not receive premedication. In the operating room, standard monitoring of the noninvasive blood pressure, electrocardiogram, heart rate, and pulse oximetry was done, and the preliminary values were recorded. Thereafter, a venous cannula (G=18) was placed on the dorsum of a patient's hand. 
Before the induction of anesthesia in patients of Group K, 0.5 mg/kg of ketamine (a 5 mL volume was reached by adding distilled water) was injected intravenously and 5 mL of normal saline was injected intravenously slowly into patients in Group N. Drugs were prepared in 5 mL syringes by an anesthesia nurse who was unaware of the grouping. After injecting the study drugs, 1.5 mg/kg of fentanyl was injected intravenously within 60 seconds and subsequently 2 mg/kg of propofol was administered intravenously within 30 seconds for the induction of anesthesia. Following the loss of eyelid reflex, 1.5 mg/kg of succinylcholine was injected intravenously and patients were ventilated using bag and mask and with 100% oxygen. After fasciculation, the values of heart rate and blood pressure were measured and recorded, and tracheal intubation was performed. The maintenance of anesthesia continued using a mixture of oxygen, N2O (40/60), and isoflurane 1 MAC. After 5 minutes of tracheal intubation, the values of heart rate and blood pressure were obtained and recorded again. For maintenance of muscle relaxation, 0.2 mg/kg of atracurium was used as suggested by the surgeon. At the end of the surgery, muscle relaxation was reversed using neostigmine and atropine. After the desired spontaneous ventilation, the patients were extubated. The patients were transferred to the recovery room and complications, if any, were recorded during recovery. After meeting the discharge criteria, the patients were discharged to be taken home and cared for by a responsible adult. The incidence and severity of myalgia in the patients were determined 24 hours after surgery by a medical student who was unaware of the grouping. Myalgia is defined as \"a pain with no surgical interference\" and is graded based on Kararmaz et al's4 four-point scale as follows: 0= no muscle pain, 1= muscle stiffness limited to one area of the body, 2= muscle pain or stiffness noticed spontaneously by a patient who requires analgesics, and 3= incapacitating generalized, severe muscle stiffness or pain. Statistical analysis was conducted using SPSS Version 12.0 software (IBM Corporation, Armonk, NY). The comparison profile of the patients was performed using the analysis of variance test. The incidence and severity of myalgia were compared using the chi-square and independent t-tests. The value of P<0.05 was considered statistically significant.\n\nBODY.RESULTS:\nIn this study, a total of 207 patients were scheduled for outpatient surgery from July 2013 to August 2014. Of them, 29 patients could not meet the entry criteria. Twenty-seven patients had no willingness to participate in the study, and the surgery of three patients was canceled. The remaining 148 patients were randomly assigned into two groups. Two patients in Group N due to nausea, vomiting, and pain and one patient in Group K due to arrhythmia were turned from outpatient to inpatient. Two persons from Group K did not cooperate with the evaluation of POM. The data associated with the 72 patients in Group N and 71 patients in Group K were analyzed (Figure 1). There were no significant differences in terms of age, sex, and duration of surgery between both groups (Table 1). In Group K, 13 (18.1%) out of the 71 patients had myalgia, whereas 36 (50%) out of the 72 patients had myalgia in Group N (P=0.001). 
Grade 1 POM was lower in Group K when compared with Group N (nine in Group K versus 33 in Group N; P<0.001), whereas the incidence of grade 2 POM was comparable among patients of the two groups (Table 2). The baseline values of systolic and diastolic blood pressure and heart rate in both groups were similar. After the induction of anesthesia, the values of systolic and diastolic blood pressure decreased in both groups (P<0.05). However, these changes were somehow similar in the two groups, and repeated measures of variance analysis showed no significant difference in both groups (P>0.05). Changes in heart rate after induction and intubation between the two groups were compared and repeated measures of variance analysis showed no significant difference in the two groups (P>0.05; Table 3).\n\nBODY.DISCUSSION:\nThe results of this study showed that ketamine significantly reduced the incidence of succinylcholine-induced myalgia. Succinylcholine is a muscle relaxant that is commonly used due to deep neuromuscular blocks for intubation in patients undergoing ambulatory anesthesia, but associated myalgia has been shown to occur in 41%–92% of patients.9,10 Since ambulation increases the possibility of myalgia and its intensity, ambulatory patients are suitable for investigating this complication. Therefore, this group of patients was chosen for the study. Myalgia is most prevalent following the use of succinylcholine in the first day of surgery.11 As shown in a study, 92% of patients reported myalgia in the first 24 hours of study and the incidence of myalgia did not differ at 24 hours and 48 hours after surgery.8 In this study, myalgia was evaluated only in the first 24 hours after surgery. It seems that the intrafusal contraction of muscle fibers caused damage to spindles and subsequently myalgia.12 In vitro studies have shown that active, excessive, and continuous muscle contractions increase the uptake of calcium, activation of phospholipase A2, arachidonic acid, and prostaglandins production, thereby increasing the risk of muscle damage and pain.13 The results of this study showed that 50% of the patients in Group N experienced myalgia after anesthesia was induced with propofol. In a meta-analysis, the average incidence of myalgia in the first 24 hours with thiopental was 49.2% and with propofol was 65.4%.14 The incidence of myalgia in the present study was slightly lower when compared with the aforementioned meta-analysis. It appears that participants in this study belonged to low-level POM in general. Ketamine creates a complex combination of both antinociceptive and pronociceptive actions. In fact, based on the mechanisms of analgesia with a direct receptor, the analgesic effect of ketamine is associated with the plasma levels of drugs and pain reduces with subanesthetic doses of ketamine.15,16 In general, it seems that the NMDA receptor plays an important role in the pathophysiology of pain, and the Supra spinal block of the NR2B NMDA subunit by ketamine has an important antinociceptive effect.17 Recent studies have shown the role of NMDA receptors in facilitating the process of pain in the central nervous system, this receptor is also responsible for central sensitization and windup phenomenon. 
The administration of NMDA receptor antagonists prevents the development of sensitization, hyperalgesia, and drug resistance.18 Ketamine is an antagonist of this receptor and has excellent analgesic activity at doses under anesthesia.19 It strengthens the performance of mu and kappa opioid receptors and also has direct effects on the delta opioid receptor. As such, ketamine certainly modulates the response of opioid receptors and reduces tolerance to opioids.20 It also strengthens the endogenesis of antinociceptive systems, in part, by its aminergic (serotonergic and noradrenergic) activation and inhibition of reuptake.21 It inhibits the synthesis of nitric oxide, which probably contributed to the analgesic effect.22 Also, there is evidence suggesting that ketamine interferes with nicotinic, muscarinic, and monoaminergic receptors. As a result of the undesirable and unwanted side effects, ketamine usage is controversial in nonanesthetized patients, but when surgery is done under general anesthesia, these side effects are barely seen.19 In this study, surgery was performed under general anesthesia and none of the patients experienced delirium and nightmare. In our previous study,8 the effect of prophylactic ketamine on myalgia after ECT in 50 patients with major depression was demonstrated, in which 1 mg/kg of propofol and 0.3 mg/kg of ketamine were used for the induction of anesthesia, and muscle relaxation was induced using 0.5 mg/kg of succinylcholine. The results showed that the addition of ketamine to propofol had no effect on the incidence of myalgia after ECT, and only 12% of the patients had myalgia within 24 hours after ECT. However, our previous study8 is different in some aspects from the present study. In our previous study, ECT patients were investigated and the cause of myalgia was found to be different in surgical patients. Second, unlike the surgery, myalgia intensity in ECT is not related to fasciculation and motor activities.23 Third, the dose of anesthetic drugs for the induction of anesthesia and muscle relaxation was less than that of the present study. The result of our previous study, which was carried out on patients undergoing ECT, is not in line with the present study. Cardiovascular changes were similar after the induction of anesthesia in both groups. In theory, hemodynamic stability can be predicted based on the induction of anesthesia with propofol and ketamine, when compared with propofol alone. This is because ketamine increases blood pressure and heart rate; therefore, the addition of this drug to propofol during anesthesia induction can inhibit the reduction of these parameters before the stimulation of laryngoscopy and tracheal intubation. Also, following laryngoscopy and tracheal intubation, the synergistic effect of these two drugs can increase the depth of anesthesia and result in smoother cardiovascular response to airway stimulation. It appears that 0.5 mg/kg of ketamine could not produce such an impact on the hemodynamic system, thereby resulting in a significant effect.\n\nBODY.STUDY LIMITATIONS:\nOne of the limitations of this study is the insufficient time for myalgia evaluation (24 hours). Failure to assess the incidence and severity of fasciculation is another limitation of this study. 
In this study, ketamine was diluted with sterile water, although this is not a wrong practice, but further addition of water, as carried out in this study, could produce ketamine with concentration below the physiological osmotic pressure, which may interfere with the effects of ketamine.\n\nBODY.CONCLUSION:\nThe addition of 0.5 mg/kg of ketamine to propofol for the induction of anesthesia can be effective in reducing the incidence of POM, without any change in hemodynamic indices.\n\n**Question:** Compared to Group N, the same volume (5 mL) of normal saline was injected what was the result of Group K, 0.5 mg/kg of ketamine was injected intravenously, on Changes in heart rate after induction and intubation?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
1110
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Effects of low-dose ketamine on succinylcholine-induced postoperative myalgia in outpatient surgeries: a randomized, double-blind study\n\n ABSTRACT.OBJECTIVE:\nDespite the many complications of succinylcholine, it is still widely used as a superior muscle relaxant for rapid sequence induction. One of these complications is postoperative myalgia (POM). The aim of this study was to investigate the prophylactic effect of low-dose ketamine on the incidence and severity of POM.\n\nABSTRACT.MATERIALS AND METHODS:\nIn this double-blind clinical study, a total of 148 patients scheduled for general anesthesia were randomly divided into two equal groups. Initially, in Group K, 0.5 mg/kg of ketamine was injected intravenously, whereas in Group N, the same volume (5 mL) of normal saline was injected. Thereafter, anesthesia was induced in all patients, by injecting 1.5 mg/kg of fentanyl and 2 mg/kg of propofol intravenously. Following the loss of eyelid reflex, 1.5 mg/kg of succinylcholine was injected intravenously as a muscle relaxant and then the patients were intubated. POM was defined as a pain with no surgical interferences, and its intensity was graded based on a four-point scale. The incidence and severity of myalgia were assessed by a blinded observer 24 hours after surgery.\n\nABSTRACT.RESULTS:\nIn terms of demographic data, the results of this study showed that there is no significant difference between patients in both groups (P>0.05). Overall, the incidence of POM in Group K was significantly less, when compared with Group N (P<0.05), but both groups were comparable based on the grade 2 of POM. After the induction of anesthesia, the systolic and diastolic blood pressure values were found to reduce in both groups (P<0.05). However, the changes were somehow similar, and repeated measures of variance analysis showed no significant difference in the two study groups (P>0.05).\n\nABSTRACT.CONCLUSION:\nThe addition of 0.5 mg/kg of ketamine to propofol for the induction of anesthesia can be effective in reducing the incidence of low-grade POM.\n\nBODY.INTRODUCTION:\nFor >60 years, succinylcholine is still being administered as a selective relaxant for rapid sequence intubation by anesthesiologists in many countries.1 It has been shown to possess unique features such as low cost, fast-acting, short half-life, safe metabolites, and causing excellent muscle relaxation for intubation;2 however, it has many side effects as well. 
Postoperative myalgia (POM), with an incidence rate of ~41%–92%, is one of the most common side effects of this drug and can take several days to cause significant discomfort in patients.3 However, its effect is felt more in the throat, neck, shoulder, and abdominal muscles and is common among patients with outpatient surgery.4 Due to its unknown real context of pathogenesis and in an effort to reduce the incidence and severity of succinylcholine-induced myalgia, various medications including nondepolarizing muscle relaxants, benzodiazepines, magnesium sulfate, opioids, gabapentin, and nonsteroidal anti-inflammatory drugs have been tested, with varying degrees of success.5,6 Ketamine is an N-methyl-d-aspartate (NMDA) receptor antagonist with excellent analgesic activity, and at subanesthetic doses it is capable of preventing central sensitization, hyperalgesia, drug resistance creation, and reduction of postoperative pain.7 Although, ketamine administration could produce unpleasant psychogenic complications, no disagreeable effects have been recorded in patients who received a low dose of ketamine. The effects of ketamine on postelectroconvulsive therapy (ECT) myalgia have been studied earlier, and the results showed that ketamine could not prevent this type of myalgia.8 To the best of our knowledge, the effect of ketamine on the prevention of POM is yet to be determined. Therefore, in this randomized, double-blind study, the prophylactic effect of low-dose ketamine on the incidence and severity of myalgia caused by succinylcholine injection in patients undergoing outpatient surgery was investigated.\n\nBODY.MATERIALS AND METHODS:\nThis study was approved by the Ethics Committee of Kurdistan University of Medical Sciences and was registered in the Iranian Registry of Clinical Trials (IRCT2014062412789N5). A complete description of the study was also presented to each participant, and written informed consent was obtained. The sample size was calculated based on the assumption that the incidence of POM in outpatient cases is ~70%, and intervention that can cause 25% reduction in incidence of POM will be interesting. To consider this difference and type I error equal to 5% and 90% power of the study, 72 patients were required to be in each group (α=0.05 and β=90%), but in order to avoid possible loss of samples during the study, the number of patients in each group was increased to 74. Thus, 148 patients, who were scheduled for outpatient surgery under general anesthesia, belonging to the American Society of Anesthesiologists physical status I and II within the age of 18–70 years, were included in this double-blind randomized clinical trial. Patients with a history of allergy to medications; substance abuse; malignant hyperthermia; myopathy; cardiovascular, liver, and advanced kidney diseases; and the risk of difficult intubation based on physical examination were excluded from the study. Before surgery, patients were evaluated by the anesthesiologists. The study protocol and evaluation of myalgia based on Kararmaz's criteria were described to the patients.4 Based on a computer-generated random sequence, the 148 patients were enrolled into one of the two groups (ketamine or saline). The patients did not receive premedication. In the operating room, standard monitoring of the noninvasive blood pressure, electrocardiogram, heart rate, and pulse oximetry was done, and the preliminary values were recorded. Thereafter, a venous cannula (G=18) was placed on the dorsum of a patient's hand. 
Before the induction of anesthesia in patients of Group K, 0.5 mg/kg of ketamine (a 5 mL volume was reached by adding distilled water) was injected intravenously and 5 mL of normal saline was injected intravenously slowly into patients in Group N. Drugs were prepared in 5 mL syringes by an anesthesia nurse who was unaware of the grouping. After injecting the study drugs, 1.5 mg/kg of fentanyl was injected intravenously within 60 seconds and subsequently 2 mg/kg of propofol was administered intravenously within 30 seconds for the induction of anesthesia. Following the loss of eyelid reflex, 1.5 mg/kg of succinylcholine was injected intravenously and patients were ventilated using bag and mask and with 100% oxygen. After fasciculation, the values of heart rate and blood pressure were measured and recorded, and tracheal intubation was performed. The maintenance of anesthesia continued using a mixture of oxygen, N2O (40/60), and isoflurane 1 MAC. After 5 minutes of tracheal intubation, the values of heart rate and blood pressure were obtained and recorded again. For maintenance of muscle relaxation, 0.2 mg/kg of atracurium was used as suggested by the surgeon. At the end of the surgery, muscle relaxation was reversed using neostigmine and atropine. After the desired spontaneous ventilation, the patients were extubated. The patients were transferred to the recovery room and complications, if any, were recorded during recovery. After meeting the discharge criteria, the patients were discharged to be taken home and cared for by a responsible adult. The incidence and severity of myalgia in the patients were determined 24 hours after surgery by a medical student who was unaware of the grouping. Myalgia is defined as \"a pain with no surgical interference\" and is graded based on Kararmaz et al's4 four-point scale as follows: 0= no muscle pain, 1= muscle stiffness limited to one area of the body, 2= muscle pain or stiffness noticed spontaneously by a patient who requires analgesics, and 3= incapacitating generalized, severe muscle stiffness or pain. Statistical analysis was conducted using SPSS Version 12.0 software (IBM Corporation, Armonk, NY). The comparison profile of the patients was performed using the analysis of variance test. The incidence and severity of myalgia were compared using the chi-square and independent t-tests. The value of P<0.05 was considered statistically significant.\n\nBODY.RESULTS:\nIn this study, a total of 207 patients were scheduled for outpatient surgery from July 2013 to August 2014. Of them, 29 patients could not meet the entry criteria. Twenty-seven patients had no willingness to participate in the study, and the surgery of three patients was canceled. The remaining 148 patients were randomly assigned into two groups. Two patients in Group N due to nausea, vomiting, and pain and one patient in Group K due to arrhythmia were turned from outpatient to inpatient. Two persons from Group K did not cooperate with the evaluation of POM. The data associated with the 72 patients in Group N and 71 patients in Group K were analyzed (Figure 1). There were no significant differences in terms of age, sex, and duration of surgery between both groups (Table 1). In Group K, 13 (18.1%) out of the 71 patients had myalgia, whereas 36 (50%) out of the 72 patients had myalgia in Group N (P=0.001). 
Grade 1 POM was lower in Group K when compared with Group N (nine in Group K versus 33 in Group N; P<0.001), whereas the incidence of grade 2 POM was comparable among patients of the two groups (Table 2). The baseline values of systolic and diastolic blood pressure and heart rate in both groups were similar. After the induction of anesthesia, the values of systolic and diastolic blood pressure decreased in both groups (P<0.05). However, these changes were somehow similar in the two groups, and repeated measures of variance analysis showed no significant difference in both groups (P>0.05). Changes in heart rate after induction and intubation between the two groups were compared and repeated measures of variance analysis showed no significant difference in the two groups (P>0.05; Table 3).\n\nBODY.DISCUSSION:\nThe results of this study showed that ketamine significantly reduced the incidence of succinylcholine-induced myalgia. Succinylcholine is a muscle relaxant that is commonly used due to deep neuromuscular blocks for intubation in patients undergoing ambulatory anesthesia, but associated myalgia has been shown to occur in 41%–92% of patients.9,10 Since ambulation increases the possibility of myalgia and its intensity, ambulatory patients are suitable for investigating this complication. Therefore, this group of patients was chosen for the study. Myalgia is most prevalent following the use of succinylcholine in the first day of surgery.11 As shown in a study, 92% of patients reported myalgia in the first 24 hours of study and the incidence of myalgia did not differ at 24 hours and 48 hours after surgery.8 In this study, myalgia was evaluated only in the first 24 hours after surgery. It seems that the intrafusal contraction of muscle fibers caused damage to spindles and subsequently myalgia.12 In vitro studies have shown that active, excessive, and continuous muscle contractions increase the uptake of calcium, activation of phospholipase A2, arachidonic acid, and prostaglandins production, thereby increasing the risk of muscle damage and pain.13 The results of this study showed that 50% of the patients in Group N experienced myalgia after anesthesia was induced with propofol. In a meta-analysis, the average incidence of myalgia in the first 24 hours with thiopental was 49.2% and with propofol was 65.4%.14 The incidence of myalgia in the present study was slightly lower when compared with the aforementioned meta-analysis. It appears that participants in this study belonged to low-level POM in general. Ketamine creates a complex combination of both antinociceptive and pronociceptive actions. In fact, based on the mechanisms of analgesia with a direct receptor, the analgesic effect of ketamine is associated with the plasma levels of drugs and pain reduces with subanesthetic doses of ketamine.15,16 In general, it seems that the NMDA receptor plays an important role in the pathophysiology of pain, and the Supra spinal block of the NR2B NMDA subunit by ketamine has an important antinociceptive effect.17 Recent studies have shown the role of NMDA receptors in facilitating the process of pain in the central nervous system, this receptor is also responsible for central sensitization and windup phenomenon. 
The administration of NMDA receptor antagonists prevents the development of sensitization, hyperalgesia, and drug resistance.18 Ketamine is an antagonist of this receptor and has excellent analgesic activity at doses under anesthesia.19 It strengthens the performance of mu and kappa opioid receptors and also has direct effects on the delta opioid receptor. As such, ketamine certainly modulates the response of opioid receptors and reduces tolerance to opioids.20 It also strengthens the endogenesis of antinociceptive systems, in part, by its aminergic (serotonergic and noradrenergic) activation and inhibition of reuptake.21 It inhibits the synthesis of nitric oxide, which probably contributed to the analgesic effect.22 Also, there is evidence suggesting that ketamine interferes with nicotinic, muscarinic, and monoaminergic receptors. As a result of the undesirable and unwanted side effects, ketamine usage is controversial in nonanesthetized patients, but when surgery is done under general anesthesia, these side effects are barely seen.19 In this study, surgery was performed under general anesthesia and none of the patients experienced delirium and nightmare. In our previous study,8 the effect of prophylactic ketamine on myalgia after ECT in 50 patients with major depression was demonstrated, in which 1 mg/kg of propofol and 0.3 mg/kg of ketamine were used for the induction of anesthesia, and muscle relaxation was induced using 0.5 mg/kg of succinylcholine. The results showed that the addition of ketamine to propofol had no effect on the incidence of myalgia after ECT, and only 12% of the patients had myalgia within 24 hours after ECT. However, our previous study8 is different in some aspects from the present study. In our previous study, ECT patients were investigated and the cause of myalgia was found to be different in surgical patients. Second, unlike the surgery, myalgia intensity in ECT is not related to fasciculation and motor activities.23 Third, the dose of anesthetic drugs for the induction of anesthesia and muscle relaxation was less than that of the present study. The result of our previous study, which was carried out on patients undergoing ECT, is not in line with the present study. Cardiovascular changes were similar after the induction of anesthesia in both groups. In theory, hemodynamic stability can be predicted based on the induction of anesthesia with propofol and ketamine, when compared with propofol alone. This is because ketamine increases blood pressure and heart rate; therefore, the addition of this drug to propofol during anesthesia induction can inhibit the reduction of these parameters before the stimulation of laryngoscopy and tracheal intubation. Also, following laryngoscopy and tracheal intubation, the synergistic effect of these two drugs can increase the depth of anesthesia and result in smoother cardiovascular response to airway stimulation. It appears that 0.5 mg/kg of ketamine could not produce such an impact on the hemodynamic system, thereby resulting in a significant effect.\n\nBODY.STUDY LIMITATIONS:\nOne of the limitations of this study is the insufficient time for myalgia evaluation (24 hours). Failure to assess the incidence and severity of fasciculation is another limitation of this study. 
In this study, ketamine was diluted with sterile water, although this is not a wrong practice, but further addition of water, as carried out in this study, could produce ketamine with concentration below the physiological osmotic pressure, which may interfere with the effects of ketamine.\n\nBODY.CONCLUSION:\nThe addition of 0.5 mg/kg of ketamine to propofol for the induction of anesthesia can be effective in reducing the incidence of POM, without any change in hemodynamic indices.\n\n**Question:** Compared to Group N, the same volume (5 mL) of normal saline was injected what was the result of Group K, 0.5 mg/kg of ketamine was injected intravenously, on the incidence of POM?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly decreased\n" } ]
significantly decreased
1125
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Clinical efficacy of \n\n ABSTRACT:\nAging has become one of the distinctive demographic phenomena in the 21st century and its social, economic and health implications are the most challenging issues. Senile Memory Impairment is a common condition characterized by mild symptoms of cognitive decline and occurs as a part of the normal aging process. It can be correlated to \"Jarajanya Smrtibhramsha\" according to Ayurveda. The present study deals with the efficacy of Guduchyadi Medhya Rasayana on Senile Memory Impairment. A total of 138 patients aged in between 55–75 years were registered and randomly divided into two groups as the trial and control groups. The drugs were administered for 3. The trial drug showed memory enhancement, anti-stress, anti-depressant and anxiolytic properties. The trial group showed better results in the management compared to the control group.\n\nBODY.INTRODUCTION:\nAging has become a great issue for many countries due to a worldwide life prolongation. Number of people over 60 years has grown rapidly to 10% of the world population.[1] In India too, in the last decade, increasing population aged has been projected significantly.[2] So, it is associated with physiological, functional, and pathological changes affecting physical, psychological, emotional and social well-being.[3] The major consequences of the increasing number of individuals in advanced age group is the increase in number of patients suffering from age-related disorders. Conservative estimation revealed that 25% of the elderly persons have significant psychiatric symptoms[4] and the number of psychologicaly ill elderly persons was estimated to be 9 million in the year 2005.[5] Though psychological disorders are common among the aged, it frequently remains undetected and untreated. It induces functional disability, disturbs rehabilitation, burdens the health system and impairs the quality of life of elderly individuals.[6] Research based on the psychological health revealed that age-associated cognitive like memory disorders are found in high prevalence among the aged.[7] Subjective complaints from individuals with age-associated memory impairment often include difficulties in remembering names and words, etc. So, this condition is the result of physiological changes in the aging brain and not a specific neurological disorder.[8] In Ayurveda literature, impairment of memory is mentioned as Smritibhramsha which occurs due to vitiation of Rajas and Tamas Doshas in the mind.[9] The appearance of Jara (aging) at the appropriate age is termed as \"Kalaja Jara\",[10] and memory impairment appears as a clinical Rupa (feature) of Jara,[11] hence Senile Memory Impairment can be correlated as Jarajanya Smritibhramsha according to Ayurveda. 
In Sarangadhara Samhita, it is clearly emphasized that Medha (intellect) and Buddhi (wisdom) deteriorate in the fourth and ninth decades of life accordingly.[12] Both are closely associated with the mind and help for the normal function of the mind. If these components decrease in the mind, then it will affect the normal function of mind. Moreover, Caraka has noted that cognitive functions are thoroughly affected in old age as Grahana (power of understanding), Dharana (power of retention), Smarana (power of memorizing), Vachana (speech) and functions of sense organs qualitatively.[13]\n\nBODY.INTRODUCTION.IMPORTANCE OF :\nThe aging process occurs over all the body. It does not occur in the same way in every human being. People's lifestyles also have additional influence on aging.[14] The decline in brain weight and volume, degeneration of intracellular organelles, changes in cellular DNA and RNA, neuronal degeneration as well as significant loss of synapses are the salient features of aging in the brain.[15] Therefore, psychological and neurological manifestations often form the hallmark of senility. Hence, Ayurveda clearly emphasizes that in the fourth and ninth decades of lifespan, individuals are subjected to the loss of Medha (intellect) and Buddhi (wisdom). Hence, it seems therapeutic measures of Medhya Rasayanas are exclusively needed for elders because they are brain tonics and improve the psychological faculties. It is responsible for Dhi (intelligence), Dhrti (retention power), Smriti (memory) and also possesses anti-stress and adaptogenic effect.[16] Having considered all literary views and ideas as above mentioned, Guduchyadi Medhya Rasayana (GMR)[17] has been attempted to evaluate its efficacy on Senile Memory Impairment.\n\nBODY.MATERIALS AND METHODS.SELECTION OF THE PATIENTS:\nTotal 138 patients having symptoms of memory impairment attending O.P.D. and I.P.D. of Department of Kaya Chikitsa of I.P.G.T. and R.A., Gujarat Ayurved University, Jamnagar were selected for the present study. A purposive sampling technique was adopted irrespective of their sex, religion, occupation, educational status, etc.\n\nBODY.MATERIALS AND METHODS.CRITERIA OF INCLUSION:\nPatients aged 55 and above up to75 years of age.Complaints of memory impairment on the basis of short term and long term.\nShort term, e.g. misplacing objects, difficulty in remembering multiple items to be purchased, difficulty in recalling information quickly, problems in remembering names and telephone numbers.Long term, e.g. difficulty in recall of special events of personal life, difficulty in recall of previous histories of life and recognition.Patients suffering from dementia.\n\nBODY.MATERIALS AND METHODS.CRITERIA OF EXCLUSION:\nPatients below 55 years and above 75 years of age.Patients with evidence of delirium, confusion or other disturbances of consciousness, Parkinson's disease, stroke, intracranial hemorrhage, brain tumors, history of alcoholism or drug dependence, use of any psychotropic drug, Alzheimer's disease, diabetes mellitus, etc.\n\nBODY.MATERIALS AND METHODS.INVESTIGATIONS:\nRoutine hematological and biochemical tests were carried out before and after the treatment. 
Among the biochemical tests, serum Acetylcholine Esterase (AChE) estimation, as a biomarker of Senile Memory Impairment, was carried out.\n\nBODY.MATERIALS AND METHODS.PLAN OF THE STUDY:\nPatients fulfilling the inclusion criteria were grouped into two after obtaining written consent.\n\nBODY.MATERIALS AND METHODS.GROUP A (TRIAL GROUP):\nPatients of this group were administered with the Granules of Guduchyadi Medhya Rasayana (GMR), which contains Guduchi (Tinospora cordifolia Wild.), Apamarga (Achyranthes aspera Linn.), Vidanga (Embelia ribes Burm. f.), Shankhapushpi (Convolvulus pluricaulis Chois.), Vaca (Acorus calamus Linn.), Haritaki (Terminalia chebula Zetz.), Kushtha (Saussurea lappa C.B. Clarke), Shatavari (Asparagus racemosus Wild.), Cow's ghee and sugar. Dosage: 5 g, 3 times per day, after meal Anupana: Water in sufficient quantity Duration: 3 months Total 73 patients were registered.\n\nBODY.MATERIALS AND METHODS.GROUP B (CONTROL GROUP):\nPatients of this group were administered Sarkaradi Granules (SG). Dosage: 3 g, 3 times per day, after meal Anupana: Water in sufficient quantity Duration: 3 months Total 65 patients were registered. Both the drugs were prepared at Pharmacy, Gujarat Ayurved University, Jamnagar.\n\nBODY.MATERIALS AND METHODS.CRITERIA OF ASSESSMENT:\nEffects of Medhya Rasayana on memory impairment were assessed in two ways, i.e. as subjective and objective, before and after the treatment. After completion of the treatment, the patients were graded into five groups based on their improvement in order to ascertain the overall therapeutic efficacy. The data so obtained were analyzed using SPSS and Sigma Stat statistical packages. Observations and results pertaining to clinical parameters were presented in tabular form along with the findings of statistical analysis. Paired \"t\" test was adopted to find out the effect of therapy for each group individually, while unpaired \"t\" test was used for comparative effect.\n\nBODY.MATERIALS AND METHODS.CRITERIA OF ASSESSMENT.SUBJECTIVE CRITERIA:\nAssessments on improvements in signs and symptoms were carried out using an appropriately designed scoring method, with the scores ranging from 0 to 4. Cognitive state was assessed by Mini Mental State Examination,[18] and psychological health of the patients was evaluated with Hamilton Rating Scales for Depression and Anxiety.[19]\n\nBODY.MATERIALS AND METHODS.CRITERIA OF ASSESSMENT.OBJECTIVE CRITERIA:\nSerum AChE estimation was carried out as a biomarker of Senile Memory Impairment.\n\nBODY.OBSERVATIONS AND RESULTS:\nTotal 138 patients of Senile Memory Impairment were registered in the study. Forty-three patients in the Group A (trial group) and 56 patients in the Group B (control group) completed the treatment, while 30 patients in Group A and 9 patients in Group B discontinued. In this study, various etiological factors were identified. Maximum number of patients reported Chinta (thoughts; 92%), followed by Krodha (anger; 83.3%), Shoka (worries; 70.2%), Chittodvega (anxiety; 64.4%), Bhaya (fear; 51.4%), Ruksha Ahara (dried food; 55.7%), Rattri Jagarana (awaking at night; 87%), etc.\n\nBODY.OBSERVATIONS AND RESULTS.EFFECT OF THERAPY:\nBoth the groups showed highly significant (P<0.001) relief from all the chief complaints [Table 1 and Table 2]. 
Table 1 Effect of GMR in Group A on the symptoms of Senile Memory Impairment Table 2 Effect of SG in Group B on the symptoms of Senile Memory Impairment In comparison to Group B, Group A showed highly significant relief in symptoms such as names forgotten (63.56%), numbers forgotten (61.62%), and memory impairment in recalling special events in life (59.71%) at a level of P<0.001, and forgetfulness (68.60%), losing of valuables (73.64%), difficulty in remembering multiple items to be purchased (68.21%), inability to concentrate (68.14%), impaired recognition (79.7%), memory impairment of recall of previous histories (60.66%) at a level of P<0.01, and a significant relief (P<0.05) from difficulty in recalling information quickly (71.57%), depression (89.66%), and inability to relax (82.93%) [Table 3]. Table 3 Comparison of the effects of GMR and SG on the symptoms of Senile Memory Impairment With respect to Dosha Dushti, the Group A showed significant relief from Vata Dushti (85.48%), Pitta Dushti (94.15%) and in Kapha Dushti (84.3%), while the Group B showed significant relief from Vata Dushti (68.3%), Pitta Dushti (61.78%) and in Kapha Dushti (62.75%). On Hamilton Anxiety Rating Scale, both the groups showed highly significant relief (P<0.001) from all the symptoms individually. In comparison to the results obtained from Group B, the Group A showed highly significant relief with respect to intellectual cognitive functions (79.03%), somatic (muscular) disorders (83.71%), depression (74.12%), and a significant relief from fear (81.11%). On Hamilton Depression Rating Scale, both the groups showed highly significant relief (P<0.001) from all the symptoms individually. However, the comparative results were insignificant. The results of symptoms of Mini Mental State Examination showed that both the groups provided highly significant (P<0.001) improvement individually. In comparison with SG, GMR showed highly significant improvement in registration (9.30%), attention and calculation (24.49%), and a significant improvement in language (13.51%) and in total score (14.89%) [Table 4]. The results obtained with hematological parameters reveal that Group A showed [Table 5] highly significant increase in total RBC (1.20%) count, a significant increase in Hb% (1.34%), packed cell volume (PCV) (1.30%) and a significant decrease in erythrocyte sedimentation rate (ESR) value (7.40%), while Group B showed highly significant decrease in RBC count (4.30%), PCV (3.34%), and a significant decrease in Hb% (1.55%). Table 4 Comparison of the effects of GMR and SG on the symptoms of mini mental state examination Table 5 Effect of GMR on hematological parameters in Group A (n=43) With regard to the biochemical parameters, Group A showed highly significant decrease (P<0.01) in serum cholesterol, serum triglyceride, low density lipoprotein (LDL), and very low density lipoprotein (VLDL) levels, while high density lipoprotein (HDL) increased highly significantly (P<0.01) and a significant decrease (P<0.05) was found in blood urea (13.80%) as well as serum AChE (3.67%) within normal limits [Table 6]. The Group B showed highly significant increase in blood urea (10.04%), but it was within normal limit. Other parameters were found to be statistically insignificant. 
Table 6 Effect of GMR on biochemical parameters in Group A (n=43) Regarding the overall effect, Group A showed marked improvement in 18.6% patients, moderate improvement in 74.4% patients and mild improvement in 7.0% patients, while Group B showed moderate improvement in 32.2% patients and mild improvement in 67.8% patients.\n\nBODY.DISCUSSION:\nIt has been mentioned in texts that GMR contributes to enhance the memory power within a short period. On the basis of Rasapanchaka, the pharmacotherapeutic properties reveal that it contains Katu, Tikta and Madhura in Rasa, and the effect of Katu and Madhura in Vipaka. Moreover, due to various properties of Tikshna, Laghu, Ruksha, Sara, Guru and Snigdha, it possesses markedly Ushna Virya and mildly Sheeta Virya. Accordingly, GMR was considered to be an effective treatment for Tridoshahara. In Ayurveda, it is clearly emphasized that Smritibhramsha (memory impairment) results due to Rajas and Tamasdoshas in the mind. These doshas are responsible for producing avarana over the mind. The relief from the symptoms has been found to be statistically highly significant. These observations reveal that the drug might have counteracted or pacified Raja and Tamo Doshas, so Avarana of mind was removed and the disease was subsided or controlled. Further pharmacological studies have proven that Vaca (Acorus calemus Linn.) and Shankhapushpi (Convolvulus pluricaulis Chois) have anxiolytic, memory enhancing, anti-stress, antidepressant, tranquilizing and sedative activities.[20, 21] Further, it was found that there was a significant improvement by 14.84% on the total score of the Mini Mental State Examination [Table 4]. So, these evidences indicate the positive role of Medhya property of GMR on Senile Memory Impairment among elderly individuals. Considering the symptoms of memory impairment along with the cognitive decline, a highly significant result was obtained both from GMR and SG [Table 1 and Table 2]. However, in comparison, GMR has given better relief for the majority of symptoms [Table 3]. The Medhya Rasayanas are special Ayurvedic neuro nutraceuticals which are specific to brain and nervous system. They are claimed to promote cognitive function of the brain.[22] Further, it was mentioned in the Caraka Samhita that Shankhapushpi, a Rasayana drug, has the property of promoting intellect.[23] Moreover, Group B also showed improvement of memory due to the Medhya effect of ghee.[24] The serum AChE value has shown an upper range in both the groups before the treatment and a significant decrease was observed after the treatment in Group A [Table 6]. The recent advances in clinical research reveal that AChE activity is increased in human cerebrospinal fluid during aging[25] and scientists believe that dementia-related memory issues stem from the increased breakdown of acetylcholine due to a preponderance of AChE which degrades acetylcholine.[26] Furthermore, a decrease in AChE activity has noted in the treated animals, leading to an increase in the cholinergic activity in the brain.[27] The significant reduction of AChE in Group A shows that GMR exerts its effect on memory enhancement by increasing the cholinergic function by preventing the breakdown of acetylcholine in the aging human brain.\n\nBODY.CONCLUSIONS:\nSenile Memory Impairment can be correlated to Jarajanya Smritibhramsha according to Ayurveda, and imbalance of Trigunas through aggravation of Rajas and Tamas Doshas in the mind leads to development of Senile Memory Impairment among elders. 
Modern literature reveals that Senile Memory Impairment is closely associated with cognitive decline during aging. All the patients of Senile Memory Impairment had features of impairment of short memory and long memory in this study. Manasika Nidana (psychic factors), Aharaja (dietary) and Viharaja (behavioral) factors, which were reported by the patients, play a major role in the pathogenesis of Senile Memory Impairment by vitiating Rajas and Tamas Doshas in the mind. Those factors were Chinta (thoughts), Krodha (anger), Shoka (worries), Chittodvega (anxiety), Bhaya (fear), Moha (illusions), Ruksha Ahara (dried food), Ratri Jagarana (awaking at night), etc. GMR has shown statistically highly significant improvement on short memory impairment as well as long memory impairment in Senile Memory Impairment, at the level of P<0.001. Reduction of AChE showed enhancement of memory functions due to improvement of cholinergic actions in the human brain in the trial group (Group A). GMR has shown memory enhancing, antidepressant, anti-stress, and anxiolytic potential. Hence, Guduchyadi Medhya Rasayana therapy has proved to provide better improvement in comparison to the control group.\n\n**Question:** Compared to administered Sarkaradi Granules (SG). what was the result of administered with the Granules of Guduchyadi Medhya Rasayana (GMR), which contains Guduchi (Tinospora cordifolia Wild.), Apamarga (Achyranthes aspera Linn.), Vidanga (Embelia ribes Burm. f.), Shankhapushpi (Convolvulus pluricaulis Chois.), Vaca (Acorus calamus Linn.), Haritaki (Terminalia chebula Zetz.), Kushtha (Saussurea lappa C.B. Clarke), Shatavari (Asparagus racemosus Wild.), Cow's ghee and sugar. on serum cholesterol, serum triglyceride, low density lipoprotein (LDL), and very low density lipoprotein (VLDL) levels?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly decreased\n" } ]
significantly decreased
1127
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Clinical efficacy of \n\n ABSTRACT:\nAging has become one of the distinctive demographic phenomena in the 21st century and its social, economic and health implications are the most challenging issues. Senile Memory Impairment is a common condition characterized by mild symptoms of cognitive decline and occurs as a part of the normal aging process. It can be correlated to \"Jarajanya Smrtibhramsha\" according to Ayurveda. The present study deals with the efficacy of Guduchyadi Medhya Rasayana on Senile Memory Impairment. A total of 138 patients aged in between 55–75 years were registered and randomly divided into two groups as the trial and control groups. The drugs were administered for 3. The trial drug showed memory enhancement, anti-stress, anti-depressant and anxiolytic properties. The trial group showed better results in the management compared to the control group.\n\nBODY.INTRODUCTION:\nAging has become a great issue for many countries due to a worldwide life prolongation. Number of people over 60 years has grown rapidly to 10% of the world population.[1] In India too, in the last decade, increasing population aged has been projected significantly.[2] So, it is associated with physiological, functional, and pathological changes affecting physical, psychological, emotional and social well-being.[3] The major consequences of the increasing number of individuals in advanced age group is the increase in number of patients suffering from age-related disorders. Conservative estimation revealed that 25% of the elderly persons have significant psychiatric symptoms[4] and the number of psychologicaly ill elderly persons was estimated to be 9 million in the year 2005.[5] Though psychological disorders are common among the aged, it frequently remains undetected and untreated. It induces functional disability, disturbs rehabilitation, burdens the health system and impairs the quality of life of elderly individuals.[6] Research based on the psychological health revealed that age-associated cognitive like memory disorders are found in high prevalence among the aged.[7] Subjective complaints from individuals with age-associated memory impairment often include difficulties in remembering names and words, etc. So, this condition is the result of physiological changes in the aging brain and not a specific neurological disorder.[8] In Ayurveda literature, impairment of memory is mentioned as Smritibhramsha which occurs due to vitiation of Rajas and Tamas Doshas in the mind.[9] The appearance of Jara (aging) at the appropriate age is termed as \"Kalaja Jara\",[10] and memory impairment appears as a clinical Rupa (feature) of Jara,[11] hence Senile Memory Impairment can be correlated as Jarajanya Smritibhramsha according to Ayurveda. 
In Sarangadhara Samhita, it is clearly emphasized that Medha (intellect) and Buddhi (wisdom) deteriorate in the fourth and ninth decades of life accordingly.[12] Both are closely associated with the mind and help for the normal function of the mind. If these components decrease in the mind, then it will affect the normal function of mind. Moreover, Caraka has noted that cognitive functions are thoroughly affected in old age as Grahana (power of understanding), Dharana (power of retention), Smarana (power of memorizing), Vachana (speech) and functions of sense organs qualitatively.[13]\n\nBODY.INTRODUCTION.IMPORTANCE OF :\nThe aging process occurs over all the body. It does not occur in the same way in every human being. People's lifestyles also have additional influence on aging.[14] The decline in brain weight and volume, degeneration of intracellular organelles, changes in cellular DNA and RNA, neuronal degeneration as well as significant loss of synapses are the salient features of aging in the brain.[15] Therefore, psychological and neurological manifestations often form the hallmark of senility. Hence, Ayurveda clearly emphasizes that in the fourth and ninth decades of lifespan, individuals are subjected to the loss of Medha (intellect) and Buddhi (wisdom). Hence, it seems therapeutic measures of Medhya Rasayanas are exclusively needed for elders because they are brain tonics and improve the psychological faculties. It is responsible for Dhi (intelligence), Dhrti (retention power), Smriti (memory) and also possesses anti-stress and adaptogenic effect.[16] Having considered all literary views and ideas as above mentioned, Guduchyadi Medhya Rasayana (GMR)[17] has been attempted to evaluate its efficacy on Senile Memory Impairment.\n\nBODY.MATERIALS AND METHODS.SELECTION OF THE PATIENTS:\nTotal 138 patients having symptoms of memory impairment attending O.P.D. and I.P.D. of Department of Kaya Chikitsa of I.P.G.T. and R.A., Gujarat Ayurved University, Jamnagar were selected for the present study. A purposive sampling technique was adopted irrespective of their sex, religion, occupation, educational status, etc.\n\nBODY.MATERIALS AND METHODS.CRITERIA OF INCLUSION:\nPatients aged 55 and above up to75 years of age.Complaints of memory impairment on the basis of short term and long term.\nShort term, e.g. misplacing objects, difficulty in remembering multiple items to be purchased, difficulty in recalling information quickly, problems in remembering names and telephone numbers.Long term, e.g. difficulty in recall of special events of personal life, difficulty in recall of previous histories of life and recognition.Patients suffering from dementia.\n\nBODY.MATERIALS AND METHODS.CRITERIA OF EXCLUSION:\nPatients below 55 years and above 75 years of age.Patients with evidence of delirium, confusion or other disturbances of consciousness, Parkinson's disease, stroke, intracranial hemorrhage, brain tumors, history of alcoholism or drug dependence, use of any psychotropic drug, Alzheimer's disease, diabetes mellitus, etc.\n\nBODY.MATERIALS AND METHODS.INVESTIGATIONS:\nRoutine hematological and biochemical tests were carried out before and after the treatment. 
Among the biochemical tests, serum Acetylcholine Esterase (AChE) estimation, as a biomarker of Senile Memory Impairment, was carried out.\n\nBODY.MATERIALS AND METHODS.PLAN OF THE STUDY:\nPatients fulfilling the inclusion criteria were grouped into two after obtaining written consent.\n\nBODY.MATERIALS AND METHODS.GROUP A (TRIAL GROUP):\nPatients of this group were administered with the Granules of Guduchyadi Medhya Rasayana (GMR), which contains Guduchi (Tinospora cordifolia Wild.), Apamarga (Achyranthes aspera Linn.), Vidanga (Embelia ribes Burm. f.), Shankhapushpi (Convolvulus pluricaulis Chois.), Vaca (Acorus calamus Linn.), Haritaki (Terminalia chebula Zetz.), Kushtha (Saussurea lappa C.B. Clarke), Shatavari (Asparagus racemosus Wild.), Cow's ghee and sugar. Dosage: 5 g, 3 times per day, after meal Anupana: Water in sufficient quantity Duration: 3 months Total 73 patients were registered.\n\nBODY.MATERIALS AND METHODS.GROUP B (CONTROL GROUP):\nPatients of this group were administered Sarkaradi Granules (SG). Dosage: 3 g, 3 times per day, after meal Anupana: Water in sufficient quantity Duration: 3 months Total 65 patients were registered. Both the drugs were prepared at Pharmacy, Gujarat Ayurved University, Jamnagar.\n\nBODY.MATERIALS AND METHODS.CRITERIA OF ASSESSMENT:\nEffects of Medhya Rasayana on memory impairment were assessed in two ways, i.e. as subjective and objective, before and after the treatment. After completion of the treatment, the patients were graded into five groups based on their improvement in order to ascertain the overall therapeutic efficacy. The data so obtained were analyzed using SPSS and Sigma Stat statistical packages. Observations and results pertaining to clinical parameters were presented in tabular form along with the findings of statistical analysis. Paired \"t\" test was adopted to find out the effect of therapy for each group individually, while unpaired \"t\" test was used for comparative effect.\n\nBODY.MATERIALS AND METHODS.CRITERIA OF ASSESSMENT.SUBJECTIVE CRITERIA:\nAssessments on improvements in signs and symptoms were carried out using an appropriately designed scoring method, with the scores ranging from 0 to 4. Cognitive state was assessed by Mini Mental State Examination,[18] and psychological health of the patients was evaluated with Hamilton Rating Scales for Depression and Anxiety.[19]\n\nBODY.MATERIALS AND METHODS.CRITERIA OF ASSESSMENT.OBJECTIVE CRITERIA:\nSerum AChE estimation was carried out as a biomarker of Senile Memory Impairment.\n\nBODY.OBSERVATIONS AND RESULTS:\nTotal 138 patients of Senile Memory Impairment were registered in the study. Forty-three patients in the Group A (trial group) and 56 patients in the Group B (control group) completed the treatment, while 30 patients in Group A and 9 patients in Group B discontinued. In this study, various etiological factors were identified. Maximum number of patients reported Chinta (thoughts; 92%), followed by Krodha (anger; 83.3%), Shoka (worries; 70.2%), Chittodvega (anxiety; 64.4%), Bhaya (fear; 51.4%), Ruksha Ahara (dried food; 55.7%), Rattri Jagarana (awaking at night; 87%), etc.\n\nBODY.OBSERVATIONS AND RESULTS.EFFECT OF THERAPY:\nBoth the groups showed highly significant (P<0.001) relief from all the chief complaints [Table 1 and Table 2]. 
Table 1 Effect of GMR in Group A on the symptoms of Senile Memory Impairment Table 2 Effect of SG in Group B on the symptoms of Senile Memory Impairment In comparison to Group B, Group A showed highly significant relief in symptoms such as names forgotten (63.56%), numbers forgotten (61.62%), and memory impairment in recalling special events in life (59.71%) at a level of P<0.001, and forgetfulness (68.60%), losing of valuables (73.64%), difficulty in remembering multiple items to be purchased (68.21%), inability to concentrate (68.14%), impaired recognition (79.7%), memory impairment of recall of previous histories (60.66%) at a level of P<0.01, and a significant relief (P<0.05) from difficulty in recalling information quickly (71.57%), depression (89.66%), and inability to relax (82.93%) [Table 3]. Table 3 Comparison of the effects of GMR and SG on the symptoms of Senile Memory Impairment With respect to Dosha Dushti, the Group A showed significant relief from Vata Dushti (85.48%), Pitta Dushti (94.15%) and in Kapha Dushti (84.3%), while the Group B showed significant relief from Vata Dushti (68.3%), Pitta Dushti (61.78%) and in Kapha Dushti (62.75%). On Hamilton Anxiety Rating Scale, both the groups showed highly significant relief (P<0.001) from all the symptoms individually. In comparison to the results obtained from Group B, the Group A showed highly significant relief with respect to intellectual cognitive functions (79.03%), somatic (muscular) disorders (83.71%), depression (74.12%), and a significant relief from fear (81.11%). On Hamilton Depression Rating Scale, both the groups showed highly significant relief (P<0.001) from all the symptoms individually. However, the comparative results were insignificant. The results of symptoms of Mini Mental State Examination showed that both the groups provided highly significant (P<0.001) improvement individually. In comparison with SG, GMR showed highly significant improvement in registration (9.30%), attention and calculation (24.49%), and a significant improvement in language (13.51%) and in total score (14.89%) [Table 4]. The results obtained with hematological parameters reveal that Group A showed [Table 5] highly significant increase in total RBC (1.20%) count, a significant increase in Hb% (1.34%), packed cell volume (PCV) (1.30%) and a significant decrease in erythrocyte sedimentation rate (ESR) value (7.40%), while Group B showed highly significant decrease in RBC count (4.30%), PCV (3.34%), and a significant decrease in Hb% (1.55%). Table 4 Comparison of the effects of GMR and SG on the symptoms of mini mental state examination Table 5 Effect of GMR on hematological parameters in Group A (n=43) With regard to the biochemical parameters, Group A showed highly significant decrease (P<0.01) in serum cholesterol, serum triglyceride, low density lipoprotein (LDL), and very low density lipoprotein (VLDL) levels, while high density lipoprotein (HDL) increased highly significantly (P<0.01) and a significant decrease (P<0.05) was found in blood urea (13.80%) as well as serum AChE (3.67%) within normal limits [Table 6]. The Group B showed highly significant increase in blood urea (10.04%), but it was within normal limit. Other parameters were found to be statistically insignificant. 
Table 6 Effect of GMR on biochemical parameters in Group A (n=43) Regarding the overall effect, Group A showed marked improvement in 18.6% patients, moderate improvement in 74.4% patients and mild improvement in 7.0% patients, while Group B showed moderate improvement in 32.2% patients and mild improvement in 67.8% patients.\n\nBODY.DISCUSSION:\nIt has been mentioned in texts that GMR contributes to enhance the memory power within a short period. On the basis of Rasapanchaka, the pharmacotherapeutic properties reveal that it contains Katu, Tikta and Madhura in Rasa, and the effect of Katu and Madhura in Vipaka. Moreover, due to various properties of Tikshna, Laghu, Ruksha, Sara, Guru and Snigdha, it possesses markedly Ushna Virya and mildly Sheeta Virya. Accordingly, GMR was considered to be an effective treatment for Tridoshahara. In Ayurveda, it is clearly emphasized that Smritibhramsha (memory impairment) results due to Rajas and Tamasdoshas in the mind. These doshas are responsible for producing avarana over the mind. The relief from the symptoms has been found to be statistically highly significant. These observations reveal that the drug might have counteracted or pacified Raja and Tamo Doshas, so Avarana of mind was removed and the disease was subsided or controlled. Further pharmacological studies have proven that Vaca (Acorus calemus Linn.) and Shankhapushpi (Convolvulus pluricaulis Chois) have anxiolytic, memory enhancing, anti-stress, antidepressant, tranquilizing and sedative activities.[20, 21] Further, it was found that there was a significant improvement by 14.84% on the total score of the Mini Mental State Examination [Table 4]. So, these evidences indicate the positive role of Medhya property of GMR on Senile Memory Impairment among elderly individuals. Considering the symptoms of memory impairment along with the cognitive decline, a highly significant result was obtained both from GMR and SG [Table 1 and Table 2]. However, in comparison, GMR has given better relief for the majority of symptoms [Table 3]. The Medhya Rasayanas are special Ayurvedic neuro nutraceuticals which are specific to brain and nervous system. They are claimed to promote cognitive function of the brain.[22] Further, it was mentioned in the Caraka Samhita that Shankhapushpi, a Rasayana drug, has the property of promoting intellect.[23] Moreover, Group B also showed improvement of memory due to the Medhya effect of ghee.[24] The serum AChE value has shown an upper range in both the groups before the treatment and a significant decrease was observed after the treatment in Group A [Table 6]. The recent advances in clinical research reveal that AChE activity is increased in human cerebrospinal fluid during aging[25] and scientists believe that dementia-related memory issues stem from the increased breakdown of acetylcholine due to a preponderance of AChE which degrades acetylcholine.[26] Furthermore, a decrease in AChE activity has noted in the treated animals, leading to an increase in the cholinergic activity in the brain.[27] The significant reduction of AChE in Group A shows that GMR exerts its effect on memory enhancement by increasing the cholinergic function by preventing the breakdown of acetylcholine in the aging human brain.\n\nBODY.CONCLUSIONS:\nSenile Memory Impairment can be correlated to Jarajanya Smritibhramsha according to Ayurveda, and imbalance of Trigunas through aggravation of Rajas and Tamas Doshas in the mind leads to development of Senile Memory Impairment among elders. 
Modern literature reveals that Senile Memory Impairment is closely associated with cognitive decline during aging. All the patients of Senile Memory Impairment had features of impairment of short memory and long memory in this study. Manasika Nidana (psychic factors), Aharaja (dietary) and Viharaja (behavioral) factors, which were reported by the patients, play a major role in the pathogenesis of Senile Memory Impairment by vitiating Rajas and Tamas Doshas in the mind. Those factors were Chinta (thoughts), Krodha (anger), Shoka (worries), Chittodvega (anxiety), Bhaya (fear), Moha (illusions), Ruksha Ahara (dried food), Ratri Jagarana (awaking at night), etc. GMR has shown statistically highly significant improvement on short memory impairment as well as long memory impairment in Senile Memory Impairment, at the level of P<0.001. Reduction of AChE showed enhancement of memory functions due to improvement of cholinergic actions in the human brain in the trial group (Group A). GMR has shown memory enhancing, antidepressant, anti-stress, and anxiolytic potential. Hence, Guduchyadi Medhya Rasayana therapy has proved to provide better improvement in comparison to the control group.\n\n**Question:** Compared to administered Sarkaradi Granules (SG). what was the result of administered with the Granules of Guduchyadi Medhya Rasayana (GMR), which contains Guduchi (Tinospora cordifolia Wild.), Apamarga (Achyranthes aspera Linn.), Vidanga (Embelia ribes Burm. f.), Shankhapushpi (Convolvulus pluricaulis Chois.), Vaca (Acorus calamus Linn.), Haritaki (Terminalia chebula Zetz.), Kushtha (Saussurea lappa C.B. Clarke), Shatavari (Asparagus racemosus Wild.), Cow's ghee and sugar. on overall effect?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ]
significantly increased
1135
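The Methods of the trial quoted in the rows above state that a paired "t" test was used for the within-group effect of therapy and an unpaired "t" test for the comparative effect between groups. A minimal sketch of that scheme is given below, assuming one plausible reading (the unpaired test applied to pre-to-post changes); the score arrays are hypothetical placeholders, not data from the trial:

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post symptom scores (0-4 scale) for a few patients per group;
# these numbers are placeholders for illustration only.
group_a_pre = np.array([3, 4, 3, 2, 4], dtype=float)    # GMR (trial) group
group_a_post = np.array([1, 2, 1, 1, 2], dtype=float)
group_b_pre = np.array([3, 3, 4, 2, 3], dtype=float)    # SG (control) group
group_b_post = np.array([2, 3, 3, 2, 2], dtype=float)

# Effect of therapy within each group: paired t-test on pre vs post scores.
t_a, p_a = stats.ttest_rel(group_a_pre, group_a_post)
t_b, p_b = stats.ttest_rel(group_b_pre, group_b_post)

# Comparative effect between groups: unpaired t-test on the pre-to-post changes.
t_ab, p_ab = stats.ttest_ind(group_a_pre - group_a_post, group_b_pre - group_b_post)

print(f"Group A within-group: t={t_a:.2f}, p={p_a:.4f}")
print(f"Group B within-group: t={t_b:.2f}, p={p_b:.4f}")
print(f"Between-group (changes): t={t_ab:.2f}, p={p_ab:.4f}")
```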
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Effects of High-Intensity Interval Training versus Continuous Training on Physical Fitness, Cardiovascular Function and Quality of Life in Heart Failure Patients\n\n ABSTRACT.INTRODUCTION:\nPhysical fitness is an important prognostic factor in heart failure (HF). To improve fitness, different types of exercise have been explored, with recent focus on high-intensity interval training (HIT). We comprehensively compared effects of HIT versus continuous training (CT) in HF patients NYHA II-III on physical fitness, cardiovascular function and structure, and quality of life, and hypothesize that HIT leads to superior improvements compared to CT.\n\nABSTRACT.METHODS:\nTwenty HF patients (male:female 19:1, 64±8 yrs, ejection fraction 38±6%) were allocated to 12-weeks of HIT (10*1-minute at 90% maximal workload—alternated by 2.5 minutes at 30% maximal workload) or CT (30 minutes at 60–75% of maximal workload). Before and after intervention, we examined physical fitness (incremental cycling test), cardiac function and structure (echocardiography), vascular function and structure (ultrasound) and quality of life (SF-36, Minnesota living with HF questionnaire (MLHFQ)).\n\nABSTRACT.RESULTS:\nTraining improved maximal workload, peak oxygen uptake (VO2peak) related to the predicted VO2peak, oxygen uptake at the anaerobic threshold, and maximal oxygen pulse (all P<0.05), whilst no differences were present between HIT and CT (N.S.). We found no major changes in resting cardiovascular function and structure. SF-36 physical function score improved after training (P<0.05), whilst SF-36 total score and MLHFQ did not change after training (N.S.).\n\nABSTRACT.CONCLUSION:\nTraining induced significant improvements in parameters of physical fitness, although no evidence for superiority of HIT over CT was demonstrated. No major effect of training was found on cardiovascular structure and function or quality of life in HF patients NYHA II-III.\n\nABSTRACT.TRIAL REGISTRATION:\nNederlands Trial Register NTR3671\n\n\nBODY.INTRODUCTION:\nHeart failure (HF) is a common disease with an increasing prevalence worldwide [1], and is characterized by a low 5-year survival of 35–55% [2–4]. Several studies have indicated that physical fitness is an important prognostic factor for HF patients, in which a low physical fitness is associated with higher mortality rates [5,6]. Previous studies demonstrate that exercise training can improve physical fitness [7], cardiac function [8], vascular function [9,10], and quality of life [7] in HF patients [11]. Therefore, exercise training is recommended for HF patients and encompasses a central component of cardiac rehabilitation [12]. Most previous studies examining the impact of exercise training in HF patients have adopted exercise at moderate-intensity. 
Recent studies explored the impact of high-intensity interval training (HIT), which can be described as short periods of exercise performed at high-intensity (>80–85% peak oxygen uptake [13–15]), alternated by periods of active or passive rest. Wisløff et al. suggested a superior effect of HIT compared to continuous training (CT) on physical fitness, cardiovascular function and quality of life in HF patients [16]. This study applied an intensive HIT training regimen in HF patients and found large improvements in fitness levels. After this promising study, Fu et al. found similar results [17], but not all subsequent studies [18–22], reinforced the superior effect of HIT compared to CT in HF patients. Previous studies that revealed a superior effect of HIT adopted relatively long bouts of high-intensity exercise (3–4 min) [16,17], followed by active 'rest' periods of 3 minutes at an intensity up to 70% of maximal heart rate. Performance of such long bouts of exercise at high-intensity and/or 'rest' periods at such vigorous intensity levels may not be feasible for all HF patients. Furthermore, most previous training studies have applied a training frequency of three times per week, which is time consuming. Whilst training frequencies of twice per week can be sufficient to induce a positive effect on fitness levels [23], no previous study examined the efficacy of HIT with lower training frequency or whether there is a difference between such HIT and CT training protocols. Therefore, we explored the benefits of a feasible HIT-protocol with high-intensity bouts of moderate duration (i.e. 1 min), the active rest at relatively low intensity (i.e. 30% maximal workload), and a training frequency of twice a week. In this study, we examined whether 12-weeks of CT or HIT is effective and feasible for HF patients, and whether this HIT-protocol leads to superior effects on fitness, cardiovascular function and quality of life compared to CT. We hypothesize that the HIT-protocol is feasible, whilst the effects on physical fitness, cardiovascular function and quality of life in HF patients New York Heart Association (NYHA) class II-III are superior compared to CT.\n\nBODY.METHODS.SUBJECTS:\nWe included 29 patients (65±8 yrs) diagnosed with HF classified as NYHA class II-III, with a history of left ventricular ejection fraction (LVEF) ≤45% (assessed by 2D/4D echocardiography). Patients were recruited from the department of Cardiology of the Radboud university medical center and the Canisius-Wilhelmina Hospital (Nijmegen, the Netherlands) and through advertisements. Patients with HF due to congenital heart disease or HF caused by valve pathology were excluded. Other exclusion criteria were: diabetes mellitus (type 1 or 2), hypercholesterolemia (total cholesterol >6.5mmol/L), severe renal failure (glomerular filtration rate<30 mL/min/1.73m2), exercise-induced ischemia (i.e. ECG abnormalities suggestive for ischemia on maximal exercise testing), severe co-morbidities (e.g. COPD GOLD ≥3), pathology that restricts patients from participation to exercise (e.g. orthopedic/neurological disorders interfering with movement), pre-menopausal women or women on hormone replacement therapy, and subjects with contra-indications for maximal exercise testing [24]. Subjects had to be in a clinically and pharmacologically stable situation (>3 months) prior to participation. The Medical Ethical Committee of the Radboud university medical center approved this study (CMO region Arnhem-Nijmegen) on October 26th 2010. 
This trial is registered in the Dutch Trial Register (NTR3671) 3 months after the start due to practical reasons. The authors confirm that all ongoing and related trials for this intervention are registered. Written informed consent was obtained from each subject before participation in this study. Subject recruitment was done between July 2011 and September 2014. Follow-up lasted until February 2015. This study was monitored by a data safety monitoring board, which also approved the submission of this study.\n\nBODY.METHODS.EXPERIMENTAL PROTOCOL:\nSubjects reported to our laboratory for a medical screening, after which they were allocated to 12-weeks moderate-intensity CT or HIT. In addition, to assess changes over time in HF patients, we included a control group consisting of HF patients who were unable to participate in the training program due to geographical reasons or time-constraints, which was tested before and after a 12-week control period. Before and after the 12-week intervention/control period, subjects underwent a maximal incremental cycling test to determine physical fitness, echocardiography to examine cardiac function and structure, and vascular ultrasound measurements to examine peripheral artery vascular function and structure. Finally, questionnaires were used before and after the intervention to assess health-related quality of life and HF symptoms. All measurements were performed in the Radboud university medical center (Nijmegen, the Netherlands). Due to the nature of the study design and practical reasons, blinding participants and researchers was not possible.\n\nBODY.METHODS.MEASUREMENTS.SUBJECT CHARACTERISTICS:\nWe determined height, weight (Seca 888 Scale, Seca, Hamburg, Germany), body mass index, body fat percentage [25], and waist and hip circumference. Furthermore, we obtained heart rate and blood pressure (manually, WelchAllyn, Maxi-Stabil 3, NY, USA), an electrocardiogram to determine heart rhythm, and a venous blood sample to determine fasted glucose and (total) cholesterol concentrations.\n\nBODY.METHODS.MEASUREMENTS.PRIMARY OUTCOME—PHYSICAL FITNESS:\nAn incremental maximal cycling test was performed on a cycle ergometer (Ergoline, Ergoselect 200k, Bitz, Germany). Subjects were instructed to pedal (>60rpm) whilst workload was increased 10–15 Watt/min, depending on the expected physical fitness of the participant (based on sex, age, height, and previous results on exercise testing). During exercise, breath-by-breath gas analysis was recorded continuously (LabManager V5.32.0). For the termination of maximal exercise testing we adhered to recent guidelines [24]. Peak oxygen uptake (VO2peak) was defined as the highest oxygen uptake (30-second average).\n\nBODY.METHODS.MEASUREMENTS.SECONDARY OUTCOME—PHYSICAL FITNESS:\nOxygen uptake at the anaerobic threshold (AT) was determined using the V-slope method [26]. Peak oxygen uptake and oxygen uptake at the AT was also expressed as a percentage of the predicted maximal oxygen consumption [27]. Ventilatory efficiency was defined as the slope of the ventilation to the carbon dioxide production (VE/VCO2 slope) calculated over the linear phase of the response up to the AT. The maximal oxygen pulse (oxygen consumption per heart rate, O2/HR) was determined (10-second average). The presence of chronotropic incompetence, defined as a heart rate reserve <80% of predicted [28], was noted. 
Prior to testing, all medication was continued.\n\nBODY.METHODS.MEASUREMENTS.SECONDARY OUTCOME—VASCULAR FUNCTION AND STRUCTURE:\nSubjects were prepared according to guidelines for the assessment of flow-mediated dilation (FMD) [29]. Subjects were instructed to continue medication, but to refrain from diuretics the day of testing for practical reasons. The measurements were performed in a temperature-controlled room (22.5±0.7°C). Prior to testing, subjects rested in the supine position for 10 minutes. Vascular function measurements were performed using a 10-MHz multifrequency linear array probe attached to a high-resolution ultrasound machine (Terason T3000, Burlington, MA, USA). We examined brachial and superficial femoral artery endothelial function using the FMD according to the guidelines [29]. Subsequently, we measured the brachial artery maximal diameter and blood flow responses to ischemic handgrip exercise, as described in previous studies [30,31]. The peak blood flow provides a valid and accepted index of resistance artery size and remodelling and the brachial artery diameter response for maximal dilating capacity [30]. We examined carotid artery intima-media thickness (IMT), which represents a surrogate measure for atherosclerosis [32]. Finally, we examined the endothelium-independent dilation of the brachial artery by examining the diameter response to an exogenous nitric oxide donor (sublingual administration of 400μg glyceryl trinitrate (GTN)).\n\nBODY.METHODS.MEASUREMENTS.SECONDARY OUTCOME—CARDIAC FUNCTION:\nTransthoracic (4D) echocardiography was performed with an ultrasound scanner (Vivid E9, General Electric Healthcare, Horten, Norway) with M5-S and V4 probe according to the guidelines of the American Society of Echocardiography [33]. Echocardiographic images were analyzed post-hoc with EchoPAC software (version 112, General Electric Healthcare, Horten, Norway). From 4D-images we assessed: left ventricular end-diastolic volume, left ventricular end-systolic volume, stroke volume, LVEF, cardiac output, and cardiac index. Left ventricular longitudinal, circumferential, radial, and area strain were analyzed by 4D speckle tracking. Moreover, we measured the isovolumetric contraction and relaxation time from tissue Doppler tracings of the lateral and septal mitral annulus. To describe diastolic function we obtained the following parameters by (tissue) pulsed-wave Doppler tracings: peak mitral flow velocity during early (E) and late (A) diastole, the systolic (S) and diastolic (D) inflow velocity over the pulmonary valve, and the peak mitral annulus velocity during early filling (E') of the lateral and septal mitral annulus. The E/A-ratio, S/D-ratio, and E/E'-ratio were calculated.\n\nBODY.METHODS.MEASUREMENTS.SECONDARY OUTCOME–QUESTIONNAIRES:\nTo measure health-related quality of life, the SF-36 Health Survey was used [34,35]. Additionally, we used the Minnesota living with Heart Failure Questionnaire (MLHFQ), to measure patient perceptions of the effects of HF on their physical, psychological and socioeconomic lives [36].\n\nBODY.METHODS.12-WEEK INTERVENTION:\nTraining was performed twice a week for 12 weeks in a rehabilitation setting or hospital and was supervised by physiotherapists. When a participant missed a training session, this session was rescheduled to ensure a total of 24 training sessions (i.e. 100% compliance). Training was performed on a cycle ergometer (Lode Corival, Procare, Groningen, The Netherlands). 
Both the CT- and HIT-session started with a warm-up of 10-minutes at 40% of maximal workload (Watt) as obtained from the cardiopulmonary maximal exercise test at baseline, and concluded with a cooling-down of 5-minutes at 30% of maximal workload. Workload was increased during the 12-week training period based on the Borg scores of perceived exertion, to maintain a sufficient training stimulus when physical fitness was expected to improve.\n\nBODY.METHODS.12-WEEK INTERVENTION.CT-GROUP:\nCT consisted of 30-minutes at 60–75% of maximal workload. Training intensity was controlled using the Borg score (scale 6–20) [37], aiming at a Borg score of 12–14 during the training session, as recommended in the current exercise guidelines for HF patients [12]. Borg score and heart rate were determined after the warm-up, at 20, 30 and 40 minutes of exercise, and after the cooling-down.\n\nBODY.METHODS.12-WEEK INTERVENTION.HIT-GROUP:\nHIT consisted of 10 periods of 3.5-minutes of exercise, consisting of intervals of 1-minute at 90% of maximal workload, and 2.5-minutes at 30% of maximal workload, aiming at a Borg score of 15–17 during the high-intensity intervals. Borg score and heart rate were determined at the end of the warm-up, after repetition 1, 3, 7 and 10, and after the cooling-down.\n\nBODY.METHODS.12-WEEK INTERVENTION.CONTROL GROUP:\nControl subjects were instructed not to alter their daily physical activities.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nWe have made a pre-study sample size calculation based on previous studies examining the difference in effect between CT and HIT. Some studies suggest n = 2–3 per group is sufficient [16,17], whilst data from others suggest several thousand subjects must be recruited to detect differences between CT and HIT [19]. We rationalized that n = 10–20 will provide (clinically) meaningful insight into the effect of CT versus HIT. Therefore, using a conservative approach (accounting for drop-outs), we aimed for n = 20 for both exercise training groups (and n = 10 in the control group). Data was analyzed using IBM SPSS Statistics 20.0 (IBM Corp., Armonk, NY, USA). Parameters were checked for normality using a Kolmogorov-Smirnov test. When data was not normally distributed, a non-parametric alternative was used or natural logarithmic data transformation was applied. Categorical and nominal parameters were compared with a Chi-Square test. Baseline characteristics of the groups were compared with a 1-way ANOVA or Kruskal-Wallis test when data was not normally distributed. A 2-way repeated measures ANOVA was used to examine the impact of exercise training (time-effect), and whether the change differs between HIT and CT (time*group-effect). When a significant main effect (time) or interaction-effect (time*group) was observed, post-hoc tests with least-significant difference were used to identify differences between and within groups. When data for this 2-way comparison was not normally distributed, we used individual tests to examine the effect of time, group and time*group. Changes in the control group were tested with a paired Student's t-test, or Wilcoxon test when data was not normally distributed. To control for the potential impact of baseline diameter on FMD [38], we used logarithmically transformed diameter data and adopted a univariate General Linear Model with baseline arterial diameter as a covariate, to compare differences between groups. Potential drop-outs were left out of the analysis and were not replaced. 
A Pearson correlation coefficient was determined for the relation between baseline quality of life and exercise-induced changes in quality of life. Data are presented as mean±standard deviation (SD), unless stated otherwise. Significance level was set at P<0.05.\n\nBODY.RESULTS:\nFor this study, 59 HF patients were screened. Fifteen patients did not meet inclusion criteria (screen failures) and 11 patients declined to participate after screening due to lack of time (n = 10) or change in health (n = 1) (Fig 1). We allocated 24 patients to HIT or CT. Nine HF patients were included in the control group (non-randomized). In both the HIT- and CT-group, 2 drop-outs were reported after allocation (71±2yrs; male:female 3:1; NYHA class II:III 3:1), which makes the total drop-out 17%. A patient in both training groups dropped out due to musculoskeletal complaints and a patient in both training groups dropped out due to progression of HF. Twenty patients in the training-groups and nine controls completed the study and were available for final analysis. The groups were not significantly different in age, body mass index, NYHA class, etiology of HF, blood pressure, heart rate, LVEF and physical fitness (Table 1). The control group consisted of significantly more females than the CT-group (P = 0.028, Table 1). Cardiovascular medication use is documented in Table 1. 10.1371/journal.pone.0141256.g001Fig 1Flow-chart of the inclusion of subjects. 10.1371/journal.pone.0141256.t001 Table 1 Subject characteristics and cardiovascular medication. CT (n = 10) HIT (n = 10) Control (n = 9) Age (yrs) 64±8 63±8 67±7 0.57 Sex (male:female) 10:0 * \n 9:1 5:4 0.028 Body weight (kg) 89.7±11.9 87.6±23.6 77.0±10.5 0.16 Height (cm) 177±5 177±3 174±9 0.66 Body mass index (kg/m 2 ) 28.9±4.7 28.1±7.5 25.4±2.7 0.24 NYHA class (II:III) 8:2 8:2 8:1 0.84 Etiology (Isch:Non-isch) 8:2 7:3 5:4 0.51 Systolic blood pressure (mmHg) 132±23 132±18 130±25 0.98 Diastolic blood pressure (mmHg) 83±11 79±10 78±14 0.48 Resting heart rate (/min) 59±11 57±7 60±10 0.80 Maximal heart rate (/min) \n † \n \n 129±19 126±16 120±15 0.53 Chronotropic incompetence (yes:no) \n † \n \n 5:4 8:2 6:1 0.33 VO 2peak \n \n † \n (mL/min/kg) 21.0±3.4 19.1±4.1 17.4±5.8 0.26 VO 2peak \n \n † \n (% of predicted VO 2peak ) 86±8 79±17 81±22 0.63 LVEF (%) 38±6 37±6 40±11 0.84 Medication     Angiotensin converting enzyme-inhibitors 5 (50%) 6 (60%) 8 (89%) 0.19     Angiotensin II receptor antagonists 4 (40%) 4 (40%) 1 (11%) 0.30     Aldosterone antagonist 6 (60%) 7 (70%) 8 (89%) 0.36     Diuretics (loopdiuretics) 7 (70%) 6 (60%) 4 (44%) 0.50     β-blockers 10 (100%) 9 (90%) 9 (100%) 0.37     Antiplatelet drugs 6 (60%) 4 (40%) 3 (33%) 0.47     Coumarin derivates 4 (40%) 7 (70%) 4 (44%) 0.35     Statins 10 (100%) 9 (90%) 4 (44%) \n § \n \n 0.007 Data is presented as mean±SD. P-values refer to a 1-way ANOVA. \n † Data was unavailable for 1 subject in the CT-group and 3 subjects in the control-group. * Significantly less females compared to the control-group. \n § Lower compared to CT-group and HIT-group. \n\nBODY.RESULTS.EXERCISE TRAINING:\nWhen averaging all training sessions, CT was performed at 66±5% of maximal workload, whilst the high-intensity intervals during HIT were performed at 102±7% of maximal workload (P<0.001). CT was performed at 81±7% of maximal heart rate, and the high-intensity intervals during HIT were performed at 83±9% of maximal heart rate (P = 0.70). 
Borg-scores during CT and HIT were 13±1 and 14±1 respectively (P = 0.27).\n\nBODY.RESULTS.IMPACT OF EXERCISE TRAINING.PHYSICAL FITNESS:\nVO2peak tended to increase after training (P = 0.06), whilst VO2peak/kg did not change after exercise (P = 0.10). A significant increase after training was found in the VO2peak related to predicted VO2peak (%), maximal workload, oxygen uptake at the AT and maximal oxygen pulse (Table 2). No significant differences were observed between both interventions (Table 2). 10.1371/journal.pone.0141256.t002 Table 2 Maximal incremental cycling test. CT (n = 10) HIT (n = 10) P-value \n Pre \n \n Post \n \n Pre \n \n Post \n \n Time \n \n Group \n \n Time*Group \n VO 2peak (mL/min) 1881±214 1887±27 1662±562 1792±559 0.06 0.44 0.08 VO 2peak (mL/min/kg) 21.2±3.6 21.3±3.7 19.1±4.1 20.4±4.3 0.10 0.14 0.09 \n # \n \n VO 2peak (% pred. VO 2peak ) 86±8 87±10 79±17 85±16 0.044 0.48 0.08 Max. workload (Watt) 145±22 152±26 126±38 142±45 <0.001 0.24 0.07 Max. heart rate (/min) 129±19 132±24 126±16 125±15 0.78 0.30 0.51 VE/VCO 2 slope 32.2±3.3 32.7±5.8 28.7±5.8 29.4±7.7 0.52 0.18 0.91 VO 2 at AT (mL) 1030±287 1248±388 1033±319 1090±225 0.041 0.54 0.22 Max. O 2 /HR (mL) 16.2±2.2 16.7±2.8 14.0±4.0 15.4±3.8 0.006 0.25 0.13 Data is presented as mean±SD. P-values refer to a 2-way repeated measures ANOVA between the two training groups. One subject in the CT-group did not reach VO 2peak , and therefore only VE/VCO 2 slope and VO 2 at AT could be determined. \n # For statistical reasons, data was analyzed with three separate tests to determine time , group and time*group P-values. \n\nBODY.RESULTS.IMPACT OF EXERCISE TRAINING.VASCULAR FUNCTION/STRUCTURE:\nWe found no significant changes in brachial and superficial femoral artery diameter, peak blood flow, and FMD (Table 3). No change in endothelium-independent dilation of the brachial artery was observed after training for both groups (Table 3). Furthermore, we found no significant impact of HIT or CT on carotid artery IMT or IMT-to-lumen ratio (Table 3). 10.1371/journal.pone.0141256.t003 Table 3 Brachial (BA) and superficial femoral artery (SFA) endothelium-dependent vasodilation through flow-mediated dilation (FMD), peak diameter and endothelium-independent dilation (GTN), and common carotid artery (CCA) intima-media thickness (IMT). 
CT (n = 10) HIT (n = 10) P-value \n Pre \n \n Post \n \n Pre \n \n Post \n \n Time \n \n Group \n \n Time*Group \n BA diameter (mm) 4.5±0.5 4.5±0.5 4.4±0.9 4.4±0.8 0.83 0.68 0.86 BA FMD (%) 5.2±2.5 4.8±3.0 5.3±2.6 4.7±2.5 0.33 0.92 0.98 BA FMD (%, scaled) 5.3±2.5 4.8±2.5 5.2±2.5 4.6±2.5 0.47 >0.999 0.91 \n # \n \n BA SR AUC (s, 10 3 ) 19.9±9.6 18.6±7.4 17.8±9.2 22.3±7.7 0.42 0.80 0.14 BA GTN (%) 17.6±7.0 16.3±6.5 16.8±8.7 15.2±4.9 0.33 0.74 0.92 BA GTN (%, scaled) 17.9±4.5 16.6±4.5 16.1±4.5 14.6±4.5 0.42 0.27 0.94 BA FMD-GTN ratio 0.34±0.21 0.33±0.24 0.42±0.30 0.34±0.15 0.46 0.63 0.82 BA CADC (%) 10.6±6.4 11.3±4.8 16.1±7.4 13.4±5.5 0.52 0.12 0.26 BA CADC (%, scaled) 11.0±4.2 12.0±4.2 15.0±4.2 12.8±4.2 0.72 0.11 0.27 BA peak blood flow (mL/min) 794±139 862±261 711±264 556±165 0.38 0.56 0.46 SFA diameter (mm) 7.1±1.2 6.9±1.2 6.7±1.1 6.5±1.0 0.38 0.48 0.88 SFA FMD (%) 3.3±1.9 5.1±4.8 4.2±1.8 3.4±2.8 0.58 0.74 0.19 SFA FMD (%, scaled) 3.5±2.9 5.1±2.9 4.1±2.9 3.1±2.9 0.74 0.50 0.19 SFA SR AUC (s, 10 3 ) 9.2±5.6 10.4±10.8 13.5±7.2 6.9±3.9 0.11 0.80 0.07 CA IMT (mm) 0.80±0.13 0.76±0.20 0.72±0.12 0.67±0.16 0.20 0.24 0.96 CA IMT-to-lumen ratio 0.12±0.02 0.11±0.02 0.11±0.02 0.11±0.02 0.40 0.59 0.45 Data is presented as mean±SD. P-values refer to 2-way repeated measures ANOVA between the 2 training groups. \n # For statistical reasons, data was analyzed with three separate tests to determine time , group and time*group P-values. Due to technical problems, BA GTN/FMD-GTN ratio/peak blood flow was available for 9 subjects in the HIT-group and SFA FMD was available for 8 subjects in the HIT-group. CCA IMT and IMT-to-lumen ratio were available for 8 subjects in each group. SR AUC ; shear rate area-under-the-curve. CADC; conduit artery dilating capacity. \n\nBODY.RESULTS.IMPACT OF EXERCISE TRAINING.CARDIAC FUNCTION/STRUCTURE:\nMost of the parameters of cardiac systolic function, left ventricle strain, or diastolic function demonstrated no change after HIT or CT (Table 4). Negligible but significant changes were found in area strain and isovolumetric contraction time (Table 4). 10.1371/journal.pone.0141256.t004 Table 4 Echocardiographic left ventricular volumes, systolic function, strain and diastolic function. CT (n = 10) HIT (n = 10) P-value Systolic function \n Pre \n \n Post \n \n Pre \n \n Post \n \n Time \n \n Group \n \n Time*Group \n LVEDV (ml) 154±24 159±28 194±39 204±44 0.26 0.002 0.64 \n # \n \n LVESV (ml) 98±14 102±19 134±32 132±40 0.87 0.037 0.63 Stroke volume (ml) 56±13 57±13 61±14 72±16 0.06 0.16 0.12 LVEF (%) 36±4 36±5 32±7 36±9 0.09 0.57 0.07 Cardiac output (L/min) 3.5±0.6 3.4±0.7 3.5±0.8 4.3±1.0 0.20 0.21 0.07 Cardiac index (L/min/m 2 ) 1.7±0.3 1.6±0.3 1.7±0.4 2.1±0.5 0.22 0.14 0.08 Longitudinal strain (%) -9±3 -9±3 -9±3 -8±3 0.60 0.47 0.38 \n # \n \n Circumferential strain (%) -10±2 -10±3 -9±3 -8±3 0.22 0.43 0.19 Radial strain (%) 23±7 22±6 23±8 20±8 0.13 0.71 0.48 Area strain (%) -17±4 -15±6 -17±5 -14±5 0.044 0.73 0.97 IVCT-L (ms) 52±7 50±10 49±12 58±7 0.18 0.56 0.029 IVCT-S (ms) 57±14 59±11 53±9 56±11 0.35 0.46 0.87 \n Diastolic function \n IVRT-L (ms) 145±32 149±27 142±27 159±27 0.13 0.75 0.38 IVRT-S (ms) 160±36 148±22 164±41 170±37 0.60 0.45 0.22 \n # \n \n E/A 1.15±0.71 1.17±0.89 1.53±1.42 1.60±1.53 0.49 0.36 0.59 \n # \n \n S/D 1.38±0.74 1.17±0.34 1.00±0.40 1.26±0.59 0.85 0.48 0.14 E/E’-L 6.8±1.9 6.7±1.9 10.3±4.4 9.8±6.3 0.71 0.07 0.74 E/E’-S 10.1±4.1 11.1±5.2 12.6±9.8 11.8±11.2 0.93 0.67 0.42 Data is presented as mean±SD. 
P-values refers to 2-way repeated measures ANOVA between the 2 training groups. \n # For statistical reasons, data was analyzed with three separate tests were performed to determine time , group and time*group P-values. 4D data was available for 7 patients in the CT-group and 8 patients in the HIT-group. IVCT-l, IVRT-C, IVRT-S and E/E’-L was available for 9 patients in the HIT-group. IVCT-S and E/E’-S was available for 8 patients in the HIT-group. IVCT-L and S/D ratio was available for 9 subjects in the CT-group. IVRT-L and E/A ratio was available for 8 subjects in the CT-group. LVEDV; left ventricular end-diastolic volume. LVESV; left-ventricular end-systolic volume. IVCT-L/S: isovolumetric contraction time, lateral/septal. IVRT-L/S; isovolumetric relaxation time, lateral/septal. E/A ratio; peak mitral flow velocity during early filling/peak mitral flow velocity during atrial contraction. S/D; systolic flow velocity pulmonary vein/diastolic flow velocity pulmonary vein. E/E’-L/S; peak mitral flow velocity during early filling/peak mitral annulus velocity during early filling, lateral/septal. \n\nBODY.RESULTS.IMPACT OF EXERCISE TRAINING.QUALITY OF LIFE:\nThere was no significant change in the SF-36 total score (Table 5). There was a significant increase in the SF-36 subscale 'physical function' after training (P = 0.004, Table 5), which did not differ between groups (time*group P = 0.11). A trend for an inverse correlation was found between baseline SF-36 scores and training-induced change in SF-36 scores (r = -0.51, P = 0.052). We found no change in the MLHFQ for both groups (Table 5). No significant correlations were found between baseline MLHFQ scores and training-induced change in MLHFQ. 10.1371/journal.pone.0141256.t005 Table 5 Results of the SF-36 and Minnesota living with HF questionnaire (MLHFQ). CT (n = 9) HIT (n = 8) P-value \n SF-36 \n \n Pre \n \n Post \n \n Pre \n \n Post \n \n Time \n \n Group \n \n Time*Group \n Physical functioning 74±22 78±17 57±21 69±17 0.004 0.16 0.11 Physical health subscore \n † \n \n 73±20 76±16 60±22 67±19 0.15 0.26 0.52 Mental health subscore \n † \n \n , \n \n ‡ \n \n 81±9 83±10 83±5 82±10 0.75 0.87 0.54 Total score \n † \n \n , \n \n ‡ \n \n 75±16 78±13 68±14 73±14 0.18 0.42 0.76 \n CT (n = 9) \n \n HIT (n = 10) \n \n P-value \n \n MLHFQ \n \n Pre \n \n Post \n \n Pre \n \n Post \n \n Time \n \n Group \n \n Time*Group \n Total score 18±14 16±16 21±15 20±14 0.81 0.56 0.89 Data is presented as mean±SD. P-values refers to 2-way repeated measures ANOVA between the 2 training groups. Results of the SF-36 were scored on a 0–100 scale, in which a high score represents a better quality of life. Results of the MLHFQ were scored on a 0–105 scale, in which a low score indicates few HF-related complaints. \n † Data was available for 7 subjects in the HIT-group. \n ‡ Data was available for 8 subjects in the CT-group. \n\nBODY.RESULTS.IMPACT OF EXERCISE TRAINING.CONTROL GROUP:\nWe found no changes over the 12-week period in maximal oxygen uptake (17.4±5.9 versus 17.5±5.8 ml/min/kg, P = 0.79) or in any of the other parameters of physical fitness (all P>0.05, S1 Table). 
Except for a decrease across time in the superficial femoral artery FMD and an increase in lateral E-E'-ratio, we found no changes in cardiac and vascular structure or function or in the SF-36 score and MLHFQ in controls (all P>0.05, S1 Table).\n\nBODY.DISCUSSION:\nThis study comprehensively compared physical fitness, vascular function, cardiac function and quality of life between a feasible and practical HIT-protocol versus traditional CT in HF patients. We have demonstrated that CT and HIT are both feasible in HF patients and induced a significant improvement in measures of (sub)maximal exercise performance and fitness, with no significant differences between both types of exercise training. Despite these changes in fitness, we did not find improvement in measures of resting cardiac and vascular structure and function. Furthermore, except an improvement in the subscale 'physical function' that may be related to the change in fitness, no effect of training was found on quality of life. Therefore, our data suggest that both types of exercise training successfully improve measures of physical fitness within 12-weeks, but not cardiovascular function at rest or quality of life. Moreover, we found no evidence for superiority of this more feasible HIT-protocol over traditional CT on the parameters presented in our study.\n\nBODY.DISCUSSION.PHYSICAL FITNESS:\nPrevious work found improvements in physical fitness in HF after HIT ranging from 8–46% [16,20,21], whilst studies adopting CT found changes ranging between 0–22% [17,19]. Most of these training studies have included low numbers of participants, which may explain the large variation in results. The largest HF exercise training study so far (>1,000 participants), the HF-ACTION trial, reported a median increase in fitness of 4% [39]. Although in the lower ends of the spectrum, the change in physical fitness in our study (~4%) is within the range of improvements as reported in previous work. As demonstrated in various previous studies, training characteristics (e.g. frequency, intensity and duration) are important factors determining training responses [40]. Nonetheless, our relatively low-frequency protocols were sufficient to induce significant improvements in fitness levels. Similar observations were made by Belardinelli et al [23], who adopted a long-term (10-year), low-frequency exercise training program in HF patients. They reported improvement in physical fitness levels in trained subjects after 1 year of training, whilst fitness levels remained higher than in controls across the 10-year intervention. In addition to the low frequency of training, the relatively modest changes in fitness in our study may relate to characteristics of the included participants, such as genetic factors [41] or to a priori higher levels of physical fitness as lower physical fitness levels are associated with larger training-induced improvements in HF patients [42,43]. Indeed, some previous HIT-studies demonstrated large improvement after exercise training in HF patients with low baseline levels of physical fitness [16–18]. Moreover, patients with chronotropic incompetence (i.e. 68% in our study) have attenuated exercise-induced improvement in parameters of fitness, which could have affected the effect size in our study [44]. Despite the relatively modest effect sizes, our exercise training protocols were successful in improving parameters of physical fitness. 
We did not demonstrate significant differences between CT and HIT on the change in physical fitness, a finding which is in agreement with some [18,19,22], but not all previous studies [16,17]. Interestingly, recent meta-analyses suggest that (high-intensity) interval training results in superior effects on physical fitness compared to CT in HF [45,46]. Despite these promising results of HIT in HF, larger trials that focus on clinical end-points are needed [47]. Also, limitations of a meta-analysis should be taken into consideration, as both HIT and CT interventions included in these meta-analyses comprise of many different protocols and exercise intensities. Especially for HIT, it is suggested that the time spent at a high percentage of peak oxygen uptake, determined by intensity and duration of the work and rest intervals, importantly contributes to the effect size [48]. Whilst the intensity of our high-intensity-bouts is high, the time spent at these high-intensity bouts was lower than in previous studies demonstrating superior effects of HIT compared to CT [16,17]. Furthermore, to validly compare exercise training regimes, it is important that total workload is not significantly different. A study that compared training-effects of HIT and CT and applied individually designed training programs with specific emphasis on comparable workloads for HIT and CT, found no differences between training modalities [19]. Therefore, more rigorous exercise programs (both in frequency, duration, and training load) rather than the type of training per se may contribute to larger improvements in physical fitness.\n\nBODY.DISCUSSION.CARDIAC AND VASCULAR ADAPTATION:\nExercise in HF patients is associated with beneficial cardiac remodelling [8]. After 12-weeks of training, we found no improvements in cardiac structure and function at rest, although the increase in maximal oxygen pulse suggests an increase in stroke volume during exercise. Previous studies that reported significant changes in cardiac function or structure generally applied training periods ≥6 months [8]. Therefore, the relatively short duration of training may contribute, at least partly, to the lack of cardiac remodelling in our study. Moreover, the largest proportion of our training-participants reported ischemic HF etiology. This could be of special importance, since a recent study suggested that cardiac adaptation during outpatient rehabilitation is more prominent in HF patients with non-ischemic etiology [49]. Furthermore, our results are in line with a recent meta-analysis that could not confirm that HIT is superior to CT to induce cardiac adaptations [46]. In our study, we found no superior effect of HIT to improve vascular function, which contrasts with the findings of a recent meta-analysis [50]. However, 4 out the 7 studies included in this meta-analysis showed a superior effect of HIT. Interestingly, these studies were all from the same laboratory and did not follow contemporary guidelines to assess endothelial function. We found no overall effect of exercise training on vascular function. As previous literature has reported a wide range of exercise-induced responses (and also demonstrated non-responders) in vascular function, Green et al. investigated which factors predict this response [51]. They concluded that exercise-induced improvements in vascular function are associated with a lower pre-training vascular function. 
When comparing the baseline FMD values of our subjects to normal values published previously [52], the endothelial function of the subjects in our study was within the normal range for their age. This may relate to the optimal pharmacological therapy of the HF patients we included [53]. Moreover, we have previously demonstrated that the shear rate pattern during exercise (i.e. an important stimulus for exercise-induced adaptation in vascular function) is less beneficial in HF patients compared to controls [54]. Preserved FMD before training and a suboptimal shear rate stimulus during exercise training may contribute to the absence of a significant training-induced change in vascular function.\n\nBODY.DISCUSSION.QUALITY OF LIFE:\nQuality of life after exercise training in HF patients is previously demonstrated to improve [7,55]. In our study, we indeed found improvement in the subscale for physical functioning after exercise training, but not for total quality of life. The lack of improvement in quality of life may relate to the relatively 'good' quality of life at baseline, which was well above that of previous studies [17,56,57] and consequently, provides little space for further improvement. In support of this idea, we observed a trend for an inverse relation between baseline SF-36 and change in SF-36 in the training group. This indicates that subjects with lower quality of life prior to exercise training demonstrate a larger benefit from the intervention. Furthermore, the inclusion of more relatively old HF patients in our study may also contribute to the smaller effect size of exercise training on quality of life, as demonstrated in a previous study [58]. This latter study found that older HF patients (>60 yrs) demonstrate a smaller effect of exercise training on quality of life compared to younger HF patients (<60 yrs).\n\nBODY.DISCUSSION.CLINICAL RELEVANCE:\nGiven the importance of (even small improvements in) fitness levels for the prognosis of HF patients [5,6,59], finding both a feasible and effective training program is clinically relevant. Another important factor is adherence to exercise training, which often is reported to be low in HF patients due to time-constraints and lack of energy [60]. A HIT training program with lower training frequency and high-intensity intervals of moderate duration might address these two major factors of non-compliance. Therefore, we have studied whether such a program is effective and whether it is superior to CT. The results of this study suggest that low frequency HIT with high-intensity intervals of moderate duration is feasible and successful in improving fitness. Such findings may be of clinical relevance and future studies should therefore focus on finding the optimal exercise protocol for HF patients to achieve long-term benefits and adherence.\n\nBODY.DISCUSSION.LIMITATIONS:\nAlthough we have included a relatively small number of patients, our sample size was in line with previous studies that demonstrated differences in effects of HIT and CT on physical fitness [16,17] and vascular function [50]. Moreover, we have used state-of-the-art techniques for measuring physical fitness and vascular function, in contrast to some previous studies that used suboptimal techniques to assess endothelial function. Therefore, we a priori expected to have sufficient power to detect significant differences between HIT and CT. 
Post-hoc power analysis revealed a power of 54% to detect differences in change in physical fitness between the two types of training. Finally, we did not provide a comparison between HIT with short high-intensity bouts and HIT with long high-intensity bouts. We encourage future studies to investigate whether different HIT-protocols render different results and to elucidate the optimal HIT-protocol. In conclusion, we have demonstrated that 12-weeks of exercise training in HF patients is associated with improvements in parameters of physical fitness, whilst no improvements in cardiovascular function at rest or (total) quality of life are observed. Moreover, our data do not provide strong evidence for a potentially superior improvement in physical fitness, cardiovascular function or quality of life after 12-weeks of HIT compared to CT in HF patients NYHA class II-III.\n\nBODY.SUPPORTING INFORMATION:\nS1 TREND Checklist: TREND statement checklist (PDF). S1 Protocol: Research protocol (PDF). S1 Table: Results of the control group (DOCX). Data are presented as mean±SD. †Data missing for 1 control subject. VO2peak; peak oxygen uptake. AT; anaerobic threshold. BA; brachial artery. SFA; superficial femoral artery. GTN; glyceryl trinitrate. CADC; conduit artery dilating capacity. IMT; intima-media thickness. SRAUC; shear rate area-under-the-curve. LVEDV; left ventricular end-diastolic volume. LVESV; left-ventricular end-systolic volume. IVCT-L/S: isovolumetric contraction time, lateral/septal. IVRT-L/S; isovolumetric relaxation time, lateral/septal. E/A ratio; peak mitral flow velocity during early filling/peak mitral flow velocity during atrial contraction. S/D; systolic flow velocity pulmonary vein/diastolic flow velocity pulmonary vein. E/E'-L/S; peak mitral flow velocity during early filling/peak mitral annulus velocity during early filling, lateral/septal. MLHFQ; Minnesota living with heart failure questionnaire.\n\n**Question:** Compared to continuous training (CT) for 12 weeks in patients NYHA II-III, what was the result of high-intensity interval training (HIT) for 12 weeks in patients NYHA II-III on brachial and superficial femoral artery diameter, peak blood flow, and FMD?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
1139
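The heart-failure training record above reports a Pearson correlation between baseline SF-36 quality-of-life scores and the training-induced change in SF-36 (r = -0.51, P = 0.052), with significance set at P < 0.05. The snippet below is a minimal sketch of that kind of correlation check, assuming Python with scipy is available; the score arrays are invented placeholders, not the trial data.

```python
# Sketch of the baseline-vs-change correlation analysis described in the
# exercise-training record above; the SF-36 values are invented placeholders.
from scipy.stats import pearsonr

baseline_sf36 = [75, 68, 82, 60, 71, 79, 64, 88, 55, 73]   # hypothetical baseline total scores
change_sf36 = [3, 6, -1, 9, 4, 1, 7, -2, 11, 2]            # hypothetical post-minus-pre changes

r, p_value = pearsonr(baseline_sf36, change_sf36)          # Pearson correlation coefficient
print(f"r = {r:.2f}, P = {p_value:.3f}")                   # significance level set at P < 0.05;
                                                           # the article reports r = -0.51, P = 0.052 on its own data
```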
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Efficacy and Safety of Tiropramide in the Treatment of Patients With Irritable Bowel Syndrome: A Multicenter, Randomized, Double-blind, Non-inferiority Trial, Compared With Octylonium\n\n ABSTRACT.BACKGROUND/AIMS:\nAntispasmodics such as octylonium are widely used to manage irritable bowel syndrome (IBS) symptoms. However, the efficacy and safety of another antispasmodic, tiropramide, remain uncertain. We aimed to evaluate the efficacy and safety of tiropramide compared with octylonium in patients with IBS.\n\nABSTRACT.METHODS:\nIn this multicenter, randomized, non-inferiority trial, 287 patients with IBS (143 receiving tiropramide and 144 octylonium) were randomly allocated to either tiropramide 100 mg or octylonium 20 mg t.i.d (means 3 times a day) for 4 weeks. Primary endpoint was the mean change of abdominal pain from baseline assessed by visual analogue scales (VAS) score after 4 weeks of treatment. Secondary endpoints were the changes in abdominal pain from baseline at week 2 and in abdominal discomfort at weeks 2 and 4, using VAS scores, patient-reported symptom improvement including stool frequency and consistency, using symptom diaries, IBS-quality of life (IBS-QoL), and depression and anxiety, at week 4.\n\nABSTRACT.RESULTS:\nThe VAS scores of abdominal pain at week 4, were significantly decreased in both tiropramide and octylonium groups, but the change from baseline did not differ between the 2 groups (difference,-0.26 mm; 95% CI,-4.33-3.82; P = 0.901). Abdominal pain and discomfort assessed using VAS scores, diaries, and IBS-QoL were also improved by both treatments, and the changes from baseline did not differ. The incidence of adverse events was similar in the 2 groups, and no severe adverse events involving either drug were observed.\n\nABSTRACT.CONCLUSIONS:\nTiropramide is as effective as octylonium in managing abdominal pain in IBS, with a similar safety profile.\n\nBODY.INTRODUCTION:\nIrritable bowel syndrome (IBS) is a functional bowel disorder characterized by abdominal pain or discomfort relieved by defecation and accompanied by a change in bowel habit.1 According to epidemiologic studies in Korea, the prevalence of IBS was reported to be 6.6% overall, 7.1% in men and 6.0% in women.2 Patients with IBS are reported to seek healthcare frequently, have poor health-related quality of life (QoL) and be responsible for high societal costs.3-5 Like other functional gastrointestinal (GI) disorders, the development of IBS and its symptoms are associated with multifactorial pathophysiology.6 Moreover, symptoms even in an individual patient with IBS may have a different pathophysiology depending on various factors such as diet or emotional status. 
Therefore, therapy targeting a specific pathophysiology in IBS has been difficult, and symptom treatment focusing on major complaints of IBS or drug combination covering multiple pathophysiology has usually been applied to the management of IBS. Of multifactorial pathophysiology in IBS, colonic smooth muscle spasm has long been postulated to be responsible for abdominal pain in IBS from clinical observations as well as some experimental evidences.7 Therefore, pharmacologic agents directly acting on smooth muscle or on cholinergic receptors, called as antispasmodics, have been developed and used to control abdominal pain and discomfort in IBS. They may be classified as smooth muscle relaxants or agents with anti-cholinergic or anti-muscarinic property. One of the antispasmodics, tiropramide, is a derivative of tyrosine, and exerts a spasmolytic effect on the intestine through reducing Ca2+ release into intestinal smooth muscle, rather than acting on the enteric nervous system.8,9 In a double-blind, placebo-controlled, randomized trial, tiropamide led to symptom improvement in terms of total symptom scores for 4 weeks, compared with 3 weeks in the placebo group; in addition, at week 4 abdominal pain was only improved in the tiropramide group.10 Also in a Korean study of similar design involving 63 patients with IBS, tiropramide improved overall symptom scores and decreased abdominal pain at weeks 2 and 4 (78.2% of the treatment group with tiropramide and 40.5% of placebo group).10 Another antispasmodic, octylonium, also modifies Ca2+ fluxes in the intestinal smooth muscle, with direct myolytic properties similar to tiropramide.11,12 Previously, octylonium has already been proven to be more effective in reducing IBS symptoms than placebo.13-15 However, there have been few large-scale clinical trials comparing tiropamide with antispasmodics of proven efficacy in IBS patients. Therefore, this study aimed at evaluating the efficacy and safety profile of tiropamide in IBS patients compared with octylonium.\n\nBODY.MATERIALS AND METHODS.STUDY PATIENTS:\nEligible patients were 20 to 75 years of age; had been diagnosed with IBS (as assessed according to the Rome III diagnostic criteria for IBS); complained of abdominal pain or discomfort at least 2 times per week for the 2-week run-in period; had visual analogue scale (VAS) scores over 30 mm for abdominal pain during the randomization period. Patients who fit the study criteria had a run-in period of at least 2 weeks during which all IBS-related medications, and concomitant medications were stopped. Exclusion criteria included a history of inflammatory bowel disease, lactose intolerance, uncontrolled diabetes (haemoglobin A1C > 8%) or hypertension (systolic or diastolic blood pressure of ≥ 140/90 mmHg), psychiatric disorder requiring medication, malignancy of organs other than digestive (except for cases with no recurrence for over 5 years since treatment), laparoscopic or open abdominal surgery (except for appendectomy), renal or hepatic disease, glaucoma, and prostate hyperplasia. Those with a history of the following diseases within the previous 6 months were also excluded; peptic ulcer disease, GI bleeding, gastroesophageal reflux disease, intestinal stenosis or obstruction, infectious diarrhea, and pancreatic insufficiency. Patients were excluded if they were intolerable of or sensitive to, study drug, had a history of alcohol or drug addiction, or were expected to take any contraindicated medication during the study period. 
Pregnant or lactating women and women of childbearing age not using contraception, were also excluded.\n\nBODY.MATERIALS AND METHODS.STUDY DESIGN:\nThis was a multicenter (20 centers), randomized, double-blind, parallel-group, non-inferiority comparative study conducted in South Korea from May 2012 to April 2013. The study protocol was approved by the Institutional Review Board of each study center and conducted in accordance with the ethical principles based on the Declaration of Helsinki and Good Clinical Practice guidelines. For a run-in period of about 2 weeks, eligible subjects for screening were instructed to stop IBS-related medications and prohibited from other drugs. Following the screening phase of 2 to 4 weeks, eligible patients were randomly assigned to either tiropramide (Tiropa; Dae Woong Pharma Korea Inc., Seoul, South Korea) 100 mg or octylonium (Menoctyl; DONG HWA PHARM, Seoul, South Korea) 20 mg, taken 3 times daily before meals for 4 weeks. Treatment assignments were carried out by a computer-generated randomization schedule that was designed to allocate patients to the 2 treatment arms in a 1:1 ratio. The subjects were assigned to sequential allocation numbers at each site, and the medications were presented as 2 identically-appearing tablets containing 1 active drug and 1 placebo of each comparative drug to maintain the double blind condition. Follow-up visits during the study period were scheduled at weeks 2 and 4 to assess abdominal pain/discomfort using VAS, IBS quality of life, IBS symptoms including bowel habits using diaries, the Beck depression inventory (BDI) and the Beck anxiety inventory (BAI), drug compliance and adverse events (Fig. 1). Written, informed consent was obtained from all subjects before enrollment.\n\nBODY.MATERIALS AND METHODS.STUDY ASSESSMENTS.EFFICACY:\nThe primary endpoint was the change in abdominal pain from baseline assessed by VAS scores after 4 weeks of treatment. The secondary endpoints included the change in abdominal pain from baseline by VAS scores at week 2, the change in abdominal discomfort from baseline by VAS scores at weeks 2 and 4, improvement of patients' overall assessment at weeks 2 and 4, the change in IBS QoL from baseline at 4 weeks, the change in symptom improvement in abdominal pain and discomfort from baseline using symptom diaries (severity of symptoms and days of no symptoms), and stool frequency and form assessed by Bristol stool form scale and according to IBS subtypes. Symptom improvement was defined as \"completely improved,\" \"considerably improved\" or \"slightly improved.\" Percentages of symptom improvement include \"completely, considerably and slightly improved.\" Symptom diaries assessed daily symptoms of abdominal pain and discomfort using VAS of 0 to 100 with 100 representing extreme symptom. The number of defecation per day and the most common stool consistency during the day were recorded using the Bristol stool form scale. IBS-QoL was assessed by asking patients to answer each question using a 5-point Likert scale (0 = none, 1 = mild, 2 = moderate, 3 = severe, 4 = extremely severe). Exploratory end points were the changes in BDI and BAI from baseline. 
Each BDI or BAI was composed of 21 questions and assessed as \"0 = do not feel at all,\" \"1 = feel a bit,\" \"2 = feel significantly,\" and \"3 = feel badly.\" Confounding factors were baseline factors of efficacy including BDI and BAI.\n\nBODY.MATERIALS AND METHODS.STUDY ASSESSMENTS.SAFETY:\nSafety assessments included adverse events, and abnormalities in laboratory findings and vital signs.\n\nBODY.MATERIALS AND METHODS.STATISTICAL METHODS:\nWe estimated the required sample size for this study from the data of a previous study which found a change in VAS for abdominal pain from baseline as 23.6 mm with standard deviation (SD) of 25.0 mm.14 The margin of non-inferiority was assumed to be 11 mm, approximately half of 22.7 mm, which was the difference of the changes observed between the octylonium and the placebo groups. Based on these assumptions, a sample of 129 patients was needed for each study arm in order to have a power of 90%, and α of 0.025 (one-sided), while allowing for a 15% dropout rate. The analyses were performed using full analysis set (FAS) and per protocol set (PPS). All efficacy analyses were conducted in the FAS population, which consisted of all randomized patients satisfying the inclusion and exclusion criteria who received at least one dose of study medication and had at least one post-baseline efficacy measurement. The primary efficacy analysis was based on the PPS, which included all patients within the FAS population who took more than 80% of their assigned drugs and had no major protocol violations such as violations of scheduled visits or administration of contraindicated medication. The change in VAS for abdominal pain from baseline was evaluated by 2 sample t test. Seconday outcomes were compared by 2 sample t test or Wilcoxon's rank sum test. Rates of symptom improvement were described as frequencies and percentages, and compared between groups by Pearson's chi-square test or Fisher's exact test. For safety variables including treatment emrgent adverse event (TEAE), adverse drug reaction, and serious adverse event, Pearson's chi-square test or Fisher's exact test was used to evaluate differences in incidence between groups. Changes in continuous variables from baseline after treatment were analyzed by 2 sample t test or Wilcoxon's rank sum test, while categorical variables, by Pearson's chi-square test or Fisher's exact test. Confounding factors such as BDI, BAI and baseline values of the efficacy variables were further analysed by ANCOVA or multiple logistic regression. A P-value of 0.05 was considered statistically significant. Statistical analyses were performed using SAS (Enterprise Guide 4.3).\n\nBODY.RESULTS.BASELINE CHARACTERISTICS:\nTotal of 356 patients were evaluated for study inclusion. After exclusion of 61 patients during the screening period, 295 patients were randomized to either tiropramide or octylonium. Three percent of patients in the tiropramide group (4/143) and octylonium group (4/144) did not complete the study. Data on the remaining 287 patients were available for the FAS analysis: 143 for tiropramide vs. 144 for octylonium. Data on 245 patients were available for the PPS analysis: 117 for tiropramide vs. 128 for octylonium. A flowchart of patient progression through the study with reasons for premature discontinuation is presented in Fig. 2. Demographic and baseline characteristics of patients are presented in Table 1. 
Both treatment arms were well balanced for gender, age and severity of symptoms at baseline.\n\nBODY.RESULTS.EFFICACY:\nThe PPS population comprised of 245 patients (117 in the test group and 128 in the control group). The mean changes of VAS scores for abdominal pain from baseline were-23.49 ± 17.51 mm in the tiropramide group and-23.13 ± 17.61 in the octylonium group. The mean difference between the 2 groups was-0.26 mm in the FAS and-0.35 mm in the PPS, and the upper limit of the 97.5% CI at one-sided was 3.82 mm in the FAS and 4.07 mm in the PPS; these are below the margin of inferiority of 11 mm, indicating that tiropramide is not inferior to octylonium (Table 2). In secondary outcome analyses, VAS scores for abdominal pain at week 2 were significantly decreased in both groups (Table 3). Significant improvement in abdominal pain and discomfort at weeks 2 and 4 using symptom diaries, and in IBS-QoL, BDI and BAI at week 4 were observed in both groups, however the differences between the groups were not significant. In the subgroup of IBS-D, stool frequency was decreased and stool consistency was improved by treatment. Furthermore, the degree of symptom reduction in abdominal pain using symptom diaries was higher in the tiropramide group than in the octylonium group from week 1 (P = 0.033). After adjusting the baseline values of related factors, our analysis showed that all the factors based on symptom diaries, except the change of IBS symptoms, did not differ in the 2 groups. IBS symptoms at week 4 using symptom diaries were reduced further compared to baseline in the tiropramide group than the octylonium group.\n\nBODY.RESULTS.SAFETY:\nAdverse events reported during the study are shown in Table 4. TEAE was 10.88% (16/147) in the tiropramide group and 12.24% (18/147) in the octylonium group, which included 28 each. The safety profile of tiropramide was similar to that of octylonium. Adverse drug reactions were reported in 7 patients in the tiropramide group and 6 patients in the octylonium group, which included 17 and 9, respectively. Serious adverse events developed in one patient in each group, idiopathic thrombocytopenic purpura in the tiropramide group and intervertebral disc protrusion in the octilonium group, but were not related to the study drugs, and were improved without any sequelae. Laboratory test results, when comparing those at screening period with those at last visit, showed no violation of safety regarding normal values, or the changes of direction and quantity. Vital signs did not change significantly from baseline in either group at weeks 2 and 4.\n\nBODY.DISCUSSION:\nThe present study was designed to assess the non-inferiority of tiropramide compared to the proven antispasmodic, octylonium, for managing IBS symptoms. This multicenter, randomized, double-blind, non-inferiority trial showed that 4-week treatment with tiropramide improved abdominal pain and discomfort, patient-reported overall assessments of IBS symptoms, IBS-QoL, depression and anxiety, with no significant differences from octylonium. These results demonstrate non-inferiority of tiropramide in efficacy and safety, compared with otiolonium. Tiropramide effectively and safely managed abdominal pain in the IBS patients. Our results showed that the change in VAS scores for abdominal pain from baseline was about 23.5 mm after 4-week treatment with tiropramide, similar to 23.6 mm observed after 4-week treatment with octylonium. 
The changes from baseline between the 2 groups were not significantly different, and the upper limit of 95% CI was lower than the margin of inferiority in both the FAS and PPS populations, indicating non-inferiority of tiropramide compared to octylonium. In a previous study comparing tiroparmide with octylonium in IBS patients, the proportion of patients whose abdominal pain was relieved by treatment was higher in tiropramide group than in octylonium group at days 3 and 5 of medication, suggesting rapid drug action of tiropramide.16 Also, at the end of 30-day treatment, normal gut motility was restored in 88% of the tiropramide group compared with 47% of octylonium group.16 The presumed mechanism of tiropramide in improving abdominal pain is its antispasmodic effect on the intestine. In an animal study using the isolated smooth muscle, tiropramide inhibited Ca2+ influx for the most part and, to a lesser extent, increased the intracellular cyclic adenosine monophosphate (cAMP) levels.9 This study using rabbit also found that tiropamide increased Ca2+ absorption into microvesicles and decreased the amount of Ca2+ released to stimulate the contraction of the smooth muscle by increasing cAMP; as a consequence the smooth muscle relaxed.9 In terms of the distribution of tiropramide, a study showed that the radioactivity of tiropramide lasted longer in the colon than in other organs after intravenous or oral administration,17 indicating relative colonic selectivity. Furthermore, the effect of tiropramide were more evident in the smooth muscle of the digestive, urinary and female genital tract than in those of the vessel, and in stimulated contractions rather than in spontaneous ones.17 In the present study, other advantages of tiropramide were observed compared with octylonium: tiropramide was effective earlier than octylonium; in the assessment using symptom diaries, the severity and degree of abdominal discomfort tended to be reduced further in the tiropramide group than in the octylonium group. Since this study was designed to evaluate the non-inferiority of tiropramide over octylonium, we could not conclude that tiropramide is more effective than octylonium, but we may at least speculate about this possibility, considering that assessment using symptom diaries is less biased than that using VAS, which is based on patients' rememberance of symptoms from the previous week. Taken together, there may be some advantage of tiroparmide over octylonium for managing IBS symptoms; this could be due to its colonic selectivity. Regarding the safety of tiropramide, percentages of patients reporting adverse events were similar in both groups. Vital signs and labortory test results did not significantly change after a 4-week treatment with either drug. It has been known that it does not affect neurotransmitters such as histamine, acetylcholine, and adrenaline, thus leading to a lower incidence of side effects than other antispasmodic agents. It was also found that tiropramide did not affect esophageal motility, including the pressure of the low esophageal sphincter, and the coordinated movements of the esophagus.18 Actually, both tiropramide and octylonium have been safely prescribed over the past ten years. Similarly, the present study raised no safety concern. In the subgroup of patients with diarrhea predominant IBS (IBS-D), tiropramide improved bowel habits. Its spasmolytic effect of the drug may be related to the improvement of bowel habits in terms of stool frequency and consistency. 
These improvements may also be explained by the fact that tiropramide can restore normal intestinal transit in patients with IBS-D having rapid intestinal transit, and are consistent with observations in a study on octylonium demonstrating improvements of bowel habit and abdominal pain and discomfort.14 Moreover, as previously mentioned, gut motility was normalized in 88% of the tiropramide group after 30-day treatment with tiropramide, against 47% of the octylonium group.16 Considering the effect on bowel habit improvement in the subgroup of patients with IBS-D, there may be an additional benefit of tiropramide in patients with IBS-D, although the small number of patients with constipation predominant IBS (IBS-C) in our study might preclude us from concluding that the study drug is effective in patients with IBS-C. In this study, levels of depression and anxiety in the IBS patients were also improved by tiropramide. This may be interpreted in terms of the brain-gut axis.19 Relief of abdominal pain may affect positively the emotional status of study patients. IBS-QoL was also improved. However, theses secondary outcomes should be cautiously interpreted because there may be a variety of confounding factors. Placebo effects are known to be relatively high in IBS, ranging from 20% to 70%. This is believed to be partly due to a variety of physical and psychological factors affecting the brain-gut axis, which is thought to be involved in IBS pathophysiology. Therefore, studies of the treatment of IBS should be designed as randomized, double-blind, and placebo-controlled trials.20 The present study lacks a placebo control therefore could be limited in directly showing the superiority of tiropramide to placebo. However, this limitation in our study may be sufficiently compensated by previous evidence to demonstrate the superiority of octylonium to placebo.13,14,21 Other limitations of our study include the relatively short period of treatment (only 4 weeks) and the lower dose of octylonium than that used in the reference study (20 mg vs. 40 mg, t.i.d). However, the efficacy of both tiropramide and octylonium in improving symptoms of IBS was clearly demonstrated by our study results with statistical differences between pre-and post-treatments. In conclusion, our study demonstrates that treatment with tiropramide is effective and safe for managing IBS symptoms such as abdominal pain and discomfort, stool frequency and consistency, IBS-QoL, depression and anxiety. Considering not enough data fully show the efficacy and safety of tiropramide in IBS patients, the present study should be a useful reference in IBS research and clinical practice for managing IBS patients. Further studies are required to elucidate the mechanism of action of tiropramide on intestine, particularly its pharmacodynamic and pharmacokinetic properties.\n\n**Question:** Compared to octylonium 20 mg what was the result of tiropramide 100 on The VAS scores of abdominal pain at week 4?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
1147
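The tiropramide trial above bases its sample size on a 25.0 mm SD for the change in VAS pain, an 11 mm non-inferiority margin, 90% power, one-sided alpha of 0.025 and a 15% dropout allowance, and declares non-inferiority because the upper confidence limit of the between-group difference (3.82 mm in the full analysis set) stays below the margin. The sketch below reproduces both figures with the standard normal-approximation formula; it is an illustration only: the variable names are mine, scipy is assumed to be available, and the rounding of the dropout adjustment follows one common convention rather than the authors' published code.

```python
# Sketch of the sample-size and non-inferiority checks described in the
# tiropramide vs. octylonium record above (standard formulas, not the authors' code).
import math
from scipy.stats import norm

sd = 25.0        # SD of change in VAS abdominal pain (mm), from the reference study
margin = 11.0    # non-inferiority margin (mm)
alpha = 0.025    # one-sided significance level
power = 0.90
dropout = 0.15

z_alpha = norm.ppf(1 - alpha)   # ~1.96
z_beta = norm.ppf(power)        # ~1.28
n_evaluable = 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / margin ** 2  # ~108.5 evaluable per arm
n_per_arm = math.ceil(math.ceil(n_evaluable) / (1 - dropout))      # allow 15% dropout -> 129
print(n_per_arm)

# Non-inferiority conclusion: upper one-sided 97.5% CI limit of the difference vs. the margin
upper_ci_fas = 3.82   # mm, as reported for the full analysis set
print("non-inferior" if upper_ci_fas < margin else "not demonstrated")
```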
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Benefits of different postoperative treatments in patients undergoing knee arthroscopic debridement\n\n ABSTRACT.PURPOSE:\nTo assess the effectiveness of viscosupplementation or platelet-rich plasma (PRP), compared to standard care, for pain relief after knee arthroscopic debridement in patients with meniscal pathology and osteoarthritis (OA), under normal clinical practice conditions.\n\nABSTRACT.PATIENTS AND METHODS:\nWe conducted a prospective, randomized, evaluator-blind, pilot study. After arthroscopy, patients were randomized to receive 1) five injections of HA1 (Suprahyal®/Adant®); 2) four injections of HA2 (Orthovisc®); 3) three injections of HA3 (Synvisc®); 4) a single injection of PRP (GPSTM II); or 5) standard care (control). Patients were followed up for 18 months. Clinical outcomes were evaluated using the Western Ontario and McMaster Universities Arthritis Index (WOMAC) at 3, 6, 12, and 18 months. Minimally Clinical Important Improvement (MCII), as relative improvement ≥20 for pain and function, was also calculated.\n\nABSTRACT.RESULTS:\nFifty patients were included. At early follow-up (3 months), total WOMAC scores improved in all groups compared to baseline with reductions of 44.79% (HA1), 24.02% (HA2), 40.38% (HA3), 39.77% (PRP), and 27.64% (control) (p=0.002 HA1 compared to HA2). At 18 months, the higher improvement in total WOMAC was in HA1 with a 65.20% reduction, followed by PRP (55.01%), HA3 (49.57%), and HA2 (29.82%), whereas the control group had a 14.55% increase over baseline (p=0.001 control compared to HA1 and HA3). The percentage of patients achieving the MCII for both pain and function at 18 months was 100% (HA1), 80% (HA3), 60% (HA2), and 60% (PRP), whereas, in the control group, all patients returned to pre-arthroscopy levels. There were no adverse events attributable to surgery or to intraarticular administration.\n\nABSTRACT.CONCLUSION:\nViscosupplementation following arthroscopy is more effective than PRP in adequately selected patients with meniscal lesions occurring concomitantly with OA. Further controlled studies with a larger sample size and/or alternative regimens would be of interest for the scientific community.\n\nBODY.INTRODUCTION:\nOsteoarthritis (OA) is the most common of all joint diseases and exacts a heavy economic toll due to its high prevalence in the general population and potential for causing progressive disability.1 The etiology is multifactorial, including a variety of risk factors (aging, genetics, trauma, malalignment, and obesity) that interact to cause this disorder.2 The primary goal of treatment is the alleviation of pain, leading to an improvement in joint function and quality of life. 
Treatment options for knee OA include conservative measures, pharmacological treatments, intraarticular (IA) injections, and surgical intervention.3 Viscosupplementation is a well-established treatment option in OA. Hyaluronic acid (HA) exerts a mechanical effect providing lubrication of the joint, protecting against loads and impacts, and restoring the rheological properties of the synovial fluid.4 HA also has a pharmacological effect by interacting with mediators of inflammation, inhibiting nociceptors of pain, stimulating chondrocyte growth, synthesis of extracellular matrix protein, and reduction of apoptosis in osteoarthritic cartilage.5–8 HA products that are available differ in many characteristics, including origin (animal vs biofermentation), molecular weight (MW), structure (linear, crosslinked, and mix of both), volume of injection, and posology. Clinical investigations with these products have been mainly focused on the potential differences attributable to MW. Arbitrarily, different HA products have been divided into three MW categories: low (500–730 kDa), intermediate (800–2,000 kDa), and high MW (2,000–6,000 kDa) including crosslinked formulations of HA.9 The therapeutic use of platelet-rich plasma (PRP) is based on the use of the patient's own platelets as carrier of growth factors and other proteins (eg, fibronectin and other adhesive proteins) that play an important role in cell biology. The resulting products vary in their leukocyte content, platelet concentration, subsequent activation, and, at this moment, it is not defined what should be the ideal concentration of platelets that should reach the preparations as well as the dosage regimen.10 Although it seems PRP may improve the symptomatology, one part of the scientific community considers that there still exists a lack of clarity in available evidence for this to be recommended in international guidelines.11 Knee arthroscopy has become the gold standard in the diagnosis of meniscal and ligamentous injury12 and also plays a role in their management.13 This intervention has been widely used as a treatment for OA of the knee, but recent studies14–16 with controversial design and methodology17 have questioned its usefulness. However, it should be noted that >90% of patients with symptomatic knee OA have magnetic resonance imaging findings compatible with meniscal pathology.18 Therefore, arthroscopic surgery would be indicated in those patients with meniscal injuries, loose bodies, osteochondral lesions, or synovial pathology, with a concomitant diagnosis of knee OA playing an essential role in the adequate selection of patients who can benefit from this technique.19 Arthroscopy has decreased morbidity as compared with that in open procedures, but it has not eliminated pain completely. 
The causative factors behind this persistent pain are unknown, although it is known that most of the IA structures of the knee, including the synovial tissue, the anterior fat pad, and the joint capsule, have free nerve endings that are capable of sensing painful stimuli and producing severe pain.20,21 Moreover, it has been postulated that the severity and extent of OA, the degree of debridement undertaken, and the skill of the surgeon could be factors involved.22 Considering the abovementioned aspects, this study was planned as a pilot to assess the effectiveness of different treatments such as viscosupplementation or PRP as compared to standard care for pain relief after knee arthroscopic debridement in patients with meniscal pathology and OA, under normal clinical practice conditions.\n\nBODY.PATIENTS AND METHODS.ETHICS:\nThe study was conducted in accordance with the Good Clinical Practice guidelines and in compliance with the principles of the Declaration of Helsinki. The study protocol was approved by the local Ethics Review Board from Hospital Español de México, and all patients gave written informed consent to participate in the study.\n\nBODY.PATIENTS AND METHODS.DESIGN:\nThis was a prospective, randomized, evaluator-blind, pilot study with four parallel arms and a control group. Eligible patients were informed about study purpose and design. Demographic characteristics and medical history of the participants were recorded, and laboratory tests were done. X-rays of both knees were performed using anteroposterior projection with support, lateral with 30° flexion, and Merchant (45°) views; bipodalic mechanical axis digitalization as well as magnetic resonance imaging of the affected knee were also done. Arthroscopies were all carried out by a single orthopedic/arthroscopic surgeon. All patients were given three doses of antibiotic (cephalothin 1 g) as prophylaxis. Postsurgical pain treatment comprised oral acetaminophen 1 g/8 h and celecoxib 200 mg/12 h for 10 days. After surgery, patients were randomized to receive: 1) five IA injections of HA1 (Suprahyal®/Adant®: biofermentation, non-crosslinked, 600–1,200 kDa, 25 mg/syringe); 2) four IA injections of HA2 (Orthovisc®: biofermentation, non-crosslinked, 1,700–2,900 kDa, 30 mg/syringe); 3) three IA injections of HA3 (Synvisc®: chicken combs, crosslinked, 7,000 kDa, 16 mg/syringe); 4) a single IA injection of PRP (GPS® II); and 5) standard care. The HAs used were considered of interest as they have different characteristics, 23, 24 to study whether these differences could have any impact on the efficacy/safety results. The number of doses specified are those authorized for each product.23 The GPS® II is a closed system that enables PRP to be collected while avoiding the risk of contamination and makes possible to concentrate the autologous platelets for reinjection locally.25 Eligible patients were randomized 1:1 using a computer-generated list of random numbers. The random sequence was created using freely accessible tools available at http://www.randomization.com, which uses the pseudo-random number generator of Wichmann and Hill, 26 modified by McLeod.27 PRP was administered immediately after arthroscopy and viscosupplements were given once a week, starting at 21 days post-arthroscopy. Rescue medication comprised acetaminophen 1 g/8 h and celecoxib 200 mg/12 h. 
The administration of IA treatments was conducted under aseptic conditions, inserting the needle into the patellofemoral joint space by a superolateral approach, with the patients in a supine position. In case effusion was present, it was removed before each injection. Follow-up was conducted by five orthopedic surgeons, each one assigned to a specific group and blinded to the treatment administered. Visits were scheduled at 3, 6, 12, and 18 months after arthroscopy. All patients were prescribed the same protocol of rehabilitation to be followed at the patient's convenience. The rehabilitation protocol started progressively, starting with isometric exercises and muscle stretching from Week 1 post-arthroscopy, adding weight exercises from Week 3, cardiovascular training from Week 7, and high-impact exercises from 3 months onward.\n\nBODY.PATIENTS AND METHODS.PATIENT SELECTION CRITERIA:\nEligible patients were men and women, aged 40–85, with body mass index (BMI) <35 kg/m2, and indications for knee arthroscopic surgery due to symptomatic meniscal rupture Mink et al28 grade III confirmed by magnetic resonance imaging, radiographic OA grades I–II according to Kellgren and Lawrence (KL),29 meniscal preservation >60%, ligament integrity, and ability and willingness to provide informed consent for study participation. Main exclusion criteria were infectious conditions, use of anticoagulants, history of trauma (dislocation or fracture), inflammatory arthritis, microcrystalline arthropathies, history of septic arthritis, ligament injury, nonspecific synovitis, angular deformity >10°, chondral lesions G-IV Outerbridge (>1 cm2),30 neoplasms, or allergy to any of the components of the products under study.\n\nBODY.PATIENTS AND METHODS.EVALUATION OF EFFICACY:\nEfficacy was evaluated using the Western Ontario McMaster University Osteoarthritis Index (WOMAC) total and subscales of pain, stiffness, and function using a five-point (0–4) Likert scale. Scores at the end of follow-up (18 months) were compared to baseline and the differences between treatment groups were analyzed. Secondary outcomes were efficacy rates at the different follow-up visits. The WOMAC questionnaire used was the Spanish translated version from Batlle-Gualda et al,31 validated by Escobar et al.32 The number of patients achieving the Minimal Clinically Important Improvement (MCII) in each treatment group was also calculated. The MCII is the smallest change in measurement that signifies an important improvement in a patient's symptom.33 According to Tubach et al,34 a 15 out of 100 for absolute improvement and 20% for relative improvement could be used as cutoffs for MCII estimation. Taking into account that, in our study, a Likert scale was used, the 20% of relative improvement cutoff was applied.33\n\nBODY.PATIENTS AND METHODS.EVALUATION OF SAFETY:\nThe incidence and type of adverse events was recorded throughout the study.\n\nBODY.STATISTICS:\nCategorical variables were summarized by their number and relative frequencies. For continuous variables following a normal distribution (Kolmogorov–Smirnov test), mean, standard deviation, and maximum and minimum values were used. Non-normally distributed variables were summarized with their median, interquartile range, minimum, and maximum. Changes in WOMAC total and subscales at 3, 6, 12, and 18 months were expressed as raw scores and as percentage of change from the baseline value. 
Comparisons between groups were done using chi-square test or Fisher's Exact test for categorical variables when necessary. Continuous data were analyzed using Mann–Whitney test. The study was designed as a pilot evaluation; therefore, real sample-size estimation was not done. The main population for analysis was the per protocol (PP) population, comprising patients who completed the 18 months follow-up according to the study protocol. Safety analysis was conducted in all patients who received at least one IA injection. Data were analyzed using SPSS V14 (SPSS Inc, Chicago Il). All tests were two-sided, and the statistical significance was set at 0.05.\n\nBODY.RESULTS.DISPOSITION OF PATIENTS AND DEMOGRAPHICS:\nPatients were recruited from March 2013 to October 2014. Ninety patients were initially screened, of whom 40 were excluded due to age (n=4), history of trauma or ligament involvement (n=5), overweight (n=6), meniscal injury > 40% (n=4), chondral lesions G-IV (Outerbridge >1 cm2; n=5), eventual total knee arthroplasty (TKA) (n=7), lost to follow-up (n=8), or death unrelated to the study (n=1), leaving a total of 50 patients who participated in the study and completed all procedures as scheduled (Figure 1). Among study subjects, 52% were male and the mean age was 64.4 years, with statistically significant differences between HA2 and HA3 (p=0.007). Mean BMI was 27.8 kg/m2 without differences among groups (p=0.693), despite HA2 comprising more obese patients (Table 1). Near all patients in the HA3 group had mild OA (KL grade I), with the differences significant versus HA2 (p=0.001) and control (p=0.002). The groups were homogeneous with respect to WOMAC total values (p=0.062) as well as pain (p=0.234), stiffness (p=0. 383), and function (p=0.089) subscales (Table 1). There were no differences among groups in arthroscopic lesions (Table 2). The following surgical procedures were conducted: partial meniscectomy of no more than 40% (35 patients), synovectomy and resection of Hoffa's fat pad (five patients), and debridement and stabilization of the chondral lesion (16 patients). More than one procedure in the same patient was possible.\n\nBODY.EFFICACY:\nAt early follow-up (3 months), total WOMAC scores improved as compared to baseline, in all groups with reductions of 44.79% (HA1), 24.0% (HA2), 40.38% (HA3), 39.77% (PRP), and 27.64% (control), (p=0.002 HA1 compared to HA2). At 18 months, a higher improvement in total WOMAC was observed in HA1 with a 65.20% reduction, followed by PRP (55.01%), HA3 (49.57%), and HA2 (29.82%), whereas the control group had a 14.5% increase over baseline despite the use of rescue medication and rehabilitation, with the differences statistically significant with respect to HA1 and HA3 (p=0.001; Table 3 and Figures 2A–D). At 3 months follow-up, the percentage of patients achieving the MCII for both pain and function (20% of relative change) in different groups was 80% (HA1), 20% (HA2), 40% (HA3), 30% PRP, and 30% (control). At the end of follow-up, a growing number of patients experienced a clinically significant improvement in HA1 with a 100% responders rate, followed by HA3 (80%). 
In the group treated with HA2 and in those treated with PRP, the results reached 60% responders, whereas in the control group, all patients returned to near pre-arthroscopy levels of pain and impairment and none of the patients reached the 20% MCII cutoff (Figure 3).\n\nBODY.SAFETY:\nThroughout the overall study period, patients did not experience adverse events attributable to surgery or to IA injections.\n\nBODY.DISCUSSION:\nArthroscopic surgery is frequently advocated as a treatment option to relieve symptoms of painful degenerative knee conditions. The rationale is that it may improve symptom and functions, has minimal morbidity, provides a therapeutic options, and documents the stage of the disease process. The duration of effect, however, is variable.35 Adjunctive therapies such as nonsteroidal anti-inflammatory drugs and other medications, physiotherapy, and IA steroid injections may be incorporated to improve the results.36,37 The use of arthroscopic surgery in the treatment of OA has generated much discussion. Whereas some studies14,15 reported no differences with placebo arthroscopy, other studies confirm that, in patients with preoperative mechanical symptoms secondary to loose bodies, chondral flaps, and meniscal pathology, the use of arthroscopic surgery is beneficial.38,39 The key point is to offer knee arthroscopy to selected patients. Patients with symptomatic meniscal tears, few degenerative changes, normal axis, and acute mechanical complaints can benefit from this intervention, especially if the OA is mild or moderate.40–42 Favorable results have been also shown for the management of the infrapatellar fat pad impingement and Hoffa's disease.43,44 Chronic complaints, previous surgical intervention, degenerative changes in two or three compartments, significant axis deviation, ligament imbalance, or severe cartilage destruction are among the characteristics of patients who may be candidates for unsatisfactory results after arthroscopy.42 As a result, arthroscopic knee surgery is the most common type of orthopedic surgery performed in the developed world.45 However, in patients with symptomatic OA, pain can persist after arthroscopic meniscectomy and/or debridement of chondral lesions. Treatment combining arthroscopic surgery and viscosupplementation has been widely used for patients with combined intraarticular mechanical and articular surface pathology.40 Initially, rheological characteristics that confer a lubricating effect to HA were considered the primary mechanism of action in treating OA pain. Nevertheless, rather than being a mere device, HA has been shown to exert other effects such as inhibition of tissue nociceptors, inhibition of matrix metalloproteinases, free radicals scavenging, and stimulation of endogenous HA production. Due to this, viscosupplementation has been recommended as adjunctive therapy after arthroscopic surgery as a suitable way of achieving long-term stabilization of the treatment outcome.40,44,46 The application of PRP to patients with OA was developed because of the physiological roles of several bioactive proteins and growth factors expressed in platelets, which lead to tissue regeneration. Despite the use of PRPs having been extended to the treatment of several musculoskeletal injuries the last few years, we have not found published studies analyzing the efficacy of PRP as adjunctive treatment in post-arthroscopy patients. 
A recent meta-analysis found that at 6 months post injection, PRP and HA had similar effects with respect to pain relief and functional impairment. However, at 12 months, PRP was associated with better improvement scores.47 In our study, the indication for knee arthroscopy was symptomatic rupture of a meniscus with symptomatic osteochondral detachment in patients with grade II OA. In these patients, surgery is the only and necessary indication, and they were given additional IA treatment with hyaluronate or PRP due to the presence of OA. In the market, there are a variety of HAs and our aim has been to verify if the differences between HA products has any clinical relevance, as well as to study the effects of PRP after knee arthroscopic debridement. Procedures for clinical practice and the dosage described by the manufacturers have been used; according to the studies published, treatments with HA usually started 3 weeks after surgery.3,35,44,48,49 In absence of recommendations in the literature, PRP injection was administered immediately after arthroscopy. The HA1 product showed the best results, with a very significant improvement of the symptomatology compared to the baseline that was maintained up to 18 months post-arthroscopy (reduction of −65.20% in total WOMAC) and with 100% of the patients reaching the MCII 20% in pain and function, followed by HA3 with a reduction of −49.57 in total WOMAC and 80% of patients reaching the 20% MCII cutoff. These products differ in origin, MW, structure, and number of injections administered and, in our study, the differences between both did not reach statistical significance. However, it should be noted that 8/10 patients in the HA3 group were KL grade I, which is associated with better response data.40,41 The HA2 group showed much more discreet and erratic results, with final reduction in WOMAC at the end of the study and nearly 30% and 60% of patients complying with MCII requirements. It should be noted that, in this group, all patients had OA grade II as well as a greater number of overweight/obese patients – a characteristic that may also have influenced the results.44 A recent meta-analysis found that, at 6 months post injection, PRP and HA had similar effects with respect to pain relief and functional impairment. However, at 12 months, PRP was associated with better improvement scores.47 In our study, patients treated with PRP constitute a diverse group who seem to follow the rule of all or nothing, so that those who improve do so in a very significant way and this improvement is maintained until the end of follow-up, whereas something similar occurs with non-responder patients; thus, at the end of the study, only 60% of patients achieved the MCII threshold. Ayhan et al50 suggest that PRP therapy would be indicated for patients <60 years and with a BMI <30. In our study, the best results in the PRP group were also seen in patients with a BMI <30, as well as in those <60 years. We did not find any another relationship between the type of response and the characteristics of the patients or the arthroscopic findings. Surprisingly, the control group showed low improvement in symptomatology at early follow-up that worsened along time with return to pre-arthroscopy levels at the 12-month follow-up that required the consumption of rescue medication. 
One of the weaknesses of our study is the small sample size, as it was designed as a pilot evaluation, with the limitations that this entails when establishing comparisons; moreover, the fact that each group was evaluated by a different person may have influenced the evaluation of the results. Finally, not all patients undertook the rehabilitation protocol in the same way, as they were treated by different people and in different clinics according to the personal needs of the patient: a stricter control of this procedure would have been desirable. In our favor, the long follow-up period allowed us to assess the evolution of the results in the long term; the criteria used for patient selection were of special relevance to select those who may benefit most from the procedure, as was the presence of a control group. The design of the study also allowed us to study the behavior of three different HAs used within routine practice and to compare them with PRP as a new therapeutic approach. Viscosupplementation is an effective treatment for knee OA. Based on our results, we consider that its use in combination with arthroscopic surgery can provide very good results in adequately selected patients with an individualized and multidisciplinary follow-up. However, due to the importance of this type of intervention, further controlled studies with a larger sample size and/or alternative posology regimens would be of interest for the scientific community.\n\n**Question:** Compared to standard care (control) what was the result of five injections of HA1 (Suprahyal®/Adant®) on At early follow-up (3 months), total WOMAC scores Western Ontario and McMaster Universities Arthritis Index (WOMAC)?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly decreased\n" } ]
significantly decreased
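The efficacy methodology quoted in the row above expresses outcomes as the percentage change in WOMAC from baseline and counts a patient as reaching the MCII when the relative improvement is at least 20% for both pain and function. As a minimal illustrative sketch of that arithmetic (not code from the study; the function names and example numbers below are invented), it could be written in Python as:

```python
def percent_change(baseline: float, follow_up: float) -> float:
    """Percentage change from baseline; negative values mean improvement
    (a reduction in the WOMAC score)."""
    if baseline == 0:
        raise ValueError("baseline WOMAC score must be non-zero")
    return (follow_up - baseline) / baseline * 100.0


def is_mcii_responder(pain_baseline: float, pain_follow_up: float,
                      function_baseline: float, function_follow_up: float,
                      cutoff: float = 20.0) -> bool:
    """MCII as defined above: relative improvement >= 20% in BOTH pain and function."""
    pain_improvement = -percent_change(pain_baseline, pain_follow_up)
    function_improvement = -percent_change(function_baseline, function_follow_up)
    return pain_improvement >= cutoff and function_improvement >= cutoff


# Hypothetical example values (not taken from the trial):
print(percent_change(48, 17))            # roughly -64.6, i.e. a ~65% reduction
print(is_mcii_responder(10, 7, 30, 22))  # True: 30% and ~27% relative improvement
```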
1153
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Benefits of different postoperative treatments in patients undergoing knee arthroscopic debridement\n\n ABSTRACT.PURPOSE:\nTo assess the effectiveness of viscosupplementation or platelet-rich plasma (PRP), compared to standard care, for pain relief after knee arthroscopic debridement in patients with meniscal pathology and osteoarthritis (OA), under normal clinical practice conditions.\n\nABSTRACT.PATIENTS AND METHODS:\nWe conducted a prospective, randomized, evaluator-blind, pilot study. After arthroscopy, patients were randomized to receive 1) five injections of HA1 (Suprahyal®/Adant®); 2) four injections of HA2 (Orthovisc®); 3) three injections of HA3 (Synvisc®); 4) a single injection of PRP (GPSTM II); or 5) standard care (control). Patients were followed up for 18 months. Clinical outcomes were evaluated using the Western Ontario and McMaster Universities Arthritis Index (WOMAC) at 3, 6, 12, and 18 months. Minimally Clinical Important Improvement (MCII), as relative improvement ≥20 for pain and function, was also calculated.\n\nABSTRACT.RESULTS:\nFifty patients were included. At early follow-up (3 months), total WOMAC scores improved in all groups compared to baseline with reductions of 44.79% (HA1), 24.02% (HA2), 40.38% (HA3), 39.77% (PRP), and 27.64% (control) (p=0.002 HA1 compared to HA2). At 18 months, the higher improvement in total WOMAC was in HA1 with a 65.20% reduction, followed by PRP (55.01%), HA3 (49.57%), and HA2 (29.82%), whereas the control group had a 14.55% increase over baseline (p=0.001 control compared to HA1 and HA3). The percentage of patients achieving the MCII for both pain and function at 18 months was 100% (HA1), 80% (HA3), 60% (HA2), and 60% (PRP), whereas, in the control group, all patients returned to pre-arthroscopy levels. There were no adverse events attributable to surgery or to intraarticular administration.\n\nABSTRACT.CONCLUSION:\nViscosupplementation following arthroscopy is more effective than PRP in adequately selected patients with meniscal lesions occurring concomitantly with OA. Further controlled studies with a larger sample size and/or alternative regimens would be of interest for the scientific community.\n\nBODY.INTRODUCTION:\nOsteoarthritis (OA) is the most common of all joint diseases and exacts a heavy economic toll due to its high prevalence in the general population and potential for causing progressive disability.1 The etiology is multifactorial, including a variety of risk factors (aging, genetics, trauma, malalignment, and obesity) that interact to cause this disorder.2 The primary goal of treatment is the alleviation of pain, leading to an improvement in joint function and quality of life. 
Treatment options for knee OA include conservative measures, pharmacological treatments, intraarticular (IA) injections, and surgical intervention.3 Viscosupplementation is a well-established treatment option in OA. Hyaluronic acid (HA) exerts a mechanical effect providing lubrication of the joint, protecting against loads and impacts, and restoring the rheological properties of the synovial fluid.4 HA also has a pharmacological effect by interacting with mediators of inflammation, inhibiting nociceptors of pain, stimulating chondrocyte growth, synthesis of extracellular matrix protein, and reduction of apoptosis in osteoarthritic cartilage.5–8 HA products that are available differ in many characteristics, including origin (animal vs biofermentation), molecular weight (MW), structure (linear, crosslinked, and mix of both), volume of injection, and posology. Clinical investigations with these products have been mainly focused on the potential differences attributable to MW. Arbitrarily, different HA products have been divided into three MW categories: low (500–730 kDa), intermediate (800–2,000 kDa), and high MW (2,000–6,000 kDa) including crosslinked formulations of HA.9 The therapeutic use of platelet-rich plasma (PRP) is based on the use of the patient's own platelets as carrier of growth factors and other proteins (eg, fibronectin and other adhesive proteins) that play an important role in cell biology. The resulting products vary in their leukocyte content, platelet concentration, subsequent activation, and, at this moment, it is not defined what should be the ideal concentration of platelets that should reach the preparations as well as the dosage regimen.10 Although it seems PRP may improve the symptomatology, one part of the scientific community considers that there still exists a lack of clarity in available evidence for this to be recommended in international guidelines.11 Knee arthroscopy has become the gold standard in the diagnosis of meniscal and ligamentous injury12 and also plays a role in their management.13 This intervention has been widely used as a treatment for OA of the knee, but recent studies14–16 with controversial design and methodology17 have questioned its usefulness. However, it should be noted that >90% of patients with symptomatic knee OA have magnetic resonance imaging findings compatible with meniscal pathology.18 Therefore, arthroscopic surgery would be indicated in those patients with meniscal injuries, loose bodies, osteochondral lesions, or synovial pathology, with a concomitant diagnosis of knee OA playing an essential role in the adequate selection of patients who can benefit from this technique.19 Arthroscopy has decreased morbidity as compared with that in open procedures, but it has not eliminated pain completely. 
The causative factors behind this persistent pain are unknown, although it is known that most of the IA structures of the knee, including the synovial tissue, the anterior fat pad, and the joint capsule, have free nerve endings that are capable of sensing painful stimuli and producing severe pain.20,21 Moreover, it has been postulated that the severity and extent of OA, the degree of debridement undertaken, and the skill of the surgeon could be factors involved.22 Considering the abovementioned aspects, this study was planned as a pilot to assess the effectiveness of different treatments such as viscosupplementation or PRP as compared to standard care for pain relief after knee arthroscopic debridement in patients with meniscal pathology and OA, under normal clinical practice conditions.\n\nBODY.PATIENTS AND METHODS.ETHICS:\nThe study was conducted in accordance with the Good Clinical Practice guidelines and in compliance with the principles of the Declaration of Helsinki. The study protocol was approved by the local Ethics Review Board from Hospital Español de México, and all patients gave written informed consent to participate in the study.\n\nBODY.PATIENTS AND METHODS.DESIGN:\nThis was a prospective, randomized, evaluator-blind, pilot study with four parallel arms and a control group. Eligible patients were informed about study purpose and design. Demographic characteristics and medical history of the participants were recorded, and laboratory tests were done. X-rays of both knees were performed using anteroposterior projection with support, lateral with 30° flexion, and Merchant (45°) views; bipodalic mechanical axis digitalization as well as magnetic resonance imaging of the affected knee were also done. Arthroscopies were all carried out by a single orthopedic/arthroscopic surgeon. All patients were given three doses of antibiotic (cephalothin 1 g) as prophylaxis. Postsurgical pain treatment comprised oral acetaminophen 1 g/8 h and celecoxib 200 mg/12 h for 10 days. After surgery, patients were randomized to receive: 1) five IA injections of HA1 (Suprahyal®/Adant®: biofermentation, non-crosslinked, 600–1,200 kDa, 25 mg/syringe); 2) four IA injections of HA2 (Orthovisc®: biofermentation, non-crosslinked, 1,700–2,900 kDa, 30 mg/syringe); 3) three IA injections of HA3 (Synvisc®: chicken combs, crosslinked, 7,000 kDa, 16 mg/syringe); 4) a single IA injection of PRP (GPS® II); and 5) standard care. The HAs used were considered of interest as they have different characteristics, 23, 24 to study whether these differences could have any impact on the efficacy/safety results. The number of doses specified are those authorized for each product.23 The GPS® II is a closed system that enables PRP to be collected while avoiding the risk of contamination and makes possible to concentrate the autologous platelets for reinjection locally.25 Eligible patients were randomized 1:1 using a computer-generated list of random numbers. The random sequence was created using freely accessible tools available at http://www.randomization.com, which uses the pseudo-random number generator of Wichmann and Hill, 26 modified by McLeod.27 PRP was administered immediately after arthroscopy and viscosupplements were given once a week, starting at 21 days post-arthroscopy. Rescue medication comprised acetaminophen 1 g/8 h and celecoxib 200 mg/12 h. 
The administration of IA treatments was conducted under aseptic conditions, inserting the needle into the patellofemoral joint space by a superolateral approach, with the patients in a supine position. In case effusion was present, it was removed before each injection. Follow-up was conducted by five orthopedic surgeons, each one assigned to a specific group and blinded to the treatment administered. Visits were scheduled at 3, 6, 12, and 18 months after arthroscopy. All patients were prescribed the same protocol of rehabilitation to be followed at the patient's convenience. The rehabilitation protocol started progressively, starting with isometric exercises and muscle stretching from Week 1 post-arthroscopy, adding weight exercises from Week 3, cardiovascular training from Week 7, and high-impact exercises from 3 months onward.\n\nBODY.PATIENTS AND METHODS.PATIENT SELECTION CRITERIA:\nEligible patients were men and women, aged 40–85, with body mass index (BMI) <35 kg/m2, and indications for knee arthroscopic surgery due to symptomatic meniscal rupture Mink et al28 grade III confirmed by magnetic resonance imaging, radiographic OA grades I–II according to Kellgren and Lawrence (KL),29 meniscal preservation >60%, ligament integrity, and ability and willingness to provide informed consent for study participation. Main exclusion criteria were infectious conditions, use of anticoagulants, history of trauma (dislocation or fracture), inflammatory arthritis, microcrystalline arthropathies, history of septic arthritis, ligament injury, nonspecific synovitis, angular deformity >10°, chondral lesions G-IV Outerbridge (>1 cm2),30 neoplasms, or allergy to any of the components of the products under study.\n\nBODY.PATIENTS AND METHODS.EVALUATION OF EFFICACY:\nEfficacy was evaluated using the Western Ontario McMaster University Osteoarthritis Index (WOMAC) total and subscales of pain, stiffness, and function using a five-point (0–4) Likert scale. Scores at the end of follow-up (18 months) were compared to baseline and the differences between treatment groups were analyzed. Secondary outcomes were efficacy rates at the different follow-up visits. The WOMAC questionnaire used was the Spanish translated version from Batlle-Gualda et al,31 validated by Escobar et al.32 The number of patients achieving the Minimal Clinically Important Improvement (MCII) in each treatment group was also calculated. The MCII is the smallest change in measurement that signifies an important improvement in a patient's symptom.33 According to Tubach et al,34 a 15 out of 100 for absolute improvement and 20% for relative improvement could be used as cutoffs for MCII estimation. Taking into account that, in our study, a Likert scale was used, the 20% of relative improvement cutoff was applied.33\n\nBODY.PATIENTS AND METHODS.EVALUATION OF SAFETY:\nThe incidence and type of adverse events was recorded throughout the study.\n\nBODY.STATISTICS:\nCategorical variables were summarized by their number and relative frequencies. For continuous variables following a normal distribution (Kolmogorov–Smirnov test), mean, standard deviation, and maximum and minimum values were used. Non-normally distributed variables were summarized with their median, interquartile range, minimum, and maximum. Changes in WOMAC total and subscales at 3, 6, 12, and 18 months were expressed as raw scores and as percentage of change from the baseline value. 
Comparisons between groups were done using chi-square test or Fisher's Exact test for categorical variables when necessary. Continuous data were analyzed using Mann–Whitney test. The study was designed as a pilot evaluation; therefore, real sample-size estimation was not done. The main population for analysis was the per protocol (PP) population, comprising patients who completed the 18 months follow-up according to the study protocol. Safety analysis was conducted in all patients who received at least one IA injection. Data were analyzed using SPSS V14 (SPSS Inc, Chicago Il). All tests were two-sided, and the statistical significance was set at 0.05.\n\nBODY.RESULTS.DISPOSITION OF PATIENTS AND DEMOGRAPHICS:\nPatients were recruited from March 2013 to October 2014. Ninety patients were initially screened, of whom 40 were excluded due to age (n=4), history of trauma or ligament involvement (n=5), overweight (n=6), meniscal injury > 40% (n=4), chondral lesions G-IV (Outerbridge >1 cm2; n=5), eventual total knee arthroplasty (TKA) (n=7), lost to follow-up (n=8), or death unrelated to the study (n=1), leaving a total of 50 patients who participated in the study and completed all procedures as scheduled (Figure 1). Among study subjects, 52% were male and the mean age was 64.4 years, with statistically significant differences between HA2 and HA3 (p=0.007). Mean BMI was 27.8 kg/m2 without differences among groups (p=0.693), despite HA2 comprising more obese patients (Table 1). Near all patients in the HA3 group had mild OA (KL grade I), with the differences significant versus HA2 (p=0.001) and control (p=0.002). The groups were homogeneous with respect to WOMAC total values (p=0.062) as well as pain (p=0.234), stiffness (p=0. 383), and function (p=0.089) subscales (Table 1). There were no differences among groups in arthroscopic lesions (Table 2). The following surgical procedures were conducted: partial meniscectomy of no more than 40% (35 patients), synovectomy and resection of Hoffa's fat pad (five patients), and debridement and stabilization of the chondral lesion (16 patients). More than one procedure in the same patient was possible.\n\nBODY.EFFICACY:\nAt early follow-up (3 months), total WOMAC scores improved as compared to baseline, in all groups with reductions of 44.79% (HA1), 24.0% (HA2), 40.38% (HA3), 39.77% (PRP), and 27.64% (control), (p=0.002 HA1 compared to HA2). At 18 months, a higher improvement in total WOMAC was observed in HA1 with a 65.20% reduction, followed by PRP (55.01%), HA3 (49.57%), and HA2 (29.82%), whereas the control group had a 14.5% increase over baseline despite the use of rescue medication and rehabilitation, with the differences statistically significant with respect to HA1 and HA3 (p=0.001; Table 3 and Figures 2A–D). At 3 months follow-up, the percentage of patients achieving the MCII for both pain and function (20% of relative change) in different groups was 80% (HA1), 20% (HA2), 40% (HA3), 30% PRP, and 30% (control). At the end of follow-up, a growing number of patients experienced a clinically significant improvement in HA1 with a 100% responders rate, followed by HA3 (80%). 
In the group treated with HA2 and in those treated with PRP, the results reached 60% responders, whereas in the control group, all patients returned to near pre-arthroscopy levels of pain and impairment and none of the patients reached the 20% MCII cutoff (Figure 3).\n\nBODY.SAFETY:\nThroughout the overall study period, patients did not experience adverse events attributable to surgery or to IA injections.\n\nBODY.DISCUSSION:\nArthroscopic surgery is frequently advocated as a treatment option to relieve symptoms of painful degenerative knee conditions. The rationale is that it may improve symptom and functions, has minimal morbidity, provides a therapeutic options, and documents the stage of the disease process. The duration of effect, however, is variable.35 Adjunctive therapies such as nonsteroidal anti-inflammatory drugs and other medications, physiotherapy, and IA steroid injections may be incorporated to improve the results.36,37 The use of arthroscopic surgery in the treatment of OA has generated much discussion. Whereas some studies14,15 reported no differences with placebo arthroscopy, other studies confirm that, in patients with preoperative mechanical symptoms secondary to loose bodies, chondral flaps, and meniscal pathology, the use of arthroscopic surgery is beneficial.38,39 The key point is to offer knee arthroscopy to selected patients. Patients with symptomatic meniscal tears, few degenerative changes, normal axis, and acute mechanical complaints can benefit from this intervention, especially if the OA is mild or moderate.40–42 Favorable results have been also shown for the management of the infrapatellar fat pad impingement and Hoffa's disease.43,44 Chronic complaints, previous surgical intervention, degenerative changes in two or three compartments, significant axis deviation, ligament imbalance, or severe cartilage destruction are among the characteristics of patients who may be candidates for unsatisfactory results after arthroscopy.42 As a result, arthroscopic knee surgery is the most common type of orthopedic surgery performed in the developed world.45 However, in patients with symptomatic OA, pain can persist after arthroscopic meniscectomy and/or debridement of chondral lesions. Treatment combining arthroscopic surgery and viscosupplementation has been widely used for patients with combined intraarticular mechanical and articular surface pathology.40 Initially, rheological characteristics that confer a lubricating effect to HA were considered the primary mechanism of action in treating OA pain. Nevertheless, rather than being a mere device, HA has been shown to exert other effects such as inhibition of tissue nociceptors, inhibition of matrix metalloproteinases, free radicals scavenging, and stimulation of endogenous HA production. Due to this, viscosupplementation has been recommended as adjunctive therapy after arthroscopic surgery as a suitable way of achieving long-term stabilization of the treatment outcome.40,44,46 The application of PRP to patients with OA was developed because of the physiological roles of several bioactive proteins and growth factors expressed in platelets, which lead to tissue regeneration. Despite the use of PRPs having been extended to the treatment of several musculoskeletal injuries the last few years, we have not found published studies analyzing the efficacy of PRP as adjunctive treatment in post-arthroscopy patients. 
A recent meta-analysis found that at 6 months post injection, PRP and HA had similar effects with respect to pain relief and functional impairment. However, at 12 months, PRP was associated with better improvement scores.47 In our study, the indication for knee arthroscopy was symptomatic rupture of a meniscus with symptomatic osteochondral detachment in patients with grade II OA. In these patients, surgery is the only and necessary indication, and they were given additional IA treatment with hyaluronate or PRP due to the presence of OA. In the market, there are a variety of HAs and our aim has been to verify if the differences between HA products has any clinical relevance, as well as to study the effects of PRP after knee arthroscopic debridement. Procedures for clinical practice and the dosage described by the manufacturers have been used; according to the studies published, treatments with HA usually started 3 weeks after surgery.3,35,44,48,49 In absence of recommendations in the literature, PRP injection was administered immediately after arthroscopy. The HA1 product showed the best results, with a very significant improvement of the symptomatology compared to the baseline that was maintained up to 18 months post-arthroscopy (reduction of −65.20% in total WOMAC) and with 100% of the patients reaching the MCII 20% in pain and function, followed by HA3 with a reduction of −49.57 in total WOMAC and 80% of patients reaching the 20% MCII cutoff. These products differ in origin, MW, structure, and number of injections administered and, in our study, the differences between both did not reach statistical significance. However, it should be noted that 8/10 patients in the HA3 group were KL grade I, which is associated with better response data.40,41 The HA2 group showed much more discreet and erratic results, with final reduction in WOMAC at the end of the study and nearly 30% and 60% of patients complying with MCII requirements. It should be noted that, in this group, all patients had OA grade II as well as a greater number of overweight/obese patients – a characteristic that may also have influenced the results.44 A recent meta-analysis found that, at 6 months post injection, PRP and HA had similar effects with respect to pain relief and functional impairment. However, at 12 months, PRP was associated with better improvement scores.47 In our study, patients treated with PRP constitute a diverse group who seem to follow the rule of all or nothing, so that those who improve do so in a very significant way and this improvement is maintained until the end of follow-up, whereas something similar occurs with non-responder patients; thus, at the end of the study, only 60% of patients achieved the MCII threshold. Ayhan et al50 suggest that PRP therapy would be indicated for patients <60 years and with a BMI <30. In our study, the best results in the PRP group were also seen in patients with a BMI <30, as well as in those <60 years. We did not find any another relationship between the type of response and the characteristics of the patients or the arthroscopic findings. Surprisingly, the control group showed low improvement in symptomatology at early follow-up that worsened along time with return to pre-arthroscopy levels at the 12-month follow-up that required the consumption of rescue medication. 
One of the weaknesses of our study is the small sample size, as it was designed as a pilot evaluation, with the limitations that this entails when establishing comparisons; moreover, the fact that each group was evaluated by a different person may have influenced the evaluation of the results. Finally, not all patients undertook the rehabilitation protocol in the same way, as they were treated by different people and in different clinics according to the personal needs of the patient: a stricter control of this procedure would have been desirable. In our favor, the long follow-up period allowed us to assess the evolution of the results in the long term; the criteria used for patient selection were of special relevance to select those who may benefit most from the procedure, as was the presence of a control group. The design of the study also allowed us to study the behavior of three different HAs used within routine practice and to compare them with PRP as a new therapeutic approach. Viscosupplementation is an effective treatment for knee OA. Based on our results, we consider that its use in combination with arthroscopic surgery can provide very good results in adequately selected patients with an individualized and multidisciplinary follow-up. However, due to the importance of this type of intervention, further controlled studies with a larger sample size and/or alternative posology regimens would be of interest for the scientific community.\n\n**Question:** Compared to standard care (control) what was the result of three injections of HA3 (Synvisc®) on At 18 months, the higher improvement in total WOMAC?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ]
significantly increased
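The statistics section quoted in the row above compares continuous outcomes (such as WOMAC change scores) with the Mann–Whitney test and categorical outcomes (such as MCII responder counts) with chi-square or Fisher's exact test, two-sided, at a 0.05 significance level. A minimal sketch of such comparisons, assuming SciPy is available and using purely hypothetical numbers rather than the trial's data, might look like:

```python
from scipy.stats import mannwhitneyu, fisher_exact

# Hypothetical percentage changes in total WOMAC at 18 months for two groups
# (invented values for illustration only, not the trial's measurements).
group_a = [-70.0, -61.5, -66.2, -58.9, -72.3, -64.0, -67.8, -60.1, -69.4, -62.7]
control = [12.0, 18.5, 9.8, 20.1, 15.6, 11.2, 17.9, 13.4, 16.8, 14.3]

u_stat, p_value = mannwhitneyu(group_a, control, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")

# Hypothetical 2x2 table of MCII responders vs non-responders in two arms of 10 patients.
table = [[10, 0],   # group A: responders, non-responders
         [0, 10]]   # control: responders, non-responders
_, p_fisher = fisher_exact(table, alternative="two-sided")
print(f"Fisher's exact p = {p_fisher:.4f}")

alpha = 0.05
print("significant" if p_value < alpha else "not significant")
```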
1154
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Benefits of different postoperative treatments in patients undergoing knee arthroscopic debridement\n\n ABSTRACT.PURPOSE:\nTo assess the effectiveness of viscosupplementation or platelet-rich plasma (PRP), compared to standard care, for pain relief after knee arthroscopic debridement in patients with meniscal pathology and osteoarthritis (OA), under normal clinical practice conditions.\n\nABSTRACT.PATIENTS AND METHODS:\nWe conducted a prospective, randomized, evaluator-blind, pilot study. After arthroscopy, patients were randomized to receive 1) five injections of HA1 (Suprahyal®/Adant®); 2) four injections of HA2 (Orthovisc®); 3) three injections of HA3 (Synvisc®); 4) a single injection of PRP (GPSTM II); or 5) standard care (control). Patients were followed up for 18 months. Clinical outcomes were evaluated using the Western Ontario and McMaster Universities Arthritis Index (WOMAC) at 3, 6, 12, and 18 months. Minimally Clinical Important Improvement (MCII), as relative improvement ≥20 for pain and function, was also calculated.\n\nABSTRACT.RESULTS:\nFifty patients were included. At early follow-up (3 months), total WOMAC scores improved in all groups compared to baseline with reductions of 44.79% (HA1), 24.02% (HA2), 40.38% (HA3), 39.77% (PRP), and 27.64% (control) (p=0.002 HA1 compared to HA2). At 18 months, the higher improvement in total WOMAC was in HA1 with a 65.20% reduction, followed by PRP (55.01%), HA3 (49.57%), and HA2 (29.82%), whereas the control group had a 14.55% increase over baseline (p=0.001 control compared to HA1 and HA3). The percentage of patients achieving the MCII for both pain and function at 18 months was 100% (HA1), 80% (HA3), 60% (HA2), and 60% (PRP), whereas, in the control group, all patients returned to pre-arthroscopy levels. There were no adverse events attributable to surgery or to intraarticular administration.\n\nABSTRACT.CONCLUSION:\nViscosupplementation following arthroscopy is more effective than PRP in adequately selected patients with meniscal lesions occurring concomitantly with OA. Further controlled studies with a larger sample size and/or alternative regimens would be of interest for the scientific community.\n\nBODY.INTRODUCTION:\nOsteoarthritis (OA) is the most common of all joint diseases and exacts a heavy economic toll due to its high prevalence in the general population and potential for causing progressive disability.1 The etiology is multifactorial, including a variety of risk factors (aging, genetics, trauma, malalignment, and obesity) that interact to cause this disorder.2 The primary goal of treatment is the alleviation of pain, leading to an improvement in joint function and quality of life. 
Treatment options for knee OA include conservative measures, pharmacological treatments, intraarticular (IA) injections, and surgical intervention.3 Viscosupplementation is a well-established treatment option in OA. Hyaluronic acid (HA) exerts a mechanical effect providing lubrication of the joint, protecting against loads and impacts, and restoring the rheological properties of the synovial fluid.4 HA also has a pharmacological effect by interacting with mediators of inflammation, inhibiting nociceptors of pain, stimulating chondrocyte growth, synthesis of extracellular matrix protein, and reduction of apoptosis in osteoarthritic cartilage.5–8 HA products that are available differ in many characteristics, including origin (animal vs biofermentation), molecular weight (MW), structure (linear, crosslinked, and mix of both), volume of injection, and posology. Clinical investigations with these products have been mainly focused on the potential differences attributable to MW. Arbitrarily, different HA products have been divided into three MW categories: low (500–730 kDa), intermediate (800–2,000 kDa), and high MW (2,000–6,000 kDa) including crosslinked formulations of HA.9 The therapeutic use of platelet-rich plasma (PRP) is based on the use of the patient's own platelets as carrier of growth factors and other proteins (eg, fibronectin and other adhesive proteins) that play an important role in cell biology. The resulting products vary in their leukocyte content, platelet concentration, subsequent activation, and, at this moment, it is not defined what should be the ideal concentration of platelets that should reach the preparations as well as the dosage regimen.10 Although it seems PRP may improve the symptomatology, one part of the scientific community considers that there still exists a lack of clarity in available evidence for this to be recommended in international guidelines.11 Knee arthroscopy has become the gold standard in the diagnosis of meniscal and ligamentous injury12 and also plays a role in their management.13 This intervention has been widely used as a treatment for OA of the knee, but recent studies14–16 with controversial design and methodology17 have questioned its usefulness. However, it should be noted that >90% of patients with symptomatic knee OA have magnetic resonance imaging findings compatible with meniscal pathology.18 Therefore, arthroscopic surgery would be indicated in those patients with meniscal injuries, loose bodies, osteochondral lesions, or synovial pathology, with a concomitant diagnosis of knee OA playing an essential role in the adequate selection of patients who can benefit from this technique.19 Arthroscopy has decreased morbidity as compared with that in open procedures, but it has not eliminated pain completely. 
The causative factors behind this persistent pain are unknown, although it is known that most of the IA structures of the knee, including the synovial tissue, the anterior fat pad, and the joint capsule, have free nerve endings that are capable of sensing painful stimuli and producing severe pain.20,21 Moreover, it has been postulated that the severity and extent of OA, the degree of debridement undertaken, and the skill of the surgeon could be factors involved.22 Considering the abovementioned aspects, this study was planned as a pilot to assess the effectiveness of different treatments such as viscosupplementation or PRP as compared to standard care for pain relief after knee arthroscopic debridement in patients with meniscal pathology and OA, under normal clinical practice conditions.\n\nBODY.PATIENTS AND METHODS.ETHICS:\nThe study was conducted in accordance with the Good Clinical Practice guidelines and in compliance with the principles of the Declaration of Helsinki. The study protocol was approved by the local Ethics Review Board from Hospital Español de México, and all patients gave written informed consent to participate in the study.\n\nBODY.PATIENTS AND METHODS.DESIGN:\nThis was a prospective, randomized, evaluator-blind, pilot study with four parallel arms and a control group. Eligible patients were informed about study purpose and design. Demographic characteristics and medical history of the participants were recorded, and laboratory tests were done. X-rays of both knees were performed using anteroposterior projection with support, lateral with 30° flexion, and Merchant (45°) views; bipodalic mechanical axis digitalization as well as magnetic resonance imaging of the affected knee were also done. Arthroscopies were all carried out by a single orthopedic/arthroscopic surgeon. All patients were given three doses of antibiotic (cephalothin 1 g) as prophylaxis. Postsurgical pain treatment comprised oral acetaminophen 1 g/8 h and celecoxib 200 mg/12 h for 10 days. After surgery, patients were randomized to receive: 1) five IA injections of HA1 (Suprahyal®/Adant®: biofermentation, non-crosslinked, 600–1,200 kDa, 25 mg/syringe); 2) four IA injections of HA2 (Orthovisc®: biofermentation, non-crosslinked, 1,700–2,900 kDa, 30 mg/syringe); 3) three IA injections of HA3 (Synvisc®: chicken combs, crosslinked, 7,000 kDa, 16 mg/syringe); 4) a single IA injection of PRP (GPS® II); and 5) standard care. The HAs used were considered of interest as they have different characteristics, 23, 24 to study whether these differences could have any impact on the efficacy/safety results. The number of doses specified are those authorized for each product.23 The GPS® II is a closed system that enables PRP to be collected while avoiding the risk of contamination and makes possible to concentrate the autologous platelets for reinjection locally.25 Eligible patients were randomized 1:1 using a computer-generated list of random numbers. The random sequence was created using freely accessible tools available at http://www.randomization.com, which uses the pseudo-random number generator of Wichmann and Hill, 26 modified by McLeod.27 PRP was administered immediately after arthroscopy and viscosupplements were given once a week, starting at 21 days post-arthroscopy. Rescue medication comprised acetaminophen 1 g/8 h and celecoxib 200 mg/12 h. 
The administration of IA treatments was conducted under aseptic conditions, inserting the needle into the patellofemoral joint space by a superolateral approach, with the patients in a supine position. In case effusion was present, it was removed before each injection. Follow-up was conducted by five orthopedic surgeons, each one assigned to a specific group and blinded to the treatment administered. Visits were scheduled at 3, 6, 12, and 18 months after arthroscopy. All patients were prescribed the same protocol of rehabilitation to be followed at the patient's convenience. The rehabilitation protocol started progressively, starting with isometric exercises and muscle stretching from Week 1 post-arthroscopy, adding weight exercises from Week 3, cardiovascular training from Week 7, and high-impact exercises from 3 months onward.\n\nBODY.PATIENTS AND METHODS.PATIENT SELECTION CRITERIA:\nEligible patients were men and women, aged 40–85, with body mass index (BMI) <35 kg/m2, and indications for knee arthroscopic surgery due to symptomatic meniscal rupture Mink et al28 grade III confirmed by magnetic resonance imaging, radiographic OA grades I–II according to Kellgren and Lawrence (KL),29 meniscal preservation >60%, ligament integrity, and ability and willingness to provide informed consent for study participation. Main exclusion criteria were infectious conditions, use of anticoagulants, history of trauma (dislocation or fracture), inflammatory arthritis, microcrystalline arthropathies, history of septic arthritis, ligament injury, nonspecific synovitis, angular deformity >10°, chondral lesions G-IV Outerbridge (>1 cm2),30 neoplasms, or allergy to any of the components of the products under study.\n\nBODY.PATIENTS AND METHODS.EVALUATION OF EFFICACY:\nEfficacy was evaluated using the Western Ontario McMaster University Osteoarthritis Index (WOMAC) total and subscales of pain, stiffness, and function using a five-point (0–4) Likert scale. Scores at the end of follow-up (18 months) were compared to baseline and the differences between treatment groups were analyzed. Secondary outcomes were efficacy rates at the different follow-up visits. The WOMAC questionnaire used was the Spanish translated version from Batlle-Gualda et al,31 validated by Escobar et al.32 The number of patients achieving the Minimal Clinically Important Improvement (MCII) in each treatment group was also calculated. The MCII is the smallest change in measurement that signifies an important improvement in a patient's symptom.33 According to Tubach et al,34 a 15 out of 100 for absolute improvement and 20% for relative improvement could be used as cutoffs for MCII estimation. Taking into account that, in our study, a Likert scale was used, the 20% of relative improvement cutoff was applied.33\n\nBODY.PATIENTS AND METHODS.EVALUATION OF SAFETY:\nThe incidence and type of adverse events was recorded throughout the study.\n\nBODY.STATISTICS:\nCategorical variables were summarized by their number and relative frequencies. For continuous variables following a normal distribution (Kolmogorov–Smirnov test), mean, standard deviation, and maximum and minimum values were used. Non-normally distributed variables were summarized with their median, interquartile range, minimum, and maximum. Changes in WOMAC total and subscales at 3, 6, 12, and 18 months were expressed as raw scores and as percentage of change from the baseline value. 
Comparisons between groups were done using chi-square test or Fisher's Exact test for categorical variables when necessary. Continuous data were analyzed using Mann–Whitney test. The study was designed as a pilot evaluation; therefore, real sample-size estimation was not done. The main population for analysis was the per protocol (PP) population, comprising patients who completed the 18 months follow-up according to the study protocol. Safety analysis was conducted in all patients who received at least one IA injection. Data were analyzed using SPSS V14 (SPSS Inc, Chicago Il). All tests were two-sided, and the statistical significance was set at 0.05.\n\nBODY.RESULTS.DISPOSITION OF PATIENTS AND DEMOGRAPHICS:\nPatients were recruited from March 2013 to October 2014. Ninety patients were initially screened, of whom 40 were excluded due to age (n=4), history of trauma or ligament involvement (n=5), overweight (n=6), meniscal injury > 40% (n=4), chondral lesions G-IV (Outerbridge >1 cm2; n=5), eventual total knee arthroplasty (TKA) (n=7), lost to follow-up (n=8), or death unrelated to the study (n=1), leaving a total of 50 patients who participated in the study and completed all procedures as scheduled (Figure 1). Among study subjects, 52% were male and the mean age was 64.4 years, with statistically significant differences between HA2 and HA3 (p=0.007). Mean BMI was 27.8 kg/m2 without differences among groups (p=0.693), despite HA2 comprising more obese patients (Table 1). Near all patients in the HA3 group had mild OA (KL grade I), with the differences significant versus HA2 (p=0.001) and control (p=0.002). The groups were homogeneous with respect to WOMAC total values (p=0.062) as well as pain (p=0.234), stiffness (p=0. 383), and function (p=0.089) subscales (Table 1). There were no differences among groups in arthroscopic lesions (Table 2). The following surgical procedures were conducted: partial meniscectomy of no more than 40% (35 patients), synovectomy and resection of Hoffa's fat pad (five patients), and debridement and stabilization of the chondral lesion (16 patients). More than one procedure in the same patient was possible.\n\nBODY.EFFICACY:\nAt early follow-up (3 months), total WOMAC scores improved as compared to baseline, in all groups with reductions of 44.79% (HA1), 24.0% (HA2), 40.38% (HA3), 39.77% (PRP), and 27.64% (control), (p=0.002 HA1 compared to HA2). At 18 months, a higher improvement in total WOMAC was observed in HA1 with a 65.20% reduction, followed by PRP (55.01%), HA3 (49.57%), and HA2 (29.82%), whereas the control group had a 14.5% increase over baseline despite the use of rescue medication and rehabilitation, with the differences statistically significant with respect to HA1 and HA3 (p=0.001; Table 3 and Figures 2A–D). At 3 months follow-up, the percentage of patients achieving the MCII for both pain and function (20% of relative change) in different groups was 80% (HA1), 20% (HA2), 40% (HA3), 30% PRP, and 30% (control). At the end of follow-up, a growing number of patients experienced a clinically significant improvement in HA1 with a 100% responders rate, followed by HA3 (80%). 
In the group treated with HA2 and in those treated with PRP, the results reached 60% responders, whereas in the control group, all patients returned to near pre-arthroscopy levels of pain and impairment and none of the patients reached the 20% MCII cutoff (Figure 3).\n\nBODY.SAFETY:\nThroughout the overall study period, patients did not experience adverse events attributable to surgery or to IA injections.\n\nBODY.DISCUSSION:\nArthroscopic surgery is frequently advocated as a treatment option to relieve symptoms of painful degenerative knee conditions. The rationale is that it may improve symptom and functions, has minimal morbidity, provides a therapeutic options, and documents the stage of the disease process. The duration of effect, however, is variable.35 Adjunctive therapies such as nonsteroidal anti-inflammatory drugs and other medications, physiotherapy, and IA steroid injections may be incorporated to improve the results.36,37 The use of arthroscopic surgery in the treatment of OA has generated much discussion. Whereas some studies14,15 reported no differences with placebo arthroscopy, other studies confirm that, in patients with preoperative mechanical symptoms secondary to loose bodies, chondral flaps, and meniscal pathology, the use of arthroscopic surgery is beneficial.38,39 The key point is to offer knee arthroscopy to selected patients. Patients with symptomatic meniscal tears, few degenerative changes, normal axis, and acute mechanical complaints can benefit from this intervention, especially if the OA is mild or moderate.40–42 Favorable results have been also shown for the management of the infrapatellar fat pad impingement and Hoffa's disease.43,44 Chronic complaints, previous surgical intervention, degenerative changes in two or three compartments, significant axis deviation, ligament imbalance, or severe cartilage destruction are among the characteristics of patients who may be candidates for unsatisfactory results after arthroscopy.42 As a result, arthroscopic knee surgery is the most common type of orthopedic surgery performed in the developed world.45 However, in patients with symptomatic OA, pain can persist after arthroscopic meniscectomy and/or debridement of chondral lesions. Treatment combining arthroscopic surgery and viscosupplementation has been widely used for patients with combined intraarticular mechanical and articular surface pathology.40 Initially, rheological characteristics that confer a lubricating effect to HA were considered the primary mechanism of action in treating OA pain. Nevertheless, rather than being a mere device, HA has been shown to exert other effects such as inhibition of tissue nociceptors, inhibition of matrix metalloproteinases, free radicals scavenging, and stimulation of endogenous HA production. Due to this, viscosupplementation has been recommended as adjunctive therapy after arthroscopic surgery as a suitable way of achieving long-term stabilization of the treatment outcome.40,44,46 The application of PRP to patients with OA was developed because of the physiological roles of several bioactive proteins and growth factors expressed in platelets, which lead to tissue regeneration. Despite the use of PRPs having been extended to the treatment of several musculoskeletal injuries the last few years, we have not found published studies analyzing the efficacy of PRP as adjunctive treatment in post-arthroscopy patients. 
A recent meta-analysis found that at 6 months post injection, PRP and HA had similar effects with respect to pain relief and functional impairment. However, at 12 months, PRP was associated with better improvement scores.47 In our study, the indication for knee arthroscopy was symptomatic rupture of a meniscus with symptomatic osteochondral detachment in patients with grade II OA. In these patients, surgery is the only and necessary indication, and they were given additional IA treatment with hyaluronate or PRP due to the presence of OA. In the market, there are a variety of HAs and our aim has been to verify if the differences between HA products has any clinical relevance, as well as to study the effects of PRP after knee arthroscopic debridement. Procedures for clinical practice and the dosage described by the manufacturers have been used; according to the studies published, treatments with HA usually started 3 weeks after surgery.3,35,44,48,49 In absence of recommendations in the literature, PRP injection was administered immediately after arthroscopy. The HA1 product showed the best results, with a very significant improvement of the symptomatology compared to the baseline that was maintained up to 18 months post-arthroscopy (reduction of −65.20% in total WOMAC) and with 100% of the patients reaching the MCII 20% in pain and function, followed by HA3 with a reduction of −49.57 in total WOMAC and 80% of patients reaching the 20% MCII cutoff. These products differ in origin, MW, structure, and number of injections administered and, in our study, the differences between both did not reach statistical significance. However, it should be noted that 8/10 patients in the HA3 group were KL grade I, which is associated with better response data.40,41 The HA2 group showed much more discreet and erratic results, with final reduction in WOMAC at the end of the study and nearly 30% and 60% of patients complying with MCII requirements. It should be noted that, in this group, all patients had OA grade II as well as a greater number of overweight/obese patients – a characteristic that may also have influenced the results.44 A recent meta-analysis found that, at 6 months post injection, PRP and HA had similar effects with respect to pain relief and functional impairment. However, at 12 months, PRP was associated with better improvement scores.47 In our study, patients treated with PRP constitute a diverse group who seem to follow the rule of all or nothing, so that those who improve do so in a very significant way and this improvement is maintained until the end of follow-up, whereas something similar occurs with non-responder patients; thus, at the end of the study, only 60% of patients achieved the MCII threshold. Ayhan et al50 suggest that PRP therapy would be indicated for patients <60 years and with a BMI <30. In our study, the best results in the PRP group were also seen in patients with a BMI <30, as well as in those <60 years. We did not find any another relationship between the type of response and the characteristics of the patients or the arthroscopic findings. Surprisingly, the control group showed low improvement in symptomatology at early follow-up that worsened along time with return to pre-arthroscopy levels at the 12-month follow-up that required the consumption of rescue medication. 
One of the weaknesses of our study is the small sample size, as it was designed as a pilot evaluation, with the limitations that this entails when establishing comparisons; moreover, the fact that each group was evaluated by a different person may have influenced the evaluation of the results. Finally, not all patients undertook the rehabilitation protocol in the same way, as they were treated by different people and in different clinics according to the personal needs of the patient: a stricter control of this procedure would have been desirable. In our favor, the long follow-up period allowed us to assess the evolution of the results in the long term; the criteria used for patient selection were of special relevance to select those who may benefit most from the procedure, as was the presence of a control group. The design of the study also allowed us to study the behavior of three different HAs used within routine practice and to compare them with PRP as a new therapeutic approach. Viscosupplementation is an effective treatment for knee OA. Based on our results, we consider that its use in combination with arthroscopic surgery can provide very good results in adequately selected patients with an individualized and multidisciplinary follow-up. However, due to the importance of this type of intervention, further controlled studies with a larger sample size and/or alternative posology regimens would be of interest for the scientific community.\n\n**Question:** Compared to standard care (control) what was the result of four injections of HA2 (Orthovisc®) on At 18 months, the higher improvement in total WOMAC?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
1155
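The WOMAC percentages quoted in the rows above (for example, a "65.20% reduction in total WOMAC") and the MCII responder cutoff are simple relative-change calculations. The sketch below is illustrative only – it is not code from the trial, and the example scores are hypothetical – but it shows the arithmetic behind those figures, assuming WOMAC is summed as a Likert-scale total on which lower scores mean less pain and disability.

```python
# Illustrative sketch only -- not code from the trial.
# Shows the relative-improvement arithmetic behind figures such as a
# "65.20% reduction in total WOMAC" and the "MCII >= 20%" responder cutoff.

def relative_improvement(baseline: float, follow_up: float) -> float:
    """Percent reduction from baseline; positive values mean improvement,
    because lower WOMAC scores indicate less pain and disability."""
    return (baseline - follow_up) / baseline * 100.0

def meets_mcii(baseline: float, follow_up: float, cutoff: float = 20.0) -> bool:
    """Minimal Clinically Important Improvement: relative improvement >= cutoff (%)."""
    return relative_improvement(baseline, follow_up) >= cutoff

if __name__ == "__main__":
    baseline_womac, womac_18m = 48.0, 16.7      # hypothetical Likert-sum scores
    print(round(relative_improvement(baseline_womac, womac_18m), 2))   # ~65.21
    print(meets_mcii(baseline_womac, womac_18m))                       # True
```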
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Benefits of different postoperative treatments in patients undergoing knee arthroscopic debridement\n\n ABSTRACT.PURPOSE:\nTo assess the effectiveness of viscosupplementation or platelet-rich plasma (PRP), compared to standard care, for pain relief after knee arthroscopic debridement in patients with meniscal pathology and osteoarthritis (OA), under normal clinical practice conditions.\n\nABSTRACT.PATIENTS AND METHODS:\nWe conducted a prospective, randomized, evaluator-blind, pilot study. After arthroscopy, patients were randomized to receive 1) five injections of HA1 (Suprahyal®/Adant®); 2) four injections of HA2 (Orthovisc®); 3) three injections of HA3 (Synvisc®); 4) a single injection of PRP (GPSTM II); or 5) standard care (control). Patients were followed up for 18 months. Clinical outcomes were evaluated using the Western Ontario and McMaster Universities Arthritis Index (WOMAC) at 3, 6, 12, and 18 months. Minimally Clinical Important Improvement (MCII), as relative improvement ≥20 for pain and function, was also calculated.\n\nABSTRACT.RESULTS:\nFifty patients were included. At early follow-up (3 months), total WOMAC scores improved in all groups compared to baseline with reductions of 44.79% (HA1), 24.02% (HA2), 40.38% (HA3), 39.77% (PRP), and 27.64% (control) (p=0.002 HA1 compared to HA2). At 18 months, the higher improvement in total WOMAC was in HA1 with a 65.20% reduction, followed by PRP (55.01%), HA3 (49.57%), and HA2 (29.82%), whereas the control group had a 14.55% increase over baseline (p=0.001 control compared to HA1 and HA3). The percentage of patients achieving the MCII for both pain and function at 18 months was 100% (HA1), 80% (HA3), 60% (HA2), and 60% (PRP), whereas, in the control group, all patients returned to pre-arthroscopy levels. There were no adverse events attributable to surgery or to intraarticular administration.\n\nABSTRACT.CONCLUSION:\nViscosupplementation following arthroscopy is more effective than PRP in adequately selected patients with meniscal lesions occurring concomitantly with OA. Further controlled studies with a larger sample size and/or alternative regimens would be of interest for the scientific community.\n\nBODY.INTRODUCTION:\nOsteoarthritis (OA) is the most common of all joint diseases and exacts a heavy economic toll due to its high prevalence in the general population and potential for causing progressive disability.1 The etiology is multifactorial, including a variety of risk factors (aging, genetics, trauma, malalignment, and obesity) that interact to cause this disorder.2 The primary goal of treatment is the alleviation of pain, leading to an improvement in joint function and quality of life. 
Treatment options for knee OA include conservative measures, pharmacological treatments, intraarticular (IA) injections, and surgical intervention.3 Viscosupplementation is a well-established treatment option in OA. Hyaluronic acid (HA) exerts a mechanical effect providing lubrication of the joint, protecting against loads and impacts, and restoring the rheological properties of the synovial fluid.4 HA also has a pharmacological effect by interacting with mediators of inflammation, inhibiting nociceptors of pain, stimulating chondrocyte growth, synthesis of extracellular matrix protein, and reduction of apoptosis in osteoarthritic cartilage.5–8 HA products that are available differ in many characteristics, including origin (animal vs biofermentation), molecular weight (MW), structure (linear, crosslinked, and mix of both), volume of injection, and posology. Clinical investigations with these products have been mainly focused on the potential differences attributable to MW. Arbitrarily, different HA products have been divided into three MW categories: low (500–730 kDa), intermediate (800–2,000 kDa), and high MW (2,000–6,000 kDa) including crosslinked formulations of HA.9 The therapeutic use of platelet-rich plasma (PRP) is based on the use of the patient's own platelets as carrier of growth factors and other proteins (eg, fibronectin and other adhesive proteins) that play an important role in cell biology. The resulting products vary in their leukocyte content, platelet concentration, subsequent activation, and, at this moment, it is not defined what should be the ideal concentration of platelets that should reach the preparations as well as the dosage regimen.10 Although it seems PRP may improve the symptomatology, one part of the scientific community considers that there still exists a lack of clarity in available evidence for this to be recommended in international guidelines.11 Knee arthroscopy has become the gold standard in the diagnosis of meniscal and ligamentous injury12 and also plays a role in their management.13 This intervention has been widely used as a treatment for OA of the knee, but recent studies14–16 with controversial design and methodology17 have questioned its usefulness. However, it should be noted that >90% of patients with symptomatic knee OA have magnetic resonance imaging findings compatible with meniscal pathology.18 Therefore, arthroscopic surgery would be indicated in those patients with meniscal injuries, loose bodies, osteochondral lesions, or synovial pathology, with a concomitant diagnosis of knee OA playing an essential role in the adequate selection of patients who can benefit from this technique.19 Arthroscopy has decreased morbidity as compared with that in open procedures, but it has not eliminated pain completely. 
The causative factors behind this persistent pain are unknown, although it is known that most of the IA structures of the knee, including the synovial tissue, the anterior fat pad, and the joint capsule, have free nerve endings that are capable of sensing painful stimuli and producing severe pain.20,21 Moreover, it has been postulated that the severity and extent of OA, the degree of debridement undertaken, and the skill of the surgeon could be factors involved.22 Considering the abovementioned aspects, this study was planned as a pilot to assess the effectiveness of different treatments such as viscosupplementation or PRP as compared to standard care for pain relief after knee arthroscopic debridement in patients with meniscal pathology and OA, under normal clinical practice conditions.\n\nBODY.PATIENTS AND METHODS.ETHICS:\nThe study was conducted in accordance with the Good Clinical Practice guidelines and in compliance with the principles of the Declaration of Helsinki. The study protocol was approved by the local Ethics Review Board from Hospital Español de México, and all patients gave written informed consent to participate in the study.\n\nBODY.PATIENTS AND METHODS.DESIGN:\nThis was a prospective, randomized, evaluator-blind, pilot study with four parallel arms and a control group. Eligible patients were informed about study purpose and design. Demographic characteristics and medical history of the participants were recorded, and laboratory tests were done. X-rays of both knees were performed using anteroposterior projection with support, lateral with 30° flexion, and Merchant (45°) views; bipodalic mechanical axis digitalization as well as magnetic resonance imaging of the affected knee were also done. Arthroscopies were all carried out by a single orthopedic/arthroscopic surgeon. All patients were given three doses of antibiotic (cephalothin 1 g) as prophylaxis. Postsurgical pain treatment comprised oral acetaminophen 1 g/8 h and celecoxib 200 mg/12 h for 10 days. After surgery, patients were randomized to receive: 1) five IA injections of HA1 (Suprahyal®/Adant®: biofermentation, non-crosslinked, 600–1,200 kDa, 25 mg/syringe); 2) four IA injections of HA2 (Orthovisc®: biofermentation, non-crosslinked, 1,700–2,900 kDa, 30 mg/syringe); 3) three IA injections of HA3 (Synvisc®: chicken combs, crosslinked, 7,000 kDa, 16 mg/syringe); 4) a single IA injection of PRP (GPS® II); and 5) standard care. The HAs used were considered of interest as they have different characteristics, 23, 24 to study whether these differences could have any impact on the efficacy/safety results. The number of doses specified are those authorized for each product.23 The GPS® II is a closed system that enables PRP to be collected while avoiding the risk of contamination and makes possible to concentrate the autologous platelets for reinjection locally.25 Eligible patients were randomized 1:1 using a computer-generated list of random numbers. The random sequence was created using freely accessible tools available at http://www.randomization.com, which uses the pseudo-random number generator of Wichmann and Hill, 26 modified by McLeod.27 PRP was administered immediately after arthroscopy and viscosupplements were given once a week, starting at 21 days post-arthroscopy. Rescue medication comprised acetaminophen 1 g/8 h and celecoxib 200 mg/12 h. 
The administration of IA treatments was conducted under aseptic conditions, inserting the needle into the patellofemoral joint space by a superolateral approach, with the patients in a supine position. In case effusion was present, it was removed before each injection. Follow-up was conducted by five orthopedic surgeons, each one assigned to a specific group and blinded to the treatment administered. Visits were scheduled at 3, 6, 12, and 18 months after arthroscopy. All patients were prescribed the same protocol of rehabilitation to be followed at the patient's convenience. The rehabilitation protocol started progressively, starting with isometric exercises and muscle stretching from Week 1 post-arthroscopy, adding weight exercises from Week 3, cardiovascular training from Week 7, and high-impact exercises from 3 months onward.\n\nBODY.PATIENTS AND METHODS.PATIENT SELECTION CRITERIA:\nEligible patients were men and women, aged 40–85, with body mass index (BMI) <35 kg/m2, and indications for knee arthroscopic surgery due to symptomatic meniscal rupture Mink et al28 grade III confirmed by magnetic resonance imaging, radiographic OA grades I–II according to Kellgren and Lawrence (KL),29 meniscal preservation >60%, ligament integrity, and ability and willingness to provide informed consent for study participation. Main exclusion criteria were infectious conditions, use of anticoagulants, history of trauma (dislocation or fracture), inflammatory arthritis, microcrystalline arthropathies, history of septic arthritis, ligament injury, nonspecific synovitis, angular deformity >10°, chondral lesions G-IV Outerbridge (>1 cm2),30 neoplasms, or allergy to any of the components of the products under study.\n\nBODY.PATIENTS AND METHODS.EVALUATION OF EFFICACY:\nEfficacy was evaluated using the Western Ontario McMaster University Osteoarthritis Index (WOMAC) total and subscales of pain, stiffness, and function using a five-point (0–4) Likert scale. Scores at the end of follow-up (18 months) were compared to baseline and the differences between treatment groups were analyzed. Secondary outcomes were efficacy rates at the different follow-up visits. The WOMAC questionnaire used was the Spanish translated version from Batlle-Gualda et al,31 validated by Escobar et al.32 The number of patients achieving the Minimal Clinically Important Improvement (MCII) in each treatment group was also calculated. The MCII is the smallest change in measurement that signifies an important improvement in a patient's symptom.33 According to Tubach et al,34 a 15 out of 100 for absolute improvement and 20% for relative improvement could be used as cutoffs for MCII estimation. Taking into account that, in our study, a Likert scale was used, the 20% of relative improvement cutoff was applied.33\n\nBODY.PATIENTS AND METHODS.EVALUATION OF SAFETY:\nThe incidence and type of adverse events was recorded throughout the study.\n\nBODY.STATISTICS:\nCategorical variables were summarized by their number and relative frequencies. For continuous variables following a normal distribution (Kolmogorov–Smirnov test), mean, standard deviation, and maximum and minimum values were used. Non-normally distributed variables were summarized with their median, interquartile range, minimum, and maximum. Changes in WOMAC total and subscales at 3, 6, 12, and 18 months were expressed as raw scores and as percentage of change from the baseline value. 
Comparisons between groups were done using chi-square test or Fisher's Exact test for categorical variables when necessary. Continuous data were analyzed using Mann–Whitney test. The study was designed as a pilot evaluation; therefore, real sample-size estimation was not done. The main population for analysis was the per protocol (PP) population, comprising patients who completed the 18 months follow-up according to the study protocol. Safety analysis was conducted in all patients who received at least one IA injection. Data were analyzed using SPSS V14 (SPSS Inc, Chicago Il). All tests were two-sided, and the statistical significance was set at 0.05.\n\nBODY.RESULTS.DISPOSITION OF PATIENTS AND DEMOGRAPHICS:\nPatients were recruited from March 2013 to October 2014. Ninety patients were initially screened, of whom 40 were excluded due to age (n=4), history of trauma or ligament involvement (n=5), overweight (n=6), meniscal injury > 40% (n=4), chondral lesions G-IV (Outerbridge >1 cm2; n=5), eventual total knee arthroplasty (TKA) (n=7), lost to follow-up (n=8), or death unrelated to the study (n=1), leaving a total of 50 patients who participated in the study and completed all procedures as scheduled (Figure 1). Among study subjects, 52% were male and the mean age was 64.4 years, with statistically significant differences between HA2 and HA3 (p=0.007). Mean BMI was 27.8 kg/m2 without differences among groups (p=0.693), despite HA2 comprising more obese patients (Table 1). Near all patients in the HA3 group had mild OA (KL grade I), with the differences significant versus HA2 (p=0.001) and control (p=0.002). The groups were homogeneous with respect to WOMAC total values (p=0.062) as well as pain (p=0.234), stiffness (p=0. 383), and function (p=0.089) subscales (Table 1). There were no differences among groups in arthroscopic lesions (Table 2). The following surgical procedures were conducted: partial meniscectomy of no more than 40% (35 patients), synovectomy and resection of Hoffa's fat pad (five patients), and debridement and stabilization of the chondral lesion (16 patients). More than one procedure in the same patient was possible.\n\nBODY.EFFICACY:\nAt early follow-up (3 months), total WOMAC scores improved as compared to baseline, in all groups with reductions of 44.79% (HA1), 24.0% (HA2), 40.38% (HA3), 39.77% (PRP), and 27.64% (control), (p=0.002 HA1 compared to HA2). At 18 months, a higher improvement in total WOMAC was observed in HA1 with a 65.20% reduction, followed by PRP (55.01%), HA3 (49.57%), and HA2 (29.82%), whereas the control group had a 14.5% increase over baseline despite the use of rescue medication and rehabilitation, with the differences statistically significant with respect to HA1 and HA3 (p=0.001; Table 3 and Figures 2A–D). At 3 months follow-up, the percentage of patients achieving the MCII for both pain and function (20% of relative change) in different groups was 80% (HA1), 20% (HA2), 40% (HA3), 30% PRP, and 30% (control). At the end of follow-up, a growing number of patients experienced a clinically significant improvement in HA1 with a 100% responders rate, followed by HA3 (80%). 
In the group treated with HA2 and in those treated with PRP, the results reached 60% responders, whereas in the control group, all patients returned to near pre-arthroscopy levels of pain and impairment and none of the patients reached the 20% MCII cutoff (Figure 3).\n\nBODY.SAFETY:\nThroughout the overall study period, patients did not experience adverse events attributable to surgery or to IA injections.\n\nBODY.DISCUSSION:\nArthroscopic surgery is frequently advocated as a treatment option to relieve symptoms of painful degenerative knee conditions. The rationale is that it may improve symptom and functions, has minimal morbidity, provides a therapeutic options, and documents the stage of the disease process. The duration of effect, however, is variable.35 Adjunctive therapies such as nonsteroidal anti-inflammatory drugs and other medications, physiotherapy, and IA steroid injections may be incorporated to improve the results.36,37 The use of arthroscopic surgery in the treatment of OA has generated much discussion. Whereas some studies14,15 reported no differences with placebo arthroscopy, other studies confirm that, in patients with preoperative mechanical symptoms secondary to loose bodies, chondral flaps, and meniscal pathology, the use of arthroscopic surgery is beneficial.38,39 The key point is to offer knee arthroscopy to selected patients. Patients with symptomatic meniscal tears, few degenerative changes, normal axis, and acute mechanical complaints can benefit from this intervention, especially if the OA is mild or moderate.40–42 Favorable results have been also shown for the management of the infrapatellar fat pad impingement and Hoffa's disease.43,44 Chronic complaints, previous surgical intervention, degenerative changes in two or three compartments, significant axis deviation, ligament imbalance, or severe cartilage destruction are among the characteristics of patients who may be candidates for unsatisfactory results after arthroscopy.42 As a result, arthroscopic knee surgery is the most common type of orthopedic surgery performed in the developed world.45 However, in patients with symptomatic OA, pain can persist after arthroscopic meniscectomy and/or debridement of chondral lesions. Treatment combining arthroscopic surgery and viscosupplementation has been widely used for patients with combined intraarticular mechanical and articular surface pathology.40 Initially, rheological characteristics that confer a lubricating effect to HA were considered the primary mechanism of action in treating OA pain. Nevertheless, rather than being a mere device, HA has been shown to exert other effects such as inhibition of tissue nociceptors, inhibition of matrix metalloproteinases, free radicals scavenging, and stimulation of endogenous HA production. Due to this, viscosupplementation has been recommended as adjunctive therapy after arthroscopic surgery as a suitable way of achieving long-term stabilization of the treatment outcome.40,44,46 The application of PRP to patients with OA was developed because of the physiological roles of several bioactive proteins and growth factors expressed in platelets, which lead to tissue regeneration. Despite the use of PRPs having been extended to the treatment of several musculoskeletal injuries the last few years, we have not found published studies analyzing the efficacy of PRP as adjunctive treatment in post-arthroscopy patients. 
A recent meta-analysis found that at 6 months post injection, PRP and HA had similar effects with respect to pain relief and functional impairment. However, at 12 months, PRP was associated with better improvement scores.47 In our study, the indication for knee arthroscopy was symptomatic rupture of a meniscus with symptomatic osteochondral detachment in patients with grade II OA. In these patients, surgery is the only and necessary indication, and they were given additional IA treatment with hyaluronate or PRP due to the presence of OA. There are a variety of HAs on the market, and our aim has been to verify whether the differences between HA products have any clinical relevance, as well as to study the effects of PRP after knee arthroscopic debridement. Procedures for clinical practice and the dosage described by the manufacturers have been used; according to the studies published, treatments with HA usually started 3 weeks after surgery.3,35,44,48,49 In the absence of recommendations in the literature, PRP injection was administered immediately after arthroscopy. The HA1 product showed the best results, with a very significant improvement in symptomatology compared to baseline that was maintained up to 18 months post-arthroscopy (a 65.20% reduction in total WOMAC) and with 100% of the patients reaching the 20% MCII for pain and function, followed by HA3, with a 49.57% reduction in total WOMAC and 80% of patients reaching the 20% MCII cutoff. These products differ in origin, MW, structure, and number of injections administered and, in our study, the differences between the two did not reach statistical significance. However, it should be noted that 8/10 patients in the HA3 group were KL grade I, which is associated with better response data.40,41 The HA2 group showed much more modest and erratic results, with a final reduction in total WOMAC of nearly 30% at the end of the study and 60% of patients complying with MCII requirements. It should be noted that, in this group, all patients had OA grade II, and there was a greater number of overweight/obese patients – a characteristic that may also have influenced the results.44 In our study, patients treated with PRP constitute a diverse group who seem to follow an all-or-nothing rule: those who improve do so in a very significant way and this improvement is maintained until the end of follow-up, whereas something similar occurs with non-responder patients; thus, at the end of the study, only 60% of patients achieved the MCII threshold. Ayhan et al50 suggest that PRP therapy would be indicated for patients <60 years and with a BMI <30. In our study, the best results in the PRP group were also seen in patients with a BMI <30, as well as in those <60 years. We did not find any other relationship between the type of response and the characteristics of the patients or the arthroscopic findings. Surprisingly, the control group showed a small improvement in symptomatology at early follow-up that worsened over time, with a return to pre-arthroscopy levels at the 12-month follow-up that required the consumption of rescue medication. 
One of the weaknesses of our study is the small sample size, as it was designed as a pilot evaluation, with the limitations that this entails when establishing comparisons; moreover, the fact that each group was evaluated by a different person may have influenced the evaluation of the results. Finally, not all patients undertook the rehabilitation protocol in the same way, as they were treated by different people and in different clinics according to the personal needs of the patient: stricter control of this procedure would have been desirable. In our favor, the long follow-up period allowed us to assess the evolution of the results in the long term; the criteria used for patient selection were of special relevance in selecting those who may benefit most from the procedure, as was the presence of a control group. The design of the study also allowed us to study the behavior of three different HAs used within routine practice and to compare them with PRP as a new therapeutic approach. Viscosupplementation is an effective option for the treatment of knee OA. Based on our results, we consider that its use in combination with arthroscopic surgery can provide very good results in adequately selected patients with an individualized and multidisciplinary follow-up. However, due to the importance of this type of intervention, further controlled studies with a larger sample size and/or alternative posology regimens would be of interest for the scientific community.\n\n**Question:** Compared to standard care (control) what was the result of 1) five injections of HA1 (Suprahyal®/Adant®); 2) four injections of HA2 (Orthovisc®); 3) three injections of HA3 (Synvisc®); 4) a single injection of PRP (GPSTM II); on The percentage of patients achieving the MCII for both pain and function at 18 months?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ]
significantly increased
1170
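The randomization described in the prompt above was generated with an online tool reported to use the pseudo-random number generator of Wichmann and Hill, modified by McLeod. As background only, the sketch below implements the classic textbook Wichmann–Hill (1982) combination of three small linear congruential generators; it does not reproduce McLeod's modification or the website's actual code, and the seeds and the 1:1 allocation example are hypothetical.

```python
# Classic Wichmann-Hill generator (textbook form) -- background sketch only.
# It does not reproduce McLeod's modification or randomization.com's code.

class WichmannHill:
    def __init__(self, s1: int = 1, s2: int = 1, s3: int = 1) -> None:
        # Seeds should lie in 1..30000, per the original 1982 paper.
        self.s1, self.s2, self.s3 = s1, s2, s3

    def random(self) -> float:
        # Advance three small linear congruential generators ...
        self.s1 = (171 * self.s1) % 30269
        self.s2 = (172 * self.s2) % 30307
        self.s3 = (170 * self.s3) % 30323
        # ... and combine them as the fractional part of the summed fractions.
        return (self.s1 / 30269 + self.s2 / 30307 + self.s3 / 30323) % 1.0

# Hypothetical usage: a simple 1:1 allocation list of the kind described above.
rng = WichmannHill(123, 456, 789)
allocation = ["intervention" if rng.random() < 0.5 else "control" for _ in range(10)]
print(allocation)
```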
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Sleep quality in mechanically ventilated patients: comparison between NAVA and PSV modes\n\n ABSTRACT.BACKGROUND:\nMechanical ventilation seems to occupy a major source in alteration in the quality and quantity of sleep among patients in intensive care. Quality of sleep is negatively affected with frequent patient-ventilator asynchronies and more specifically with modes of ventilation. The quality of sleep among ventilated patients seems to be related in part to the alteration between the capacities of the ventilator to meet patient demand. The objective of this study was to compare the impact of two modes of ventilation and patient-ventilator interaction on sleep architecture.\n\nABSTRACT.METHODS:\nProspective, comparative crossover study in 14 conscious, nonsedated, mechanically ventilated adults, during weaning in a university hospital medical intensive care unit. Patients were successively ventilated in a random ordered cross-over sequence with neurally adjusted ventilatory assist (NAVA) and pressure support ventilation (PSV). Sleep polysomnography was performed during four 4-hour periods, two with each mode in random order.\n\nABSTRACT.RESULTS:\nThe tracings of the flow, airway pressure, and electrical activity of the diaphragm were used to diagnose central apneas and ineffective efforts. The main abnormalities were a low percentage of rapid eye movement (REM) sleep, for a median (25th-75th percentiles) of 11.5% (range, 8-20%) of total sleep, and a highly fragmented sleep with 25 arousals and awakenings per hour of sleep. Proportions of REM sleep duration were different in the two ventilatory modes (4.5% (range, 3-11%) in PSV and 16.5% (range, 13-29%) during NAVA (p = 0.001)), as well as the fragmentation index, with 40 ± 20 arousals and awakenings per hour in PSV and 16 ± 9 during NAVA (p = 0.001). There were large differences in ineffective efforts (24 ± 23 per hour of sleep in PSV, and 0 during NAVA) and episodes of central apnea (10.5 ± 11 in PSV vs. 0 during NAVA). Minute ventilation was similar in both modes.\n\nABSTRACT.CONCLUSIONS:\nNAVA improves the quality of sleep over PSV in terms of REM sleep, fragmentation index, and ineffective efforts in a nonsedated adult population.\n\nBODY.BACKGROUND:\nSleep is severely disturbed in mechanically ventilated ICU patients [1-3]. Sleep alterations are known to have deleterious consequences in healthy subjects, but the paucity of data in the literature [4-7] makes it difficult to determine the impact of sleep abnormalities in ICU patients. Intensive care unit (ICU) patients present disrupted sleep with reduced sleep efficiency and a decrease in slow wave sleep and rapid eye movement (REM) sleep [8-10]. 
Furthermore, polysomnographic studies performed on mechanically ventilated ICU patients have demonstrated an increase in sleep fragmentation, a reduction in slow-wave and REM sleep, and an abnormal distribution of sleep, because almost half of the total sleep time occurred during the daytime [11-13]. In the Freedman et al. study [14], noise was considered a nuisance for the patients questioned; the most annoying noises were alarms and caregivers' conversations. When the same authors simultaneously recorded noise and microarousal, they identified an association between arousal and noise in only 11-17% of the cases [11]. This percentage is confirmed by Gabor et al. [3] where 21% of the arousal interruptions were explained by loud noises and 7% to patients' care. Seventy-eight percent of the microarousals were not associated with environment noises, suggesting other causes, such as patient/ventilator asynchrony [3,14]. The effects of assist control ventilation (ACV) and pressure support ventilation (PSV) on sleep fragmentation have been examined in critically ill patients receiving mechanical ventilation [15], where PSV mode was associated with increases in the number of central apneas and subsequent sleep fragmentation compared with AVC. Furthermore, the study suggested that PSV by itself or an excess of ventilator assistance with PSV could have caused such sleep alterations. Indeed, ventilatory settings adjusted during wakefulness may become excessive during sleep, as the patients' ventilatory demand is reduced while asleep [16]. Whether these results can be explained by the ventilatory mode itself or how it was adjusted is an important issue, because hyperventilation and patient ventilator asynchrony may result from PSV as well as ACV in mechanically ventilated ICU patients [17]. Fanfulla et al. [18] compared two ventilatory settings in nine patients under long-term PSV for neuromuscular disease. The initial setting was set according to clinical parameters, and the second setting was adjusted with measurement of esophageal pressure (physiological setting) to optimize patient effort. The physiological setting improved the duration and quality of sleep, decreased episodes of apnea, and the amount of inefficient efforts for ventilator triggering [18]. The level of pressure support and PEEP tended to decrease, with a lowering of intrinsic PEEP and patient-ventilator asynchronies. A recent study by Cabello et al. [19] compared the impact of three modes of ventilation (AVC, PSV, and SmartCareTM) on the quality of sleep in alert and nonsedated patients, and no difference for the architecture, fragmentation, and duration of sleep was found among the three modes. Our hypothesis is that NAVA ventilation is superior to PSV by allowing optimal patient-ventilator synchrony and thereby decreasing sleep fragmentation.\n\nBODY.METHODS:\nThis study was approved by the Ethics Committee of the Hôpital du Sacré-Coeur de Montréal, and patients or their surrogates gave written informed consent.\n\nBODY.METHODS.PATIENTS:\nThis physiologic study was conducted in a 22-bed medical ICU during a 12-month period. The weaning phase of mechanical ventilation was chosen because patient-ventilator asynchrony is common when patients are spontaneously triggering breaths. The inclusion criteria required that the patient was conscious, free from sedation and opiate analgesia for ≥ 24 hours, and ventilated in PSV mode with an FIO2 < 60%, PEEP = 5 cmH2O, and SpO2 ≥ 90%. 
Exclusion criteria consisted of the presence of a central nervous system disorder, Glasgow Coma Scale score < 11, hemodynamic instability, renal and/or hepatic insufficiency, and ongoing sepsis.\n\nBODY.METHODS.METHODS:\nAll patients were ventilated through an endotracheal tube or a tracheostomy; once they met the inclusion criteria, they were connected to a Servo i ventilator (Maquet critical Care, Sölna, Sweden), equipped with a neurally adjusted ventilator assist system (NAVA). The electrical activity of the diaphragm (EAdi) is captured with the EAdi catheter (Maquet Critical Care, Sölna, Sweden) consisting of a 16-Fr gastric tube equipped with electrodes. End-tidal CO2 was monitored with the Servo-i Volumetric CO2 module. The two different ventilatory modes were delivered in a randomized order using a closed-envelope technique during four periods of 4 hours: a daytime period from 7 to 11 a.m. and 12 to 4 p.m., and a nocturnal period from 10 p.m. to 2 a.m. and 3 to 7 a.m. To prevent possible data contamination from the previous mode of ventilation, a 1-hour washout period after a ventilator change was introduced before data acquisition (Figure 1; Study Protocol). Figure 1Patients were studied for a period of 4 hours for each recording sequences and for more than 19 consecutive hours. During periods of wakefulness, PSV and NAVA were clinically adjusted by the attending physician to obtain a tidal volume of 8 mL/kg of predicted body weight and a respiratory rate ≤ 35 breaths/min. For both modes of ventilation, inspiratory triggering sensitivity was set at thresholds that would not allow auto-triggering for both modes of ventilation: 0.5 mV in NAVA and 5 in PSV. EEG was recorded from standard locations: left frontal/right mastoïd reference (F3/M2 or F3/A2), right frontal/left mastoid reference (F4/M1or F4/A1), left central/right mastoïd reference (C3/M2 or C3/A2), right central/left mastoïd reference (C4/M1 or C4/A1), left occipital/right mastoïd reference (O1/M2 or O1/A2), and right occipital/left mastoïd reference (O2/M1or O2/A1), according to the International 10-20 System for electrode placement [20]. The standard reference used was the left mastoid lead [20]. Two electro-oculogram and three chin electromyogram leads were used to score REM and non-REM sleep. The electroencephalogram, the right and left electro-oculogram, and the submental electromyogram signals were amplified and recorded in the data acquisition system (Alice 5 polysomnography system using Alice® SleepwareTM 2.5 software, Respironics, Nantes, France). Sleep recordings were manually read and scored by an independent pulmonologist blinded to the study, using the criteria of Rechtschaffen and Kales [21,22] and the criteria of the American Sleep Disorder Association for arousals and awakenings [23,24]. Diagnosis of central apnea was based on international recommendations [24]. The diagnosis of central apnea is characterized by absent breathing and respiratory effort for a period of at least 10 seconds. Arousals and awakenings were considered secondary to apnea when occurring within three cycles and/or 15 sec after a respiratory event [25,26]. Ineffective efforts were defined as an inspiratory effort observed by a peak electrical activity of the diaphragm (EAdi peak) without a simultaneously triggered ventilator cycle. Airflow, Paw, and EAdi were acquired from the ventilator through a RS232 interface at a sampling rate of 100 Hz, recorded by a dedicated software (Nava Tracker V. 
2.0, Maquet Critical Care, Sölna, Sweden), and an analyzer using software Analysis V 1.0 (Maquet Critical Care) and a customized software based for Microsoft Excel. An arousal or awakening event was considered secondary to ineffective triggering when it occurred within 15 seconds after the asynchrony [19]. Noise was measured with a portable noise meter at the level of patient's head (Quest Technologies, Oconomowoc, WI). Arousals and awakenings were associated with the noise when they occurred 3 seconds after or within noise increase ≥10 dB [3,11]. Inspiratory trigger delay was calculated as the time difference between the onset of EAdi peak and Paw inspiratory swings. Cycling-off delay was calculated as the time difference between the end of the inspiratory EAdi peak deflection and the onset of expiratory flow.\n\nBODY.METHODS.STATISTICS ANALYSIS:\nStatistical analysis was performed using SPSS statistical software (SPSS 17.0). Continuous variables were expressed as median (25th-75th percentile) or mean ± SD. Data were compared using the general linear model for repeated measures (GLM). The small sample of patients led us to use Wilcoxon's t test for paired samples, and the p values for multiple comparisons were corrected for the Bonferroni inequality. A two-tailed p value < 0.05, corrected as needed, was retained to indicate statistical significance.\n\nBODY.RESULTS.PATIENTS:\nFourteen patients were selected and none were excluded during the study. Their main characteristics are shown in Table 1. Acute respiratory failure was the most frequent reason to initiate mechanical ventilation in ten patients, postoperative complications in three patients, and septic shock in one patient. Table 1 Characteristics of patients Characteristics of patients Sex (M/F) (8/6) Age (yr ± SD) 64 ± 11 SAPS II ± SD 46 ± 12 Duration of MV (days ± SD) 17 ± 9 Tracheotomy (%) 2 (14) Cause for initial MV (%) Acute respiratory failure 10 (71.5%) Postoperative complication 3 (21.5%) Septic shock 1 (7%) M = male; F = female; SAPS = Simplified Acute Physiology score; MV = mechanical ventilation. \n\nBODY.RESULTS.SLEEP RECORDINGS:\nAll patients completed the study, and recordings were well tolerated. Individual sleep data are shown in Table 2. The median total sleep time was 564 (range, 391-722) minutes. The median sleep efficiency (i.e., the percentage of sleep during the study) was 59% (range, 41-75%). The main abnormalities observed on each patient were a diminished percentage of REM sleep, counting for only 11.5% (range, 8-20%) of total sleep time, and a high fragmentation index with 25 arousals and awakenings per hour (range, 18-51). Although interindividual variability was large, the median quantity of slow-wave sleep (stages 3 and 4 or NREM3 stage) was normal, with a median of 18.5 (range, 11.5-22; Table 2). Table 2 Sleep architecture and fragmentation during the study (16 hours) Patient Stage 1 (%) Stage 2 (%) Stages 3 and 4 (%) Rapid eye movement (%) Fragmentation index 1 5 72.5 19.5 2.5 23.5 2 2.5 67 22.5 7 35.5 3 4 61 24.5 9 30.5 4 10 57 11 20 68 5 9 61 24 5.5 16.5 6 6 57 24 12.5 15 7 5 63 22 7.5 13.5 8 11 66.5 9 10.5 64.5 9 5.5 58 17.5 19 15.5 10 11 60.5 9.5 17 56.5 11 3 66 21 9 26 12 3.5 61.5 13 22 23 13 5.5 60 13 21.5 22 14 10 59 10.5 20.5 67.5 Median [25-75 th percentiles] 5.5 [4-10] 61 [59-65] 18.5 [11.5-22] 11.5 [8-20] 25 [18-51] \n\nBODY.RESULTS.VENTILATORY MODES AND SLEEP DISTRIBUTION:\nSleep efficiency and architecture appeared very different for both modes of ventilation (NAVA and PSV). 
Stage 1 (NREM1 stage) lasted longer during PSV compared with NAVA 7.5% (range, 4-15%) vs. 4% (range, 3-5%; p = 0.006). Stage 2 (NREM2 stage) also lasted longer in PSV than NAVA 68% (range, 66-75%) vs. 55% (range, 52-58%; p = 0.001). Stage 3-4 (NREM3 stage) was shorter in PSV as opposed to NAVA 16.5% (range, 17-20%) vs. 20.5% (range, 16-25%; p = 0.001). REM stage (R stage) was much shorter in PSV than in NAVA 4.5% (range, 3-11%) vs. 16.5% (range, 13-29%; p = 0.001). The fragmentation index was different between the two ventilation modes, with 40 ± 20 arousals and awakenings per hour in PSV and 16 ± 9 during NAVA (p = 0.001; Figure 2 Sleep stage (percent of total sleep) during two ventilatory modes; Table 3). Figure 2Sleep stages (percent of total sleep) during the two ventilator modes: pressure support ventilation (PSV), and neurally adjusted ventilatory assist (NAVA). REM = rapid eye movement. Table 3 Comparison of sleep quality between the ventilatory modes PSV NAVA \n p \n Stage 1, % 7.5 [4-15] 4 [3-5] 0.006* Stage 2, % 68 [66-75] 55 [52-58] 0.001* Stage 3 and 4, % 16.5 [17-20] 20.5 [16-25] 0.001* REM, % 4.5 [3-11] 16.5 [13-29] 0.001* Fragmentation index, (n/h) 33.5 [25-54] 17.5 [8-21.5] 0.001* Sleep efficacy, % 44 [29-73.5] 73.5 [52.5-77] 0.001* PSV = pressure support ventilation; NAVA = neurally adjusted ventilatory assist; REM = rapid eye movement; Fragmentation Index = number of arousals and awakenings per hour of sleep; Sleep efficiency = duration of sleep/total duration of recording. Values are expressed as median [interquartile range]. * p < 0.05. Minute ventilation did not significantly differ between PSV and NAVA with median values of 9.8 L/min (range, 8.0-10.9), and 9.6 L/min (range, 7.5-11.0) respectively (p = 0.51). The median respiratory rates were 17 breaths/min (range, 14-21), and 20 breaths/min (range, 15-23) during PSV and NAVA (p = 0.14). Median tidal volume was 420 mL (8.1 mL/Kg of predicted body weight; range, 375-479 mL), and 378 mL (7.3 mL/Kg of predicted body weight; range, 370-448 mL) during PSV and NAVA, respectively (p = 0.36). The mean PSV level was 15 ± 5 cmH2O, and the mean NAVA level was 1.6 ± 1.4 cmH2O/μV. Positive end-expiratory pressure was kept at 5 cmH2O for all patients.\n\nBODY.RESULTS.APNEAS AND INEFFECTIVE EFFORTS:\nTen of the 14 patients presented sleep apnea, and 11 exhibited ineffective efforts. The mean index of sleep apneas (number of apneas per hour of sleep) was 10.5 ± 11 apneas during PSV and 0 during NAVA (p = 0.005) and ineffective efforts (number of ineffective efforts per hour of sleep) was 24 ± 23 ineffective efforts during PSV and 0 during NAVA (p = 0.001). Over-assistance during sleep is sensed on the previous three cycles preceding central apnea. Tidal volume and minute ventilation increased, whereas ETCO2 and EAdi decreased over the three cycles preceding central apnea Table 4. Table 4 Oscillatory behaviour of various ventilator parameters for stages 3-4 with PSV mode of ventilation Respiratory variables Baseline Pre-apneas PSV V T (mL) 425 ± 67 585 ± 70 RR (breath/min) 13 ± 2 12 ± 1 VE (L/min) 5.2 ± 0.5 6.8 ± 0.8 ETCO 2 (mmHg) 46 ± 1.4 42 ± 1.0 EAdi (mVolt) 15 ± 4 10 ± 2 V T = tidal volume; RR = respiratory rate; VE = minute ventilation; ETCO 2 = end-tidal carbon dioxyde; PSV = pressure support ventilation. \n\nBODY.RESULTS.TRIGGER DELAY AND CYCLING-OFF DELAY:\nDuring N-REM sleep in PSV, the trigger delay increased on average by 80 ± 26 (msec) during stage 1 versus 158 ± 42 (msec) during stage 3 and 4. 
The expiratory trigger (cycling-off) increased in PSV by 158 ± 103 (msec) and 258 ± 87 (msec) during stage 1 and stages 3 and 4, respectively. In NAVA, the trigger delay remained stable during sleep, 68 ± 24 (msec) during stage 1 and 72 ± 32 (msec) during stages 3 and 4. The expiratory trigger also remained stable in NAVA: 39 ± 28 (msec) during stage 1 and 41 ± 34 (msec) during stages 3 and 4.\n\nBODY.RESULTS.NOISE:\nIn ICU, we recorded the average baseline ambient noise level and evaluated arousals from this baseline to a peak noise level ≥ 10 dB above ambient noise level. The mean noise level was recorded at 64 ± 8 dB, with the peak level recorded at 111 dB and the minimal level at 52 dB. No differences were observed between the two different ventilatory modes concerning the index of fragmentation associated with noise: 7.5 ± 3 during PSV and 6 ± 3.5 during NAVA (p = 0.19). These data indicate that 18% during PSV and 21% during NAVA of the fragmentation was associated with sudden increases in noise.\n\nBODY.RESULTS.SLEEP DISTRIBUTION AMONG STUDY PERIODS:\nThe cross-over pattern was balanced with an equal number of patients from each sequence initiating the rotation. Independent of the ventilatory mode, sleep efficiency and sleep architecture had a significantly different distribution based on the study period considered (Figure 3--sleep stage (percent of total sleep) during the four daily time periods). Sleep efficiency was the same in the two daytime periods (2 periods during the day): 52% (range, 26-67%) during the first day period (7 h-11 h a.m.) and 51.5% (range, 27-67%) during the second day period (12 h-4 h p.m.; p = 0.18). Sleep efficiency also did not differ between the two night periods: 65.5% (range, 37-82%) during the first night period (10 h p.m.-2 h a.m.) and 65% (range, 45-82.5%) during the second nighttime period (3 h-7 h a.m.; p = 0.11). Figure 3First daytime period (7 h-11 h a.m.), second daytime period (12 h-4 h p.m.), first nighttime period (10 h p.m. to 2 h a.m.) and second nighttime period (3 h-7 h a.m.). There was no statistical difference between stage 1 and 2 recording periods. A greater duration of slow-wave sleep (stage 3-4) was found during the first nocturnal period with a median percentage 22.5% (range, 20-33.5%) vs. 15.5% (range, 7-19.5%) during first day period (p = 0.03), vs. 15% (range, 7-18%) during second day period (p = 0.01) and vs. 18% (range, 13-21%) during second nighttime period (p = 0.001). The proportion of REM sleep was longer during the second nocturnal period, with a median percentage of 16.5% (range, 15-25%) vs. 11.5% (range, 5-15%) during first day period (p = 0.001) vs. 9% (range, 5-15%) during second day period (p = 0.001) and vs. 10.5% (range, 7-21%) during first nighttime period (p = 0.02). The fragmentation index did not differ with 26 (range, 20-65) arousals and awakenings/hour during first daytime vs. 24 (range, 19-55), 23 (range, 18-57), and 19 (range, 15-53) during the second day period and first and second night period, respectively (p = 0.08). Ineffective effort indexes per hour also were similar across the four periods.\n\nBODY.DISCUSSION:\nIn a study where spontaneously breathing patients were conscious and under mechanical ventilation, proportions of sleep fragmentation sleep architecture and sleep quality were positively influenced by NAVA. In the PSV mode, a low percentage of REM sleep and a high degree of fragmentation were present. NAVA showed a normal percentage of REM sleep with an important decrease in fragmentation. 
Less than 15% of the sleep fragmentations in the PSV mode were attributed to apneas and ineffective efforts, whereas in NAVA, no asynchrony (no apnea and no ineffective patient efforts) were recorded. Environmental noise is responsible for 18% of the arousals and awakenings in PSV compared with 21% in NAVA, respectively. We observed results similar to the Cabello et al. [19] study concerning the rate of fragmentation, the number of central apneas, and the number of ineffective patient efforts during PSV. Another similar finding concerned the increased percentage of REM sleep during the second nighttime period recordings. However, one major difference between our study and the Cabello study is that they did not allocate an even distribution for each of the study periods and ventilatory strategies. Also, they did not allow washout periods between the ventilatory modes, which could possibly contaminate the recordings at the beginning of the next study period. Detecting asynchronies also was different; they used the airway pressure-flow signal and the thoracoabdominal plethysmography, whereas we observed the EAdi signal. Parthasarathy and Tobin [15] found a lower rate of sleep fragmentation during ACV compared with PSV. This was explained by the central apneas induced by over-assistance during PSV. In fact, tidal volume was much greater during PSV compared with ACV. This was validated by the addition of a dead space to the 11 patients showing central apneas, which significantly decreased the number of apneas. In the Toublanc et al. study [27], no difference was found in terms of quantity, quality of sleep, and in terms of arousal index between the AC and a low level of PSV assistance for the whole night. Toublanc et al. found that ACV was superior in terms of percentage of slow-wave sleep but not during REM sleep [27]. It is very difficult to compare the results for PSV because of a lack of information on expiratory triggering with Evita 4, number of asynchronies (tidal volume, respiratory rate, and minute ventilation). In the Toublanc study, the majority of patients were affected with COPD and pressure support was adjusted to 6 cmH2O. According to Brochard et al., it is suggested that for COPD patients, the pressure support needed to overcome resistance imposed by the endotracheal tube is higher than non-COPD patients: 12 ± 1.9 vs 5.7 ± 1.5 cmH2O respectively [28]. In the Leleu et al. study, pressure support must be superior to 6 cmH2O, particularly in COPD if the intention is to compensate work of breathing imposed by the endotracheal tube, ventilator circuit, and patient effort to trigger the demand valve during pressure support [29]. A low-pressure support only allows for partial relieve of imposed work of breathing without modifying the work necessary to trigger the demand valve. In the Toublanc study, pressure support set too low in COPD patients resulted in an increase in imposed work of breathing, which can be accounted for in the decrease in SWS and REM. The Toublanc study offers no information on expiratory triggering, which is somewhat important in COPD patients. Tassaux et al. recently have evaluated the positive impact of shortening inspiratory time in PSV on patient-ventilator asynchronies and the work of breathing in COPD patients. This study also demonstrated that the increase in expiratory trigger up to 70% of peak flow improved synchrony and decreased ineffective efforts without modifying work of breathing or minute ventilation [30]. Bosma et al. 
evaluated the impact on sleep with other modes of ventilation, such as the proportional assist ventilation (PAV). The objective of PAV, such as NAVA, is to improve patient ventilator synchrony by delivering ventilator assist proportional to patient effort. The study by Bosma et al. shows an improvement in the quality of sleep using PAV compared with PSV during one night sleep [31]. There are similarities between the Bosma study and ours. More specifically, PAV appeared superior to PSV in terms of decrease in arousals, improvement in sleep quality, decrease in amounts of arousals, awakenings per hour, and improved SWS and REM. With NAVA, we observed a decrease in tidal volume by up to 15% during REM sleep, which increased end-tidal CO2 by approximately 4 mmHg. Bosma et al. observed a tidal volume slightly more elevated in PSV compared with PAV (despite similar offloading of the work of breathing), resulting in a higher morning PaCO2 with PAV attributed to lower tidal volume and minute ventilation [31], thus offering perhaps a protection against central apneas. Finally, fewer patient-ventilator asynchronies were observed with PAV with fewer awakenings per hour [31]. Contrary to NAVA, PAV cannot eliminate wasted or ineffective efforts. There was a nonstatistically significant difference in ineffective triggering during inspiration; 19.6 n/hr for PSV vs. 11.6 n/hr for PAV [31]. According to Thille et al. ineffective efforts and double triggering are among the most frequent asynchronies: 85 and 13% respectively [32], which is somewhat contradictory to Bosma et al. who identify auto triggering as the most frequent asynchrony in PSV. We observed that the absence of central apnea and ineffective efforts in NAVA do not totally explain the great improvement in the SWS and REM sleep. This improvement may be explained in part by a microanalysis of the sleep architecture. The microanalysis suggests an over-assistance with PSV during the N-REM stages, because 100% of the fragmentations in PSV occurred during this stage. The tidal volume decrease in NAVA follows the respiratory physiologic changes during sleep, whereas in PSV we find a tidal volume oscillatory behavior due to constant inspiratory efforts, independent of the sleep stage and produces sequential over-assistance during N-REM sleep leading to a decrease in end-tidal CO2. It is our assumption that improvement of the slow-wave sleep and REM is most probably explained by better patient comfort through better neuromechanical coupling. During sleep, the respiratory accessory muscles (intercostals, scalene, and abdominals) decrease their muscle tone and the mechanical response of the diaphragm is, in part, spent in the production of a mechanical distortion of the chest wall, secondary to a lack of synchronization between diaphragmatic contraction and the accessory muscles. NAVA improves this mechanical distortion, whereas PSV worsens this distortion by a tidal volume oscillation (overshoot) during sleep, with a constant patient effort. Patient comfort is not only directly related to inefficient efforts and central apneas; the microanalysis showed that during N-REM sleep in PSV, the trigger delay increased during stage 1 versus during stage 3 and 4. The expiratory trigger increased in PSV during stage 1 and stages 3 and 4, respectively. In NAVA, the trigger delay remained stable during stage 1 and during stages 3 and 4. The expiratory trigger also remained stable in NAVA, during stage 1 and during stages 3 and 4. 
NAVA allows optimizing the neuromechanical coupling and therefore patient-ventilator synchrony [33] and allows for optimized adequacy between ventilatory load and patient breathing ability, thereby providing beneficial effects on sleep in ICU patients. It appeared to us that the EAdi tracing is much more efficient than flow and pressure tracings to detect asynchronies. Our study has some limitations; one is the open space between patients. This study included only 14 patients, which could favor the possibility of a type II error. Patients' heterogeneity implies that patients required bedside care, such as suctioning or other care, which could perhaps influence sleep fragmentation. The study by Cabello found that suctioning was associated with < 1% arousals and awakenings [19]. The choice for a 15-second interval between asynchrony and the occurrence of arousal was chosen based on one previous study on the same topic [19]. Literature on this specific time interval to choose is very scarce. In one study, it was shown that the breathing response to a complete airway occlusion was 20.4 ± 2.3 sec during NREM and 6.2 ± 1.2 sec during REM [34]. The choice of a 15-second interval seems very reasonable but may need further investigation. In a sleep laboratory, it is a lot easier to control the baseline ambient noise level. In a clinical environment, such as an ICU, we recorded the average baseline ambient noise level and evaluated arousals from this baseline to a peak noise level ≥10 dB above ambient noise level. There is therefore a potential for statistical inaccuracies. The fact that we stopped sedation 24 hours before beginning the study does not imply an absence of cumulative sedation. However, every patient had a Ramsay Score of 2 or less and a Glasgow Score of 11 (the maximum score for an intubated patient).\n\nBODY.CONCLUSIONS:\nThe ventilatory mode NAVA improves the quality of sleep by increasing the slow-wave sleep and REM and by decreasing fragmentation. NAVA improves patient comfort through better neuromechanical coupling during N-REM sleep, by a shorter trigger delay, and more efficient expiratory triggering. To minimize sleep fragmentation, optimal setting of pressure support level and expiratory trigger are paramount in PSV. However, proportional assistance modes of ventilation according to patient inspiratory effort, such as NAVA, appear to be a better choice to minimize sleep fragmentation.\n\nBODY.COMPETING INTERESTS:\nThe authors declare that they have no competing interests.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nSD and PO drafted the manuscript, and PB, JPT, and PA revised the manuscript.\n\n**Question:** Compared to pressure support ventilation (PSV) what was the result of neurally adjusted ventilatory assist (NAVA) on Proportions of REM sleep duration?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ]
significantly increased
668
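The statistics section of the sleep-quality prompt above describes Wilcoxon's test for paired samples with Bonferroni correction of the p values for multiple comparisons. The sketch below is a minimal illustration of that procedure using SciPy; the paired fragmentation indices (arousals and awakenings per hour under PSV and NAVA) are made up rather than taken from the study, and the number of comparisons is an assumed value.

```python
# Illustrative sketch of a paired Wilcoxon test with Bonferroni correction.
# The data are hypothetical, not the values reported in the study.

from scipy.stats import wilcoxon

psv  = [35, 42, 28, 55, 40, 31, 47, 38, 60, 25, 44, 33, 52, 36]  # hypothetical
nava = [15, 20, 10, 22, 18,  9, 21, 16, 25,  8, 19, 12, 23, 14]  # hypothetical

n_comparisons = 6                         # assumed number of outcomes tested
stat, p_raw = wilcoxon(psv, nava)         # paired, two-sided by default
p_bonferroni = min(1.0, p_raw * n_comparisons)

print(f"W={stat}, raw p={p_raw:.4f}, Bonferroni-corrected p={p_bonferroni:.4f}")
```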
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Repair of Giant Midline Abdominal Wall Hernias: “Components Separation Technique” versus Prosthetic Repair\n\n ABSTRACT.BACKGROUND:\nReconstruction of giant midline abdominal wall hernias is difficult, and no data are available to decide which technique should be used. It was the aim of this study to compare the \"components separation technique\" (CST) versus prosthetic repair with e-PTFE patch (PR).\n\nABSTRACT.METHOD:\nPatients with giant midline abdominal wall hernias were randomized for CST or PR. Patients underwent operation following standard procedures. Postoperative morbidity was scored on a standard form, and patients were followed for 36 months after operation for recurrent hernia.\n\nABSTRACT.RESULTS:\nBetween November 1999 and June 2001, 39 patients were randomized for the study, 19 for CST and 18 for PR. Two patients were excluded perioperatively because of gross contamination of the operative field. No differences were found between the groups at baseline with respect to demographic details, co-morbidity, and size of the defect. There was no in-hospital mortality. Wound complications were found in 10 of 19 patients after CST and 13 of 18 patients after PR. Seroma was found more frequently after PR. In 7 of 18 patients after PR, the prosthesis had to be removed as a consequence of early or late infection. Reherniation occurred in 10 patients after CST and in 4 patients after PR.\n\nABSTRACT.CONCLUSIONS:\nRepair of abdominal wall hernias with the component separation technique compares favorably with prosthetic repair. Although the reherniation rate after CST is relatively high, the consequences of wound healing disturbances in the presence of e-PTFE patch are far-reaching, often resulting in loss of the prosthesis.\n\nBODY:\nReconstruction of giant midline abdominal wall hernias that cannot be closed primarily is a technical challenge to a surgeon. Many surgeons discourage abdominal wall reconstruction because of the technical difficulties, the high morbidity, and the relatively high recurrence rate associated with these procedures. However, many patients with large hernias have invalidating complaints such as bulging of the abdominal wall, chronic wounds, immobility, and back pain, necessitating surgical treatment. The lack of sufficient tissue requires the insertion of prosthetic material or transposition of autologous material to bridge the fascial gap. Reconstruction using pre-peritoneally placed prosthetic material is still the most frequently applied method of reconstruction.1 The increased risk of infection in case of wound complications is a relative contraindication against the use of prosthetic materials. Moreover, interposition of either peritoneum or greater omentum between the bowel and the prosthesis is often impossible, which is another reason to avoid the use of prosthetic material. 
In 1990 Ramirez, Ruas, and Dellon introduced the \"components separation technique (CST)\" to bridge the fascial gap without the use of prosthetic material.2 The technique is based on enlargement of the abdominal wall surface by separation and advancement of the muscular layers. In this way, defects of up to 20 cm at the waistline can be bridged. Retrospective series report promising results, but no prospective study has been published until now.3–12 It was the aim of this prospective study to compare the results of prosthetic repair with CST in patients with giant abdominal wall hernias that cannot be closed primarily. The primary endpoint of the study was reherniation; secondary endpoints were operation time and postoperative wound complications. In the present report the results of an interim analysis are presented.\n\nBODY.PATIENTS AND METHODS:\nAdult patients (18–80 years) with an incisional hernia after midline laparotomy with a craniocaudal length of at least 20 cm that could not be closed primarily, in whom the repair could be performed under clean or clean-contaminated conditions, and who were not using corticosteroid therapy were asked to participate in the study. Patients with perioperative gross contamination of the operative field were excluded from the study. After written informed consent the patients were randomized between CST and prosthetic repair by the data center of the Radboud University Nijmegen Medical Centre using envelope, the day before operation. Fully trained abdominal wall surgeons who had done at least five procedures of both techniques before the start of the study performed the operations. H.v.G. and R.P.B. performed supervision in centers not having this expertise. Before each procedure, a preoperative chest x-ray was made. Demographic data, co-morbidity (COPD, cardiovascular disease, or diabetes), body mass index, condition of the skin, size of the hernia at the time of the operative procedure, kind of anesthesia, operation time, perioperative blood loss, postoperative ICU stay, analgesia use, complications, hospital stay, and follow-up were recorded on a standard form. The study protocol was reviewed and approved by the institutional ethics commissions of all the participating hospitals. All patients gave written informed consent after receiving a thorough explanation of the study.\n\nBODY.PATIENTS AND METHODS.OPERATIVE TECHNIQUE:\nStandard thrombosis (Nadroparine 2,850 IE) and antibiotic prophylaxis (cefazoline 3 × 1 g and metronidazole 3 × 500 mg) were started preoperatively. After induction of anesthesia (combined general and epidural, if possible) and disinfection of the skin with iodine tincture, an adhesive drape was applied on the skin, if possible. The abdomen was entered via a midline laparotomy or at the lateral edge of the graft if the bowels were covered with a split skin graft. Adhesions between the ventral abdominal wall and the intra-abdominal viscera were cut, after which the length and width of the defect were measured.\n\nBODY.PATIENTS AND METHODS.COMPONENTS SEPARATION TECHNIQUE (CST GROUP):\nThe component separation technique was performed as described in detail in former publications.2,12,13 Briefly, the skin and subcutaneous fat are dissected free from the anterior rectus sheath and the aponeurosis of the external oblique muscle (Figure 1A). 
The aponeurosis of the external oblique muscle is transected longitudinally about 2 cm lateral from the rectus sheath, including the muscular part that inserts on the thoracic wall, which extends at least 5–7 cm cranially of the costal margin (Figure 1B). The external oblique muscle is separated from the internal oblique muscle as far laterally as possible (Figure 1B). The posterior rectus sheath is separated from the rectus abdominis muscle if tension-free closure is impossible (Figure 1C). The fascia is closed in the midline with a running polydioxanone suture (PDS-loop, Johnson & Johnson, Ltd.) of at least 4 times the length of the incision. The skin is closed over at least two closed suction drains.\nFigure 1.Operative technique of the \"components separation technique.\" 1 = rectus abdominis muscle; 2 = external oblique muscle; 3 = internal oblique muscle; 4 = transversus abdominis muscle; 5 = posterior rectus sheath. A. Dissection of skin and subcutaneous fat. B. Transection of aponeurosis of external oblique muscle and separation of internal oblique muscle. C. Mobilization of posterior rectus sheath and closure in the midline. Adapted from Bleichrodt et al.13, with permission of Elsevier.\n\nBODY.PATIENTS AND METHODS.PROSTHETIC REPAIR (E-PTFE GROUP):\nThe skin and subcutaneous tissue are mobilized from the underlying fascia of the rectus abdominis muscle. As a consequence all epigastric perforating arteries supplying the overlying skin are separated. After adhesiolysis, a 20 × 30 cm, 1.5-mm-thick e-PTFE patch (Gore-Tex dual mesh plus with holes, W. L. Gore and associates Inc., Flagstaff, AZ, USA) is shaped in size and implanted intra-abdominally as underlay with an overlap of at least 4 cm to the aponeurosis, as described elsewhere.14 The mesh is placed intra-abdominally as an underlay and is sutured under slight tension to the ventral abdominal wall using a double row of interrupted sutures of e-PTFE 1/0 (Gore-Tex 1/0, W. L. Gore and associates Inc., Flagstaff, AZ, USA) that passed through the rectus abdominis muscle. The prosthesis is implanted with the microporous side facing the intra-abdominal viscera and the macroporous side facing the fascia. (As a consequence of the large size of the hernias, the fascia could not be closed over the prosthesis in any of the patients in our series.) After implantation, the skin is closed over at least two closed suction drains.\n\nBODY.PATIENTS AND METHODS.POSTOPERATIVE CARE:\nAntibiotic prophylaxis, cefazoline 3 × 1 g and metronidazole 3 × 500 mg was started preoperatively and continued for the first 24 h postoperatively. All patients have epidural anesthesia if possible. Wounds were inspected on a daily basis with respect to hematoma, seroma, skin necrosis, and wound infection. Hematoma was defined as an accumulation of blood in the operative field for which a surgical intervention (puncture or drainage) was needed; seroma as an accumulation of fluid in the operative field for which an intervention (puncture or drainage) was needed in case of mechanical or physical limitations. Skin edge necrosis was defined as necrotic loss of full thickness skin for which surgical intervention was needed. The wound was scored on a daily basis according to CDC criteria, as follows15: grade 1: normal wound, grade 2: erythema and swelling, grade 3: purulent effluent; or grade 4: open wound. Drains were removed after 5 days or if production was less than 50 ml/24 h.
The thorax was examined daily by physical examination, and a routine x-ray of the thorax was performed on the second and seventh days after the operation, to detect pneumonia and atelectasis. No specific instructions were given to the patients after operation and patients had no restriction of physical activity except heavy lifting. Follow-up was done in the outpatient clinic at 3, 6, 12, 24, and 36 months after operation. At each visit a physical examination was done to diagnose recurrent hernia. Ultrasonography or computed tomography (CT) scanning was performed on indication, especially to detect small recurrences.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSIS:\nPatients were analyzed as per intention to treat. Hernia recurrence-free survival was compared using the Kaplan-Meier method according to the intention-to-treat principle.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSIS.POWER ANALYSIS:\nType I and II errors were set to 0.05 and 0.1, respectively. The minimum relevant difference in reherniation between groups was set to 30%, in favor of CST. Accordingly, a minimum of 84 patients was required (two groups of 42 patients). An interim analysis was planned to evaluate the results of the trial after inclusion of 40 patients. Differences between groups were analyzed using the Fisher exact test for demographic data, preoperative and perioperative data, wound complications, reoperation, and reherniation (Table 1).\nTable 1.Study characteristics of patients with prosthetic repair or components separation techniqueGroup 1: prosthetic repairGroup 2: components separation techniqueSignificance(t-test)Age (mean)58.7 (range: 42–82)53.9 (range: 33–73)NS(Fisher exact test)Gender (women/men)6/126/13NS(t-test)BMI28.7 (range: 21.5–39.6)28.2 (range: 23.9–38.7)NSDefect (median) (cm)(t-test)Length25 (range: 20–30)25 (range: 20–33)NSWidth17 (range: 9–30)15 (range: 7–25)NSSkin (n)(t-test) Intact, full thickness1412NS Intact, split skin47NSAnesthesia (n)(Fisher exact test) General35NS Epidural and general1514NS(t-test)Operative time (min)183 (range 135–254)113 (range 63–175)p < 0.001(t-test)Blood loss (ml)420 (range 100–900)289 (range 50–1000)NSICU stay(Mann-Whitney U-test) Patients (n)63NS Time (days)2 (range: 1–6)5 (range: 1–10)NS(t-test)Pulmonary complications (n)24NS Pneumonia21NS Atelectasis03NSAnalgesia(t-test) Epidural (days)2.4 (range: 0–5)2.4 (range: 0–6)NS Morphine (days)3.3 (range: 0–8)3.6 (range: 0–10)NS(Fisher exact test)Wound complication (n)11NS Hematoma74NS Seroma32NS Skin necrosis23NSInfected mesh (n)70(t-test)Reoperation (in OR) for wound complications (n)72p = 0.05Recurrence (n)1110NSBMI: body mass index; ICU: intensive care unit; NS: not statistically significant; OR: Operation room.\n\nBODY.RESULTS:\nBetween November 1999 and June 2001, 39 patients were included in the study and were operated on by one of 5 surgeons. Two patients were excluded from the study because of gross contamination during operation. Nineteen patients, 6 women and 13 men, were randomized to the CST group: reconstruction using the components separation technique. The mean age of these 19 patients was 53.9 years (range: 33–73 years). Eighteen patients, 6 women and 12 men, were randomized to the e-PTFE group (prosthetic repair). Their mean age was 58.7 years (range: 42–82 years). In the CST group, closure of the fascia was accomplished in 18 of the 19 patients (Figure 2). In one patient the abdominal wall hernia was too large and had to be repaired using a combination of the CST and prosthetic repair.
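As an aside, the statistical-analysis paragraph above specifies an intention-to-treat comparison of hernia recurrence-free survival with the Kaplan-Meier method, alongside Fisher exact tests for group comparisons. The sketch below is only a minimal illustration of how such a survival comparison could be set up in Python; the lifelines library, the simulated follow-up times and event indicators, and the log-rank test are assumptions made for illustration, not the authors' data or analysis.

```python
# Minimal sketch (illustrative only): Kaplan-Meier recurrence-free survival
# for two arms and a log-rank comparison. All numbers below are simulated.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)

# Hypothetical follow-up (months, administratively censored at 36) and
# reherniation indicators for arms of 19 (CST) and 18 (prosthetic repair).
t_cst = np.minimum(rng.exponential(40, 19), 36)
e_cst = rng.random(19) < 0.5
t_pr = np.minimum(rng.exponential(40, 18), 36)
e_pr = rng.random(18) < 0.5

km_cst = KaplanMeierFitter().fit(t_cst, event_observed=e_cst, label="CST")
km_pr = KaplanMeierFitter().fit(t_pr, event_observed=e_pr, label="PR")
print(km_cst.survival_function_.tail(3))   # recurrence-free survival estimates
print(km_pr.survival_function_.tail(3))

res = logrank_test(t_cst, t_pr, event_observed_A=e_cst, event_observed_B=e_pr)
print("log-rank p-value:", round(res.p_value, 3))
```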
In the e-PTFE group the procedure was successful in 17 of the 18 patients. In one patient the abdominal wall hernia was too large and was reconstructed using a combination of prosthetic repair and CST. No differences were found between the groups with respect to demographic data (Table 1), co-morbidity, length and width of the defect, skin coverage, anesthesia, blood loss, and ICU stay (Table 1). All operations were performed without major intraoperative complications, except for the two excluded patients with gross perioperative contamination. The operation time for prosthetic repair was significantly longer as compared with the components separation technique (p < 0.001, Fisher exact test) (Table 1). This is mainly due to the time-consuming fixation of the patch to the fascia with a double row of single sutures.\nFigure 2.A. Preoperative view of a giant abdominal wall hernia covered with a split skin. B. Postoperative view of the same abdominal wall after reconstruction using the components separation technique.\n\nBODY.RESULTS.POSTOPERATIVE MORTALITY AND MORBIDITY:\nThere was no 30-day mortality. Major wound complications were found in 10 of the 19 patients in the CST group: wound infection (n = 3), skin necrosis (n = 2), hematoma (n = 1). Four patients developed seroma; these were not associated with the aforementioned complications. Major wound complications were found in 13 of the 18 patients in the e-PTFE group: wound infection (n = 2), skin necrosis (n = 3), hematoma (n = 1). Both wound infection and skin necrosis ultimately resulted in loss of the prosthesis (Table 1). Seven patients developed a seroma. In two of these patients seroma puncture was performed to prevent spontaneous evacuation via the midline wound; this resulted in infection and, ultimately, loss of the patch. Seven patches were removed after a median period of 94 days (range: 30–262 days). In the cases where the prosthesis was removed, the abdominal wall defect was reconstructed using CST. Pulmonary complications were found in 4 patients in the CST group and 2 in the e-PTFE group (not significant, Fisher exact test) (Table 1).\n\nBODY.RESULTS.REHERNIATION:\nFollow-up was complete in all patients. Four patients in the CST group died before the end of the follow-up period 5, 9, 10, and 12 months after the operation from unrelated causes. Two had a reherniation at the time of death. Of the remaining 15 patients, 8 had a reherniation. Recurrences occurred after a mean period of 7 months (range: 0.5–12 months). Recurrences were all located in the midline in the upper abdomen and were small. Two patients underwent reconstruction of their recurrence. One patient in whom the reconstruction was performed with a combination of CST and prosthetic bridging had a recurrent hernia at the edge of the prosthesis (Figure 3).\nFigure 3.Kaplan-Meier for recurrent hernia after prosthetic repair (n = 18) and components separation technique (n = 19). Seven of 18 prostheses were removed during the first 7 months after implantation. Reherniation rates after 36 months are similar in both groups.\n\nBODY.RESULTS.PROSTHETIC REPAIR:\nOne patient in the e-PTFE group died 6 months after the operation from an unrelated cause. Of the remaining 17 patients, 7 had an infected prosthesis that had to be removed. The abdominal wall defect was then reconstructed using CST repair. Four other patients had a small recurrent hernia after prosthetic repair, without complaints. Recurrences occurred after a mean period of 22 months (range: 6–36 months). 
None of these four patients underwent reoperation for their recurrence (Figure 3).\n\nBODY.DISCUSSION:\nThe present study is the first randomized controlled trial comparing different techniques to repair giant midline hernias and the first prospective trial regarding the \"component separation technique.\" Although our series is relatively small, the results suggest that repair of giant abdominal wall defects with the component separation technique compares favourably with prosthetic repair, because wound infection in patients in whom a prosthetic repair was performed had major consequences, resulting in removal of the prosthesis in 7, whereas wound infection in patients after CST had only minor consequences. Disturbed wound healing frequently complicates repair of large abdominal wall hernias. Wound complications such as hematoma, seroma, skin necrosis, and infection are reported in 12%–67% of patients after CST2–5,7–12,16 and in 12%–27% after prosthetic repair. Wound complications are associated with the extensive dissection needed in both procedures, which are often performed after intra-abdominal catastrophes. The risk is further increased by the long duration of the operative procedure and the need to mobilize the skin in dividing the epigastric perforating arteries (Figure 4). This endangers the blood supply of the skin, because then it solely depends on the intercostal arteries, which may have been damaged during former operations by introduction of drains, or by stoma construction and other procedures needed in patients with intra-abdominal sepsis.17–19 Wound complications in our series were rather frequent. Although they are mentioned in most other publications about CST, the method of follow-up is mentioned in only one other study from our own group.12Figure 4.The operation wound after performing a components separation technique for abdominal wall reconstruction, showing the large wound surface and the extensive skin dissection needed. Loss of the prosthesis may also be associated with the choice of the prosthetic material used. Several materials have been developed for hernia repair. In the present series only patients with giant and often complex hernias were included. In the majority of these patients the peritoneum or greater omentum was not available to interpose between the prosthesis and the intra-abdominal viscera. Therefore, an e-PTFE dual patch was used to bridge the fascial gap. The expanded-PTFE dual patch has significantly better mechanical properties than polypropylene-mesh. It is a soft pliable microporous material that causes no mechanical trauma to the viscera. The micropores on both sides of the patch are too small to allow ingrowth of fibrocollagenous tissue, thus preventing fibrous adhesions on the visceral side of the patch. Lack of ingrowth results in insufficient anchorage of the patch to the adjacent fascia, however, and this is a major disadvantage of e-PTFE patches.14,20,21 The patch should be placed as underlay with an overlap of at least 4 cm and fixed to the aponeurosis with a double row of single sutures.14 The e-PTFE patch is prone to infection because of its hydrophobic characteristics. To reduce the infection risk, the e-PTFE patch used is impregnated with silver salts and chlorhexidine, which both have anti-microbial properties and work synergistically.22 Moreover, antibiotic prophylaxis was given to all patients and an adhesive drape was applied to the skin. 
Nevertheless, 40% of our patients had an early or late infection resulting in removal of the patch. In a recent experimental study in rats with a large abdominal wall defect, it was found that impregnation with silver salts resulted in an aggravated inflammatory response around the patch and an increased reherniation rate.23 This observation may explain the increased risk for seroma formation, which is associated with prosthetic loss in this study. Some patients (n = 3, 16%) were operated under clean-contaminated condition, which means they had an accidental bowel lesion during adhesiolysis without gross contamination. We suspect that most surgeons still place a prosthetic patch for abdominal wall reconstruction in these situations, which is supported by some small series in the literature.24,25 In our opinion polypropylene, which is still the most widely used material for hernia repair, is contraindicated because of its propensity for inducing extensive visceral adhesions and occasional fistula formation.26–28 If large areas of polypropylene mesh are exposed, scar contraction will result in wrinkling of the polypropylene mesh, causing mechanical irritation, which promotes infection and carries the risk of mesh erosion into the skin or the intestine.29 If the polypropylene mesh cannot be covered with full-thickness skin, chronic infection and sinus formation will ultimately result in loss of the mesh.27 Therefore the results probably would not have been better if polypropylene mesh or polypropylene mesh based prosthesis was used. Recurrent hernia still is a major problem The only randomized controlled trial comparing open suture and mesh repair of small ventral hernias was reported by Luijendijk et al. reporting recurrence rates of 46% and 23%, respectively, after a follow-up of 36 months and 63% and 32%, respectively, after a follow-up of 75 months.1,30 In retrospective studies recurrence rates of 25%–63% in suture repair and 8%–25% in mesh repair are reported. Tension-free repair of incisional hernia is a prerequisite to prevent recurrence. In CST a tension-free repair was accomplished. In the literature recurrence rates of 0%–28% have been reported for CST, although how follow-up was accomplished is not well documented in most series.2–12 But it seems impossible to have a reherniation rate, in series of large abdominal wall defects, that is far below the reherniation rate of reconstruction of small abdominal wall defects in a well performed randomized controlled trial.1,30 Despite the high recurrence rate in the present study and our retrospective study, CST remains an attractive technique for repair of giant ventral hernias. Most recurrent hernias are small and asymptomatic and need no further treatment. In addition, the functional and cosmetic results are good and patients were satisfied. In a recent other study in 39 patients undergoing CST repair for heavily contaminated abdominal wall defects, similar results were found with respect to complications and reherniation rate (36%).31 All but one patient indicated satisfaction with the result when compared to their situation before operation. In that study, postoperative quality of life was assessed using the SF 36 questionnaire. 
When compared to the general population, patients had an average score or higher on pain, vitality, social functioning, and role limitations (emotional problems); the score was below average on physical functioning, role limitations (physical problems), in general health perception, and in mental health.31 On the basis of the interim analysis, the trial was discontinued because the frequency of wound complications resulting in subsequent prosthetic loss was unacceptably high. Because underlay repair necessitates transection of the perforating epigastric arteries in patient with prosthetic repair it was expected that this complication could not be prevented, whereas CST remains possible if the epigastric perforators are spared. Impregnation of the e-PTFE patch with silver salts and chlorhexidine might have contributed to this.23 Recently, a prospective randomized controlled trial has started comparing CST with CST + preperitoneal polypropylene mesh support, combining the advantages of CST and prosthetic repair.\n\n**Question:** Compared to prosthetic repair with e-PTFE patch (PR) what was the result of components separation technique (CST) on Reherniation?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
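Each record in this dump pairs a chat-formatted prompt (the article plus the ICO question) with a single assistant completion and a short gold label drawn from three classes. Purely as an illustration of how rows shaped like these might be consumed, the sketch below assumes a JSON-lines file with fields named "Label" and "completion"; the file name, field names and matching rule are assumptions, not part of the dataset itself.

```python
# Hypothetical loader/scorer for rows shaped like the ones above.
import json

def agreement(path: str) -> float:
    """Fraction of rows whose completion text contains the gold Label."""
    hits = total = 0
    with open(path, encoding="utf-8") as fh:
        for line in fh:                       # assumes one JSON object per line
            row = json.loads(line)
            gold = row["Label"].strip().lower()
            answer = row["completion"][0]["content"].lower()
            hits += int(gold in answer)
            total += 1
    return hits / max(total, 1)

# Example call (file name is hypothetical):
# print(agreement("ico_conclusions.jsonl"))
```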
1179
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Mirabegron for the treatment of overactive bladder: a prespecified pooled efficacy analysis and pooled safety analysis of three randomised, double-blind, placebo-controlled, phase III studies\n\n ABSTRACT.INTRODUCTION:\nTo examine pooled efficacy data from three, large phase III studies comparing mirabegron (50 and 100 mg) with placebo, and pooled safety data including additional mirabegron 25 mg and tolterodine extended release (ER) 4 mg results.\n\nABSTRACT.METHODS:\nThis prespecified pooled analysis of three randomised, double-blind, placebo-controlled, 12-week studies, evaluated efficacy and safety of once-daily mirabegron 25 mg (safety analysis), 50 or 100 mg (efficacy and safety analyses) and tolterodine ER 4 mg (safety analysis) for the treatment of symptoms of overactive bladder (OAB). Co-primary efficacy measures were change from baseline to Final Visit in the mean number of incontinence episodes/24 h and mean number of micturitions/24 h. Key secondary efficacy end-points included mean number of urgency episodes/24 h and mean volume voided/micturitions, while other end-points included patient-reported outcomes according to the Treatment Satisfaction-Visual Analogue Scale (TS-VAS) and responder analyses [dry rate (posttreatment), ≥ 50% reduction in incontinence episodes/24 h, ≤ 8 micturitions/24 h (post hoc analysis)]. The safety analysis included adverse event (AE) reporting, laboratory assessments, ECG, postvoid residual volume and vital signs (blood pressure, pulse rate).\n\nABSTRACT.RESULTS:\nMirabegron (50 and 100 mg once daily) demonstrated statistically significant improvements compared with placebo for the co-primary end-points, key secondary efficacy variables, TS-VAS and responder analyses (all comparisons p < 0.05). Mirabegron is well tolerated and demonstrates a good safety profile. The most common AEs (≥ 3%) included hypertension, nasopharyngitis and urinary tract infection (UTI); the incidence of hypertensive events and UTIs decreased with increasing dose. 
For mirabegron, the incidence of the bothersome antimuscarinic AE, dry mouth, was at placebo level and of a lesser magnitude than tolterodine.\n\nABSTRACT.CONCLUSION:\nThe efficacy and safety of mirabegron are demonstrated in this large pooled clinical trial dataset in patients with OAB.\n\nBODY:\nWhat's knownOveractive bladder (OAB) is a chronic condition that is increasingly prevalent with age and represents an increasing healthcare burden to society.Antimuscarinic drugs are the current mainstay of pharmacologic treatment for OAB, but are commonly associated with adherence-limiting adverse effects including dry mouth, constipation and blurred vision.Many patients discontinue antimuscarinic therapy or seek alternative approaches to control OAB as a result of adverse effects or suboptimal efficacy.What's newMirabegron is a β3-adrenoceptor agonist, the first in a new class of drugs with a mode of action that is different from antimuscarinic agents.Mirabegron stimulates the β3-adrenoceptors in the bladder leading to detrusor relaxation and increased bladder capacity.In three international, multicentre, phase III studies, comparing mirabegron, at doses of 25, 50 and 100 mg, with placebo, mirabegron was associated with significant improvements in incontinence episodes and micturition frequency, and was well tolerated.\n\nBODY.INTRODUCTION:\nOveractive bladder (OAB) is defined by the International Continence Society as urinary urgency, with or without urinary incontinence, usually with frequency and nocturia, with no proven infection or other obvious pathology 1,2. Lower urinary tract symptoms, including those associated with OAB, are prevalent in approximately half of the general population 3, are often debilitating and have a significant impact on patients' daily routines and quality of life 3–5. OAB affects 12–16% of the adult population across Europe, the USA and Japan 6–8, and its prevalence increases with age, affecting approximately 30% of individuals > 65 years of age 9, therefore representing a substantive health problem for the ageing population. Normal bladder function is regulated by the synergistic actions of the autonomic nervous system: parasympathetic nerves stimulate contraction of the detrusor muscle via muscarinic receptors while stimulation of the sympathetic nerves leads to detrusor relaxation via β-adrenoceptors. The central pathophysiology of OAB involves involuntary contractions of the detrusor muscle as a result of increased afferent activity, decreased inhibitory control from the central nervous system, and/or increased sensitivity of the detrusor muscle to parasympathetic stimulation 10,11. Because of the role of β-adrenoceptors in relaxation of the detrusor, research was aimed at identifying an effective and well-tolerated agent directed at the β-adrenoceptor. The β3-adrenoceptor was recognised as the predominant β-receptor subtype in the human urinary bladder, representing 97% of total β-adrenoceptor mRNA expression in the human bladder 12. The importance of its role in the relaxation of the detrusor has been confirmed by in vitro pharmacological studies using human bladder strips 13–15. In animal models, activation of the β-adrenoceptor in the bladder facilitates urine storage through flattening and lengthening of the bladder base 16. Furthermore, stimulation of the β3-adrenoceptor is associated with increased bladder capacity, without change in micturition pressure, residual volume or voiding contraction 17–19. 
As a consequence, targeting the β3-adrenoceptor increases the storage capacity of the bladder without affecting the amplitude of the voiding contraction. Mirabegron, a β3-adrenoceptor agonist approved in Japan, the USA, Canada and Europe, for the treatment of OAB symptoms, is the first of a new class of compounds with a mechanism of action that is different from antimuscarinic agents 20. The recommended starting dose of mirabegron is 25 mg in the USA, which can be increased to 50 mg based on individual efficacy and tolerability, and 50 mg in Japan and Europe; the 100 mg dose is not approved for use. Three large-scale, 12-week, phase III studies conducted in the USA/Canada (study 047; NCT00662909) 21; Europe and Australia (study 046; NCT00689104) 22; and Europe/USA/Canada (study 074; NCT00912964)23 have demonstrated the safety, tolerability and superior efficacy of mirabegron at doses of 25 mg (NCT00912964 only), 50 mg (all three studies) and 100 mg (NCT00662909 and NCT00689104) compared with placebo for the treatment of OAB symptoms. The pooled efficacy analysis was prespecified and excluded the 25 mg mirabegron dose because it was limited to a single study. The safety analysis was an aggregate of all data from the three phase III studies and included the 25, 50 and 100 mg mirabegron doses. Pooling these data allows for additional analyses of efficacy and safety in a large dataset representative of the general OAB population in terms of demographics, previous OAB history and prevalence of comorbidities. In particular, it is important to examine the efficacy, safety and tolerability profile demonstrated in the individual studies in the pooled population for overall effect size as well as safety data reporting across the treatment groups. These pooled mirabegron data evaluations inform benefit–risk assessments and support judicious dose selection.\n\nBODY.METHODS.STUDY DESIGN:\nThis was a prespecified pooled analysis of three randomised, double-blind, placebo-controlled, 12-week studies, which evaluated the efficacy of mirabegron 50 and 100 mg, and safety of mirabegron 25 mg (NCT00912964 only), 50 mg or 100 mg for the treatment of symptoms of OAB. Each study shared a similar design, although the assessed doses of mirabegron and inclusion of an active control arm with tolterodine extended release (ER) 4 mg varied across trials (Figure 1). Male and female patients aged ≥ 18 years with OAB symptoms for ≥ 3 months were enrolled. Following a 2-week, single-blind, placebo run-in period to determine baseline symptoms and patient eligibility, patients were randomised if, during a 3-day micturition diary period, they recorded ≥ 8 micturitions/24 h and ≥ 3 urgency episodes – based on urgency grade of 3 or 4 according to the Patient Perception of Intensity and Urgency Scale [PPIUS 24] – with or without urgency incontinence. Patients were excluded with stress incontinence or mixed incontinence with stress predominance at screening or had an average total daily urine volume > 3000 ml. Figure 1Overview of the individual phase III studies included in the pooled analyses. †Screening from weeks −3 to −2; *Evaluation of adverse events and concomitant medication by telephone contact or visit 30 days after Final Visit in studies 046 and 047 and 2 weeks after Final Visit in Study 074. 
n, number of patients randomised Patients were randomised to receive once-daily oral placebo, mirabegron 25 mg (NCT00912964 only), mirabegron 50 mg (NCT00912964/NCT00689104/NCT00662909), mirabegron 100 mg (NCT00689104/NCT00662909) or tolterodine ER 4 mg (NCT00689104 only) for 12 weeks using a computer-generated randomisation scheme; after assignment to treatment, both patient and study investigator were blinded to the identity of the study drug. Efficacy data for treatment arms that were represented in only one study (i.e. mirabegron 25 mg or tolterodine ER 4 mg) were not pooled as these data were only available from one trial. In summary, the efficacy data in these pooled analyses relate to the placebo and 50 mg mirabegron groups from all three phase III studies, and the 100 mg mirabegron group from two of the studies [NCT00662909 21/NCT00689104 22]. In contrast, the pooled 12-week phase III safety analysis includes data for the placebo, 25, 50 and 100 mg mirabegron doses and the tolterodine ER 4 mg active control arm.\n\nBODY.METHODS.EFFICACY END-POINTS:\nEfficacy measures were recorded in a patient micturition diary over a 3-day period prior to clinic visits which occurred at baseline and 4, 8 and 12 weeks/Final Visit (end of treatment, i.e. last on-treatment assessment including patients who did not complete week 12 visit) after randomisation to treatment. The co-primary end-points in all three studies were change from baseline to Final Visit in the mean number of incontinence episodes/24 h and mean number of micturitions/24 h. Key secondary efficacy end-points in all three studies were change from baseline to Final Visit in the mean volume voided/micturition, change from baseline to week 4 in the mean number of incontinence episodes/24 h, and change from baseline to week 4 in the mean number of micturitions/24 h. Study 074 (NCT00912964) had three additional key secondary end-points – mean level of urgency (based on the PPIUS), mean number of urgency episodes (PPIUS grade 3 or 4)/24 h, and the mean number of urgency incontinence episodes/24 h – which were also included as additional secondary end-points in Study 046 (NCT00689104) and Study 047 (NCT00662909) but did not undergo the hierarchical testing procedure for multiple end-points. Further secondary end-points included change from baseline to Final Visit in mean number of nocturia episodes/24 h and patients considered 'responders' to treatment, i.e. the proportion of patients with incontinence episode(s) at baseline with zero incontinence at Final Visit, the proportion of patients with a ≥ 50% decrease from baseline in mean number of incontinence episodes/24 h, and a post hoc analysis of the proportion of patients with ≤ 8 micturitions/24 h at Final Visit. Patient-reported outcomes were assessed using the Treatment Satisfaction-Visual Analogue Scale (TS-VAS), a quantitative instrument measuring subjective improvement in patients with OAB. Complete satisfaction was indicated by a score of 10, with positive change from baseline indicating improvement.\n\nBODY.METHODS.SAFETY END-POINTS:\nThe analysis of the pooled safety data included adverse event (AE) reporting, laboratory parameters (haematology and serum chemistry), vital signs (pulse rate and blood pressure), ECG data and postvoid residual (PVR) volume. AEs of interest were based on potential or theoretical risk related to the pharmacology of β3-agonists, observed findings in non-clinical or clinical data, or recommended monitoring for specific events. 
The most common treatment-emergent AEs (TEAEs), according to the preferred term, are reported herein as TEAEs in ≥ 3% of the total mirabegron population. Hypertension was reported as an AE if average systolic blood pressure (SBP) was ≥ 140 mmHg and/or average diastolic blood pressure (DBP) ≥ 90 mmHg at two consecutive postbaseline visits for normotensive patients at baseline; average SBP increased ≥ 20 mmHg and/or average DBP increased ≥ 10 mmHg at two consecutive postbaseline visits for patients with hypertension at baseline; initiation or increase in antihypertensive medication; or according to clinical assessment by the investigator. Tachycardia was to be reported as an AE if average AM or PM resting pulse rate measured by the patient over 3 days was > 100 bpm (unless in the investigator's opinion this did not constitute an AE of tachycardia). Urinary retention was summarised as a reported TEAE based on spontaneous reporting by the investigator, without a prespecified definition per protocol, and assessed concurrently with changes in PVR volume (change from baseline of ≥ 150 ml).\n\nBODY.METHODS.STATISTICAL ANALYSES:\nThe safety analysis set (SAF) comprised all randomised patients who took ≥ 1 dose of double-blind study drug; the full analysis set (FAS) comprised SAF patients who had ≥ 1 micturition measurement at baseline and ≥ 1 postbaseline micturition measurement; the FAS-incontinence (FAS-I) set comprised FAS patients who reported ≥ 1 incontinence episode at baseline. Efficacy analyses were performed using the FAS except for incontinence episode end-points which used the FAS-I. Safety analyses were performed using the SAF. Descriptive statistics were used for baseline demographic and clinical characteristics. For the efficacy analysis, data for the placebo and 50 mg mirabegron groups from all three studies were pooled, and data for the 100 mg mirabegron groups from two studies were pooled. Hypothesis testing on the change from baseline in incontinence episodes was performed using a separate stratified rank analysis of covariance (ANCOVA) for each pairwise treatment group difference (mirabegron 50 and 100 mg vs. placebo). For all other efficacy end-points, hypothesis testing comparing mirabegron 50 and 100 mg vs. placebo on the change from baseline values was performed using an ANCOVA model including treatment, gender and study as fixed factors, and baseline value as a covariate. Based on the ANCOVA, least squares mean estimates and two-sided 95% confidence intervals (CIs) for mean changes from baseline were derived within treatment groups and between each mirabegron treatment group and placebo on all efficacy end-points. Multiplicity between the two primary and six key secondary efficacy end-points was controlled at the α = 0.05 type I error rate using a stepwise parallel gatekeeping procedure in which the placebo-corrected difference in mean change from baseline for each mirabegron dose had to be statistically significant to allow hypothesis testing to proceed to the next end-point step. The Hochberg procedure was performed at the α = 0.05 significance level for comparisons of the 50 and 100 mg mirabegron treatment groups with placebo to adjust for multiplicity within each stage. If only one mirabegron dose group proceeded to the next stage for any efficacy end-point, the comparison with placebo was assessed at the α = 0.025 level. Responder end-points were analysed using a logistic regression model which included treatment, gender, study and baseline value. 
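As a rough illustration of the ANCOVA described above (change from baseline modelled with treatment group, gender and study as fixed factors and the baseline value as a covariate), a minimal sketch is given below. The synthetic data, column names and the use of Python/statsmodels are assumptions for illustration only; the stratified rank ANCOVA used for the incontinence end-point and the Hochberg/gatekeeping multiplicity adjustments are not reproduced here.

```python
# Minimal ANCOVA sketch (illustrative only): change-from-baseline with
# treatment, gender and study as fixed factors and baseline as covariate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "treatment": rng.choice(["placebo", "mirabegron_50", "mirabegron_100"], n),
    "gender": rng.choice(["F", "M"], n, p=[0.72, 0.28]),
    "study": rng.choice(["046", "047", "074"], n),
    "baseline": rng.normal(11.6, 3.1, n),      # e.g. micturitions/24 h
})
# Synthetic outcome with a modest simulated treatment effect
effect = df["treatment"].map({"placebo": -1.2,
                              "mirabegron_50": -1.75,
                              "mirabegron_100": -1.74})
df["change"] = effect + 0.1 * (df["baseline"] - 11.6) + rng.normal(0, 2.5, n)

# ANCOVA with placebo as the reference level; the dose coefficients are the
# adjusted differences vs. placebo on this toy data.
model = smf.ols(
    "change ~ C(treatment, Treatment('placebo')) + C(gender) + C(study) + baseline",
    data=df,
).fit()
print(model.summary())
```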
For the pooled safety analysis, data from all three studies were included: placebo, 25, 50 and 100 mg mirabegron groups, and tolterodine ER 4 mg arm. All AEs were coded using the Medical Dictionary for Regulatory Activities version 12.1. TEAEs were defined as any AE which started, or worsened, after the first dose of study drug through 30 days after the last dose of study drug. Drug-related TEAEs included those AEs reported as probably or possibly related to study drug as assessed by the investigator or if the AE had a missing relationship. Descriptive statistics were used for measurements of laboratory parameters, ECGs and PVR volume. Change from baseline values for vital sign variables was analysed using the ANCOVA model described above.\n\nBODY.RESULTS.DEMOGRAPHIC AND BASELINE CLINICAL CHARACTERISTICS:\nOverall, there were a total of 3542 patients (placebo, n = 1328; mirabegron 50 mg, n = 1324; mirabegron 100 mg, n = 890) and 2317 patients (placebo, n = 878; mirabegron 50 mg, n = 862; mirabegron 100 mg, n = 577), respectively, in the FAS and FAS-I population for the pooled efficacy analysis. The pooled safety analysis included the additional 25 mg mirabegron dose and tolterodine active control arm and consisted of 4611 patients (mirabegron 25 mg, n = 432; mirabegron 50 mg, n = 1375; mirabegron 100 mg, n = 929; placebo, n = 1380; tolterodine, n = 495) who took at least one dose of double-blind study medication. In the FAS and SAF analyses, the disposition and reasons for discontinuation of study drug were similar for all treatment groups (Figure 2). Figure 2Patient disposition in the pooled efficacy and safety analyses Demographic and baseline characteristics were similar between the FAS and FAS-I population with the exception of gender; approximately 72% and 82% of patients were female in the FAS and FAS-I population, respectively (Table 1). Of the patients who had received prior antimuscarinic OAB medication, approximately two-thirds discontinued prior OAB treatment because of insufficient efficacy, and approximately one-quarter because of poor tolerability. The proportion discontinuing prior OAB treatment for either reason was similar across treatment groups (Table 1). 
Table 1 Patient demographics and baseline clinical characteristics by treatment group (FAS) Placebo ( n = 1328) Mirabegron 50 mg ( n = 1324) Mirabegron 100 mg ( n = 890) Gender, n (%)  Male 362 (27.3) 382 (28.9) 241 (27.1)  Female 966 (72.7) 942 (71.1) 649 (72.9) Age (years)  Mean (SD) 59.2 (13.2) 59.7 (12.6) 59.8 (12.9)  Range 20–95 21–91 19–90 Race, n (%)  White 1227 (92.4) 1235 (93.3) 838 (94.2)  Black or African American 80 (6.0) 61 (4.6) 36 (4.0)  Asian 13 (1.0) 17 (1.3) 8 (0.9)  Other 8 (0.6) 11 (0.8) 8 (0.9) BMI (kg/m 2 ) *  Mean 29.1 (6.3) 29.0 (6.1) 29.1 (6.1)  Range 15.9–58.1 16.3–60.5 16.2–62.6 Type of OAB, n (%)  Urgency incontinence 442 (33.3) 491 (37.1) 297 (33.4)  Mixed 415 (31.3) 412 (31.1) 271 (30.4)  Frequency 471 (35.5) 421 (31.8) 322 (36.2) Duration of OAB (months)  Mean (SD) 86.3 (99.1) 85.2 (93.1) 88.3 (101.6) Previous OAB drug, n (%)  Yes 704 (53.0) 688 (52.0) 460 (51.7) Reasons for previous OAB drug discontinuation, n (%) †  Insufficient effect   Yes 466 (66.2) 464 (67.4) 296 (64.3)   No 238 (33.8) 224 (32.6) 164 (35.7)  Poor tolerability   Yes 185 (26.3) 173 (25.1) 113 (24.6)   No 519 (73.7) 515 (74.9) 347 (75.4) Mean number of incontinence episodes/24 h  Mean (SD) 1.8 (2.5) 1.8 (2.5) 1.8 (2.4)  Range 0–26 0–18 0–15 Mean number of micturitions/24 h  Mean (SD) 11.6 (3.1) 11.7 (3.2) 11.6 (3.0)  Range 4–40 6–37 7–35 FAS, full analysis set; SD, standard deviation; BMI, body mass index; OAB, overactive bladder. * For BMI, placebo ( n = 1327) and mirabegron 100 mg ( n = 889). † Percentages are based on patients who had taken previous OAB medication. Patients could choose more than one reason for discontinuation. \n\nBODY.RESULTS.EFFICACY OUTCOMES:\nMirabegron at doses of 50 and 100 mg once daily demonstrated statistically significant improvement from baseline to Final Visit compared with placebo in reducing the mean number of incontinence episodes/24 h and the mean number of micturitions/24 h (p < 0.05 for all comparisons; Table 2 and Figures 3 and 4, respectively). The adjusted mean (95% CI) change from baseline to Final Visit in the mean number of incontinence episodes/24 h was −1.10 (−1.23, −0.97), −1.49 (−1.63, −1.36) and −1.50 (−1.67, −1.34) in the placebo, mirabegron 50 and 100 mg groups, respectively, and −1.20 (−1.34, −1.06), −1.75 (−1.89, −1.61), −1.74 (−1.91, −1.56) for the adjusted mean change from baseline to Final Visit in the mean number of micturitions/24. Table 2 Summary of primary and secondary efficacy end-points (FAS and FAS-I) Placebo Mirabegron 50 mg Mirabegron 100 mg Co-primary end-points  Change from baseline to Final Visit in the mean number of incontinence episodes/24 h (FAS-I)   Baseline 2.73 (0.09) 2.71 (0.09) 2.79 (0.10)   Final Visit 1.64 (0.09) 1.23 (0.08) 1.25 (0.09)   Change from baseline −1.09 (0.09) −1.48 (0.08) −1.54 (0.09)   Adjusted change from baseline * (95% CI) −1.10 (−1.23, −0.97) −1.49 (−1.63, −1.36) −1.50 (−1.67, −1.34)   Adjusted difference vs. placebo * (95% CI) – −0.40 (−0.58, −0.21) 3 −0.41 (−0.62, −0.19) 3  Change from baseline to Final Visit in the mean number of micturitions/24 h (FAS)   Baseline 11.58 (0.09) 11.70 (0.09) 11.58 (0.10)   Final Visit 10.39 (0.09) 9.93 (0.09) 9.83 (0.11)   Change from baseline −1.18 (0.08) −1.77 (0.08) −1.75 (0.09)   Adjusted change from baseline * (95% CI) −1.20 (−1.34, −1.06) −1.75 (−1.89, −1.61) −1.74 (−1.91, −1.56)   Adjusted difference vs. 
placebo * (95% CI) – −0.55 (−0.75, −0.36) † −0.54 (−0.77, −0.31) † Key secondary end-points  Change from baseline to Final Visit in the mean volume voided/micturition (FAS)   Baseline 159.2 (1.54) 159.0 (1.55) 157.9 (1.89)   Final Visit 168.6 (1.90) 180.2 (2.01) 179.9 (2.39)   Change from baseline 9.4 (1.31) 21.2 (1.31) 22.0 (1.52)   Adjusted change from baseline * (95% CI) 9.4 (6.9, 12.0) 21.4 (18.8, 23.9) 21.7 (18.5, 24.9)   Adjusted difference vs. placebo * (95% CI) – 11.9 (8.3, 15.5) † 12.3 (8.1, 16.5) †  Change from baseline to week 4 in the mean number of incontinence episodes/24 h (FAS-I)   Baseline 2.73 (0.09) 2.71 (0.09) 2.79 (0.10)   Week 4 2.06 (0.10) 1.59 (0.08) 1.69 (0.10)   Change from baseline −0.67 (0.08) −1.12 (0.08) −1.10 (0.09)   Adjusted change from baseline * (95% CI) −0.67 (−0.81, −0.54) −1.12 (−1.26, −0.98) −1.09 (−1.27, −0.92)   Adjusted difference vs. placebo * (95% CI) – −0.45 (−0.64, −0.26) † −0.42 (−0.65, −0.20) †  Change from baseline to week 4 in the mean number of micturitions/24 h (FAS)   Baseline 11.58 (0.09) 11.71 (0.09) 11.58 (0.10)   Week 4 10.82 (0.09) 10.52 (0.10) 10.26 (0.11)   Change from baseline −0.76 (0.07) −1.19 (0.07) −1.32 (0.09)   Adjusted change from baseline * (95% CI) −0.77 (−0.90, −0.64) −1.17 (−1.30, −1.04) −1.33 (−1.50, −1.16)   Adjusted difference vs. placebo * (95% CI) – −0.40 (−0.59, −0.22) † −0.56 (−0.78, −0.35) †  Mean level of urgency (FAS)   Baseline 2.39 (0.02) 2.42 (0.02) 2.46 (0.02)   Final Visit 2.25 (0.02) 2.15 (0.02) 2.19 (0.02)   Change from baseline −0.14 (0.02) −0.27 (0.02) −0.27 (0.02)   Adjusted change from baseline * (95% CI) −0.15 (−0.18, −0.12) −0.26 (−0.30, −0.23) −0.26 (−0.30, −0.22)   Adjusted difference vs. placebo * (95% CI) – −0.11 (−0.16, −0.07) † −0.11 (−0.16, −0.06) †  Mean number of urgency incontinence episodes/24 h (FAS−I)   Baseline 2.42 (0.08) 2.42 (0.08) 2.53 (0.10)   Final Visit 1.46 (0.08) 1.05 (0.07) 1.11 (0.09)   Change from baseline −0.96 (0.08) −1.37 (0.07) −1.42 (0.09)   Adjusted change from baseline * (95% CI) −0.98 (−1.10, −0.86) −1.38 (−1.50, −1.26) −1.38 (−1.53, −1.23)   Adjusted difference vs. placebo * (95% CI) – −0.40 (−0.57, −0.23) † −0.40 (−0.60, −0.20) †  Mean number of urgency episodes (PPIUS grade 3 or 4)/24 h (FAS)   Baseline 5.61 (0.10) 5.80 (0.10) 5.96 (0.12)   Final Visit 4.38 (0.11) 3.85 (0.11) 4.02 (0.13)   Change from baseline −1.23 (0.10) −1.95 (0.10) −1.94 (0.12)   Adjusted change from baseline * (95% CI) −1.29 (−1.47, −1.11) −1.93 (−2.11, −1.75) −1.89 (−2.11, −1.66)   Adjusted difference vs. placebo * (95% CI) – −0.64 (−0.89, −0.39) † −0.60 (−0.89, −0.31) † Additional secondary end-points  Change from baseline to Final Visit in mean number of nocturia episodes/24 h (FAS)   Baseline 2.18 (0.04) 2.22 (0.04) 2.25 (0.05)   Final Visit 1.78 (0.04) 1.66 (0.04) 1.71 (0.05)   Change from baseline −0.41 (0.04) −0.56 (0.04) −0.54 (0.04)   Adjusted change from baseline * (95% CI) −0.42 (−0.48, −0.35) −0.55 (−0.62, −0.49) −0.54 (−0.62, −0.46)   Adjusted difference vs. placebo * (95% CI) – −0.14 (−0.23, −0.05) ‡ −0.12 (−0.23, −0.02) ‡  Change from baseline to Final Visit in TS-VAS (FAS)   Baseline 4.87 (0.11) 4.82 (0.11) 4.43 (0.13)   Final Visit 6.05 (0.10) 6.79 (0.09) 6.94 (0.11)   Change from baseline 1.18 (0.11) 1.96 (0.12) 2.51 (0.14)   Adjusted change from baseline * (95% CI) 1.25 (1.08, 1.42) 2.01 (1.84, 2.19) 2.33 (2.11, 2.55)   Adjusted difference vs. placebo * (95% CI) – 0.76 (0.52, 1.01) ‡ 1.08 (0.80, 1.37) ‡ All values mean (SE) unless otherwise stated. 
* Estimates are based on an analysis of covariance model, which included treatment group, gender and study as fixed factors, and baseline as a covariate. † p < 0.05 vs. placebo with multiplicity adjustment. ‡ p < 0.05 vs. placebo without multiplicity adjustment. Figure 3Adjusted mean change from baseline (±SE) by time point in the mean number of incontinence episodes/24 h for the pooled placebo, mirabegron 50 and 100 mg groups (FAS-I). #Statistically significant treatment benefit relative to placebo (p < 0.05) with multiplicity adjustment. *Statistically significant treatment benefit relative to placebo (p < 0.05) without multiplicity adjustment. SE, standard error; FAS-I, full analysis set-incontinence. Figure 4Adjusted mean change from baseline (±SE) by time point in mean number of micturitions/24 h for the pooled placebo, mirabegron 50 and 100 mg groups (FAS). #Statistically significant treatment benefit relative to placebo (p < 0.05) with multiplicity adjustment. *Statistically significant treatment benefit relative to placebo (p < 0.05) without multiplicity adjustment. SE, standard error; FAS, full analysis set. The key secondary end-points, change from baseline to week 4 (first time point assessed) in the mean number of incontinence episodes/24 h and mean number of micturitions/24 h, were significantly improved for both mirabegron doses compared with placebo (p < 0.05, all comparisons; Table 2 and Figures 3 and 4, respectively). The adjusted mean (95% CI) change from baseline to week 4 in the mean number of incontinence episodes/24 h was −0.67 (−0.81, −0.54), −1.12 (−1.26, −0.98) and −1.09 (−1.27, −0.92) in the placebo, mirabegron 50 and 100 mg groups, respectively. The adjusted mean (95% CI) change from baseline to week 4 in the mean number of micturitions/24 h was −0.77 (−0.90, −0.64), −1.17 (−1.30, −1.04) and −1.33 (−1.50, −1.16) in the placebo, mirabegron 50 and 100 mg groups, respectively. Both 50 and 100 mg mirabegron doses were associated with statistically significant improvements from baseline to Final Visit compared with placebo for the key secondary efficacy end-points of mean volume voided/micturition (p < 0.05 for all comparisons; Table 2), mean level of urgency, mean number of urgency episodes (grade 3 or 4)/24 h and mean number of urgency incontinence episodes/24 h (p < 0.05 for all comparisons; Table 2 and Figure 5). Figure 5Adjusted mean change from baseline to Final Visit (SE) in key secondary end-points: (A) mean level of urgency (FAS) (B) number of urgency episodes/24 h (FAS) and number of urgency incontinence episodes/24 h (FAS-I). #Statistically significant treatment benefit compared with placebo (p < 0.05) with multiplicity adjustment. SE, standard error; FAS, full analysis set; FAS-I, full analysis set-incontinence. Significant improvements were also evident in the pooled efficacy analysis with respect to the change from baseline to Final Visit (95% CI) in the mean number of nocturia episodes/24 h in the mirabegron 50 mg [−0.55 (−0.62, −0.49)] and mirabegron 100 mg groups [−0.54 (−0.62, −0.46)] vs. placebo [−0.42 (−0.48, −0.35)] (p < 0.05; Table 2). For the responder analyses, the percentage of patients with zero incontinence episodes at Final Visit ('dry rate') was significantly higher in the mirabegron 50 and 100 mg groups (44.1% and 46.4%, respectively) compared with placebo (37.8%). The corresponding odds ratios (95% CI) for the mirabegron 50 and 100 mg groups were 1.32 (1.08, 1.61) and 1.58 (1.25, 2.00), respectively (p < 0.05). 
The percentage of patients with a ≥ 50% reduction from baseline to Final Visit in mean number of incontinence episodes/24 h was also significantly higher in the mirabegron 50 and 100 mg groups (69.5% and 70.5%, respectively) compared with placebo (59.6%). The corresponding odds ratios (95% CI) for the mirabegron 50 and 100 mg groups were 1.54 (1.26, 1.89) and 1.64 (1.29, 2.07), respectively (p < 0.05). In the post hoc responder analysis, both 50 and 100 mg mirabegron doses were also associated with a significant improvement in 'micturition normalisation' (i.e. the percentage of patients with ≤ 8 micturitions/24 h) at Final Visit (31.6% and 34.0%) compared with placebo (24.6%). The corresponding odds ratios (95% CI) for the mirabegron 50 and 100 mg groups were 1.57 (1.30, 1.89) and 1.69 (1.37, 2.09), respectively (p < 0.05). In health-related quality of life outcomes, mirabegron 50 and 100 mg was associated with a statistically significant improvement in the TS-VAS score at Final Visit compared with placebo (p < 0.05; Table 2). The adjusted mean (95% CI) changes from baseline to Final Visit were 1.25 (1.08, 1.42), 2.01 (1.84, 2.19) and 2.33 (2.11, 2.55) for the placebo, mirabegron 50 and 100 mg groups, respectively.\n\nBODY.RESULTS.EFFICACY IN PATIENTS WITH PRIOR ANTIMUSCARINIC USE AND TREATMENT-NAÏVE PATIENTS:\nBoth mirabegron 50 and 100 mg doses improved the co-primary end-points in patients who had discontinued previous antimuscarinic medication and in treatment-naïve patients. The adjusted mean difference (95% CI) from baseline to Final Visit in the mean number of incontinence episodes/24 h in patients who had discontinued previous antimuscarinic medication was −0.92 (−1.09, −0.75), −1.49 (−1.66, −1.32) and −1.42 (−.64, −1.21) in the placebo, mirabegron 50 mg and mirabegron 100 mg groups, respectively, and −1.35 (−1.55, −1.14), −1.50 (−1.71, −1.29) and −1.62 (−1.87, −1.36) in treatment-naïve patients. The adjusted mean difference (95% CI) in the mean number of micturitions/24 h from baseline to Final Visit in the placebo, mirabegron 50 and 100 mg groups, respectively, was −0.93 (−1.12, −0.74), −1.67 (−1.86, −1.48) and −1.61 (−1.85, −1.37) in patients who discontinued previous antimuscarinic therapy, and −1.51 (−1.71, −1.31), −1.84 (−2.04, −1.64) and −1.87 (−2.12, −1.63) in treatment-naïve patients. The adjusted mean difference vs. placebo in patients who discontinued previous antimuscarinic therapy was of a greater magnitude than in treatment-naïve patients because of the higher placebo response in treatment-naïve patients.\n\nBODY.RESULTS.SAFETY OUTCOMES:\nThe overall incidence of TEAEs was similar across treatment groups (Table 3) and there was no evidence of a dose–response relationship across the mirabegron treatment groups for overall rates of TEAEs: placebo (47.7%), mirabegron 25 mg (48.6%), mirabegron 50 mg (47.1%), mirabegron 100 mg (43.3%), total mirabegron (46.0%) and tolterodine ER 4 mg (46.7%). The most common TEAEs reported in ≥ 3% of the total mirabegron group were hypertension, nasopharyngitis and urinary tract infection (Table 3). The incidence of hypertension and urinary tract infection decreased as the dose of mirabegron increased from 25 to 100 mg. The most common drug-related TEAEs in the mirabegron groups were hypertension and headache, which were of similar incidence with placebo and tolterodine, with the exception of dry mouth, which was at placebo level for mirabegron but of a higher magnitude in the tolterodine group (Table 3). 
The majority of AEs were of mild or moderate severity across treatment groups. Table 3 Overview of treatment-emergent adverse events in the pooled safety analysis (SAF) Mirabegron Number of patients (%) Placebo ( n = 1380) 25 mg ( n = 432) 50 mg ( n = 1375) 100 mg ( n = 929) Total ( n = 2736) Tolterodine ER 4 mg ( n = 495) Any TEAE 658 (47.7) 210 (48.6) 647 (47.1) 402 (43.3) 1259 (46.0) 231 (46.7) Drug-related TEAE 232 (16.8) 87 (20.1) 256 (18.6) 172 (18.5) 515 (18.8) 131 (26.5) TEAE leading to discontinuation 46 (3.3) 17 (3.9) 53 (3.9) 34 (3.7) 104 (3.8) 22 (4.4) Drug-related TEAE leading to discontinuation 27 (2.0) 11 (2.5) 35 (2.5) 25 (2.7) 71 (2.6) 20 (4.0) SAE 29 (2.1) 7 (1.6) 29 (2.1) 26 (2.8) 62 (2.3) 11 (2.2) Drug-related SAE 6 (0.4) 3 (0.7) 7 (0.5) 3 (0.3) 13 (0.5) 6 (1.2) Common TEAEs by preferred term (reported by ≥ 3% in total mirabegron group)  Hypertension 105 (7.6) 49 (11.3) 103 (7.5) 48 (5.2) 200 (7.3) 40 (8.1)  Nasopharyngitis 35 (2.5) 15 (3.5) 54 (3.9) 25 (2.7) 94 (3.4) 14 (2.8)  Urinary tract infection 25 (1.8) 18 (4.2) 40 (2.9) 25 (2.7) 83 (3.0) 10 (2.0) Antimuscarinic AEs of interest by preferred term (reported by ≥ 2% in any group)  Headache 43 (3.1) 10 (2.3) 47 (3.4) 23 (2.5) 80 (2.9) 18 (3.6)  Dry mouth 29 (2.1) 8 (1.9) 23 (1.7) 23 (2.5) 54 (2.0) 50 (10.1)  Constipation 20 (1.4) 7 (1.6) 22 (1.6) 15 (1.6) 44 (1.6) 10 (2.0) Drug-related * \n TEAEs by preferred term (reported by ≥ 2% in any group)  Hypertension 63 (4.6) 30 (6.9) 65 (4.7) 32 (3.4) 127 (4.6) 30 (6.1)  Headache 18 (1.3) 4 (0.9) 28 (2.0) 12 (1.3) 44 (1.6) 11 (2.2)  Dry mouth 22 (1.6) 7 (1.6) 13 (0.9) 20 (2.2) 40 (1.5) 47 (9.5) SAF, safety analysis set; ER, extended release; TEAE, treatment-emergent adverse event; SAE, serious adverse event. * Possible or probable, as assessed by the investigator, or records where relationship was missing. The proportion of patients discontinuing study drug because of TEAEs was low and similar across each group: placebo (3.3%), mirabegron 25 mg (3.9%), mirabegron 50 mg (3.9%), mirabegron 100 mg (3.7%), total mirabegron (3.8%) and tolterodine ER 4 mg (4.4%). The incidence of serious AEs was also low and similar between placebo and mirabegron groups: placebo (2.1%), mirabegron 25 mg (1.6%), mirabegron 50 mg (2.1%), mirabegron 100 mg (2.8%), total mirabegron (2.3%) and tolterodine ER 4 mg (2.2%). Three deaths were reported in this pooled 12-week safety analysis: in the placebo group, a 76-year-old woman died as a result of cardiac arrest 56 days after the last dose of study drug; in the mirabegron 100 mg group, a 66-year-old woman died because of metastatic colon cancer 30 days after the last dose of study drug – neither case was considered to be related to study drug – and in the tolterodine ER 4 mg group, a 74-year-old man died because of a ruptured cerebral aneurysm on day 70 (treatment-emergent), 10 days after the last dose of study drug was administered, and was considered as possibly related to study drug. In terms of vital sign measurements for blood pressure and pulse rate, the adjusted mean difference vs. placebo for change from baseline to Final Visit in SBP and DBP (AM and PM measurements) was negligible and comparable across each mirabegron group and the tolterodine ER 4 mg group. The adjusted mean difference vs. placebo for change from baseline to Final Visit in pulse rate (AM and PM measurements) increased dose-dependently by 0.6–2.3 bpm in the mirabegron groups and by 1.0–2.1 bpm in the tolterodine ER 4 mg group (Table 4). 
The overall occurrence of tachycardia events, either based on TEAE and/or observations of pulse rate ≥ 100 bpm captured by patient diary, was less than 5% in each treatment group and comparable with placebo (Table 4). ECG findings revealed no overt trends in central tendency or categorical outliers for QTc interval assessment across treatment groups. Changes in haematology and serum chemistry, including renal parameters, were small and comparable across treatment groups. Table 4 Summary of changes from baseline to Final Visit in blood pressure and pulse rate Placebo ( n = 1380) Mirabegron 25 mg ( n = 432) Mirabegron 50 mg ( n = 1375) Mirabegron 100 mg ( n = 929) Tolterodine ER 4 mg ( n = 495) AM PM AM PM AM PM AM PM AM PM Blood pressure (mmHg)  SBP   N 1329 1326 410 410 1327 1327 891 890 476 476  Baseline, mean (SE) 125.9 (0.47) 125.0 (0.41) 129.2 (0.81) 129.0 (0.71) 126.4 (0.47) 125.6 (0.43) 125.0 (0.55) 123.7 (0.49) 128.2 (0.75) 127.4 (0.64)  Final Visit, mean (SE) 126.2 (0.45) 125.7 (0.41) 128.8 (0.75) 128.3 (0.67) 127.2 (0.46) 126.6 (0.41) 125.6 (0.51) 125.2 (0.46) 128.4 (0.70) 127.9 (0.63)  Adjusted change from baseline, mean (SE), (95% CI) † 0.2 (0.25), (−0.3, 0.7) 0.6 (0.25), (0.1, 1.1) −0.3 (0.52), (−1.3, 0.7) −0.5 (0.53), (−1.5, 0.6) 0.8 (0.25), (0.3, 1.3) 1.1 (0.25), (0.6, 1.6) 0.6 (0.32), (−0.0, 1.2) 1.4 (0.33), (0.8, 2.1) 0.1 (0.45), (−0.8, 1.0) 0.5 (0.46), (−0.4, 1.4)  Difference vs. placebo, mean (SE), (95% CI) ‡ – – −0.5 (0.57), (−1.6, 0.6) −1.0 (0.58), (−2.2, 0.1) 0.6 (0.35), (−0.1, 1.3) 0.5 (0.36), (−0.2, 1.2) 0.4 (0.41), (−0.4, 1.2) 0.9 (0.42), (0.1, 1.7) −0.1 (0.52), (−1.1, 1.0) −0.0 (0.53), (−1.1, 1.0)  DBP   N 1329 1326 410 410 1327 1327 890 890 476 476  Baseline, mean (SE) 77.1 (0.26) 75.3 (0.23) 78.2 (0.48) 76.1 (0.46) 77.2 (0.25) 75.4 (0.24) 77.4 (0.30) 75.3 (0.28) 76.8 (0.40) 75.4 (0.38)  Final Visit, mean (SE) 77.2 (0.25) 75.7 (0.24) 77.6 (0.43) 75.7 (0.42) 77.6 (0.24) 76.2 (0.24) 77.7 (0.28) 76.3 (0.28) 77.7 (0.39) 76.6 (0.36)  Adjusted change from baseline, mean (SE), (95% CI) † 0.0 (0.16), (−0.3, 0.3) 0.4 (0.16), (0.1, 0.7) −0.1 (0.33), (−0.8, 0.5) 0.1 (0.34), (−0.6, 0.7) 0.4 (0.16), (0.1, 0.7) 0.7 (0.16), (0.4, 1.1) 0.2 (0.21), (−0.2, 0.6) 0.9 (0.21), (0.5, 1.3) 0.8 (0.29), (0.2, 1.3) 1.4 (0.30), (0.8, 2.0)  Difference vs. placebo, mean (SE), (95% CI) ‡ – – −0.1 (0.37), (−0.9, 0.6) −0.3 (0.37), (−1.0, 0.4) 0.4 (0.22), (−0.1, 0.8) 0.4 (0.23), (−0.1, 0.8) 0.2 (0.26), (−0.3, 0.7) 0.5 (0.27), (−0.0, 1.0) 0.7 (0.33), (0.1, 1.4) 1.0 (0.34), (0.4, 1.7) Pulse rate (bpm)   N 1329 1326 410 410 1327 1327 891 890 476 476  Baseline, mean (SE) 70.5 (0.28) 75.3 (0.29) 71.0 (0.50) 75.5 (0.51) 70.4 (0.28) 74.9 (0.28) 70.4 (0.34) 74.4 (0.34) 69.8 (0.44) 73.9 (0.45)  Final Visit, mean (SE) 70.9 (0.29) 74.7 (0.29) 71.7 (0.52) 75.3 (0.53) 71.8 (0.29) 75.5 (0.28) 72.9 (0.34) 76.5 (0.34) 71.4 (0.44) 76.0 (0.45)  Adjusted change from baseline, mean (SE), (95% CI) † 0.4 (0.17), (0.1, 0.8) −0.4 (0.18), (−0.8, −0.1) 1.3 (0.36), (0.6, 2.0) 0.2 (0.37), (−0.6, 0.9) 1.4 (0.17), (1.1, 1.8) 0.6 (0.18), (0.2, 0.9) 2.3 (0.22), (1.9, 2.8) 1.9 (0.23), (1.4, 2.3) 1.4 (0.31), (0.8, 2.0) 1.7 (0.32), (1.1, 2.3)  Difference vs. 
placebo, mean (SE), (95% CI) ‡ – – 0.9 (0.40), (0.1, 1.6) 0.6 (0.41), (−0.2, 1.4) 1.0 (0.24), (0.5, 1.5) 1.0 (0.25), (0.5, 1.5) 1.9 (0.28), (1.3, 2.5) 2.3 (0.29), (1.7, 2.9) 1.0 (0.36), (0.3, 1.7) 2.1 (0.37), (1.4, 2.8) Incidence of tachycardia (%) *  Tachycardia events, N (%) 43 (3.1) 21 (4.9) 52 (3.8) 43 (4.6) 16 (3.2) SBP, systolic blood pressure; DBP, diastolic blood pressure. * Based on TEAE and/or observations of pulse rate ≥ 100 bpm captured by patient diary. † Adjusted change from baseline is generated from the ANCOVA model with treatment group, gender and study as fixed factors, and baseline as a covariate. ‡ Differences in the adjusted means are calculated by subtracting the adjusted mean of placebo from the adjusted mean of treatment group. With respect to other events of interest, the incidence of urinary retention was infrequent in the studies and less in mirabegron-treated patients than in placebo or tolterodine-treated patients: placebo (n = 7; 0.5%), mirabegron 25 mg (n = 0), mirabegron 50 mg (n = 1; 0.1%), mirabegron 100 mg (n = 0), total mirabegron (n = 1; < 0.1%) and tolterodine (n = 3; 0.6%). The mean change from baseline to Final Visit in PVR volume was unremarkable across treatment groups: placebo (−1.6 ml), mirabegron 25 mg (−3.0 ml), mirabegron 50 mg (−0.9 ml), mirabegron 100 mg (−0.7 ml) and tolterodine ER 4 mg (+ 0.1 ml). The proportion of patients experiencing ≥ 150 ml change from baseline in PVR volume was lower in the mirabegron groups compared with placebo and tolterodine: placebo (0.7%), mirabegron 25 mg (0%), mirabegron 50 mg (0.3%), mirabegron 100 mg (0.4%) and tolterodine ER 4 mg (0.8%).\n\nBODY.DISCUSSION:\nMirabegron, a β3-adrenoceptor agonist, is approved for the treatment of OAB symptoms in Japan, the USA, Canada and Europe, and has a different mechanism of action from the antimuscarinic agents currently available. Mirabegron facilitates detrusor relaxation during the storage phase of the micturition cycle and, in contrast with antimuscarinic agents, improves the storage capacity of the bladder without impairing the amplitude of the contraction during the voiding phase of the micturition cycle 25. In this pooled analysis of three international phase III studies 21–23, a total of 3542 OAB patients were included in the efficacy analysis, and 4611 OAB patients in the safety analysis, reflective of the general OAB population. More than half of these patients were randomised to mirabegron for 12 weeks at doses of 25, 50 and 100 mg. Furthermore, this pooled dataset included approximately 50% of OAB patients who had been previously treated with antimuscarinic agents but had discontinued either because of insufficient efficacy and/or poor tolerability. In OAB patients, mirabegron 50 and 100 mg demonstrated significant and clinically meaningful efficacy in terms of improving the symptoms of OAB, including incontinence episodes, micturition frequency and urgency, which are recognised by international guidelines as important outcome measures for OAB therapies 26. The benefit of mirabegron is evident in both treatment-naïve and previous OAB-treated patients. This is a clinically relevant observation, given the poor persistence rate associated with antimuscarinic drugs – discontinuation rates above 50% and 75% have been reported at 6 months and 1 year, respectively 27,28 – and the fact that, in addition to bothersome AEs (e.g., dry mouth), inadequate response to treatment is an underlying reason for discontinuation of antimuscarinic treatment 27,29–31. 
Both 50 and 100 mg doses of mirabegron demonstrated significant improvements compared with placebo for all of the key secondary end-points, including measures of urgency (the core symptom of OAB), and mean volume voided/micturition. Across primary and key secondary end-points, significant improvements with both mirabegron doses were evident at the first assessment period (week 4) and were maintained throughout the 12-week treatment period – an earlier benefit cannot be ruled out but was not assessed prior to the initial 4-week evaluation. Responder analyses showed a significant improvement with mirabegron 50 and 100 mg in terms of dry rates, ≥ 50% reduction in mean number of incontinence episodes/24 h and the proportion of patients with ≤ 8 micturitions/24 h at Final Visit. Taken together, these data demonstrate a clinically meaningful benefit with mirabegron in the objective end-points of OAB. As OAB is a symptom complex, it can be difficult to determine the efficacy of treatments using objective measures alone. Hence, patient-reported outcomes are increasingly important in the assessment of treatments for symptom-based conditions such as OAB 4. The TS-VAS scores for the 50 and 100 mg mirabegron doses were significantly better and approximately one point higher than patients receiving placebo and reflect the directional parallelism of the subjective and objective improvements in OAB symptoms. The mirabegron 100 mg dose does not appear to afford a greater efficacy benefit than the 50 mg dose for reduction in incontinence episodes or micturition frequency vs. placebo. A dose–response effect was not evident with the 50 and 100 mg doses across the co-primary and secondary end-points. Although data from the 25 mg mirabegron arm 23 were not included in the pooled efficacy analysis, 25 mg is the recommended starting dose in the USA. In the NCT00912964 phase III study, mirabegron 25 mg showed clinically significant efficacy compared with placebo for the co-primary end-points; however, for the majority of the secondary end-points it did not demonstrate significance vs. placebo 23. In terms of the safety and tolerability profile of mirabegron, the pooled analysis confirms the results of the individual phase III studies 21–23. The most common AEs (> 3% of total mirabegron group) were hypertension, nasopharyngitis and urinary tract infection. The incidence of hypertension and urinary tract infection decreased as the mirabegron dose increased and both incidences in the total mirabegron group were similar to placebo or tolterodine. The incidence of the most bothersome AE for antimuscarinic drugs, dry mouth, was at placebo level with each mirabegron dose and lower than tolterodine. This finding is consistent with previous randomised, controlled trials of other antimuscarinic therapies, which have reported even higher incidences of dry mouth (9–93%); these studies have also reported high incidences of constipation (4–21%) 32–41. Given that dry mouth is the chief cause of discontinuation because of poor tolerability with antimuscarinic drugs 27,29, the pooled safety data indicate that mirabegron may be a valuable treatment option for patients with OAB. The pooled safety analysis offers more precise estimates of certain safety parameters, such as vital signs and AEs related to cardiovascular safety, than the individual phase III studies. In this pooled analysis, mirabegron was associated with approximate increases of ≤ 1 mmHg in blood pressure and ≥ 1 bpm in pulse rate vs. placebo. 
The incidence of hypertension was similar among the total mirabegron group, placebo and tolterodine. The incidence of urinary retention was greater in the placebo and tolterodine groups than in mirabegron-treated patients. ECG, PVR volume and laboratory data were unremarkable across each treatment group. Overall, the data support the good safety profile of mirabegron based on this pooled phase III clinical study dataset.\n\nBODY.CONCLUSIONS:\nMirabegron, a β3-adrenoceptor agonist, and the first in its class for the treatment of OAB symptoms, has a distinct mechanism of action from antimuscarinic agents, the current pharmacological mainstay in treating OAB. In these pooled analyses of efficacy and safety, mirabegron administered at daily doses of 50 and 100 mg for 12 weeks demonstrated significant and clinically meaningful improvement in treating the symptoms of OAB, including urinary incontinence, micturition frequency and urgency. Mirabegron was beneficial in both treatment-naïve patients and in patients who used prior antimuscarinic treatment. The improvement in OAB symptoms measured objectively was reflected in the improvement in the patient-reported outcome related to treatment satisfaction. Mirabegron was well tolerated with an incidence of dry mouth at placebo level and lower than tolterodine. Mirabegron represents a new therapeutic in a new class of medication for the treatment of OAB and has demonstrable efficacy and tolerability in a large clinical trial dataset in OAB patients. In clinical practice, mirabegron will provide a valuable treatment option for OAB syndrome.\n\n**Question:** Compared to placebo what was the result of mirabegron (50 and 100 mg) on the Treatment Satisfaction-Visual Analogue Scale (TS-VAS)?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
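The responder analysis in this article reports odds ratios with 95% CIs for the ≥ 50% reduction in incontinence episodes (1.54 and 1.64 vs. placebo). As a worked illustration of how such an interval is obtained, the sketch below computes an unadjusted odds ratio with a Woolf log-scale confidence interval from hypothetical responder counts chosen only to mirror the reported rates; the published values presumably come from a covariate-adjusted logistic model, so this is an approximation of the idea, not the trial's actual method.

```python
import math

def odds_ratio_ci(resp_t, n_t, resp_c, n_c, z=1.96):
    """Unadjusted odds ratio with a Woolf (log-scale) 95% CI.

    resp_t / n_t: responders and total in the treatment arm
    resp_c / n_c: responders and total in the comparator arm
    """
    a, b = resp_t, n_t - resp_t          # treatment: responders / non-responders
    c, d = resp_c, n_c - resp_c          # comparator: responders / non-responders
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts chosen only to mirror the reported responder rates
# (~69.5% vs ~59.6% with a >= 50% reduction in incontinence episodes/24 h).
print(odds_ratio_ci(resp_t=695, n_t=1000, resp_c=596, n_c=1000))
```

With these assumed counts the point estimate lands near the reported 1.54; that is a sanity check on the arithmetic, not a reproduction of the trial's adjusted analysis.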
[ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ]
significantly increased
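Each record in this dump pairs a single user message (trial article plus ICO question) with a single assistant answer and a gold label drawn from three classes. Note that the stored answers and labels use past-tense forms ("significantly increased") while the prompt instructions ask for "significantly increase". The sketch below shows one way such a row might be validated; the field names and the exact serialisation are assumptions for illustration, not taken from the underlying file.

```python
import re

# The three label classes visible in this dump (past-tense forms, even though
# the prompt asks for "significantly increase"/"significantly decrease").
LABELS = {"significantly increased", "no significant difference", "significantly decreased"}

def normalise_answer(text: str) -> str:
    """Map an answer like '**Answer:** significantly increase\\n' onto the label vocabulary."""
    cleaned = re.sub(r"\*\*answer:\*\*", "", text.strip().lower()).strip()
    # Tolerate the tense mismatch between the prompt instructions and the stored labels.
    cleaned = {"significantly increase": "significantly increased",
               "significantly decrease": "significantly decreased"}.get(cleaned, cleaned)
    return cleaned

def check_row(row: dict) -> bool:
    """True if the stored completion agrees with the row's label."""
    answer = row["completion"][0]["content"]
    return normalise_answer(answer) == row["label"] and row["label"] in LABELS

# Hypothetical row shaped like the ones shown above (field names are placeholders).
row = {"id": "1181",
       "prompt": [{"role": "user", "content": "...article and question..."}],
       "completion": [{"role": "assistant", "content": "**Answer:** significantly increased\n"}],
       "label": "significantly increased"}
print(check_row(row))  # True
```

Normalising tense before comparison avoids spurious mismatches between model outputs that follow the prompt instructions literally and the labels stored here.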
1181
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Effects of a balance-based exergaming intervention using the Kinect sensor on posture stability in individuals with Parkinson’s disease: a single-blinded randomized controlled trial\n\n ABSTRACT.BACKGROUND:\nThe present study examined the effects of a balance-based exergaming intervention using the Kinect sensor on postural stability and balance in people with Parkinson's disease (PD).\n\nABSTRACT.METHODS:\nWe conducted a subject-blinded, randomized controlled study. Twenty people with PD (Hoehn and Yahr stages I through III) were recruited and randomly assigned to either a balance-based exergaming group (N = 10) or a balance training group (N = 10) for an 8-week balance training period. Postural stability was assessed using the limits of stability (LOS) and one-leg stance (OLS) tests. Balance was assessed using the Berg Balance Scale (BBS) and the timed up and go (TUG) test. Participants were assessed pre- and post-training.\n\nABSTRACT.RESULTS:\nAfter training, participants in the balance-based exergaming group showed significant improvements in LOS performance, and in the eyes-closed condition of the OLS test. Both training programs led to improvements in BBS and TUG performance. Furthermore, balance-based exergaming training resulted in significantly better performance in directional control in the LOS test (78.9 ± 7.65 %) compared with conventional balance training (70.6 ± 9.37 %).\n\nABSTRACT.CONCLUSIONS:\nBalance-based exergaming training resulted in a greater improvement in postural stability compared with conventional balance training. Our results support the therapeutic use of exergaming aided by the Kinect sensor in people with PD.\n\nABSTRACT.TRIAL REGISTRATION:\nClinicalTrials.gov.NCT02671396\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nThe online version of this article (doi:10.1186/s12984-016-0185-y) contains supplementary material, which is available to authorized users.\n\nBODY.BACKGROUND:\nPeople with idiopathic Parkinson's disease (PD) commonly exhibit postural instability during daily activities [1]. PD-related balance impairment is associated with a loss of mobility and increased likelihood of falls, and can cause marked disability [2, 3]. To ameliorate postural instability, techniques using external feedback with cueing or sensory stimuli have been investigated [4, 5]. Several studies suggest that external feedback may initiate other neural pathways and play a significant role in the volitional control of movements for people with PD [6, 7]. Virtual reality (VR) technologies such as exergaming may have therapeutic value in the treatment of postural instability [8–10]. VR is a technology that allows the user to interact directly with a computer-simulated environment [11]. Exergames are computer games that are controlled by body movements. 
VR and exergaming can provide augmented feedback in real time, while a person performs specific motor tasks [12]. Opportunities for repeated accurate performance can be incorporated into VR and exergaming to enhance motor learning [7, 13]. Moreover, VR games can be effective for retaining participants' interest and motivation. A recent meta-analysis suggested that exergaming may provide an appropriate training approach to improve balance and functional mobility in healthy older people [14]. These findings raise the possibility that exergaming might also provide an approach for improving postural instability for people with PD. A previous study examined the effects a 6-week home-based balance training program using the Wii Fit game for a total of 18 training sessions on balance and functional abilities in people with PD, compared with a group of paired healthy participants [15]. Another study investigated the effects of Wii-based training compared with conventional balance training for 7 weeks (a total of 14 training sessions) on activities of daily living in people with PD [16]. Both studies revealed positive effects of exergaming on balance, functional abilities and activities of daily living among people with PD. However, positive effects were found only within groups, with no between-group differences observed in a comparison with the control group. The absence of between-group differences may have resulted from an inability to capture the full-body motion involved in postural control, or the lack of a sufficiently sensitive sensor to accurately measure motion. The shortcomings of the Wii system's sensors may limit its potential as an effective intervention [17]. A new exergaming system was recently developed using the Kinect sensor. The Kinect sensor is a low-cost device that can provide measurements for most of the main human joints. Previous studies reported that a kinematic measurement method using the Kinect sensor was accurate and reliable for measuring postural control [18, 19]. These findings suggest that the Kinect sensor could provide a useful tool for therapeutic use. However, there has been little research into the therapeutic use of the Kinect sensor to date. The present study sought to test a therapeutic application of exergaming using the Kinect sensor. We examined the effects of an 8-week balance-based exergaming program developed in our lab, compared with an 8-week period of conventional balance training (16 training sessions), on postural stability and balance in people with PD. We hypothesized that participants who underwent an 8-week balance-based exergaming intervention would demonstrate superior performance on measures of postural stability and balance, compared with those who received balance training.\n\nBODY.METHODS.PARTICIPANTS:\nParticipants were recruited from Mackay Memorial Hospital in Taipei. Outpatients with PD were informed about the study by a neurologist. Eligibility required a diagnosis of idiopathic PD according to the United Kingdom Brain Bank Criteria [20] by the same neurologist. Information on age, gender, the more affected side, and disease duration were obtained through patient interviews and from medical charts. All participants met the following inclusion criteria: (1) Hoehn and Yahr stages I through III, (2) a score of ≥ 24 on the mini-mental state examination, (3) stable medication usage and (4) standing unaided to perform the measurement and training. 
The exclusion criteria were as follows: (1) histories of other neurological, cardiovascular, or orthopedic diseases affecting postural stability and (2) uncontrolled chronic diseases. In total, 48 individuals were identified as potential participants for this study. Of these, 22 participants gave informed consent and participated in the study.\n\nBODY.METHODS.STUDY DESIGN:\nThis study was a subject-blinded, randomized controlled trial. The study protocol was approved by the Institutional Review Board of Mackay Memorial Hospital (reference number: 13MMHIS120) and was explained to all participants before their participation. The study was performed in accordance with the Declaration of Helsinki. Block randomization was used to assign participants to either the balance-based exergaming (BE) or the conventional balance training (BT) group. Assignment was performed by an independent person who selected one of a set of sealed envelopes 30 min before the intervention began. Participants in the BE and BT groups received an 8-week balance-based exergaming intervention, and conventional balance training, respectively. Measures of postural stability and functional balance were measured pre- and post-training. The measurement and intervention were conducted with participants in the \"on\" state, when they were moving freely and easily without dystonia, excessive rigidity or tremor. The data were collected in a university laboratory.\n\nBODY.METHODS.INTERVENTION:\nParticipants in both groups underwent balance training for 50 min per session, two sessions every week, for 8 weeks. Each training session began with a 10-min warm-up and ended with a 10-min cool-down. Both the warm-up and cool-down periods focused on stretching exercises of the trunk and extremities. Participants in the BE group received a 30-min balance-based exergaming intervention using the Kinect sensor (Microsoft Corporation, Redmond, WA, USA). The Kinect sensor incorporates infrared light and a video camera, which creates a 3D map of the area in front of it. This device provides full-body 3D motion capture. Four exergaming programs were used for training (Fig. 1), designed to incorporate an appropriate level of challenge to match the ability and fitness of people with PD. The first program was called Reaching task 1. In this task, participants were asked to reach toward a stationary target at a given location. The second program was called Reaching task 2. Participants were asked to track a moving object by lengthening the arm and immersing the hand into the object as it flew in 3D space. The third program was called Obstacle avoidance. Participants were instructed to avoid upcoming obstacles that approached from varying directions at random, by moving the body right/left or up/down. The final task was called Marching. Participants were instructed to step alternately without going forward or backward while following dynamic bars that were automatically rising and falling at a predetermined speed and frequency. During the training duration, the challenge level was increased progressively by adjusting the amplitude, frequency, speed, complexity and number of hints. The details of the exergaming programs are shown in Table 1.Fig. 1Screen shots of interaction with the exergaming program. 
Four exergaming programs, Reaching task 1 (a), Reaching task 2 (b), Obstacle avoidance (c) and Marching (d), were designed and used for trainingTable 1Program of balance-based exergaming interventionProgramActionProgressionMotor demandReaching task 1Standing in a given area and reaching toward a stationary target at different heights, depths and in different directions• Reaching length• Number of targets• Range of distribution• Amount of repetition• Weight shifting• Challenging limits of stability• Functional transitionsReaching task 2Standing in a given area and tracking a moving object while extending arm and immersing the hand into the object as it flew in 3D space• Speed• Moving range• Pathway pattern• Remembered sequence or course of trajectory• Weight shifting• Arm coordination• Advance motor planningObstacle avoidanceStanding in a given area and preparing to avoid upcoming obstacles that randomly approached from varying directions by moving body sideways or up/down• Obstacle hitting ratio• Speed• Dual task• Hitting direction• Quick change strategy• Movement adaption• AgilityMarchingAlternating steps without going forward while following dynamic bars that automatically rose and fell at a predetermined speed and frequency• Frequency• Gap between steps• Functional stepping• Leg coordination• Single limb support Participants in the BT group underwent a 30-min conventional balance training session. The training program included reaching activities, weight-shifting activities and marching activities. The general training protocols used for the BT group were the same as those used for the BE group. The challenge level was increased progressively by changing the base of support, speed, complexity and deprivation of sensory inputs.\n\nBODY.METHODS.OUTCOME MEASURES.POSTURAL STABILITY:\nThe limits of stability (LOS) and one-leg stance (OLS) tests were used to assess postural stability in this study. Participants were harnessed into a suspension system to prevent falls when performing the tasks. LOS performance was measured using the Smart Balance Master (NeuroCom International Inc., Clackamas, OR, USA) instrument to extract quantitative data [21–24]. The LOS test provides an assessment of the ability to intentionally displace the center of gravity (COG) to the participant's stability limits without losing balance. In this task, participants were asked to quickly transfer their COG, while standing on stable force plates, toward eight targets spaced at 45° intervals around the COG, represented on a computer monitor. All participants underwent one practice trial followed by one test trial. In the LOS test, we measured reaction time (the time from the presentation of a start cue to the onset of the voluntary shifting of the participant's COG toward the target position), movement velocity (average speed of COG movement based on the middle 90 % of the distance, measured in degrees per second), end point excursion (percentage of the distance achieved toward a target on the initial movement) and directional control (100 % being a straight line from the center of pressure to the intended target). The validity and reliability of the LOS test in people with neurological disease has been well established [25–27]. The OLS test is an assessment of postural steadiness [15, 28–31]. Participants were asked to cross their arms over the chest, and to stand on either the less or more affected leg, with the other leg raised so that the raised foot was near but not touching the ankle of the stance leg. 
The assessor timed the OLS test until participants either: (1) uncrossed the arms, (2) moved the stance leg, (3) moved the raised leg touching the floor or the stance leg, (4) opened the eyes on eyes-closed trials or (5) reached a maximum of 30 s. Each participant performed three trials with the eyes open, and three trials with the eyes closed. Data were averaged from the three trials. A previous study found a high degree of reliability (ICC = 0.87) in the OLS test in older adults [32].\n\nBODY.METHODS.OUTCOME MEASURES.FUNCTIONAL BALANCE:\nThe Berg Balance Scale (BBS) and the timed up and go (TUG) test were used to assess functional balance. The BBS comprises a set of 14 balance-related tasks, ranging from standing up from a sitting position, to standing on one foot. The degree of success in each task is given a score from zero (unable) to four (independent), and the final measure is the sum of all scores. The highest possible score on the BBS is 56, which indicates excellent balance. The validity and reliability (ICC > 0.95) of BBS scores in people with PD has been established in several studies [33–35]. The TUG test is a mobility test requiring both static and dynamic balance. During the test, the assessors measured the time participants took to rise from a chair, walk 3 meters, turn around, walk back to the chair, and sit down. Each participant performed three trials of the TUG test. Data were averaged from the three trials. The TUG test has previously been found to have high validity and reliability (ICC > 0.87) for assessing balance in people with PD [36, 37].\n\nBODY.METHODS.SAMPLE SIZE:\nThe sample size calculation was based on a pilot study that tested eight participants at Hoehn and Yahr stages 1 and 2, indicating a difference of 0.2 s between pre- and post-training on reaction time in the LOS test. Based on this difference, a sample size calculation indicated that 20 participants would be sufficient for 85 % power (α = 0.05).\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nAll analyses were performed using the SPSS 20.0 statistical package (SPSS Inc., Chicago, IL, USA). Descriptive statistics were generated for all variables, and distributions of variables were expressed as the mean ± standard deviation. Because of the relatively small number of participants included in the current study (N < 30) and since the results of a Shapiro-Wilk test did not allow us to assume that the data were normally distributed, nonparametric tests were employed. Comparison of two groups for general characteristics was made using chi-square or Mann-Whitney U test for categorical or continuous variables, respectively. The Friedman test, followed by a post hoc test, was used to determine differences in each dependent variable. The Wilcoxon signed-rank post hoc test was performed for within-group comparisons and the Mann-Whitney U post hoc test was performed for between-group comparisons. The statistical significance was set at P ≤ 0.05.\n\nBODY.RESULTS:\nA total of 48 individuals were screened and 22 enrolled between 2013 and 2014. Of these, 11 were assigned to the BT group, and 11 were assigned to the BE group. Of 22 participants, two did not complete the intervention (one in the BT group and one in the BE group). A flow diagram of the study protocol is shown in Fig. 2. The 20 participants who completed the intervention attended all intervention sessions. None of the participants reported any adverse events.Fig. 
2Flowchart of the experimental design The demographic characteristics of participants in both groups are presented in Table 2. Demographic differences between the two groups were not significant. Moreover, differences in all pre-intervention-selected outcome measures in the two groups were not significant (Table 3).Table 2Baseline demographics and clinical characteristics of the subjectsBalance-based exergaming group (N = 10)Balance training group (N = 10)\nP\nAge (years)67.5 ± 9.9668.8 ± 9.670.67Sex (male/female)9/17/30.58Disease duration (years)4.03 ± 3.745.22 ± 4.850.34Hoehn and Yahr stage1.6 ± 0.841.4 ± 0.520.73Mini-Mental State Examination27.4 ± 2.5928.2 ± 1.990.40More affected side (right/left)8/25/50.35Data are presented as the mean ± standard deviation or proportionTable 3Outcome measures for each groupBalance-based exergaming group (N = 10)Balance training group (N = 10)Friedman testPre-trainingPost-trainingPre-trainingPost-training\nP\nLimits of stability Reaction time (sec)0.96 ± 0.330.74 ± 0.24*0.88 ± 0.240.79 ± 0.18<0.001 Movement velocity (deg/sec)3.37 ± 1.353.83 ± 0.974.19 ± 1.544.57 ± 1.410.07 Endpoint excursion (%)75.2 ± 12.4884 ± 12.04*79.7 ± 13.8481.8 ± 11.370.04 Directional control (%)75.7 ± 8.7878.9 ± 7.65*,†\n70.9 ± 10.8570.6 ± 9.370.02One-leg stance Less affected with eyes open (sec)17.39 ± 12.8715.16 ± 10.539.14 ± 9.6312.98 ± 11.080.47 More affected with eyes open (sec)15.06 ± 11.2315.58 ± 11.5813.72 ± 12.4314.54 ± 9.650.09 Less affected with eyes closed (sec)3.35 ± 2.856.1 ± 8.65*2.71 ± 2.545.31 ± 7.680.002 More affected with eyes closed (sec)3.06 ± 2.554.13 ± 2.745.88 ± 7.566.66 ± 8.410.16Berg Balance Scale50.9 ± 5.3253.2 ± 2.86*50.4 ± 4.7953 ± 1.89*0.001Timed up and go (sec)9.5 ± 2.458.71 ± 1.8*10.05 ± 4.669.18 ± 3.42*0.007Data are presented as mean ± standard deviation*and †are P ≤ 0.05 for within-group and between-group comparisons, respectively The results of the interventions are presented in Table 3. Analysis of selected outcomes using the Friedman test revealed a significant effect of intervention type on reaction time, endpoint excursion and directional control in the LOS test, and in the less affected leg in the eyes-closed condition in the OLS test, the BBS and the TUG test. Within-group post hoc analysis revealed that balance-based exergaming training significantly improved LOS performance (improving reaction time from 0.96 ± 0.33 to 0.74 ± 0.24 s, end point excursion from 75.2 ± 12.48 to 84 ± 12.04 % and directional control from 75.7 ± 8.78 to 78.9 ± 7.65 %) and OLS on the less affected leg in the eyes-closed condition (from 3.35 ± 2.85 to 6.1 ± 8.65 s). Compared with the BT group (70.6 ± 9.37 %), the BE group (78.9 ± 7.65 %) exhibited better performance in directional control of LOS post-training. Functional balance in both groups, as measured by the BBS and the TUG test, was improved significantly post-training compared with pre-training. However, no significant differences were found between groups.\n\nBODY.DISCUSSION:\nThis study produced two main findings: (1) balance-based exergaming training had a greater effect on postural stability compared with conventional balance training; and (2) both training programs improved functional balance in people with PD. The current study tested two balance training programs with similar training protocols. 
A recent meta-analysis examined the BBS, postural sway, TUG, and Functional Reach test as measures of postural stability, reporting that exercise therapy is an important treatment option for improving postural stability in people with PD [38]. The findings suggested that exercises containing a balance component were most beneficial in improving postural stability in people with PD [38]. In the current study, we used the LOS and OLS tests to measure postural stability, and the BBS and TUG tests to measure functional balance. The current findings were in line with the findings of Klamroth et al., who reported that balance training was beneficial for performance in the BBS and TUG tests [38]. Our findings revealed that only balance-based exergaming training produced positive effects on LOS and OLS, with particularly strong effects on directional control in LOS. These findings suggest that exergaming training using the Kinect sensor contributed to the beneficial gains we observed. As a therapeutic tool, the Kinect sensor can provide specific motor practice using full-body motion capture, which offers precise real-time information to guide performance and monitor body movement. Previous clinical trials indicated that exergaming programs using the Kinect sensor resulted in accurate capture of movement components [39, 40]. Our results revealed within-group improvements on most measures of postural stability during the exergaming intervention training period. Our exergaming programs involved various balance challenges. This may have contributed to our positive findings, involving actions focused on agility, challenging postural or locomotor-like skills, and reaching away from the base of support. All of these are involved in whole-body movements. In addition, the repetitive, real-time feedback and graded complexity in our exergaming programs may have contributed to the positive effects of training reflected in LOS performance. However, the movement velocity of LOS remained unchanged after exergaming training. Persistent bradykinesia [41] and a choice to focus on improving accuracy rather than faster motor performance among people with PD are possible reasons for our movement velocity findings [21]. The current results also revealed better OLS performance in the eyes-closed condition after exergaming training. A previous study using a Wii-based system reported similar results [15]. Because participants needed to focus on each joint position while carrying out the fine motor plan necessary for many of the tasks in the exergaming training, stimulation of proprioceptive feedback or an improvement in the internal representation of balance may have enhanced OLS performance. Little evidence is available regarding the minimal clinically important differences in postural stability and balance outcomes in people with PD. Evidence of minimal clinically important differences for LOS and OLS test in PD is lacking. Steffen and Seney reported a minimal detectable change of 5 points on the BBS for people with PD [34]. In the current study, we recorded a 2.45-point improvement after balance training for BBS. The minimally detectable change in TUG performance in people with PD has previously been reported to be 3.5 s [42], which is greater than the 0.83-second improvement observed in the present study. The small but significant changes observed in this study support the therapeutic use of exergaming interventions. However, a greater evidence base is required to support the clinical significance of these results. 
Several important characteristics have been identified for useful interventions in PD, suggesting that interventions should be task-specific, progressive, variable in terms of practice, and highly challenging [43, 44]. The exergaming programs designed for the current study involved each of these components. For specificity, the full-body motion capture method can be tailored for the needs of balance strategies. To create an appropriate practice resource and construct the progression and variability of program, we implemented enriched setting parameters by increasing speed, repetition and the addition of tasks. Additionally, the novel motor training gave participants more experience and an opportunity to explore or learn to negotiate the new challenges. Although only directional control in the LOS test showed a significant between-group difference, exergaming training using the Kinect system may provide additional benefits. Participants are able to practice free motions without wearing a sensor that could cause discomfort and inconvenience. Reduced staff intervention and the affordability of the device are important economic benefits of the system. Finally, considering the clinical implications of our findings, the current results suggest that the Kinect system can provide an assistive modality with therapeutic potential as a training tool under the supervision of a therapist. The current study involved several limitations. First, the sample size was small, limiting the strength to interpret our results. Second, calibration variability was observed during the preparation of each exergaming session. This issue may have influenced the effect of training because calibration was used to normalize each participant's body information. This formed the basis of the exergaming programs that were tailored for individuals with varying levels of ability. Third, most participants in this study exhibited only mild impairment, and performance at baseline was relatively high. This may have limited the benefits received from training, and the generalizability of our findings to the target population. Finally, the absence of kinematic data meant we were unable to examine spatio-temporal changes in detailed movements.\n\nBODY.CONCLUSION:\nThe current study revealed that an 8-week period of balance-based exergaming training using the Kinect sensor resulted in a greater improvement of postural stability than conventional balance training. Both exergaming and conventional balance training had positive effects on functional balance. This trial supports the potential therapeutic use of exergaming aided by the Kinect sensor for people with PD. Importantly, the significant changes in BBS and TUG performance observed after both the exergaming and conventional balance training did not reach the minimal detectable change in patients with PD. Further studies on the use of exergaming are needed to verify the clinical implications of these results.\n\n**Question:** Compared to a balance training group for an 8-week balance training period. what was the result of a balance-based exergaming group for an 8-week balance training period. on directional control in the LOS test?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
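The trial above describes its analysis as a Friedman omnibus test followed by Wilcoxon signed-rank (within-group) and Mann-Whitney U (between-group) post hoc tests. The sketch below shows the corresponding scipy calls on simulated directional-control scores; the data are invented, and the way the four cells are fed to the Friedman test is only illustrative, since the paper does not spell out its exact arrangement.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated directional-control scores (%) for 10 participants per group,
# standing in for the LOS outcome described above; not the trial's raw data.
be_pre,  be_post = rng.normal(75, 9, 10),  rng.normal(79, 8, 10)   # exergaming group
bt_pre,  bt_post = rng.normal(71, 11, 10), rng.normal(71, 9, 10)   # balance-training group

# Omnibus step (purely to show the call; pairing independent groups this way
# is an illustrative simplification of the paper's Friedman step).
print(stats.friedmanchisquare(be_pre, be_post, bt_pre, bt_post))

# Post hoc: Wilcoxon signed-rank within group, Mann-Whitney U between groups post-training.
print(stats.wilcoxon(be_pre, be_post))
print(stats.mannwhitneyu(be_post, bt_post, alternative="two-sided"))
```

Nonparametric tests are used here for the same reason the article gives: small samples whose normality could not be assumed.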
[ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ]
significantly increased
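The sample-size statement in the same trial (20 participants for 85% power at alpha = 0.05, based on a 0.2 s change in LOS reaction time) omits the assumed standard deviation. Treating it as a two-group comparison with 10 participants per arm, which is one plausible reading of the description, the sketch below backs out the standardised effect size those numbers imply; this is a reconstruction under stated assumptions, not the authors' calculation.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Solve for the effect size implied by n = 10 per group, 85% power, alpha = 0.05.
effect = analysis.solve_power(nobs1=10, alpha=0.05, power=0.85, ratio=1.0,
                              alternative="two-sided")
print(f"implied Cohen's d ~ {effect:.2f}")                 # roughly 1.4 under these assumptions
print(f"implied SD for a 0.2 s difference ~ {0.2 / effect:.2f} s")
```

An implied effect size of roughly 1.4 corresponds to an assumed standard deviation of about 0.14 s for the 0.2 s difference, which is why such a small trial can reach the stated power.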
1186
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: The Effect of Breathing Exercises on the Nocturnal Enuresis in the Children with the Sleep-Disordered Breathing\n\n ABSTRACT.BACKGROUND:\nThe nocturnal enuresis is one of the most common complaints of childhood. Upper airway obstruction and nocturnal snoring affect the nocturnal enuresis in children.\n\nABSTRACT.OBJECTIVES:\nThe aim of this study was to investigate the effects of breathing exercises on the nocturnal enuresis in the children with the sleep-disordered breathing.\n\nABSTRACT.PATIENTS AND METHODS:\nThis study was conducted in year of 2011 by a semi-experimental design with the control group among 40 children, aged 6 - 12 years, who had the nocturnal enuresis. Participants were examined based on the criteria of nocturnal enuresis, oral breathing, and nocturnal snoring. Subsequently, they were randomly assigned to the case and control groups. In the case group, the breathing exercises were performed for 45 minutes, and were pursued for four weeks in the morning following and prior to sleeping, and subsequently the arterial blood gases were measured and the frequency of enuresis and the respiratory rates (RR) were recorded.\n\nABSTRACT.RESULTS:\nAfter intervention the means of PaCO2 and RR in the control group were significantly higher than the case group (P < 0.0001). Likewise, O2sat, PaO2 in the case group were higher than the control group (P < 0.0001). The nocturnal enuresis decreased significantly in the case group, compared to the control group (P < 0.0001).\n\nABSTRACT.CONCLUSIONS:\nThis study suggests that the breathing exercises may reduce the frequency of nocturnal enuresis in the patients with the oral breathing and nocturnal snore. The clinical implications of these findings should be verified in the future longitudinal studies.\n\nBODY.1. BACKGROUND:\nNocturnal enuresis is defined an accidental urination with at least, twice per week for three consecutive months, and calendar age of five years old (1). It is one of the most common complaints of childhood; children from the age of five should be able to control the urine (2, 3) because at this age, bladder capacity increases and brain nerve centers control the urinary bladder contractions during sleep (4). Many children cannot reach this stage. Therefore, children and their families face severe behavioral and social problems (1). Various methods have been proposed for the treatment of nocturnal enuresis, none of which is of a definitive efficacy (2, 3, 5). One of the findings concerning this disorder is that the nocturnal enuresis in children with the sleep-disordered breathing, whose nostrils are blocked, happens more (6-8). Relaxed muscles which help to open the airways and soft palate vibrations during sleep, will cause snoring, while in the normal condition air enters through the nose during the aspiration and slip on the edge of the soft palate. 
If this little edge moves and makes a barrier to the air, the sound of snoring is created (8, 9). Chang et al. (10) and Yeung et al. (11) have shown that the prevalence of nocturnal enuresis in children with the habitual snoring is four times more and 46 percent of children with the sleep- disordered breathing syndrome have the nocturnal enuresis (12). In some children, recurrent upper respiratory tract persistent infections, adenoids, habitual oral breathing, blocked Eustachian tube and nocturnal snoring can cause enuresis (13, 14). Children's physical health is dependent on how they are breathing. The natural activity of respiratory tract is controlled through the neck muscles, inter-rib muscles, abdominal muscles, and diaphragm. Some of the breathing disorders occur when the transferred messages through the brain are unable to transfer to these muscles through the spinal cord (15, 16). In children with the oral breathing, the superficial and shallow breathing create the nocturnal snore due to increasing weakness of breathing muscles. This factor causes the bladder to lose its control on urination (17, 18). Ezzat waleeda et al. have shown that opening blocked airway by the surgery will reduce the nocturnal enuresis in the children with the respiratory disorders (16). Breath control is done by the aspiration and expiration. In breathing exercises, the individual capacity will increase and the equilibrium is established between the nervous and respiratory systems and the mind will work normally (19), so that by the deep breathing, the peripheral air amount will also increase substantially. Respiratory exercises, tolerance exercises and exercise activities related to the upper and lower parts improve breathing process and create a balanced ventilation system (20). The reason is that individuals under the influence of respiratory exercises show an increase in the lung functional indices, and thus, their oxygen consumption volume rises (18, 21). A few external studies are present concerning the effect of breathing exercises improvement on the children nocturnal enuresis with the conflicting results, but we were unable to find any internal reference in this regard (15, 17, 22).\n\nBODY.2. OBJECTIVES:\nTherefore, the aim of this study was to investigate the effects of breathing exercises on the reduction of the nocturnal enuresis in the children with the sleep-disordered breathing.\n\nBODY.3. PATIENTS AND METHODS.3.1. PATIENTS:\nThis study was conducted in year of 2011 by a semi-experimental study with a control group. The control and case groups were selected based on the DSM-IV-TR criteria by the convenience sampling (23). The groups were children, who had the nocturnal enuresis, and hence; were brought to the kidney and urethra clinic affiliated to the Isfahan University of Medical Sciences by their parents. The children's problem of breathing was also diagnosed by the ear, nose and throat specialists. Both groups were consisted of 40 children (6-12 years old) with the concomitant nocturnal enuresis and oral breathing. The 40 children were classified into the two equal groups. Participants were examined based on the criteria of oral breathing, nocturnal snoring, and nocturnal enuresis two times per week and the sleep-disordered breathing confirmed by the otolaryngologists. 
The exclusion criteria consisted of the current drug consumption (as, tetracycline, nitrofurantoin, methicillin, acethazolamide may decrease the level of PaCO2, whereas bicarbonate, hydrocortisone, metolazone may increase the level of PaO2), (24) mental and behavioral disorders, tonsillitis, physical disability, mental retardation, allergies and cigarette smoking by the parents.\n\nBODY.3. PATIENTS AND METHODS.3.2. PROCEDURE:\nFirst nocturnal enuresis frequency and the RR were recorded in the case and control groups. The RR was measured when the children were at rest and involved counting the number of breaths for one minute by counting how many times the chest rose. In addition, the patients of both groups were analyzed for the blood arterial gases. Subsequently, breathing exercises were taught to the children and their parents in the case group by the demo games and video clips by speech therapist. The patients did not drink water/any other liquids two hours before sleep. Then breathing exercises pursued in four weeks in the morning after getting-up, and also at night before sleep. Firstly, the breathing exercises of hip, anal and neck muscles and diaphragm were performed. For the evacuation of abdominal breathing toward the spine, diaphragm was drawn to the ribs. Then, the movements of the head and spinal column spin the effective energy in the body. During these exercises, breathing was done by the right nostril. Then, breathing exercises of the nose and jaw were done with closing the right nostril and pushing the chin to the chest, while expiration was done by the left nostril. Finally, during abdominal exercises, the inspiration was retained and with mouth closing, expiration expelled through the right nostril (14, 23). Then, the frequency of nocturnal enuresis and the RR were recorded. Upon the completion of breathing exercises, the arterial blood gases were also measured.\n\nBODY.3. PATIENTS AND METHODS.3.3. COLLECTION AND HANDLING OF BLOOD SAMPLES:\nA radial arterial blood of five milliliters was drawn fasting by the air-free bubbles plastic syringes (gauge 23) containing lyophilized heparin. Subsequently, the blood samples under the condition of air-sealed were placed on the ice bag instantly, and sent in a pre-cold chamber (approximate temperature 4 ̊C) to the laboratory within 10 minutes for the blood gases analysis. The laboratory also carried out the assay within 30 minutes (24). The arterial blood gas analyzer (ABL 300, Radiometer, Copenhagen, Denmark) was employed to measure the oxygen, carbon dioxide and pH levels.\n\nBODY.3. PATIENTS AND METHODS.3.4. STATISTICAL ANALYSIS:\n Statistical analyses were performed using SPSS software, version 16.0. Data presented as mean ± SD, median [IQR] and number as appropriate. At the baseline age, weight, height, body mass index (BMI), O2sat, PaO2, PaCO2, pH and RR were compared before and after the study in the control and case groups, using the independent, paired t-tests, and repeated measurements of ANOVA. In addition, the Chi-Square test was used to assess the distribution of sex between the groups. Likewise, nocturnal enuresis frequency was compared, before and after the study within and between the groups, using the Wilcoxon Signed Rank and Mann-Whitney tests, respectively. Statistical significance was accepted at the p-value less than 0.05.\n\nBODY.3. PATIENTS AND METHODS.3.5. ETHICAL CONSIDERATION:\nThis study was approved by the Ethics Committee of the Isfahan University of Medical Sciences. 
Written informed consent was obtained from the parents, and that the study protocol conformed to the ethical guidelines of the 1975 Declaration of Helsinki as reflected in a priory approval by the institution's human research committee.\n\nBODY.4. RESULTS:\nTable 1 displays the total number of the case group consisted of three girls and 17 boys. The case group was between 6 to 10 years old, with a mean age of 7.09 ± 1.23. In addition, the control group was between 6 to 11 years old, with a mean age of 8.12 ± 1.74. The results showed that there was insignificant difference in gender between the two groups. In addition, there was insignificant difference in the BMI, weight, and height between the case and the control groups. Table 1. Demographic Characteristics of the Groups. Case group a Control group a P value \n Weight, kg \n 29.5 (2.81) 29.3 (2.16) 0.824 b \n Height, cm \n 127.4 (2.19) 126.9 (2.04) 0.945 b \n BMI, kg/m2 \n 15.92 (2.31) 16.04 (1.08) 0.129 b \n Age \n 6-11 7.09 (1.23) 8.12 (1.74) 0.421 c \n Gender \n Male 3 6 Female 17 14 0.189 b a Values are represented as mean (SD) b P values were calculated by the Chi Squared c P values were calculated by the independent t-test Table 2 displays the comparison of the means of the O 2 sat, PaO 2 , PaCO 2 , pH and RR between the groups before and after intervention. Before intervention, the means of O 2 sat, PaO 2 , PaCO 2 , pH and RR were similar in both groups (P > 0.05). After intervention, only pH was similar in both groups (P > 0.05) and the means of O 2 sat, PaO 2 in the case group were significantly higher than the control group (P < 0.0001). However, in the control group, the means of the PaCO 2 and RR were significantly higher than the case group (P < 0.0001). Table 2. Comparison of the O 2 sat, PaO 2 , PaCO 2 , pH and RR between the Groups. Control Group a Case Group a P value b Before After Before After \n O 2 sat, mm/Hg c \n 84.61 (1.13) 84.26 (1.21) 83.92 (3.29) 89.45 (2.17) 0.001 \n P value \n d \n 0.92 0.01 \n PaO, mm/Hg 2 , mm/Hg e \n 69.54 (1.26) 67.20 (2.14) 75.28 (2.43) 0.001 \n P value \n d \n 0.091 0.001 \n PaCO 2\n , mm/Hg \n 75.16 (1.56) 76.12 (1.23) 72.30 (1.49) 60.21 (1.28) 0.001 \n P value \n d \n 0.34 0.001 \n pH, mm/Hg \n 7.76 (1.34) 7.43 (1.23) 7.91 (2.28) 7.30 (2.19) 0.124 \n P value \n d \n 0.87 0.12 \n RR \n f \n 29.61 (1.21) 29.12 (1.61) 27.42 (1.06) 21.34 (1.16) 0.001 \n P value \n d \n 0.91 0.01 a Values are represented as mean (SD) b P-values were calculated by the repeated measurements of ANOVA c P-values were calculated by the Oxygen 2 saturation d P-values were calculated by the paired t-test. e P-values were calculated by the partial pressure of O 2 f P-values were calculated by the Respiratory Rates Figure 1.Depicts the Comparison of the Means of the O 2 sat, PaO 2 , PaCO 2 , pH and RR Between the Groups Before and After Intervention.Before intervention, there were insignificant alteration in the means of O2sat, PaO2, paCO2, pH and RR between both groups (P > 0.05). However, after intervention, the means of O2sat, PaO2 in the case group were significantly higher than the control group (P < 0.0001). Conversely, the means of the PaCO2 and RR in the control group were significantly higher than the case group (P < 0.0001). Table 3 displays the comparison of the median of nocturnal enuresis frequency between the groups before and after intervention. Before intervention, the median of nocturnal enuresis frequency was similar in both groups (P > 0.001). 
After intervention, the median of nocturnal enuresis frequency in the case group was significantly lower than the control group (P < 0.001). Table 3. Comparison of the Nocturnal Enuresis Frequency between the Groups. Before After P value a \n Case Group b \n 6 (4.25 – 8) 1 (0 – 2) < 0.0001 \n Control Group \n b \n 6 (4.25 – 7.75) 5.5 (4 – 8.75) 0.83 \n P value \n c \n 0.7 < 0.0001 a P-values were calculated by the Wilcoxon Signed Rank tests, b Values are represented as median (IQR). c P-values were calculated by the Mann-Whitney. \n\nBODY.5. DISCUSSION:\nThis study aimed to consider the effects of breathing exercises on the nocturnal enuresis in the children with the sleep-disordered breathing. Normally, in each breath, 500 ml of air enter the lungs, and then it goes out of the lungs after the exchange of oxygen and carbon dioxide. With deep breathing, the amount of air can be increased largely. These breathing exercises bring the air volume to 2500 ml in each breath. Therefore, by breathing exercises, more air will enter the lungs, arterial blood oxygen saturation will increase, and respiratory proper functioning will be possible, which promotes deep breathing improvement (11, 25, 26). Breathing exercises will also prevent the accumulation of secretions and atelectasis, strengthen the respiratory muscles, coordinate aspiration muscles and correct the breathing patterns. It also decreases the asthmatic attacks and eliminates the dyspnea (14). On the contrary, in children with the oral breathing during sleep, the upper airways endure vibration due to airflow, and breathing becomes difficult due to weakening of the respiratory muscles. In addition, the chronic superficial breathing causes the weakening of the inter-rib muscles and diaphragm muscle (17, 18). Many studies show that the removal of airways blockade even in case of surgery, reduces the frequency of nocturnal enuresis in children with the breathing disorders (16). There are some drugs and various behavioral to treat the nocturnal enuresis, but none of them is crucial, and each has side effects. On the other hand, the treatment of nocturnal enuresis with respect to the associated disorders, can be more effective/with less complications (2, 27, 28). The results of this study showed that in children with the nocturnal enuresis, who had the blockade of airways/abnormal breathing pattern; the breathing exercises can be used as a simple method with low expenses and more efficient than other approaches. The findings of this study also showed that there was no significant difference in the demographic characteristics between the two groups (Table 1). In addition, the findings of this study revealed that there was a significant difference in the mean frequency of nocturnal enuresis between the case and the control groups after the intervention (Table 3). These findings demonstrate that the frequency of breathing exercises reduce the nocturnal enuresis in the children with the sleep-disordered breathing. There are many studies concerning the relationship between the nocturnal enuresis and nocturnal snore/airways blockade ( 16 , 21 , 22 , 29 ). The nocturnal enuresis is more common in children with the oral breathing. It appears that the relative hypoxia in these children causes low-pressure oxygen in the distal tubes of kidney, unresponsiveness to the anti-diuretic hormone, and increased production of urine temporarily, particularly during the sleep. 
These factors can increase the volume of urine, consequently make urine control more difficult, and finally lead to nocturnal enuresis (3, 7). To the best of our knowledge, there is no domestic research on the impact of breathing exercises on nocturnal enuresis in children. Nevertheless, Crouch et al. have shown that in subjects with breathing disorders, pulmonary rehabilitation programs that include breathing exercises, endurance activities, and upper and lower extremity exercises, such as pursed-lip breathing and diaphragmatic breathing, improve ventilation and, in turn, produce an adequate pattern of air exchange (19). Ghahri Sarabi et al. have also shown that in patients with breathing disorders, breathing exercises improve lung function indices (27). In addition, another study has shown that in patients with chronic respiratory diseases, pulmonary rehabilitation and breathing exercises increase oxygen consumption (20). Respiratory disorders cause ineffective breathing habits and shallow breathing, and this chronic shallow breathing leads to weakening of the diaphragm and intercostal muscles (8, 19). In conclusion, these findings suggest that breathing exercises may reduce nocturnal enuresis in children with sleep-disordered breathing. The clinical implications of these findings should be verified in future longitudinal studies.\n\n**Question:** Compared to no breathing exercises what was the result of breathing exercises were performed for 45 minutes, and were pursued for four weeks in the morning following and prior to sleeping on The nocturnal enuresis?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly decreased\n" } ]
significantly decreased
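The record above compares enuresis counts within each group with the Wilcoxon signed-rank test and between groups with the Mann-Whitney test (Table 3). A minimal Python sketch of that analysis follows; the per-child counts are invented for illustration, since the article reports only medians and IQRs, not raw data.

```python
# Illustrative sketch of the non-parametric tests named in the record above.
# The per-child enuresis counts are made up; the article reports only summary statistics.
from scipy import stats

case_before = [6, 5, 8, 7, 4, 6, 9, 5, 6, 7]
case_after = [1, 0, 2, 1, 0, 2, 3, 1, 0, 2]
control_after = [5, 6, 4, 8, 9, 5, 6, 7, 4, 6]

# Within-group change (paired, non-parametric), as in Table 3's footnote "a".
_, p_within = stats.wilcoxon(case_before, case_after)

# Between-group comparison after the intervention, as in Table 3's footnote "c".
_, p_between = stats.mannwhitneyu(case_after, control_after, alternative="two-sided")

print(f"Wilcoxon signed-rank p = {p_within:.4f}")
print(f"Mann-Whitney U p = {p_between:.4f}")
```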
678
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: An Open Label, Randomised Trial of Artesunate+Amodiaquine, Artesunate+Chlorproguanil-Dapsone and Artemether-Lumefantrine for the Treatment of Uncomplicated Malaria\n\n ABSTRACT.BACKGROUND:\nArtesunate+amodiaquine (AS+AQ) and artemether-lumefantrine (AL) are now the most frequently recommended first line treatments for uncomplicated malaria in Africa. Artesunate+chlorproguanil-dapsone (AS+CD) was a potential alternative for treatment of uncomplicated malaria. A comparison of the efficacy and safety of these three drug combinations was necessary to make evidence based drug treatment policies.\n\nABSTRACT.METHODS:\nFive hundred and thirty-four, glucose-6-phosphate dehydrogenase (G6PD) normal children were randomised in blocks of 15 to the AS+AQ, AL or AS+CD groups. Administration of study drugs was supervised by project staff and the children were followed up at r home on days 1,2,3,7,14 and 28 post treatment. Parasitological and clinical failures and adverse events were compared between the study groups.\n\nABSTRACT.MAIN FINDINGS:\nIn a per-protocol analysis, the parasitological and clinical failure rate at day 28 post treatment (PCF28) was lower in the AS+AQ group compared to the AL or AS+CD groups (corrected for re-infections: 6.6% vs 13.8% and 13.8% respectively, p = 0.08; uncorrected: 14.6% vs 27.6% and 28.1% respectively, p = 0.005). In the intention to treat analysis, the rate of early treatment failure was high in all three groups (AS+AQ 13.3%; AL 15.2%; and AS+CD 9.3%, p = 0.2) primarily due to vomiting. However, the PCF28 corrected for re-infection was lower, though not significantly, in the AS+AQ group compared to the AL or the AS+CD groups (AS+AQ 18.3%; AL 24.2%; AS+CD 20.8%, p = 0.4) The incidence of adverse events was comparable between the groups.\n\nABSTRACT.CONCLUSIONS:\nAS+AQ is an appropriate first line treatment for uncomplicated malaria in Ghana and possibly in the neighbouring countries in West Africa. 
The effectiveness of AL in routine programme conditions needs to be studied further in West Africa.\n\nABSTRACT.TRIAL REGISTRATION:\nClinicalTrials.gov NCT00119145\n\n\nBODY.INTRODUCTION:\nThere is now widespread acceptance that combination therapy (≥2 antimalarial drugs with different modes of action) is a better option than monotherapy for the treatment of uncomplicated malaria and artemisinin based combination therapy (ACT) is advocated as a way forward.[1] The WHO recommends artesunate+amodiaquine (AS+AQ), artemether-lumefantrine (AL) or artesunate+sulphadoxine-pyremethamine (AS+SP) as the first line treatment for uncomplicated malaria in Africa.[2] By November 2006, most malaria endemic countries in Africa (33/42) had adopted ACT as the first line treatment for uncomplicated malaria.[3] In 2002, Ghana adopted AS+AQ as first line treatment based on the evidence of its efficacy and safety observed elsewhere in Africa and on account of its potential for production by local manufacturers.[4] Randomised controlled efficacy trials in Zanzibar,[5] Uganda[6] and Angola[7] had shown that polymerase chain reaction ( PCR) corrected adequate clinical response rates at day 28 post treatment (ACPR28) were very high for both AS+AQ (94–100%) and AL (97–100%). In 2003, a four arm comparative efficacy trial in Ghana showed that the ACPR28 was 100% (n = 51) for AL and 97.5% (n = 54) for AS+AQ.[8] However, a randomised trial of the effectiveness of AS+AQ and AL in Tanzania showed that the parasitological failure at day 28 was 2.6 fold (95% CI 1.9–3.4) higher in the AS+AQ group (40%) compared to the AL group (21%).[9] Artesunate+chlorproguanil-dapsone (AS+CD) is another potential alternative for AS+AQ because chlorproguanil-dapsone has been shown to be efficacious in areas with high SP and chloroquine resistance.[10] Clinical trials of AS+CD have been carried out in several countries in Africa (www.ispub.com/ostia/index.phpxmlFilePathjournals/ijid/vol4n2/cda.xml) but the development of AS+CD was abandoned recently due to safety concerns in glucose-6-phosphate dehydrogenase (G6PD) deficient subjects. There has been no comparative studies of the efficacy and safety of AS+CD versus AS+AQ or AL. Even though most countries in Africa have adopted ACT as first line treatment, antimalarial monotherapy is still common and this can undermine the efficacy of ACT. Furthermore high levels of resistance to AQ already exists in many parts of Africa.[9] Thus, it is important to evaluate the efficacy of ACTs particularly that of AS+AQ periodically. In this paper, we report the results of a randomised trial that compared the efficacy and safety of AS+AQ, AL and AS+CD in children in Ghana.\n\nBODY.METHODS:\nThe protocol for this trial and supporting CONSORT checklist are available as supporting information; see checklist S1 and Protocol S1.\n\nBODY.METHODS.PARTICIPANTS:\nThe study was conducted in Kintampo district hospital, Brong-Ahafo region, Ghana. Malaria transmission in this forest region of Ghana is perennial but peaks in July–August and the entomological inoculation rate is high, about 270 infective bites per person per year. Although the study area has high transmission, malaria is common in older children and therefore the study included children up to 10 years of age. 
From June 2005 to May 2006, 1718 children aged 6 months to 10 years who attended the study hospital's outpatient department with a history of fever or an axillary temperature ≥37.5°C were examined for malaria parasitaemia and screened for their eligibility for enrolment in this study after written, informed consent had been obtained from their caretakers (Figure 1). Children who had a body weight ≤5 kg, Hb <7 g/dL, malaria parasitaemia <2000 or >200000 parasites/μL, danger signs (unable to drink, history of repeated vomiting, convulsions or other signs of severe malaria, or any other concomitant febrile disease were excluded from the study but offered standard care according to Ministry of Health guidelines. Children were screened for glucose-6-phosphate dehydrogenase (G6PD) deficiency before randomisation (the G6PD test took about 1hour). A total of 630 children were eligible; 534 were G6PD normal and 96 were G6PD deficient. The G6PD normal children were randomly assigned to the AL, AS+AQ, or AS+CD groups. The G6PD deficient children were randomised to the AS+AQ or AL group only because the Ghana Ministry of Health Ethical Committee expressed concern about the possibility of haemolysis associated with chlorproguanil-dapsone treatment in G6PD deficient children. 10.1371/journal.pone.0002530.g001Figure 1Enrolment and Follow up profile.\n\nBODY.METHODS.PROCEDURES:\nAdministration of the first dose of study drugs was supervised by a study nurse and the administration of subsequent doses was supervised by study field workers. All children were given iron along with antimalarials according to national treatment guidelines. Caretakers of children in the AL group were told about the need to give a fatty meal along with the drug. Children were actively followed up at their homes by 10 field workers on days 1, 2, 3, 7, 14 and 28 post treatment to solicit adverse events. Finger prick blood samples for parasitological observations were obtained on days 3, 7, 14, 28, for haematological measurements on days 1, 2, 3, 7, 28 and for biochemical assessments on days 2, 7, 28 (0.7ml of capillary blood) after the start of treatment. Blood samples were collected from children who attended the study clinic for suspected malaria on any other day outside the scheduled days of follow up. There were 77 unscheduled visits and those who had a positive blood slide on these unscheduled visits (n = 15) were treated with quinine and classified as treatment failures.\n\nBODY.METHODS.RANDOMISATION:\nRandomisation was done using Microsoft Excel 2003© randomisation generator. G6PD normal children were randomised in blocks of 15 to the AL, AS-AQ, or AS+CD group. G6PD deficient children were randomised in blocks of 10 to AL or AS-AQ group. Co-blistered packs of artesunate and amodiaquine (Arsucam,® Sanofi-Aventis), artemether-lumefantrine (Coartem,© Novartis) and chlorproguanil-dapsone (Lapdap,TM GlaxoSmithKline) were obtained from the respective local suppliers.\n\nBODY.METHODS.STATISTICAL METHODS:\nData were double-entered, and validated using Visual Foxpro© 6.0.and the analysis was done using StataTM 9.1. Per protocol analysis included patients who were properly randomised, had received the study drugs according to the protocol, and for whom data were available on the primary end point (ACPR28). Intention to treat (ITT) analysis involved all randomised patients irrespective of how many doses of the study drugs had been given but excluded all major day 0 protocol violations. 
All statistical tests were two-sided and an α-level <0.05 was considered a statistically significant result. For comparisons of continuous variables between groups, the t-test was used and for comparisons between more than two groups, one-way analysis of variance was used after assuring normality and homogeneity of variances assumptions were satisfied. For comparison of categorical variables, the chi-square test was used with the exact extension invoked when there were small numbers in the cells. A survival analysis using the life table method was done on the cumulative proportion of children having treatment failure at different time points during the post treatment period in each of the arms of the study. The end points were any of the following: failure to take the drug on any of the first three days, parasitaemia on day 2 greater than that on day 0, presence of parasitaemia on day 7, diagnosis of severe malaria at any point after day 0, recurrent parasitaemia after day 7 up to day 28. The likelihood-ratio test statistic of homogeneity and the logrank tests of homogeneity of arm were also carried out.\n\nBODY.METHODS.SAMPLE SIZE:\nThe primary endpoint was PCR corrected ACPR by day 28. A sample size of 510 children was chosen on the basis of the following assumptions: (1) the efficacy of AL would be 95%; (2) the study would have 80% power at 95% significance level to detect a 10% difference in efficacy as measured by the parasitological failure at day 28 post-treatment in the AS+AQ, and AS+CD groups compared with the AL group; (3) loss to follow up by day 28 post treatment would be <5%.\n\nBODY.METHODS.LABORATORY PROCEDURES:\nAt least 200 oil immersion fields of thick blood films stained with Giemsa stain were examined for malaria parasites before a blood film was considered negative. The parasite density per micro-litre was estimated by multiplying the number of parasites per 200 leukocytes by a factor of 40, assuming a white blood cell count of 8,000/μl. Each blood slide was read by two microscopists. If the difference in the parasite density estimated by the two microscopists was <50%, the mean of the two readings was defined as the true parasite density. If the disagreement between the two readings was ≥50%, a third microscopist examined the slide and the mean of the two closest readings was deemed to be the true parasite density. To distinguish between recrudescent and new infections, parasites identified on Days 14 and 28 were compared to those identified at enrolment using PCR amplification of msp2 P. falciparum genes. Full blood count and biochemical tests were done by calibrated automated ABX MICROS 60 (Horiba ABX, France) and Selectra E Clinical Chemistry Analyzer (Vital Scientific N.V, Netherlands) respectively. Quantitative G6PD phenotype analysis was conducted using Randox method with Humalyser Junior (Human, Germany). Plasma ALT, AST, Total Bilirubin, Creatinine and Urea were measured on the Selectra E Clinical Chemistry analyzer (Vital Scientific, The Netherlands) using reagents from Elitech Diagnostics (Sees, France). ALT and AST were measured using the kinetic, UV, IFCC method without pyridoxal phosphate, total bilirubin with the modified Evelyn-Malloy method, creatinine with the kinetic Jaffe method and urea with the kinetic urease method. Glucose-6-phosphate dehydrogenase (G6PD) activity was measured using a quantitative UV G6PD kit from Randox (Antrium, UK). 
The enzyme activity was determined by the measurement of the rate of absorbance change at 340 nm due to the reduction of NADP+.\n\nBODY.METHODS.ETHICS:\nThe study protocol was approved by the ethics committees of the Kintmapo Health Research Centre, Ghana Ministry of Health and the London School of Hygiene & Tropical Medicine. The study is registered at the United States National Institute of Health clinical trials register; the registration number is NCT00119145 and the URL:https://clinicaltrials.gov/ct2/show/NCT00119145/\n\n\nBODY.RESULTS.G6PD NORMAL CHILDREN:\nThere were no statistically significant differences in the demographic or baseline haematological characteristics between the three treatment groups at enrolment (Table 1). 10.1371/journal.pone.0002530.t001 Table 1 Demographic and haematological characteristics at enrolment in G6PD normal children. Characteristics AS+AQ [N = 178] AL [N = 177] AS+CD [N = 178] P n (%) n (%) n (%) \n Age (months) \n 6–11 26 (14.6) 25 (14.0) 22 (12.4) 1.0 12–59 119 (66.9) 120 (67.4) 122 (68.5) 60+ 32 (18.0) 33 (18.5) 34 (19.1) Mean (SD) 37.6 (24.3) 36.8 (26.2) 39.4 (25.0) 0.6 \n Gender \n Male 83 (46.9) 94 (52.8) 92 (51.7) 0.5 Female 94 (52.8) 84 (47.2) 86 (48.3) \n Weight (Kg) \n <9.9 50 (28.1) 54 (30.3) 46 (25.8) 0.8 10.0–19.9 117 (65.7) 112 (62.9) 118 (66.3) 20.0+ 10 (5.6) 12 (6.7) 14 (7.9) Mean (SD) 12.6 (4.3) 12.4 (4.5) 12.6 (4.3) 0.9 \n Temperature (°C) \n ≥37.5 106 (59.6) 107 (60.1) 97 (54.5) 0.5 Mean (SD) 37.9 (1.1) 39.9 (1.1) 37.7 (1.1) 0.3 \n Parasite Density/μL \n <2500 39 (21.9) 31 (17.4) 32 (18.0) 0.2 2500-<10000 42 (23.6) 28 (15.7) 36 (20.2) ≥10000 96 (53.9) 119 (66.9) 110 (61.8) Geometric mean (range) 12507 (1000, 198520) 16521 (1000, 197360) 13820 (1000,199280) Gametocytaemia 10 (5.6) 11 (6.2) 16 (9.0) 0.4 \n Haemoglobin (g/dL) \n 6-<11 163 (91.6) 167 (93.8) 165 (92.7) 0.8 11+ 14 (7.9) 11 (6.2) 13 (7.3) Mean (SD) 9.1 (1.4) 8.9 (1.3) 8.9 (1.4) 0.3 AS = artesunate; AQ = amodiaquine; AL = artemether-lumefantrine; CD = chlorproguanil-dapsone In the per-protocol (PP) analyses there were no statistically significant differences in the rates of early treatment, late treatment, parasitological or clinical failures at day 14 between the three groups (Table 2). Survival analysis showed that there was no statistically significant differences in treatment failure at different time points during the 28 day post treatment period between the three groups (Figure 2). However, the parasitological and clinical failure rate at day 28 (PCF28) uncorrected for re-infections was significantly lower in the AS+AQ group (14.6%) compared to the AL (27.6%) or AS+CD (28.1%) groups (Table 2). The PCF28 corrected for re-infection was also lower in the AS+AQ group (6.6%) than in the AL (13.8%) or AS+CD (13.8%) groups but this difference was not statistically significant. PCF28 uncorrected for re-infection was nearly 50% less in the AS+AQ group compared to AL and AS+CD groups (Table 3). PCF28 corrected for new infections was significantly lower in the AS+AQ group compared to AS+CD group (RR 0.43; 95% CI 0.20, 0.89). There was no significant difference in the PCF28 between AL and AS+CD groups (Table 3). 10.1371/journal.pone.0002530.g002Figure 2Proportion of children having treatment failures at different time points during the post treatment period Per protocol analysis of G6PD normal children. 10.1371/journal.pone.0002530.t002 Table 2 Treatment outcomes (per protocol analysis). 
Outcomes AS+AQ [N = 151] AL [N = 152] AS+CD [N = 160] P * \n n (%) n (%) n (%) Early treatment failure 1 (0.7) 5 (3.3) 3 (1.9) 0.257 Late treatment failure 6 (4.0) 7 (4.6) 7 (4.4) 0.960 Parasitological or clinical failure by day 14 8 (5.3) 13 (8.6) 13 (8.1) 0.482 Parasitological or clinical failure by day 28 (PCR uncorrected) 22 (14.6) 42 (27.6) 45 (28.1) 0.005 Parasitological or clinical failure by day 28 (PCR corrected) 10 (6.6) 21 (13.8) 22 (13.8) 0.083 Missing PCR by day 28 (excluded) 3 (2.0) 9 (5.9) 13 (8.1) - Gametocytaemia by day 7 3 (2.0) 0 (0) 6 (3.8) 0.043 AS = artesunate; AQ = amodiaquine; AL = artemether-lumefantrine; CD = chlorproguanil-dapsone * \n exact P values \n 10.1371/journal.pone.0002530.t003 Table 3 Comparison of treatment outcomes between groups (per protocol analysis). Outcomes AS+AQ vs AL P * \n AS+AQ vs AS+CD P * \n AS+CD vs AL P * \n RR (95% CI) RR (95% CI) RR (95% CI) Parasitological or clinical failure by day 14 0.69 (0.28,1.63) 0.4 0.67 (0.27,1.55) 0.4 1.08 (0.49,2.22) 0.8 Parasitological of clinical failure by day 28 (PCR uncorrected) 0.53 (0.32,0.85) 0.007 0.51 (0.30,0.81) 0.004 1.01 (0.68,1.41) 1.0 Parasitological of clinical failure by day 28 (PCR corrected) 0.48 (0.23,0.98) 0.044 0.43 (0.20,0.89) 0.023 1.03 (0.57,1.47) 0.9 AS = artesunate; AQ = amodiaquine; AL = artemether-lumefantrine; CD = chlorproguanil-dapsone * \n since there are three comparisons Bonfferroni correction is applied; the P value has to <0.017 to be statistically significant \n In the intention to treat analyses, the rate of early treatment failure was high in all three groups (13.3% in AS+AQ, 15.2% in AL, and 9.3% in AS+CD groups respectively) but this was primarily due to vomiting twice or more within half an hour of administration of the study drugs. Among the early treatment failures, 22/23 in AS+AQ, 22/27 in AL, and 13/16 in AS+CD groups were due to repeated vomiting of study drugs. In the intention to treat analyses the relative risk (RR) of parasitological and clinical failure by day 28 corrected for reinfection was lower, though not statistically significantly, in the AS+AQ group compared to the AL group (RR 0.75; 95% CI 0.48, 1.14) or to the AS+CD group (RR 0.81; 95% CI 0.5, 1.27). The relative risk of parasitological and clinical failure corrected for reinfection was lower in the AS+AQ group compared to the AL (RR 0.68, 95% CI 0.47, 0.94) or AS+CD group (RR 0.72, 95% CI 0.49, 1.01). There was a drop in mean Hb concentration on day 2 post treatment compared to day 0 in each of the three groups (Table 4). The mean drop in Hb on day 2 ranged from 0.8 g/dl in the AS+CD group to 0.4 g/dl in the AL group. By day 7 post treatment, the mean Hb concentration had recovered to the day 0 levels in the AS+AQ and AL groups and by day 28 the mean Hb was higher than the day 0 in all three groups. The rate of recovery of Hb was slightly slower in the AS+CD group compared to the other two groups. 10.1371/journal.pone.0002530.t004 Table 4 Haemoglobin concentration during the post treatment period in the study groups. 
Post treatment day AS+AQ (n = 151) AL (n = 152) AS+CD (n = 160) P * \n Mean Hb g/dl (SD) Mean Hb g/dl (SD) Mean Hb g/dl (SD) 0 9.0 (1.3) 9.2 (1.4) 8.9 (1.3) 0.2 1 8.5 (1.6) 8.7 (1.6) 8.3 (1.4) 0.08 2 8.3 (1.7) 8.8 (1.7) 8.1 (1.6) 0.002 3 8.8 (1.8) 8.9 (1.6) 8.4 (1.5) 0.045 7 9.0 (1.7) 9.3 (1.6) 8.6 (1.5) 0.001 28 9.9 (1.7) 10.0 (1.6) 9.7 (1.6) 0.3 AS = artesunate; AQ = amodiaquine; AL = artemether-lumefantrine; CD = chlorproguanil-dapsone SD = standard deviation * \n since there are five time points compared against the baseline (day 0), Bonfferroni correction is applied; the P value has to <0.008 to be statistically significant \n The reported incidences of solicited adverse events during the 7 days post treatment are shown in Table 5. The incidence of solicited adverse events was comparable between the three groups. A history of body pain was more frequent in the AS+AQ group than the other two groups (Table 5). 10.1371/journal.pone.0002530.t005 Table 5 Distribution of solicited adverse events during the 7-day post treatment period. Reported symptoms AS+AQ (n = 178) AL (n = 177) AS+CD (n = 178) P * \n (Adverse events) n (%) n (%) n (%) Unable to suck/drink 8 (4.5) 5 (2.8) 2 (1.1) 0.11 Fever 0 (0) 0 (0) 0 (0) - Runny nose 29 (16.3) 30 (17.0) 14 (7.9) 0.03 Cough 33 (18.5) 23 (13.0) 21(11.8) 0.30 Difficulty in breathing 3 (1.7) 4 (2.3) 2 (1.1) 0.48 Diarrhoea 13 (7.3) 19(10.7) 11(6.2) 0.44 Vomiting 8(4.5) 6(3.4) 12 (6.7) 0.58 Itching/pruritus 13 (7.3) 10 (5.7) 10 (5.6) 0.93 Loss of appetite 61 (34.3) 44 (24.9) 50 (28.1) 0.29 Nausea 2 (1.1) 0 (0.0) 5 (2.8) 0.17 Abdominal pain 31 (17.4) 19 (10.7) 28 (15.7) 0.34 Body pain 25 (14.0) 10 (5.7) 9 (5.1) 0.01 Difficulty in sleeping 2 (12.4) 23 (13.0) 16 (9.0) 0.75 Joint pain 4 (2.3) 1 (0.6) 0 (0.0) 0.28 Palpitation 3 (1.7) 4 (2.6) 5 (2.8) 0.89 Rash 2 (1.1) 3 (1.7) 2 (1.1) 0.97 Ulcers in mouth/tongue 15 (8.4) 12 (6.8) 10 (5.6) 0.82 Yellow eyes 4 (2.3) 3 (1.7) 4 (2.3) 1.00 AS = artesunate; AQ = amodiaquine; AL = artemether-lumefantrine; CD = chlorproguanil-dapsone * \n exact P values \n \n\nBODY.RESULTS.G6PD DEFICIENT CHILDREN:\nThere was no statistically significant differences in the parasitological or clinical failure rates by day 14 or day 28 between the AS+AQ and AL groups (Table 6). The drop in mean Hb by day 3 post treatment and the recovery in Hb by day 28 were comparable between AS+AQ and AL groups and there was no apparent difference in the distribution of Hb concentrations following treatment between the G6PD normal and deficient children (data not shown). 10.1371/journal.pone.0002530.t006 Table 6 Treatment outcomes in G6PD deficient children (Per protocol analysis) Outcomes AS+AQ [N = 44] AL [N = 42] P * \n n (%) n (%) Early treatment failure 1 (2.3) 0 (0.0) 0.3 Late treatment failure 1 (2.3) 2 (4.8) 0.5 Parasitological or clinical failure by day 14 3 (6.8) 2 (4.8) 0.7 Parasitological or clinical failure by day 28 (PCR uncorrected) 11 (25.0) 11 (26.2) 0.9 Parasitological or clinical failure by day 28 (PCR corrected) 6/43 (14.0) 4/42 (9.5) 0.5 Missing PCR by day 28 (excluded) 1 (2.3) 0 0.3 AS = artesunate; AQ = amodiaquine; AL = artemether-lumefantrine; CD = chlorproguanil-dapsone * \n exact P values \n \n\nBODY.DISCUSSION:\nIn our study population, the parasitological and clinical failure at day 28 was lower in the AS+AQ group than in AL group. Our PCR corrected parasitological failure rate at day 28 is higher than that previously reported from Ghana[8] for AS+AQ (6.6% vs 2.5%) and remarkably higher for AL (13.8% vs 0%). 
The reason for the substantially higher parasitological failure at day 28 for AL compared to the results of a previous study in Ghana[8] and elsewhere[9] is unclear. Dietary differences between our population and the previously studied populations are a possible explanation for this difference; poor absorption of AL could lead to a higher parasitological failure rate than that reported from elsewhere. We did not follow the recommended practice of administering AL with a fatty diet in a clinical trial; instead we only advised caretakers to give fatty food at the time of drug administration. This was a pragmatic decision because in a routine health care setting one can only advise a caretaker to administer AL with a fatty meal. Thus, the higher treatment failure of AL observed in our study compared with others may have been due to the lack of intake of fat along with AL, as is likely to happen frequently in routine clinical practice. Although there was a significant difference in the parasitological failure rate (both recrudescence and re-infections) at day 28 between AL and AS+AQ, there was no difference in the clinical failure between the two groups, perhaps because both combinations contain a fast acting artemisinin. The parasitological or clinical failure of AS+CD by day 14 was 8% and by day 28 (PCR corrected) was 13.8% in our study population. There are no other published studies of the efficacy of AS+CD with which to compare these results. An earlier multicountry trial of CD monotherapy showed a parasitological or clinical failure rate of 5% by day 14 in Nigeria.[10] Our study suggests that adding AS to CD may have little value in West Africa given that the PCR corrected failure rate by day 28 has already reached 13.8%. Surprisingly, the PCR corrected failure rate by day 28 for AL was also 13.8% and this suggests that introducing AL as a replacement for AS+AQ needs to be considered with caution. More evidence on the efficacy of AL in Ghana is needed urgently. In the AL group, all children who had gametocytes on day 0 (n = 11) had cleared gametocytes by day 3 post treatment; gametocytes appeared in the blood of two children between day 14 and 28. In children in the AS+AQ group who had gametocytes on day 0 (n = 10), only one child had gametocytes on days 3 and 7. Two more children developed gametocytaemia by day 7 but all children cleared gametocytes by day 14. In children in the AS+CD group who had gametocytes on day 0 (n = 16), 4 children had gametocytes on day 3 and one child had gametocytes on day 7. Gametocytes appeared in a further child by day 7 but all children had cleared gametocytes by day 14; two children had developed gametocytes between day 14 and 28. It appears that all three ACTs tested in this study are effective at clearing gametocytes and this property of ACTs should have an effect on the transmission of malaria. A slight reduction in the Hb concentration occurred during the early post treatment period in all three treatment groups. Although differences were not remarkable, the recovery of Hb was slowest following treatment with AS+CD. There was no difference in the risk of haemolysis between the three ACTs in G6PD normal children. However, we cannot comment about the potential risk of haemolysis associated with AS+CD in an unselected population because G6PD deficient children were not included in the AS+CD arm. 
Results from recently completed multicentre trials indicate that coformulated AS+CD can cause serious haemolysis in children who are G6PD deficient, so further development of this drug combination has been halted. Our study results support the current policy of AS+AQ as first line treatment for uncomplicated malaria in Ghana. However, the slightly higher incidence of adverse events, particularly vomiting and body pain, associated with use of AS+AQ is a concern. If these adverse events are common in adults as well, they might reduce adherence to a full course of the treatment. There were some reports of adverse events of AQ in the Ghanaian mass media and this has led to the general public being reluctant to accept AS+AQ combination therapy. Although the overall incidence of adverse events in the AS+AQ group is reasonable, an appropriate communication strategy is needed to achieve good compliance, in addition to the introduction of the new fixed combinations of artesunate plus amodiaquine that are currently being developed. We conclude that AS+AQ is an appropriate first line treatment for uncomplicated malaria in Ghana and possibly in neighbouring countries in West Africa. However, the deployment of AS+AQ should be linked to a clear information strategy regarding the potential mild adverse events for health care providers and the general public. The efficacy and safety of AL when given in the context of routine care need to be further studied in Ghana and elsewhere in West Africa.\n\nBODY.SUPPORTING INFORMATION:\nProtocol S1. Trial Protocol (0.29 MB DOC). Checklist S1. CONSORT Checklist (0.06 MB DOC).\n\n**Question:** Compared to artemether-lumefantrine (AL) and Artesunate+chlorproguanil-dapsone (AS+CD) what was the result of Artesunate+amodiaquine (AS+AQ) on the parasitological and clinical failure rate at day 28 post treatment?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
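The malaria record above describes two simple calculations: parasite density estimated as the count per 200 leukocytes multiplied by 40 (an assumed WBC count of 8,000/µL), and treatment arms compared as relative risks of failure by day 28. A minimal sketch of both follows; the function names are illustrative, not the trial's own analysis code.

```python
def parasite_density_per_ul(parasites_per_200_wbc: float, assumed_wbc_per_ul: float = 8000.0) -> float:
    # Scale the count made against 200 leukocytes to the assumed leukocyte count
    # (8000 / 200 = 40, the factor quoted in the methods).
    return parasites_per_200_wbc * (assumed_wbc_per_ul / 200.0)

def relative_risk(events_a: int, n_a: int, events_b: int, n_b: int) -> float:
    # Risk ratio of arm A versus arm B, e.g. AS+AQ versus AL for PCR-corrected failure by day 28.
    return (events_a / n_a) / (events_b / n_b)

# Worked with figures reported in the per-protocol analysis (Tables 2 and 3 of the record above):
print(parasite_density_per_ul(300))                 # 12000.0 parasites/uL
print(round(relative_risk(10, 151, 21, 152), 2))    # 0.48, matching the reported RR for AS+AQ vs AL
```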
1205
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Antioxidative and antiatherogenic effects of flaxseed, α-tocopherol and their combination in diabetic hamsters fed with a high-fat diet\n\n ABSTRACT:\nOxidative stress has previously been shown to play a role in the pathogenesis of diabetes mellitus (DM) and its complications. In the present study, the effects of supplementation with dietary antioxidants, flaxseed and α-tocopherol were investigated in diabetic golden Syrian hamsters fed with a high-fat diet. Thirty-five golden Syrian hamsters were randomly divided into a control group (C) and four diabetic groups (DM, DM + flax, DM + E and DM + Flax + E). The hamsters received four different diets for a 20-week period, as follows: i) Groups C and DM received a high-fat diet (40% energy as fat), deficient in α-linolenic acid (ALA); ii) the DM + Flax group received a high-fat diet enriched with ground flaxseed 15 g/100 g of food, rich in ALA; iii) the DM + E group received a high-fat diet enriched with vitamin E, 40 mg α-tocopherol/100 g of food; and iv) the DM + Flax + E group received a high-fat diet enriched with flaxseed and vitamin E. The results of serum lipid and oxidative stress analysis suggested that the antiatherogenic effect of flaxseed, α-tocopherol and their combination added to a high-fat diet in diabetic hamsters was based primarily on their antioxidative role, demonstrated by decreased serum lipid peroxidation and increased liver glutathione content. Improvements of serum glucose and non-high-density lipoprotein cholesterol (HDL-C) levels were observed and may have contributed to the prevention of diabetic macroangiopathy evidenced in the histopathological examination. The antioxidant effect of flaxseed was similar to that of α-tocopherol in diabetic hamsters fed a high-fat diet and combined supplementation did not appear to bring more benefits than flaxseed alone. Moreover, the high dose of ground flaxseed alone may have a better cardioprotective effect than α-tocopherol in diabetic hamsters by reducing total cholesterol and non-HDL-C levels and increasing HDL-C levels.\n\nBODY.INTRODUCTION:\nOxidative stress is considered to be significant in the pathogenesis of diabetes-induced cardiovascular disease (CVD), which is associated with an abnormal blood lipid profile, insulin resistance and metabolic syndrome (1). Oxidative stress occurs due to a disturbance of the balance between antioxidant defense mechanisms and the levels of reactive oxygen species (ROS) (2,3). Numerous studies have identified significant alterations in plasma antioxidant vitamins (including vitamins C, E and A), antioxidant enzyme systems [superoxide dismutase (SOD), catalase (CAT) and glutathione peroxidase (GSH-Px)] and in lipid peroxidation in diabetes (4,5). 
In chronic diabetes mellitus (DM), the stability and reactive capacity of antioxidants seriously affect the long-term complications caused by oxidative stress (6,7). Antioxidants have been shown to reduce complications in DM by preventing free radical-mediated damage (4,8). Dietary supplementation with antioxidants, including vitamins and phenolic compounds obtained from plants, may help to maintain a desirable pro-oxidative/antioxidative balance (9,10). A variety of dietary sources are presently of considerable interest due to their potential health benefits in relation to numerous diseases, including diabetic disorders (11,12). Flaxseed (also known as linseed and Linum usitatissimum), an edible oil seed/grain from a traditional arable crop, has been acknowledged as a functional food (13), and is the focus of considerable attention due to its unique nutrient components and its potential to prevent CVD (14). In addition of being one of the richest plant sources of α-linolenic acid (ALA; ~22% of whole flaxseed), flaxseed is an essential source of lignans (in particular, secoisolariciresinol diglucoside, SDG) (15). SDG has demonstrated extreme radical scavenging activity, thereby protecting polyunsaturated fatty acids from oxidation (16–18). Despite the beneficial effects of SDG, flaxseed consumption has been found to significantly reduce α- and γ-tocopherol concentrations in rats (19) and only vitamin E supplementation was able to restore the plasma concentrations (20). α-tocopherol supplementation has previously been demonstrated to be beneficial in increasing the levels of antioxidant enzymes such as SOD and GSH-Px, and of reduced GSH in diabetes (21–23); however, little is known about the effect of adding vitamin E to a flaxseed-enriched diet on oxidative stress and aortic atherosclerotic lesions in diabetes. In the present study, the effects of dietary flaxseed, α-tocopherol and their combination were investigated on atherosclerotic progression in diabetic hamsters fed a high-fat diet by examining serum lipid concentrations, oxidative stress markers, aortic cholesterol content and aortic histological aspects.\n\nBODY.MATERIALS AND METHODS.ANIMALS:\nA total of 35 male Golden Syrian hamsters were used in the study. The hamsters were purchased from the Cantacuzino National Institute for Research and Development in Microbiology and Immunology (Bucharest, Romania). This study was approved by the Laboratory Animal Care Committee of Grigore T. Popa University of Medicine and Pharmacy (Iaşi, Romania), and the hamsters were maintained in accordance with the general guidelines for the care and use of laboratory animals recommended by the Council of European Communities (24). The hamsters were caged singly and kept in standard laboratory conditions with a controlled temperature (20±2°C) and a 12 h light/12 h dark cycle. Diabetes was induced in the hamsters using streptozotocin (STZ; catalog no. S0130; Sigma-Aldrich, St. Louis, MO, USA) (25). STZ, at a dose of 50 mg/kg body weight, freshly solved in citrate buffer (50 mM, pH 4.5), was administered intraperitoneally, as a single dose in the morning, following a night of fasting. Citrate buffer (50 mM) was prepared from sodium citrate and citric acid (S.C. Chemical Company, Iaşi, Romania) dissolved in distilled water, with adjustment of the pH to 4.5. The control group (C) received an equivalent quantity of citrate buffer (50 mM), administered intraperitoneally. 
All animals having a blood glucose level >126 mg/dl (7 mmol/l) were considered to be diabetic.\n\nBODY.MATERIALS AND METHODS.GROUPING AND DIETS:\nEach hamster received 15 g food/day and tap water ad libitum. The animals were randomly divided into a control group (C) and four diabetic groups (DM, DM + flax, DM + E and DM + Flax + E). The hamsters received four different diets for a 20-week period, as follows: i) Groups C and DM received a high-fat diet (40% energy as fat), that was deficient in ALA; ii) the DM + Flax group received a high-fat diet enriched with ground flaxseed (Linum usitatissimum; 15/100 g of food), that was rich in ALA; Flaxseed of the Olin variety was provided by the Department of Phytotechny of the Faculty of Agronomy (Iaşi, Romania). iii) the DM + E group received a high-fat diet enriched with vitamin E (40 mg α-tocopherol/100 g of food; Sigma-Aldrich); and iv) the DM + Flax + E group received a high-fat diet enriched with a combination of flaxseed and vitamin E. All four diets had similar carbohydrate, total fiber, protein and fat contents (Table I).\n\nBODY.MATERIALS AND METHODS.ANIMAL NECROPSY AND PROCESSING OF SAMPLES:\nAt the end of the experiment, the animals were sacrificed by cardiac puncture under ketamine anesthesia (100 mg/kg body weight). Blood samples were collected without anticoagulant. Aliquots of serum were frozen and kept at −80°C for subsequent analysis. The liver was immediately removed, rinsed with ice-cold saline, placed in a sealed container, homogenized for biochemical analysis and stored at −20°C until analyzed. The aortas were dissected, rinsed with cold saline and preserved in a phosphate buffer (pH 7.2).\n\nBODY.MATERIALS AND METHODS.SERUM GLUCOSE:\nSerum glucose was measured by enzymatic colorimetric methods on a Tecan microplate reader (Tecan SunriseTM Touchscreen, Männedorf, Switzerland) using the AD1A716 commercially available kits (Audit Diagnostics, Cork, Ireland).\n\nBODY.MATERIALS AND METHODS.SERUM LIPID PROFILE:\nSerum total cholesterol (TC), high-density lipoprotein cholesterol (HDL-C) and triglyceride (TG) levels were measured by enzymatic colorimetric methods on a Tecan microplate reader using the AD1A704, AD1A602 and AD1A306 commercially available kits (Audit Diagnostics). Non-HDL-C levels were calculated by subtracting the level of HDL-C from that of TC.\n\nBODY.MATERIALS AND METHODS.EVALUATION OF CHOLESTEROL CONTENT IN THE AORTA:\nThe aortas of the hamsters in each group were removed. A section of the aortic arch of each animal was soaked in a 10% (v/v) formal saline solution and was stored on ice prior to being stained and fixed. The remainder of the aorta was soaked in phosphate-buffered saline (PBS) and homogenized for biochemical analysis. The lipid content was extracted by treatment with chloroform and methanol followed by centrifugation (26). The extracted lipid was dissolved in ethanol and measured using a AD1A704 cholesterol assay kit (Audit Diagnostics) utilizing absorbance spectrophotometry (Microlab 300, Vital Scientific, Spankeren, Netherlands) at 500 nm wavelength, and the amount quantified as μg/mg wet tissue.\n\nBODY.MATERIALS AND METHODS.PARAMETERS OF OXIDATIVE STRESS:\nSerum and liver thiobarbituric acid reactive substances (TBARS) are indices of lipid peroxidation and oxidative stress. TBARS were determined using the method described by Phelps and Harris (27). The quantity of the TBARS was measured using a Tecan microplate reader at a wavelength of 540 nm. 
Liver levels of reduced GSH were determined by an enzymatic reaction, based on the oxidation of GSH by 5,5′-dithiobis(2-nitrobenzoic acid) (DTNB), in the presence of GSH reductase and methylenetetrahydrofolate reductase, monitored at a wavelength of 405 nm (28). Liver SOD levels were determined using the method described by Minami and Yoshikawa (29).\n\nBODY.MATERIALS AND METHODS.MORPHOLOGICAL STUDY OF THE AORTA:\nFor evaluation of aortic atherosclerotic lesions by light microscopy, frozen sections of 10 μm thickness were taken from the region of the proximal aorta. The extent of atherosclerosis was visually assessed after cutting the frozen sections at a size of 8–10 mm, fixing them in formalin, briefly washing them with running tap water for 1–10 mins and rinsing with 60% isopropanol prior to staining the sections with Oil Red O (3 mg/ml Oil Red O in 60% acetone and 40% water; Sigma-Aldrich). The samples were then counterstained with hematoxylin and the samples were examined under a light microscope (Olympus BX40, Olympus, Tokyo, Japan).\n\nBODY.MATERIALS AND METHODS.STATISTICAL ANALYSIS:\nData are expressed as the mean ± standard deviation (SD). Univariate statistical analysis was performed using the Student's t-test and Bonferroni's multiple comparison test (Statistical Software Package SPSS®, version 13, SPSS Inc., Chicago, IL, USA).\n\nBODY.RESULTS.SERUM GLUCOSE:\nThe addition of flaxseed or/and vitamin E to the high-fat diet led to significant reductions of serum glucose levels compared with those in the DM group (Table II).\n\nBODY.RESULTS.SERUM LIPID PROFILE PARAMETERS:\nDiabetes resulted in significant increases in the serum concentrations of TC, TG and non-HDL-C, and a significant reduction of HDL-C concentrations compared with those in the control group. Flaxseed supplementation significantly decreased serum TC and non-HDL-C levels and increased HDL-C levels (Table II) compared with those in the unsupplemented DM group. The addition of vitamin E to the flaxseed-rich diet significantly decreased TC, non-HDL-C and TG levels but did not improved the HDL-C level in diabetic hamsters.\n\nBODY.RESULTS.CHOLESTEROL CONTENT OF THE AORTA:\nAortic cholesterol contents were significantly increased in the DM group fed with a standard diet compared with those in the control group (Table II). Flaxseed or/and vitamin E supplementation of the diet significantly decreased the aortic cholesterol content in the hamsters with DM (Table II).\n\nBODY.RESULTS.PARAMETERS OF OXIDATIVE STRESS:\nThe main findings are summarized in Table III. Serum and liver TBARS, as indices of lipid peroxidation, were significantly increased in the DM group compared with those in the control group, while the addition of flaxseed or a combination of flaxseed and α-tocopherol to the high-fat diet of the hamsters with DM significantly decreased these two markers compared with those in the unsupplemented DM group. Liver GSH levels were significantly diminished in the DM group compared with those in the control group. Flaxseed, α-tocopherol and their combination added to the high-fat diet significantly increased the liver GSH levels in the diabetic hamsters. The increased in liver GSH content was significantly higher in the DM + Flax + E group compared with that in the DM + Flax group. 
No significant differences of SOD activity in the liver were identified between groups.\n\nBODY.RESULTS.HISTOLOGICAL CHANGES IN AORTIC TISSUES.CONTROL GROUP:\nThe histologic examination of aortal fragments with O-Rd staining revealed discrete thickening of the aortic intimal layer (Fig. 1A).\n\nBODY.RESULTS.HISTOLOGICAL CHANGES IN AORTIC TISSUES.DIABETIC GROUPS:\nThe histologic examination of aortic fragments with O-Rd staining from diabetic hamsters revealed morphologic changes specific to diabetic macroangiopathy (Fig. 1B). Intimal thickening, endothelial discontinuities, lipid drops in the subintimal space, and alterations to the middle layer of the arterial wall were observed. The addition of flaxseed and/or vitamin E to the high-fat diet in diabetic hamsters improved the histological aspects of aortic atherosclerosis described in the diabetic hamsters fed an unsupplemented diet. The aortal tissue in the supplemented groups was characterized by slightly swollen endothelium, with moderate intimal thickening and moderate lipid infiltration of the intima (Fig. 2).\n\nBODY.DISCUSSION:\nIn DM, oxidative stress occurs, primarily due to the increased production of oxygen free radicals and/or a reduction in the antioxidant defense. Numerous studies have demonstrated that an increase in oxidative stress is implicated in the pathogenesis and progression of diabetic vascular lesions (1,4,6). In the present study, the antioxidative and antiatherosclerotic effects of supplementation with flaxseed (15 g Linum usitatissimum/100 g food), α-tocopherol (40 mg α-tocopherol/100 g food), and their combination were investigated in an animal model that replicates human atherosclerotic lesions. Golden Syrian hamsters appear more predisposed to develop atherosclerotic lesions than rats (30), and present many similarities with the lipoprotein metabolism in humans (31). In hamsters, a 15% flaxseed-supplemented diet is similar in energetic load to the 40 g/day dosage used in human trials. In humans, high doses of 40–50 g flaxseed/day, which corresponds to ~10% of total energy intake have been used in trials (32). In the present study, it was hypothesized that adding vitamin E to the flaxseed diet would lead to improved results for oxidative stress and the prevention of atherosclerotic lesions as there is evidence that the consumption of flaxseed reduces vitamin E (19), and only vitamin E supplementation is able to restore the plasma concentration (20). The present study demonstrated that all three diets (flaxseed, α-tocopherol or their combination) had an antiatherogenic effect in diabetic hamsters, suggested by the prevention of histological lesions (reduced intimal lipid infiltration and discrete alterations to the middle layer of the arterial wall) and by the decreased aortic cholesterol content in supplemented diabetic hamsters compared with those in the unsupplemented diabetic group. The antiatherogenic mechanism of the diets used in our experiment may be associated with changes in metabolic parameters, particularly in glucose concentrations and the lipid profile, and oxidative stress. Flaxseed, α-tocopherol, and the combination of flaxseed and α-tocopherol added to the high-fat diet resulted in similar reductions in serum lipid peroxidation in the diabetic hamsters. Liver lipid peroxidation was significantly diminished by supplementation of the diet by flaxseed alone or in combination with α-tocopherol, but not by α-tocopherol supplementation alone. 
Moreover, all three diets increased the liver GSH content and the highest concentrations were obtained with the combined diet. These results suggest that the higher antioxidant effects of flaxseed plus vitamin E are due to their synergic antioxidant actions. Flaxseed is the richest source of the lignan SDG. SDG isolated from flaxseed has oxygen radical-scavenging properties that have been demonstrated in vitro by direct hydroxyl radical-scavenging activity (18) or by inhibition of lipid peroxidation (16). Vitamin E is known as a major antioxidant in the cellular antioxidant defense system, which acts by interrupting the propagation of free radical chain reactions. The effect of vitamin E against the prooxidant status associated with DM has been demonstrated by the suppression of diabetes-induced increases in ROS and diminishment of lipid peroxidation (33,34). The synergistic effect of flaxseed oil and vitamin E on oxidative stress has also been observed by Deng et al (35) in male Wistar rats fed a high-fat diet. Similar to the findings of previous studies (36), the diabetic hamsters had higher serum concentrations of serum glucose, TC, TG and non-HDL-C, and lower HDL-C concentrations compared with the control group. The addition of flaxseed, α-tocopherol or their combination to the high-fat diet reduced serum concentrations of glucose, TC and non-HDL in diabetic male hamsters (37,38). The hypocholesterolemic effects of flaxseed may be attributed to ALA, lignans (SDG) or the fiber content (39). In agreement with other studies (40), vitamin E significantly reduced serum TG levels in diabetic hamsters fed an atherogenic diet. In the present study, the addition of vitamin E alone or in combination with flaxseed to the high-fat diet resulted in lower serum concentrations of TC, non-HDL and TG but failed to correct the HDL-C concentrations in diabetic hamsters. Another study has reported that in hamsters with experimental STZ-induced DM fed high a saturated fat and cholesterol diet, supplementation with α-tocopherol increased TC and did not change HDL-C (40). The results of the present study suggest that the antiatherogenic effect of flaxseed, α-tocopherol and their combination added to a high-fat diet in diabetic hamsters is based primarily on the antioxidative role of these components. The improvement of serum glucose and non-HDL were evident and could contribute to the prevention of diabetic macroangiopathy. The antioxidant effect of flaxseed was similar to that of α-tocopherol in diabetic hamsters fed a high-fat diet and the combined diet (Flax + E) did not seem to confer more benefits than flaxseed alone. Moreover, the high dose of ground flaxseed alone may have a better cardioprotective effect than α-tocopherol in diabetic hamsters due to the reduction of TC and non-HDL levels and increase in HDL-C levels. Oxidative stress has been proposed as a pathogenic mechanism of diabetic macrovascular complications, and dietary strategies that ameliorate oxidative stress may represent a promising approach to the prevention of atherosclerosis. The flaxseed-enriched diet may regulate the generation of reactive oxygen species and the metabolism of glucose and lipids, and could consequently play a role in the prevention of major cardiovascular complications in diabetic states. 
Ground flaxseed combined with α-tocopherol does not appear to confer a clear advantage compared with flaxseed alone in the reduction of oxidative stress and vascular lesion formation in experimental diabetes.\n\n**Question:** Compared to received a high-fat diet (40% energy as fat), deficient in α-linolenic acid (ALA) what was the result of high-fat diet enriched with ground flaxseed 15 g/100 g of food, rich in ALA; on HDL-C levels?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ]
significantly increased
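The flaxseed record above derives non-HDL-C by subtracting HDL-C from TC and compares groups with Student's t-test followed by Bonferroni correction. A minimal sketch of those two steps follows; the per-animal values are placeholders, since the article reports only group means and SDs.

```python
# Illustrative sketch of the calculations described in the record above; values are invented.
from scipy import stats

def non_hdl_c(total_cholesterol: float, hdl_c: float) -> float:
    # Non-HDL-C as defined in the methods: total cholesterol minus HDL-C (same units).
    return total_cholesterol - hdl_c

dm_hdl = [28.0, 30.5, 26.4, 29.1, 27.8, 31.0, 28.9]        # placeholder HDL-C, DM group (mg/dl)
dm_flax_hdl = [34.2, 36.0, 33.5, 35.8, 37.1, 34.9, 36.4]   # placeholder HDL-C, DM + Flax group

t_stat, p_value = stats.ttest_ind(dm_flax_hdl, dm_hdl)

# Bonferroni correction for the four diabetic-group comparisons mentioned in the methods.
p_adjusted = min(1.0, p_value * 4)

print(non_hdl_c(185.0, 29.0))          # 156.0
print(round(p_value, 4), round(p_adjusted, 4))
```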
1207
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Antioxidative and antiatherogenic effects of flaxseed, α-tocopherol and their combination in diabetic hamsters fed with a high-fat diet\n\n ABSTRACT:\nOxidative stress has previously been shown to play a role in the pathogenesis of diabetes mellitus (DM) and its complications. In the present study, the effects of supplementation with dietary antioxidants, flaxseed and α-tocopherol were investigated in diabetic golden Syrian hamsters fed with a high-fat diet. Thirty-five golden Syrian hamsters were randomly divided into a control group (C) and four diabetic groups (DM, DM + flax, DM + E and DM + Flax + E). The hamsters received four different diets for a 20-week period, as follows: i) Groups C and DM received a high-fat diet (40% energy as fat), deficient in α-linolenic acid (ALA); ii) the DM + Flax group received a high-fat diet enriched with ground flaxseed 15 g/100 g of food, rich in ALA; iii) the DM + E group received a high-fat diet enriched with vitamin E, 40 mg α-tocopherol/100 g of food; and iv) the DM + Flax + E group received a high-fat diet enriched with flaxseed and vitamin E. The results of serum lipid and oxidative stress analysis suggested that the antiatherogenic effect of flaxseed, α-tocopherol and their combination added to a high-fat diet in diabetic hamsters was based primarily on their antioxidative role, demonstrated by decreased serum lipid peroxidation and increased liver glutathione content. Improvements of serum glucose and non-high-density lipoprotein cholesterol (HDL-C) levels were observed and may have contributed to the prevention of diabetic macroangiopathy evidenced in the histopathological examination. The antioxidant effect of flaxseed was similar to that of α-tocopherol in diabetic hamsters fed a high-fat diet and combined supplementation did not appear to bring more benefits than flaxseed alone. Moreover, the high dose of ground flaxseed alone may have a better cardioprotective effect than α-tocopherol in diabetic hamsters by reducing total cholesterol and non-HDL-C levels and increasing HDL-C levels.\n\nBODY.INTRODUCTION:\nOxidative stress is considered to be significant in the pathogenesis of diabetes-induced cardiovascular disease (CVD), which is associated with an abnormal blood lipid profile, insulin resistance and metabolic syndrome (1). Oxidative stress occurs due to a disturbance of the balance between antioxidant defense mechanisms and the levels of reactive oxygen species (ROS) (2,3). Numerous studies have identified significant alterations in plasma antioxidant vitamins (including vitamins C, E and A), antioxidant enzyme systems [superoxide dismutase (SOD), catalase (CAT) and glutathione peroxidase (GSH-Px)] and in lipid peroxidation in diabetes (4,5). 
In chronic diabetes mellitus (DM), the stability and reactive capacity of antioxidants seriously affect the long-term complications caused by oxidative stress (6,7). Antioxidants have been shown to reduce complications in DM by preventing free radical-mediated damage (4,8). Dietary supplementation with antioxidants, including vitamins and phenolic compounds obtained from plants, may help to maintain a desirable pro-oxidative/antioxidative balance (9,10). A variety of dietary sources are presently of considerable interest due to their potential health benefits in relation to numerous diseases, including diabetic disorders (11,12). Flaxseed (also known as linseed and Linum usitatissimum), an edible oil seed/grain from a traditional arable crop, has been acknowledged as a functional food (13), and is the focus of considerable attention due to its unique nutrient components and its potential to prevent CVD (14). In addition of being one of the richest plant sources of α-linolenic acid (ALA; ~22% of whole flaxseed), flaxseed is an essential source of lignans (in particular, secoisolariciresinol diglucoside, SDG) (15). SDG has demonstrated extreme radical scavenging activity, thereby protecting polyunsaturated fatty acids from oxidation (16–18). Despite the beneficial effects of SDG, flaxseed consumption has been found to significantly reduce α- and γ-tocopherol concentrations in rats (19) and only vitamin E supplementation was able to restore the plasma concentrations (20). α-tocopherol supplementation has previously been demonstrated to be beneficial in increasing the levels of antioxidant enzymes such as SOD and GSH-Px, and of reduced GSH in diabetes (21–23); however, little is known about the effect of adding vitamin E to a flaxseed-enriched diet on oxidative stress and aortic atherosclerotic lesions in diabetes. In the present study, the effects of dietary flaxseed, α-tocopherol and their combination were investigated on atherosclerotic progression in diabetic hamsters fed a high-fat diet by examining serum lipid concentrations, oxidative stress markers, aortic cholesterol content and aortic histological aspects.\n\nBODY.MATERIALS AND METHODS.ANIMALS:\nA total of 35 male Golden Syrian hamsters were used in the study. The hamsters were purchased from the Cantacuzino National Institute for Research and Development in Microbiology and Immunology (Bucharest, Romania). This study was approved by the Laboratory Animal Care Committee of Grigore T. Popa University of Medicine and Pharmacy (Iaşi, Romania), and the hamsters were maintained in accordance with the general guidelines for the care and use of laboratory animals recommended by the Council of European Communities (24). The hamsters were caged singly and kept in standard laboratory conditions with a controlled temperature (20±2°C) and a 12 h light/12 h dark cycle. Diabetes was induced in the hamsters using streptozotocin (STZ; catalog no. S0130; Sigma-Aldrich, St. Louis, MO, USA) (25). STZ, at a dose of 50 mg/kg body weight, freshly solved in citrate buffer (50 mM, pH 4.5), was administered intraperitoneally, as a single dose in the morning, following a night of fasting. Citrate buffer (50 mM) was prepared from sodium citrate and citric acid (S.C. Chemical Company, Iaşi, Romania) dissolved in distilled water, with adjustment of the pH to 4.5. The control group (C) received an equivalent quantity of citrate buffer (50 mM), administered intraperitoneally. 
All animals having a blood glucose level >126 mg/dl (7 mmol/l) were considered to be diabetic.\n\nBODY.MATERIALS AND METHODS.GROUPING AND DIETS:\nEach hamster received 15 g food/day and tap water ad libitum. The animals were randomly divided into a control group (C) and four diabetic groups (DM, DM + flax, DM + E and DM + Flax + E). The hamsters received four different diets for a 20-week period, as follows: i) Groups C and DM received a high-fat diet (40% energy as fat), that was deficient in ALA; ii) the DM + Flax group received a high-fat diet enriched with ground flaxseed (Linum usitatissimum; 15/100 g of food), that was rich in ALA; Flaxseed of the Olin variety was provided by the Department of Phytotechny of the Faculty of Agronomy (Iaşi, Romania). iii) the DM + E group received a high-fat diet enriched with vitamin E (40 mg α-tocopherol/100 g of food; Sigma-Aldrich); and iv) the DM + Flax + E group received a high-fat diet enriched with a combination of flaxseed and vitamin E. All four diets had similar carbohydrate, total fiber, protein and fat contents (Table I).\n\nBODY.MATERIALS AND METHODS.ANIMAL NECROPSY AND PROCESSING OF SAMPLES:\nAt the end of the experiment, the animals were sacrificed by cardiac puncture under ketamine anesthesia (100 mg/kg body weight). Blood samples were collected without anticoagulant. Aliquots of serum were frozen and kept at −80°C for subsequent analysis. The liver was immediately removed, rinsed with ice-cold saline, placed in a sealed container, homogenized for biochemical analysis and stored at −20°C until analyzed. The aortas were dissected, rinsed with cold saline and preserved in a phosphate buffer (pH 7.2).\n\nBODY.MATERIALS AND METHODS.SERUM GLUCOSE:\nSerum glucose was measured by enzymatic colorimetric methods on a Tecan microplate reader (Tecan SunriseTM Touchscreen, Männedorf, Switzerland) using the AD1A716 commercially available kits (Audit Diagnostics, Cork, Ireland).\n\nBODY.MATERIALS AND METHODS.SERUM LIPID PROFILE:\nSerum total cholesterol (TC), high-density lipoprotein cholesterol (HDL-C) and triglyceride (TG) levels were measured by enzymatic colorimetric methods on a Tecan microplate reader using the AD1A704, AD1A602 and AD1A306 commercially available kits (Audit Diagnostics). Non-HDL-C levels were calculated by subtracting the level of HDL-C from that of TC.\n\nBODY.MATERIALS AND METHODS.EVALUATION OF CHOLESTEROL CONTENT IN THE AORTA:\nThe aortas of the hamsters in each group were removed. A section of the aortic arch of each animal was soaked in a 10% (v/v) formal saline solution and was stored on ice prior to being stained and fixed. The remainder of the aorta was soaked in phosphate-buffered saline (PBS) and homogenized for biochemical analysis. The lipid content was extracted by treatment with chloroform and methanol followed by centrifugation (26). The extracted lipid was dissolved in ethanol and measured using a AD1A704 cholesterol assay kit (Audit Diagnostics) utilizing absorbance spectrophotometry (Microlab 300, Vital Scientific, Spankeren, Netherlands) at 500 nm wavelength, and the amount quantified as μg/mg wet tissue.\n\nBODY.MATERIALS AND METHODS.PARAMETERS OF OXIDATIVE STRESS:\nSerum and liver thiobarbituric acid reactive substances (TBARS) are indices of lipid peroxidation and oxidative stress. TBARS were determined using the method described by Phelps and Harris (27). The quantity of the TBARS was measured using a Tecan microplate reader at a wavelength of 540 nm. 
Liver levels of reduced GSH were determined by an enzymatic reaction, based on the oxidation of GSH by 5,5′-dithiobis(2-nitrobenzoic acid) (DTNB), in the presence of GSH reductase and methylenetetrahydrofolate reductase, monitored at a wavelength of 405 nm (28). Liver SOD levels were determined using the method described by Minami and Yoshikawa (29).\n\nBODY.MATERIALS AND METHODS.MORPHOLOGICAL STUDY OF THE AORTA:\nFor evaluation of aortic atherosclerotic lesions by light microscopy, frozen sections of 10 μm thickness were taken from the region of the proximal aorta. The extent of atherosclerosis was visually assessed after cutting the frozen sections at a size of 8–10 mm, fixing them in formalin, briefly washing them with running tap water for 1–10 mins and rinsing with 60% isopropanol prior to staining the sections with Oil Red O (3 mg/ml Oil Red O in 60% acetone and 40% water; Sigma-Aldrich). The samples were then counterstained with hematoxylin and the samples were examined under a light microscope (Olympus BX40, Olympus, Tokyo, Japan).\n\nBODY.MATERIALS AND METHODS.STATISTICAL ANALYSIS:\nData are expressed as the mean ± standard deviation (SD). Univariate statistical analysis was performed using the Student's t-test and Bonferroni's multiple comparison test (Statistical Software Package SPSS®, version 13, SPSS Inc., Chicago, IL, USA).\n\nBODY.RESULTS.SERUM GLUCOSE:\nThe addition of flaxseed or/and vitamin E to the high-fat diet led to significant reductions of serum glucose levels compared with those in the DM group (Table II).\n\nBODY.RESULTS.SERUM LIPID PROFILE PARAMETERS:\nDiabetes resulted in significant increases in the serum concentrations of TC, TG and non-HDL-C, and a significant reduction of HDL-C concentrations compared with those in the control group. Flaxseed supplementation significantly decreased serum TC and non-HDL-C levels and increased HDL-C levels (Table II) compared with those in the unsupplemented DM group. The addition of vitamin E to the flaxseed-rich diet significantly decreased TC, non-HDL-C and TG levels but did not improved the HDL-C level in diabetic hamsters.\n\nBODY.RESULTS.CHOLESTEROL CONTENT OF THE AORTA:\nAortic cholesterol contents were significantly increased in the DM group fed with a standard diet compared with those in the control group (Table II). Flaxseed or/and vitamin E supplementation of the diet significantly decreased the aortic cholesterol content in the hamsters with DM (Table II).\n\nBODY.RESULTS.PARAMETERS OF OXIDATIVE STRESS:\nThe main findings are summarized in Table III. Serum and liver TBARS, as indices of lipid peroxidation, were significantly increased in the DM group compared with those in the control group, while the addition of flaxseed or a combination of flaxseed and α-tocopherol to the high-fat diet of the hamsters with DM significantly decreased these two markers compared with those in the unsupplemented DM group. Liver GSH levels were significantly diminished in the DM group compared with those in the control group. Flaxseed, α-tocopherol and their combination added to the high-fat diet significantly increased the liver GSH levels in the diabetic hamsters. The increased in liver GSH content was significantly higher in the DM + Flax + E group compared with that in the DM + Flax group. 
No significant differences of SOD activity in the liver were identified between groups.\n\nBODY.RESULTS.HISTOLOGICAL CHANGES IN AORTIC TISSUES.CONTROL GROUP:\nThe histologic examination of aortal fragments with O-Rd staining revealed discrete thickening of the aortic intimal layer (Fig. 1A).\n\nBODY.RESULTS.HISTOLOGICAL CHANGES IN AORTIC TISSUES.DIABETIC GROUPS:\nThe histologic examination of aortic fragments with O-Rd staining from diabetic hamsters revealed morphologic changes specific to diabetic macroangiopathy (Fig. 1B). Intimal thickening, endothelial discontinuities, lipid drops in the subintimal space, and alterations to the middle layer of the arterial wall were observed. The addition of flaxseed and/or vitamin E to the high-fat diet in diabetic hamsters improved the histological aspects of aortic atherosclerosis described in the diabetic hamsters fed an unsupplemented diet. The aortal tissue in the supplemented groups was characterized by slightly swollen endothelium, with moderate intimal thickening and moderate lipid infiltration of the intima (Fig. 2).\n\nBODY.DISCUSSION:\nIn DM, oxidative stress occurs, primarily due to the increased production of oxygen free radicals and/or a reduction in the antioxidant defense. Numerous studies have demonstrated that an increase in oxidative stress is implicated in the pathogenesis and progression of diabetic vascular lesions (1,4,6). In the present study, the antioxidative and antiatherosclerotic effects of supplementation with flaxseed (15 g Linum usitatissimum/100 g food), α-tocopherol (40 mg α-tocopherol/100 g food), and their combination were investigated in an animal model that replicates human atherosclerotic lesions. Golden Syrian hamsters appear more predisposed to develop atherosclerotic lesions than rats (30), and present many similarities with the lipoprotein metabolism in humans (31). In hamsters, a 15% flaxseed-supplemented diet is similar in energetic load to the 40 g/day dosage used in human trials. In humans, high doses of 40–50 g flaxseed/day, which corresponds to ~10% of total energy intake have been used in trials (32). In the present study, it was hypothesized that adding vitamin E to the flaxseed diet would lead to improved results for oxidative stress and the prevention of atherosclerotic lesions as there is evidence that the consumption of flaxseed reduces vitamin E (19), and only vitamin E supplementation is able to restore the plasma concentration (20). The present study demonstrated that all three diets (flaxseed, α-tocopherol or their combination) had an antiatherogenic effect in diabetic hamsters, suggested by the prevention of histological lesions (reduced intimal lipid infiltration and discrete alterations to the middle layer of the arterial wall) and by the decreased aortic cholesterol content in supplemented diabetic hamsters compared with those in the unsupplemented diabetic group. The antiatherogenic mechanism of the diets used in our experiment may be associated with changes in metabolic parameters, particularly in glucose concentrations and the lipid profile, and oxidative stress. Flaxseed, α-tocopherol, and the combination of flaxseed and α-tocopherol added to the high-fat diet resulted in similar reductions in serum lipid peroxidation in the diabetic hamsters. Liver lipid peroxidation was significantly diminished by supplementation of the diet by flaxseed alone or in combination with α-tocopherol, but not by α-tocopherol supplementation alone. 
Moreover, all three diets increased the liver GSH content and the highest concentrations were obtained with the combined diet. These results suggest that the higher antioxidant effects of flaxseed plus vitamin E are due to their synergic antioxidant actions. Flaxseed is the richest source of the lignan SDG. SDG isolated from flaxseed has oxygen radical-scavenging properties that have been demonstrated in vitro by direct hydroxyl radical-scavenging activity (18) or by inhibition of lipid peroxidation (16). Vitamin E is known as a major antioxidant in the cellular antioxidant defense system, which acts by interrupting the propagation of free radical chain reactions. The effect of vitamin E against the prooxidant status associated with DM has been demonstrated by the suppression of diabetes-induced increases in ROS and diminishment of lipid peroxidation (33,34). The synergistic effect of flaxseed oil and vitamin E on oxidative stress has also been observed by Deng et al (35) in male Wistar rats fed a high-fat diet. Similar to the findings of previous studies (36), the diabetic hamsters had higher serum concentrations of serum glucose, TC, TG and non-HDL-C, and lower HDL-C concentrations compared with the control group. The addition of flaxseed, α-tocopherol or their combination to the high-fat diet reduced serum concentrations of glucose, TC and non-HDL in diabetic male hamsters (37,38). The hypocholesterolemic effects of flaxseed may be attributed to ALA, lignans (SDG) or the fiber content (39). In agreement with other studies (40), vitamin E significantly reduced serum TG levels in diabetic hamsters fed an atherogenic diet. In the present study, the addition of vitamin E alone or in combination with flaxseed to the high-fat diet resulted in lower serum concentrations of TC, non-HDL and TG but failed to correct the HDL-C concentrations in diabetic hamsters. Another study has reported that in hamsters with experimental STZ-induced DM fed high a saturated fat and cholesterol diet, supplementation with α-tocopherol increased TC and did not change HDL-C (40). The results of the present study suggest that the antiatherogenic effect of flaxseed, α-tocopherol and their combination added to a high-fat diet in diabetic hamsters is based primarily on the antioxidative role of these components. The improvement of serum glucose and non-HDL were evident and could contribute to the prevention of diabetic macroangiopathy. The antioxidant effect of flaxseed was similar to that of α-tocopherol in diabetic hamsters fed a high-fat diet and the combined diet (Flax + E) did not seem to confer more benefits than flaxseed alone. Moreover, the high dose of ground flaxseed alone may have a better cardioprotective effect than α-tocopherol in diabetic hamsters due to the reduction of TC and non-HDL levels and increase in HDL-C levels. Oxidative stress has been proposed as a pathogenic mechanism of diabetic macrovascular complications, and dietary strategies that ameliorate oxidative stress may represent a promising approach to the prevention of atherosclerosis. The flaxseed-enriched diet may regulate the generation of reactive oxygen species and the metabolism of glucose and lipids, and could consequently play a role in the prevention of major cardiovascular complications in diabetic states. 
Ground flaxseed combined with α-tocopherol does not appear to confer a clear advantage compared with flaxseed alone in the reduction of oxidative stress and vascular lesion formation in experimental diabetes.\n\n**Question:** Compared to received a high-fat diet (40% energy as fat), deficient in α-linolenic acid (ALA) what was the result of high-fat diet enriched with ground flaxseed 15 g/100 g of food, rich in ALA; or enriched with vitamin E, 40 mg α-tocopherol/100 g of food on the aortic cholesterol content?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly decreased\n" } ]
significantly decreased
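The completion in the record above answers in the past tense ("significantly decreased") while its prompt asks for one of the fixed strings "significantly increase", "no significant difference" or "significantly decrease", and the Label column uses the past-tense form. Any scorer built on this dump therefore has to normalise free-text answers onto the label classes before comparing. Below is a minimal Python sketch of one way to do that; the `CANONICAL` spellings are assumed from the rows visible here, and `normalize_answer` is an illustrative helper, not part of the dataset or of any published evaluation code.

```python
import re
from typing import Optional

# Label classes assumed from the visible rows of this dump; adjust if the
# full dataset uses other spellings.
CANONICAL = [
    "significantly increased",
    "no significant difference",
    "significantly decreased",
]

def normalize_answer(text: str) -> Optional[str]:
    """Map a free-text completion such as '**Answer:** significantly decreased'
    onto one of the canonical label classes, or None if nothing matches."""
    t = re.sub(r"[*#]", " ", text).lower().replace("answer:", " ")
    if "no significant" in t or "no difference" in t:
        return "no significant difference"
    if "increas" in t:   # covers 'increase' and 'increased'
        return "significantly increased"
    if "decreas" in t:   # covers 'decrease' and 'decreased'
        return "significantly decreased"
    return None

# The record above: the normalised completion agrees with its Label.
assert normalize_answer("**Answer:** significantly decreased\n") == "significantly decreased"
# Sanity check that every canonical class maps to itself.
assert all(normalize_answer(c) == c for c in CANONICAL)
```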
1209
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: The display effects of patients’ self-assessment on traumatic acute pain on the proportion and timing of analgesics administration in the emergency department\n\n ABSTRACT.BACKGROUND:\nAcute pain assessment in the emergency department (ED) is important in particular during the triage process. Early pain assessment and management improve outcome. The objective of this study was to determine the effects of documentation and display of patient's self-assessment of pain using numerical rating scale (NRS) on analgesic use among adult trauma patients in ED.\n\nABSTRACT.METHODS:\nA randomized control trial was conducted recruiting 216 trauma patients who presented to ED of two tertiary centers. Pain score was done using NRS for all patients. They were randomized into pain score display group or not displayed in the control. The outcome measured were proportion of patients receiving analgesics and timing from triage to analgesic administration.\n\nABSTRACT.RESULTS:\nThe proportion of patients who received analgesics when pain score was displayed was 6.5% more than when pain score was not displayed. This difference was however not statistically significant. However, stratified categorical analysis using chi-square showed that the displayed severe pain group was 1.3 times more likely to receive analgesics compared to the non-displayed group. The mean timing to analgesic administration for the displayed and non-displayed groups were 81.3 ± 41.2 (95% C.I 65.9, 96.7) and 88.7 ± 45.4 (95% C.I 69.0, 108.3), respectively (p > 0.05).\n\nABSTRACT.CONCLUSIONS:\nThe proportion of patients who received analgesics increased when NRS was displayed. However, the pain display has no significant effect on the timing of analgesics.\n\nBODY.1 BACKGROUND:\nThe usual scenario in the emergency department (ED) is an overworked doctor, carefully putting history together, examination, and investigations, to arrive at a diagnosis. This is often done with much time constraints and poor attention paid to acute pain management. However, a physician's primary duty is to comfort, manage, and reduce the suffering of patients. This study attempted to address this contradiction, by introducing a simple intervention in the ED. This is an effort to evaluate the effect of displaying pain score, without any other intervention, on analgesic use. As pain score is recommended as a vital sign in many international protocols, it is in a way asking if we can improve acute pain management if we implement the pain score display at triaging in the ED.\n\nBODY.2 OBJECTIVES:\nThe main aim of this study is to determine the effect of documentation and display of patient self-assessment of pain using numerical rating scale (NRS) on analgesic administration. 
Specifically, we are looking into the effects of numerical pain score display on the proportion of patients receiving analgesics and the timing of analgesic use.\n\nBODY.3 METHODS:\nThis was a cross-sectional randomization study in the ED of two tertiary teaching hospitals with an average annual patient attendance of 140,000. This study was granted ethical approval by the research and ethical committee of the institution. The following were the inclusion criteria: (a) Adult ≥ 20 years old (b) Patients complaining of acute pain secondary to trauma (c) Patients triaged to non-critical zone only (d) GCS 15/15 The following were the exclusion criteria: (a) Patients with language barrier (b) Patients who were intoxicated (c) Patients with altered mental status or psychiatric illness (d) Patient under police custody (e) Patients with potentially life-threatening diseases or injuries (f) Non-traumatic causes of acute pain Patients were carefully selected among whom who have likelihood of presenting with significant acute pain secondary to trauma, while avoiding interference with evaluation and care of patients with potentially life-threatening illness or injury. All patients attending the ED would be triaged by a triage officer at the entrance of the department. Critical patients would be admitted to the resuscitation bay immediately after the triage decision had been made. The semi-critical and non-critical patients would be triaged further including the pain scoring assessment. Acute pain assessment by using NRS is a part of a vital sign monitoring parameter included into the triage protocol of the department. Patients would be given analgesics at the triage counter if they presented with moderate to severe pain. Patients would be up-triaged and admitted to the treatment cubicle if they developed worsening of pain while waiting to be seen by doctors. Only patients who were able to give consent and had a clear mind to understand and do pain assessment were included in the study. Randomization was done prior to selection of patients. Patients were assigned to display pain score and non-display pain score groups, using a computer-generated randomization method, in blocks of four. These randomized selections were sealed in opaque brown, non-transparent, numbered envelop and arranged in ascending order. The envelopes were opened only after pain assessment was done and documented by researcher. The researcher and the triage officer were blinded to the randomization while doing the pain score and initial data collection. This was important to avoid bias when pain score was being performed. Verbal consent was obtained from each patient. The sample size was calculated by using the following parameters:(1)Po=0.5(2)Pi=0.3where α = 0.05 (95% CI), study power = 0.8 (80% ), two proportion samples Po = 0.5 and Pi = 0.3, and M = 1. The sample size required were 220 patients (110 per arm) including the 10% dropout.\n\nBODY.3 METHODS.3.1 DATA COLLECTION:\nThe NRS was used as the pain scale assessment tool as this has been assessed as the best tool to be used among trauma patients in the ED. (Figure 1) [1]. A bold red marker pen was used to display the pain score. A yellow sticker measuring 10 × 7 cm was used to attach the pain score at the top right hand corner of the front cover of medical case notes. This pain score was displayed prominently in the patients' medical clerk sheet in one group and not displayed in another group. Figure 1Numerical rating scale (NRS). 
Pain assessment was done at triage after chief complaint and vital signs were assessed. This was done by the patients, without relying on physicians' impression. This is as suggested by the Canadian Association of Emergency Physicians (CAEP) consensus document on emergency pain management [2]. Instructions on how to do a pain score was explained to the patients by the researcher. Instruction was given by using local languages and dialects. Relatives were discouraged from translating or trying to answer for the patient. The numerical rating scale was shown to the patient. Patients were asked to give pain score value according to their own assessment of pain severity. They were excluded from the study if they did not understand the instructions after being repeated twice. The data was entered into the data collection form. Based on the previous study carried out by Silka et al., we decided that the NRS score of 7 to 10 is severe pain, NRS score of 4 to 6 is clinically significant moderate pain, and any score of 3 and below is mild pain. Data entry and analysis were done using Statistical Packages for Social Science (SPSS) version 12.0.1 software which was registered by the institution. Numerical pain score was converted to categorical variable. Chi-square test was used in the analysis of the differences of analgesic administration between the displayed pain score and not displayed pain score groups. The confounding effect of pain severity on this relationship was analyzed by using Mantel-Hanszel estimate. Timing to analgesics was presented as mean and analyzed by using ANOVA test. Statistical significance was considered at p value of less than 0.05. Data exploration was done using descriptive statistics and presented as charts and tables for each variable.\n\nBODY.4 RESULTS:\nThe number of patients who met all criteria was 216. Finally, there were 107 patients (50.5%) in the study group and 99 (49.5%) patients in the control group after exclusion due to various reasons. The general demographic data comparison between the two study groups is as shown in Table 1. There is no significant difference in the gender distribution, age group, education, mechanism of injury, and pain score between the study groups (p > 0.05). Majority of patients who presented with acute pain in this study group were in the range of 20 to 29 years of age (n = 99, 49.5%). The distribution of the pain score groupings in the displayed NRS group for the mild, moderate, and severe pain are 12.9% (n = 13), 46.5% (n = 47), and 40.6% (n = 41), respectively. Likewise, the distribution of the pain score groupings in the non-displayed NRS group for the mild, moderate, and severe pain are 17.2% (n = 17), 49.5% (n = 49), and 33.3% (n = 33), respectively. The proportion of patients who received parenteral analgesics for mild, moderate, and severe pain in displayed group was 7.7%, 21.7%, and 47.6%, respectively (p < 0.05). Likewise, there is an increment of proportion who received analgesics in the non-displayed group from 23.5% to 31.3% with increasing pain severity (p < 0.05). The proportion of patients who received analgesics when pain score was displayed was 6.5% more than when pain score was not displayed. This difference was however not statistically significant (p = 0.30) (Figure 2 and Table 2). 
Table 1 Demographic comparison between the study and control groups Variables Non-display (control) group ( n  \n = 99) Display (trial) group ( n  \n = 107) \n p \n value Age in years (mean) 35.6 38.6 0.639 Gender distribution ( n  = Male or Female) Male = 70 (70.1%) Male = 73 (68.2%) 0.242 Female = 29 (29.2%) Female = 34 (31.8%) Education at minimum secondary school level 59 (59.6%) 64 (59.8%) 0.662 Mean (s.d) pain score (NRS) 5.7 (1.9) 6.1 (2.6) 0.300 Severe pain score (more than 7) ( n ) 33 (33.3%) 41 (38.3%) 0.492 Motor vehicle crash patients ( n ) 51 (51.5%) 42 (39.3%) 0.621 Patients received intravenous analgesics ( n ) 49 (49.5%) 58 (54.2%) 0.310 Figure 2Bar chart showing percentage of patients who received analgesics in trial versus control group. Table 2 Odd ratio of patients receiving analgesics   Crude odd ratio Weighted odd ratio \n p \n value Displayed pain score × Receiving analgesics 1.396   0.30 Pain group a displayed pain score × Receiving analgesics   1.306 0.42 a Confounder effect of variable (confounder effect of pain severity on displayed pain score was analyzed using Mantel-Hanszel estimate). The mean timing (in minutes) to analgesic administration for the displayed and non-displayed groups were 81.3 ± 41.2 (95% C.I 65.9, 96.7) and 88.7 ± 45.4 (95% C.I 69.0, 108.3), respectively (p = 0.538). The distribution of timing of analgesics according to the severity of pain is as shown in Table 3. Table 3 Test results of analysis of timing to analgesics by displayed stratified Pain Score using ANOVA Display pain score Mean (minutes) Std. deviation 95% CI (upper, lower) Minimum Maximum Pain score 4 to 6 (displayed) 84.2 51.8 16.4, 47.2 20.0 190.0 Pain score 4 to 6 (not displayed) 89.6 40.4 55.8, 123.4 25.0 170.0 Pain score > 6 (displayed) 80.8 36.0 8.0, 63.9 20.0 165.0 Pain score > 6 (not displayed) 94.3 48.6 72.7, 96.3 25.0 174.0 p  = 0.946. \n\nBODY.5 DISCUSSION:\nPainful suffering is neither a diagnosis nor a disease. Nevertheless, it is a condition experienced by patients posttrauma that needs to be attended to attentively. Over the last few years, there has been much progress in the understanding of the pathophysiology of pain and its effect on disease and healing. More importantly, evidence has emerged on the detrimental effect of neglecting pain on disease progression and patient recovery [3],[4]. There have been many attempts to introduce ways to improve pain management in the ED. Great steps to improving pain and suffering in a patient start with understanding what pain means to the patient. Pain is what the patient states it is. Physicians must respect this. A diverse spectrum of psychological, sociocultural, temporal, and situational variables affects how people perceive and express their pain [5],[6]. The patients' self-report is the most reliable indicator of the presence and intensity of pain. Health care professionals often fail to routinely assess and document pain. Physicians should trust the patients' subjective reports of pain unless there is evidence to the contrary [7],[8]. Few studies had shown the positive effects of pain score display on analgesic use in the ED. In a retrospective study by Nelson, the effect of introducing a mandated verbal numeric pain scale on the incidence and timing of analgesic administration in trauma and non-trauma patients in the ED was studied. Analgesic use increased from 25% to 36%, and analgesics were administered more rapidly after the scale was introduced [9],[10]. 
Considering the positive results shown by these studies, it was felt that a randomized control study to determine the effect of documentation and display of patient self-assessment of pain using NRS, on analgesic administration was an important move to increase the evidence of the importance of pain score as a fifth vital sign in triage. In this study, a detailed analysis of the three different pain score groups, minimal, moderate, and severe pain, has shown that with the increasing pain score, more patients received analgesics. This is more frequent in the displayed pain score group. In Silka's study in 2004, no patient who reported mild pain received analgesics, whereas 72% of patients with moderate to severe pain did receive analgesics. A similar study was done in 2004 by Thomas. That study used the visual analog scale (VAS), which was displayed at the patient's ED chart and placed at the head of the ED stretcher. The study by Thomas showed that 63% of patients in the group with graphical display of VAS score received analgesics compared to 58.7% in tabulation group and 55.7% in the control group [11],[12]. Our study has found that analgesic administration is significantly associated with increasing severity of pain. This study gives the benefit of knowing what the pain score was in all patients. In patients where pain score was not displayed, a significant proportion of patients with moderate and severe pain still received analgesics. This shows that there are other factors influencing the attending medical officer to prescribe analgesics. Clinical judgment, the doctors' own assessment of pain, seems to have played a significant role here. What can be concluded here is that more patients received analgesics with increasing severity of pain, whether the pain score was displayed or not. It is not only the awareness of a displayed pain score that increased the likelihood of analgesics but also the clinical judgment of the presence of pain is an important factor. However, in this study, the displayed pain score did not affect the timing to analgesics compared to when pain score was not displayed. The study by Silka also showed no difference in the timing to first analgesics when comparing those patients with or without VPS. Pain assessment using NRS at the time of triage and prominent display of the pain score in patient medical record did not alter the timeliness of analgesic administration. Time to analgesics has not been studied in the ED in this country previously as far as the author's awareness. There is no nationwide system currently in place to triage a patient with a high pain score to see the doctor earlier. There are also no guidelines to administer analgesics earlier in patients with high pain score, while they wait for doctors. These are the measures that could be introduced to reduce the time the patient is in pain while waiting for analgesic administration.\n\nBODY.5 DISCUSSION.5.1 LIMITATIONS:\nThe main limitation of this study is the practice of analgesic administration in the ED being influenced by the awareness that the study is going on. The presence of the researcher in the triage and the new sticky label on the medical case notes with the pain score developed the awareness that they were being monitored. Perhaps this created some pressure to give more analgesia than usual or otherwise. This inevitable Hawthorne effect was counterbalanced by the utilization of a control group. Notably, patients and physician were blinded to the study null hypothesis. 
Only patient factors that influenced pain score and analgesic administration were studied. Factors regarding the doctors' experience or knowledge of pain management were not analyzed. These factors could not be studied as there was a high turnover of doctors in the ED and many different doctors work for the three shifts. Other factors that may influence analgesic administration include the number of doctors on duty and the number of patients on the days of study or particular shift. If there were many patients and the doctors were very busy, they might ignore the pain score and try to work fast in their limited time per patient. The time of the day when the study was conducted may also affect the outcome. These were not considered in the study.\n\nBODY.6 CONCLUSIONS:\nThis study has determined the effect of documentation and display of patient self-assessment of pain using NRS on analgesic administration. The proportion of patients who received analgesics increased when NRS was displayed. However, the pain display has no significant effect on the timing of analgesics.\n\nBODY.COMPETING INTERESTS:\nThe authors declare that they have no competing interests.\n\nBODY.AUTHORS’ CONTRIBUTIONS:\nNHR involved in the conception of the study and preparing the manuscript. CA involved in data collection and statistical analysis. Both authors read and approved the final manuscript.\n\n**Question:** Compared to not displayed in the control. what was the result of pain score display group on The proportion of patients who received analgesics?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
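The Methods of the trial quoted in this record give the sample-size inputs essentially as a formula (Po = 0.5, Pi = 0.3, two-sided α = 0.05, power = 0.8, target of 220 patients, 110 per arm including 10% dropout). As a rough cross-check only, the textbook two-proportion sample-size formula can be evaluated directly. This sketch assumes the authors used something close to that standard formula; the uncorrected result is about 93 per arm, and the published 110 per arm presumably also reflects a continuity correction on top of the stated dropout allowance, so the figures below land in the same neighbourhood rather than reproducing the paper exactly.

```python
# Cross-check of the two-proportion sample-size parameters quoted above
# (Po = 0.5, Pi = 0.3, two-sided alpha = 0.05, power = 0.80).  Not the
# authors' actual computation -- the standard uncorrected formula only.
from math import sqrt, ceil
from scipy.stats import norm  # assumes SciPy is available

def n_per_arm(p0: float, p1: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Uncorrected sample size per group for detecting p0 vs p1."""
    z_a = norm.ppf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = norm.ppf(power)           # ~0.84 for power = 0.80
    p_bar = (p0 + p1) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2
    return ceil(numerator / (p0 - p1) ** 2)

n = n_per_arm(0.5, 0.3)
print(n, ceil(n * 1.1))  # ~93 uncorrected, ~103 with a 10% dropout allowance
```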
815
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Implementing a structured education program for children with diabetes: lessons learnt from an integrated process evaluation\n\n ABSTRACT.BACKGROUND:\nThere is recognition of an urgent need for clinic-based interventions for young people with type 1 diabetes mellitus that improve glycemic control and quality of life. The Child and Adolescent Structured Competencies Approach to Diabetes Education (CASCADE) is a structured educational group program, using psychological techniques, delivered primarily by diabetes nurses. Composed of four modules, it is designed for children with poor diabetic control and their parents. A mixed methods process evaluation, embedded within a cluster randomized control trial, aimed to assess the feasibility, acceptability, fidelity, and perceived impact of CASCADE.\n\nABSTRACT.METHODS:\n28 pediatric diabetes clinics across England participated and 362 children aged 8–16 years, with type 1 diabetes and a mean glycosylated hemoglobin (HbA1c) of 8.5 or above, took part. The process evaluation used a wide range of research methods.\n\nABSTRACT.RESULTS:\nOf the 180 families in the intervention group, only 55 (30%) received the full program with 53% attending at least one module. Only 68% of possible groups were run. Staff found organizing the groups burdensome in terms of arranging suitable dates/times and satisfactory group composition. Some staff also reported difficulties in mastering the psychological techniques. Uptake, by families, was influenced by the number of groups run and by school, work and other commitments. Attendees described improved: family relationships; knowledge and understanding; confidence; motivation to manage the disease. The results of the trial showed that the intervention did not significantly improve HbA1c at 12 or 24 months.\n\nABSTRACT.CONCLUSIONS:\nClinic-based structured group education delivered by staff using psychological techniques had perceived benefits for parents and young people. Staff and families considered it a valuable intervention, yet uptake was poor and the burden on staff was high. Recommendations are made to inform issues related to organization, design, and delivery in order to potentially enhance the impact of CASCADE and future programs.\n\nABSTRACT.CURRENT CONTROLLED TRIALS:\nISRCTN52537669.\n\nBODY:\nKey messagesThe Child and Adolescent Structured Competencies Approach to Diabetes Education (CASCADE) structured education program is perceived by young people and parents who attend as having benefits but practical challenges associated with attendance result in low uptake.Staff are positive about the potential of the program but organizational aspects are unacceptably burdensome.CASCADE is potentially deliverable to families as part of routine care and could be a useful intervention. 
However, improvements in clinical and administrative support, staff training, program content, and service structures are required to ensure fidelity to the program and feasibility and acceptability to key stakeholders.\n\nBODY.INTRODUCTION:\nType 1 diabetes mellitus (T1DM) in children and young people is increasing worldwide. Fewer than one in six children and young people achieve glycosylated fraction of hemoglobin (HbA1c) values in the range identified as providing best future outcomes.1 It has been recognized that there is an urgent need for clinic-based pragmatic, feasible, and effective interventions that improve both glycemic control and quality of life, with a particular emphasis on structured education programs.2 In recent years, a number of large multicenter studies have trialed a standard education intervention.3–5 Findings published, to date, report no significant positive impact on glycemic control as measured by HbA1c and only limited impact on a wide range of secondary measures.4\n5 Nevertheless, the recent Best Practice Tariff for Paediatric Diabetes for diabetes services in the UK6 requires the provision of structured educational programs for young people and their families and, as a consequence, there is an urgent need for high-quality evidence to inform the implementation of this recommendation. The CASCADE (Child and Adolescent Structured Competencies Approach to Diabetes Education) pragmatic cluster randomized controlled trial (RCT) with integral process and economic evaluation is the most recent study. It was undertaken by a team that included clinicians from a London-based pediatric diabetes clinic, a representative from a diabetes patient organization and researcher teams from three universities in London. The CASCADE intervention is a structured education program designed for children and young people with T1DM aged between 8 and 16 years and their parents or carers.7 The intervention underwent phase 1 pilot work and a non-randomized trial, in which the delivery was carried out by a psychologist.8 The CASCADE intervention was then modified to be delivered by two members of a diabetes multidisciplinary team (MDT) who receive 2 days of training to enable them to become 'site educators'. CASCADE is a manual-based program. It is delivered in four modules over 4 months, each lasting approximately 2 hours, to groups of three to four families with children and young people grouped according to age (8–11 or 12–16 years). Two psychological approaches, motivational interviewing and solution-focused brief therapy, shown to have potential with children with diabetes are central to the CASCADE intervention.9\n10 These aim to engage participants to identify and develop their own positive approaches and consequent behavior change relevant to the management of their condition. The intervention thus offers both structured education, to ensure young people (and their parents) know what they need to know, and a delivery model designed to motivate self-management through empowerment techniques (see table 1). 
Table 1 Outline of the CASCADE program (as set out in the manual) The teaching plan Session activities, objectives, time guides, and resources including key information essential for the educator, learning objective for the family, and brief descriptions of each activity Each module starts with a review of, and since, the previous session, creating an opportunity for families to highlight any changes that have taken place and to congratulate young people on successes Module 1 Focuses on the relationship between food, insulin, and BG ( eg, considering the pros and cons of matching insulin to food to attain better glycemic control ) Module 2 Reviews BG testing and factors influencing BG fluctuation ( eg, identifying factors that cause BG to rise and fall and explore hypoglycemia definitions, reviewing symptoms according to severity) Module 3 Looks at the pros and cons of adjusting insulin (eg, a brainstorming session considers when, how, and who to contact for help managing hyperglycemia) Module 4 Addresses aspects of living with diabetes, including managing BG levels and exercise ( eg, young people and families complete a ‘blueprint for success’. This marks the end of the sessions and acknowledges the steps into the future the young person has already made) Homework tasks are given to families to consolidate learning after each module BG, blood glucose; CASCADE, Child and Adolescent Structured Competencies Approach to Diabetes Education. The intention is that delivering CASCADE to groups will provide staff with an alternative mode of working with young people in the clinic setting to improve outcomes, rather than requiring additional work.\n\nBODY.CASCADE TRIAL SUMMARY:\nThe trial involved young people with T1DM and family members in 28 English pediatric diabetes clinics (randomly assigned at clinic level to intervention or control) in London, South East England, and the Midlands. Clinics eligible to participate were staffed by at least one pediatrician and pediatric nurse with an interest in diabetes. Other inclusion criteria included not running a group education program at time of recruitment and not participating in a similar pediatric diabetes trial within the past 12 months. It was approved by the University College London (UCL)/UCLH Research Ethics Committee (REC) reference number 07/HO714/112. Site-specific approval was granted at each site. Three hundred and sixty-two young people were recruited to the study. Inclusion criteria included: diagnosis with a duration ≥12 months; mean 12-month HbA1c of 8.5 or above; aged 8–16 years. Clinical staff identified eligible young people from their patient list. Researchers sent letters and information sheets to these young people and their parents or carers inviting them to participate in the research and to speak to a researcher at their next clinical appointment. Recruitment was primarily carried out by members of the process evaluation team who attended clinics at which eligible young people had an appointment. Signed consent forms were collected from parents and children wishing to participate. The primary outcome measure was venous HbA1c at 12 and 24 months. Secondary outcomes included: knowledge, skills and responsibilities associated with diabetes management; emotional and behavioral adjustment; quality of life. Two staff members from each intervention site clinical team participated in the 2 days CASCADE training program. 
These site educators then took responsibility for organizing the modules at their clinics and delivering the intervention. The extensive and integral process evaluation was designed to enable an understanding of the implementation of CASCADE and examination of the interaction of causal mechanisms and contextual factors that may be determinants of the intervention's success or failure, as assessed by the trial.11 Given that the trial found no evidence of benefits on venous HbA1c at 12 and 24 months and little evidence of benefits on secondary outcomes, the focus of this paper is to use the findings of the process evaluation to suggest how future structured education may be more effectively implemented.12\n\nBODY.PROCESS EVALUATION METHODS:\nThe process evaluation aimed to assess the feasibility, acceptability, fidelity and perceived impact of the CASCADE intervention. It ran for the 4-year life of the trial and included the multiple methods shown in table 2. Researchers from the process evaluation teams at the Institute of Education (IOE) and the School of Pharmacy (SOP) conducted the fieldwork. Table 2 Process evaluation methods and response rates Phase of the study Methods Purpose of methods Response rates Two-day training of site educators Unstructured observation of training of site educators by a member of the research team Fidelity of training 6 training days observed Participant questionnaires (completed 2 weeks after training) Description of participants Participant experience/acceptability of training 27 participant questionnaires from 18 nurses, 8 dietitians, and 1 doctor (63% of participants) Semistructured interviews with the two trainers Background to intervention development; views on training days Both trainers Delivery of CASCADE modules with patients/carers Observation of modules carried out by a member of research team including rating of fidelity to psychological techniques and content of manual Fidelity of delivery Experience/acceptability of delivery of program to site educators Experience/acceptability of participation in the program by young people/parents 47 CASCADE modules observed across 13 intervention sites (12 each of modules 1, 2, and 4; 11 of module 3) Self-complete feedback proformas for site educators Who delivered each module; who attended each module Self-assessment of delivery fidelity and general feedback on each module Site educators returned 125 feedback proformas (94% of 131 completed modules) Following delivery of all CASCADE groups Young person and parent 12 and 24 month questionnaires in intervention arm Perceptions of impact Acceptability of the intervention Process questions were completed on questionnaires by 135 young people (82%) and 121 parents (66%) at 12 months; 121 young people (66%) and 114 (63%) parents at 24 months Semistructured interviews (audio-recorded) with site staff (nurses and dietitians), young people, and parents/carers in both trial arms Description of standard care—including any structured education currently delivered Intervention arm only—experiences of the intervention (training and delivery) 30 site staff (16 intervention sites; 14 control) 53 young people (32 intervention/21 control) and 52 parents were interviewed. Of the young people, 31 were female; 17 were 10–11 years old; and 36 were 12–18 years old CASCADE, Child and Adolescent Structured Competencies Approach to Diabetes Education. 
\n\nBODY.PROCESS EVALUATION DATA ANALYSIS:\nQualitative data analysis was carried out by the process evaluation teams at IOE and SOP (all the authors except LB, RT, and DC). Qualitative analysis of the interview data, supported by the use of NVivo software, identified key topics and issues that emerged through familiarization with transcripts.13 Pertinent excerpts were coded and memos written to summarize and synthesize emerging themes. Researchers refined their analysis ensuring that themes were crosschecked with other data, first within and then between transcripts. Analysis of each training workshop observation was carried out by a researcher, who was not the observer, reading through the notes made by the observer and identifying key themes and fidelity issues emerging from the data. Quantitative data were analyzed by MW using Excel and the SPSS V.19 software for statistical tests. In terms of the CASCADE modules delivered in the sites, composite fidelity delivery scores were created for content and for technique from individual researcher observer and site educator self-rated scores. A further composite variable was then calculated which summed the content and technique scores for each site across all four modules, allowing comparison across sites and modules.\n\nBODY.PROCESS EVALUATION RESULTS:\nThe results are structured under the following themes: recruitment and training of site educators; organizing the groups; delivery of the modules; uptake and acceptability of the modules; and perceptions of impact. Response rates are reported in table 2.\n\nBODY.PROCESS EVALUATION RESULTS.RECRUITMENT AND TRAINING OF SITE EDUCATORS:\nThe National Institute for Health and Care Excellence (NICE) requirement,2 that structured education programs are delivered as part of routine care was widely recognized by clinic staff and, as a consequence, it proved relatively straightforward to recruit two members of the MDT from each of the 14 intervention sites to become site educators. The majority of site educators were experienced pediatric diabetes specialist nurses (PDSNs); in approximately half of the sites one of the educators was a dietitian. The diabetes specialist nurse and psychologist who developed the intervention delivered the 2 day CASCADE training for site educators in four workshop sessions. In general, it was feasible for sites to send the required minimum of two staff to the core workshops. A few sites sent additional interested members of the MDT though only four consultants attended some or all of the training. The training was delivered in a central London location, except for one site where following a request, training was delivered locally. Site staff reported this change in location to be helpful. The majority of staff who completed the questionnaire following the workshops indicated they had been 'extremely' or 'very' keen to participate. Most staff thought the training was very good, motivating, and comprehensive. The most common concern raised in staff interviews about becoming site educators and running the CASCADE program, both before and after the training, was additional workload. Other concerns included practical constraints such as finding available rooms in which to run the groups and ability to rapidly change their practice to employ the psychological approaches underpinning CASCADE. One site educator commented.It [the training] was a lot in the few days. Teaching people theories and expecting them to suddenly change their behaviour I think is very difficult. 
The two trainers, and some attendees, expressed concern about levels of diabetes knowledge among the site educators.Some of it [the training] was ending up teaching them the content as opposed to teaching them the style of delivery. (UCLH trainer)At the time [of the training] I'd got very little diabetes knowledge so, for me, I was actually learning from it and I know that's not really what it was about but a lot of it was that...I found it quite intimidating because of my lack of knowledge. (Site educator)\n\nBODY.PROCESS EVALUATION RESULTS.ORGANIZING THE GROUPS:\nA total of 30 complete CASCADE groups, comprising all four modules, were run across 12 of the 14 intervention sites. A post hoc calculation, based on the number of study recruits in a site and the optimum group size of 3–4 young people, suggested 44 groups should have been run across the 14 intervention sites. Thus, 68% of possible groups ran, with only three clinics completing the maximum number of groups possible for their site. A key reason for this limited delivery was difficulties with organizing the groups. The organization was undertaken by the site educators in all the sites. This involved: deciding which participants should be grouped together using similar ages as a key criterion; setting dates and times; inviting families to attend; and booking a room. Interviews revealed that site educators found these processes frustrating and very time-consuming. One site educator commented:I didn't notice that it saved me any time because I was constantly chasing them [families] up to be there. One site delivered no modules because the lead site educator left her PDSN post soon after the training. Another site delivered only the first module because of a number of challenges which included: the small number of potential eligible patients on the clinic list; poor uptake of the first module by young people/parents; practical organizational constraints. All the sites ran the groups in addition to routine clinics where standard care continued to be received by patients on an individual basis. Staff interview data revealed that the pressure on hospital clinic facilities was too great to make running the groups feasible during clinics. Establishing a date and time for the group sessions that was acceptable to the families was extremely challenging. To maximize attendance, some site educators tried a range of timings including during school hours, after school, weekends, and school holidays. Communication with families, about groups, was via a combination of letter, telephone, and (occasionally) text messages. No sites used email or online meeting booking sites. Despite all the negotiation and careful planning by site educators, late cancellation or non-attendance by participants was reported as common.Some didn't even bother to get back to us and some did and said they were still gonna come but still didn't come. It is frustrating and I think that's what was time consuming, which I hadn't really accounted for...(Site educator) As a result of these difficulties, compromises were made to the intended group size and composition. Groups often had small numbers (sometimes one family only) and/or a wide age range among the young people attending. 
Although the intention was to run four modules with the same participants, the composition of many groups changed.\n\nBODY.PROCESS EVALUATION RESULTS.DELIVERY OF THE MODULES:\nThe site educators believed they were appropriate individuals to deliver the intervention because they knew patients well, although familiarity with patients was not a requirement. Participating families appeared to support this view. All sites had continuity of at least one trained site educator, but complications in sustaining the availability of a second educator in a few sites resulted in some lack of continuity of trainer pairs. Site educators reported that the time required to organize sessions meant that they often had little or no time for planning and practising delivery of the modules. Observation data and some staff interviews suggested that this lack of practice time was particularly challenging when staff had limited experience in group work. Researcher observation of the modules and site educator feedback forms indicated that site educators generally delivered activities as described in the manual. However, less time than was recommended was spent on some of the key exercises due to staff finding them difficult to deliver and/or not well received by groups. One such example was the 'review since the previous session' exercise at the beginning of each module. Also, while researcher observation and staff feedback showed fidelity of CASCADE psychological techniques was good across sessions in half the sites, it was not optimal in the remainder. Difficulties in delivering the intervention particularly occurred when sessions had groups of participants with a wide age range or group numbers were very small.The first group that we ran had two girls and a boy and the boy was at the younger end of the teenage years and the girls were at the older, it was unfortunate because we didn't have that many patients as part of the study so it was very difficult then to get the groups sorted out so we kind of had to put them together. [...] He was just a bit of a silly boy in that...I don't mean horribly, he was lovely, but just kind of played the fool a little bit whereas the girls were older and a similar age and a lot more grown up about it all. (Site educator) Staff reported that the organization and delivery of the intervention was affected by the research context in a number of ways. First, having to restrict the education groups to a subset of recruited patients, instead of offering them to the entire clinic list, was perceived as making the organization of the groups more challenging. This meant that natural groupings of patients (by age or geographical area) often proved too difficult to achieve. Second, delays encountered in the recruitment of families to the trial in many sites (see12 for detail on this), meant site educators often had to wait several months after their training before they could start to organize groups and deliver the intervention. Third, some site educators reported that additional trial-related tasks, such as organizing research blood samples added to their workload and took time away from organization of, and preparation for, groups.\n\nBODY.PROCESS EVALUATION RESULTS.UPTAKE AND ACCEPTABILITY OF THE MODULES:\nOf the 180 young people recruited to the intervention arm, only 55 (30%) received the full education program of four modules with just over half of the original recruits (53%) attending at least one module. Eighty-four young people (47%) failed to attend any modules. 
Those who attended had significantly lower mean baseline HbA1c scores than those who were offered the sessions but did not attend (9.52 vs 10.33, p<0.01). Significantly more children (8–12 years) attended at least one module compared with teenagers (13–16 years; 64% vs 44%, p<0.01). Clinics were permitted to offer sessions at a time of their choice. If out of school hours sessions were not offered, the main reason given for young people not attending modules was that they did not want to miss school. For parents, taking time off work during the day was a barrier to attendance. Other reasons for non-attendance cited by children and parents included holidays and other extracurricular activities. On most occasions a parent/carer attended with the young person. Parents and young people reported that joint attendance was a very positive aspect of the experience (see table 3). Staff also, in most instances, found it helpful to include parents. Table 3 Acceptability of CASCADE to parents and young people attending at least one CASCADE module (12-month questionnaire) Themes Young people Parents/carers ‘Quite a lot’ or ‘A great deal’ n/N (%) ‘Quite a lot’ or ‘A great deal’ n/N (%) Group dynamic  Liked parents/young people being together in modules 81/90 (90) 81/84 (96)  Felt learnt something from other people in the group 64/93 (69) 60/85 (71) Teaching style and length  Liked the way the trainers taught 74/93 (81) 81/86 (94)  Felt the sessions were too long 22/94 (23) 7/85 (8) Content  Felt that some of the things covered were too complicated 7/92 (8) 8/85 (9)  Felt that some of the things covered they knew before 48/91 (53) 42/86 (49) \n\nBODY.PROCESS EVALUATION RESULTS.PERCEPTIONS OF IMPACT:\nThe majority of parents and young people who attended CASCADE groups described some positive impacts, including improved family relationships, wider knowledge and understanding of diabetes, greater confidence, and increased motivation to manage the disease (see table 4 and young person's comment below).\nI've been more happier...yeah, like around the house I've been more happier. Not so many strops...'cause my readings are better and we've been given a lot more information about the ketones and how to treat it....I found it really good. [Young person] Table 4 Parents’ and young people's perceptions of influence of CASCADE (12-month questionnaire) Questionnaire items Answered ‘Quite a lot’/‘A great deal’ Question: After attending some or all of the CASCADE diabetes education sessions, how much did your child/you…? 
Parent N=90 Young person N=97 Knowledge  Understand better how insulin works 66 (74%) 70 (73%)  Understand better which foods contain CHO 73 (81%) 68 (70%)  See why counting the CHO in the food your child/you eat(s) can be helpful 80 (90%) 73 (75%) Intention to change  Want to stop your child's/your glucose levels from going too low or high 85 (94%) 85 (87.5%)  Want to test your child's/your BG levels more often 56 (43%) 40 (42%) Control  Feel more in charge of your child's/your diabetes 61 (69%) 65 (68%)  Feel able to change your child's/your insulin dose when they are exercising 68 (77%) 66 (69%)  Feel you are able to control your child's/your BG levels better 70 (78%) 64 (67%) Access to care  Feel more able to ring/contact your diabetes nurse/GP/hospital if your child/ you need(s) help 72 (82%) 52 (54%) Family dynamic  Feel you had a better understanding of how diabetes affects your family 67 (75%) 69 (72%) BG, blood glucose; CASCADE, Child and Adolescent Structured Competencies Approach to Diabetes Education; CHO, carbohydrate; GP, general practitioner. A number of young people and parents mentioned that timing of the CASCADE sessions would be more appropriate and useful sooner after diagnosis; site educators also commented that this may lead to better uptake of the sessions and have greater impact.I felt they were of little use to me as I already knew everything however this kind of session would be useful to someone who had just been diagnosed. (Young person)They're a bit sort of more 'do as they're told' for the first 12 months, they're more likely to attend and perhaps take it on board, it gets them in the right frame of mind early. (Site educator) Twenty-four months after the intervention, when asked in the questionnaire what effect the program had had, nearly half of the young people selected the response \"The sessions made me want to try harder and I have carried on trying\". However, these impacts were not reflected in the primary or secondary outcome measures, even for the subgroup of those who attended.\n\nBODY.DISCUSSION:\nThe CASCADE intervention aimed to train PDSNs and other members of diabetes teams to deliver a manualised, structured education program, based on behavior change methods, to groups of families. Training of these site educators took place over 2 days. Few members of the MDT, other than PDSNs, attended the training. Trainee educators expressed enthusiasm for the program but highlighted concerns including that: CASCADE would increase their workload; there would be practical constraints to setting up and running groups; and that incorporating the CASCADE psychological model into their practice would be challenging. Following delivery of CASCADE in the sites, PDSNs and other clinical staff were positive about the program. Having PDSNs and dietitians, who knew the patients, as site educators worked well for both the educators and families. There were, however, feasibility issues with regard to running the program in its current form in the 'real world' of the National Health Service. These were evidenced by low uptake by families and staff feeling unacceptably burdened by organizational aspects of the intervention. Organizing groups was, as anticipated by staff, challenging and time-consuming and many groups did not comprise the recommended number or age range of young people. This affected group dynamics and made it difficult to run the sessions as set out in the manual. It was also difficult to keep a group together for the planned four modules. 
Delivery of the modules was further compromised by: the gap in time between training and delivering sessions; time spent on organizing group sessions at the expense of practising delivery of the modules; and finding some exercises consistently hard to deliver. Despite the fact that families and staff reported that they liked the program and felt that it offered benefits, the trial found no evidence of impact on venous HbA1c at 12 and 24 months and little evidence of benefits on secondary outcomes, even with the subgroup who attended the training. We think the reasons behind this are twofold. First the organizational difficulties that made the intended group composition problematic and second the difficulties with delivery, especially the lack of fidelity to the psychological techniques. To address these issues, and to support the development of other structured education programs, we make a range of recommendations.\n\nBODY.DISCUSSION.RECOMMENDATIONS:\nTo reduce the burden on the site educators more members of the MDT, including consultants, could attend the program training to foster greater buy-in and a team approach to facilitate sharing of the workload. To make this feasible, including containing cost, training of teams could be conducted at local sites rather than centrally in London. Furthermore, dedicated administrative support to organize venues, appointments, groups, and effective reminder systems would increase the likelihood of improved overall uptake, and would help with grouping the young people by age, as intended. Additional support for site educators in practising and sustaining quality of delivery would have been beneficial. Possible approaches could include: those associated with the successful DAFNE program,14 such as longer training, a greater focus in the training on improving group work skills, and an observation of CASCADE experts delivering the program; site level mentoring from CASCADE experts including feedback on site educators delivering trial runs; face-to-face mentoring from local colleagues, such as psychologists. In addition, before undertaking structured education programs, there may be a need to improve the knowledge base of some of the current pediatric diabetes service workforce, as levels of knowledge were very variable. Raising knowledge levels may be addressed by the development of a curriculum for professionals specifically in diabetes, ranging from a core curriculum (basic knowledge that all team members would be expected to know) to an extended curriculum (covering high level application of knowledge specific to individual team members). This finding may have relevance to other medical specialisms where structured education programs are being considered. The uptake of the education sessions was low. For families the key issue was the challenge of fitting attendance into busy day-to-day routines. The education modules were offered in sessions independent of routine clinic appointments. Our data suggest that to improve accessibility it could have been advantageous to make the modules an integral part of routine clinic appointments, thereby overcoming the need for families to make additional hospital visits, with the implications this has for time away from school and work. This would require those in organizational administrative roles to assist with sustainable organizational adjustments required for extending clinic services. 
This finding and the suggestion that there should be greater 'buy-in' from the wider clinic team echo those in the broader literature on group-based programs.15 Furthermore in the study, participants had to have been diagnosed with diabetes for more than a year to meet the inclusion criteria for participation. Our data suggest that if the program was offered to families sooner after the initial diabetes diagnosis, this might lead to improved motivation to attend the groups. Additionally offering this structured group education more universally might be more successful, including making the organization of groups by age more feasible, than targeting those with the poorest control of their blood glucose levels. It may be more realistic to assume that those with the very poorest control might also require the greater flexibility and intensity that individualized interventions with a psychologist would offer. A summary of the key recommendations is presented in box 1.\nBox 1 Summary of key recommendations to improve training in, and delivery of, structured education sessions:\nMore involvement of the wider clinical team facilitated by local training;\nGreater mentoring of site educators by trainers;\nPractice sessions with feedback from trainers for site educators before going 'live' and time between training and delivery of first session kept to a minimum;\nMore diabetes-specific training for the pediatric diabetes service workforce to guarantee a basic level of diabetes knowledge prior to training in the program;\nDedicated administrative support to assist with organizing the sessions;\nEducation sessions to be held within clinic time;\nOffer the sessions to all young people on clinic lists and soon after diagnosis.\n\nBODY.DISCUSSION.STRENGTHS AND LIMITATIONS OF THE STUDY:\nIt is a strength of the study that the process evaluation was unusually extensive and fully integrated into the main trial. Data were collected from all key stakeholders through a range of different methods throughout the different phases of the implementation of the intervention. Triangulation of findings enabled an evaluation of the implementation, barriers, and facilitators in relation to all aspects of implementation, operation, and perceived impact to be examined. It was also a strength that as a pragmatic RCT this intervention was evaluated in 'real-life' and representative settings. One limitation of the study was the impact of the research context on implementation, but steps were taken in the information and reassurance provided, methods, and timing of data collection to minimize effects as much as possible. Additionally, a major hindrance to the intervention was the lower than expected number of CASCADE groups run and the poor uptake of these groups by families. This might suggest a weakness in the intervention's pilot, which was not carried out within the same clinical contexts as the main trial.
As such, opportunities to address challenges in organization and delivery were missed prior to, or through carefully managed processes within, the full trial.16 Experience from pragmatic studies of complex interventions such as CASCADE has yielded valuable new learning on the importance of particular investment in the developmental and piloting stages of complex interventions.17\n\nBODY.CONCLUSION:\nThe extensive multimethod process evaluation showed that the CASCADE structured education program was deliverable; however, improvements in clinical and administrative support, staff training, program content, and service structures to improve accessibility for families were required. The suggested improvements identified in this study all have resource implications, and thus any future research requires cost-benefit considerations. These findings give valuable information on what is required not only in CASCADE but also other similar programs to achieve their aims.\n\n**Question:** Compared to control what was the result of The Child and Adolescent Structured Competencies Approach to Diabetes Education (CASCADE) is a structured educational group program, using psychological techniques, delivered primarily by diabetes nurses. on HbA1c at 12 or 24 months?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
822
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Cost-effectiveness of ranibizumab in treatment of diabetic macular oedema (DME) causing visual impairment: evidence from the RESTORE trial\n\n ABSTRACT.BACKGROUND/AIMS:\nTo evaluate the cost-effectiveness of ranibizumab as either monotherapy or combined with laser therapy, compared with laser monotherapy, in the treatment of diabetic macular oedema (DME) causing visual impairment from a UK healthcare payer perspective.\n\nABSTRACT.METHODS:\nA Markov model simulated long-term outcomes and costs of treating DME in one eye (BCVA ≤75 letters) based on data from the RESTORE Phase III trial. Outcomes measured in quality-adjusted life-years (QALYs) were simulated for a 15-year time horizon based on 12-month follow-up from RESTORE and published long-term data. Costs included treatment, disease monitoring, visual impairment and blindness (at 2010 price levels).\n\nABSTRACT.RESULTS:\nRanibizumab monotherapy resulted in a 0.17 QALY gain at an incremental cost of £4191 relative to laser monotherapy, yielding an incremental cost-effectiveness ratio (ICER) of £24 028. Probabilistic sensitivity analysis showed a 64% probability of being cost-effective at a threshold of £30 000 per QALY. Combined ranibizumab and laser therapy resulted in a 0.13 QALY gain at an incremental cost of £4695 relative to laser monotherapy (ICER £36 106; 42% probability of ICER <£30 000).\n\nABSTRACT.CONCLUSIONS:\nBased on RESTORE 1-year follow-up data, ranibizumab monotherapy appears to be cost-effective relative to laser monotherapy, the current standard of care. Cost-effectiveness of combination therapy is less certain. Ongoing studies will further inform on disease progression and the need for additional ranibizumab treatment.\n\nBODY.INTRODUCTION:\nDiabetic macular oedema (DME) is the most frequent cause of vision impairment in people with diabetes and can lead to blindness if left untreated. 
Even when patients receive optimal treatment with the current standard of care for DME, laser photocoagulation, improvements in vision are relatively uncommon and many patients lose vision despite laser therapy.1\n2 This continued vision loss is the result of structural and physiological damage to the retinal capillary bed, and from progressive and permanent damage to the macular pigment epithelium, associated with poor control of blood glucose, blood pressure and lipid levels (the three main systemic risk factors for diabetic retinopathy and DME).3 The UK prevalence of visual impairment due to DME is estimated at approximately 3% of the adult (aged 18+ years) diabetic population.4 Visual impairment places a substantial socio-economic burden on patients, their caregivers and healthcare systems at large.5 There is, therefore, a strong public health incentive to choose safe therapies that provide increased health gains through improved vision and patient functioning, while offering an acceptable balance between benefits and costs. Ranibizumab (Lucentis®, Novartis Pharma AG, Switzerland) is a novel agent that is currently licensed for the treatment of visual impairment due to DME.6 Ranibizumab selectively inhibits active isoforms of human vascular endothelial growth factor A (VEGF-A) from binding to its receptors. VEGF-A stimulates growth of new blood vessels and is a major mediator of increased vascular leakage,7–10 mechanisms thought to be associated with retinal damage and progression of DME and the resulting visual impairment. The Phase III RESTORE trial enrolled 345 patients with visual impairment due to DME and assessed ranibizumab given either as monotherapy or in combination with laser photocoagulation, compared with laser photocoagulation alone. The results showed that ranibizumab alone or in combination with laser provided significantly greater improvements in best corrected visual acuity (BCVA) at 1 year compared with laser therapy alone, with mean average BCVA changes of +6.1 and +5.9 versus +0.8 letters, respectively.11 In addition, health-related quality of life, as assessed using the National Eye Institute Visual Function Questionnaire (NEI VFQ-25), was improved significantly from baseline for both ranibizumab treatment groups compared with laser therapy alone (p<0.05 for composite score and vision-related subscales). RESTORE is ongoing in a 2-year open-label extension phase. This report summarises the results of a health economic model that was developed based on RESTORE trial data to evaluate the cost-effectiveness of ranibizumab from a UK payer perspective when used as monotherapy or in combination with laser therapy compared with laser therapy alone. The model was developed separately from current assessments of ranibizumab by the UK Health Technology Assessment bodies, such as the National Institute for Health and Clinical Excellence.\n\nBODY.METHODS:\nA Markov model was constructed to simulate costs and changes in BCVA over a 15-year period in a hypothetical cohort of patients with DME (figure 1), based on data from the RESTORE trial (in which 87.5% of patients had type 2 diabetes and average baseline glycated haemoglobin was 7.3%). The average age at entry to the model was 63 years, consistent with patient characteristics in RESTORE. 
The model also assumed the same baseline BCVA distribution, but excluded patients with BCVA >75 letters, consistent with guidance from the ranibizumab summary of product characteristics that such patients may benefit less from treatment than those with baseline BCVA ≤75 letters.6 Figure 1Markov model structure. Health states are defined by best corrected visual acuity (BCVA) in the treated eye. Patients enter the model at treatment start where they are assumed to have BCVA as in RESTORE (>39 letters and ≤75 letters). BCVA is evaluated at 3-monthly intervals. After each cycle, patients may transition to any other health state including death; the probability of moving from health state A to health state B is based on RESTORE data (baseline to month 12) and literature. The model framework allocated eight linear health states defined by BCVA in the treated eye using a set of 10-letter (two-line) categories (table 1). Movement of patients from one health state to another was determined by transition probabilities that depended on the effectiveness of treatment and natural BCVA changes over time. A 10-letter range was used to categorise health states based on evidence that a 10-letter BCVA score loss is associated with a substantial decline in health-related quality of life (eg, loss of functional ability, increasing dependency, role limitations and impaired mental health).15 Each health state was assigned a quality-of-life index and cost. Costs and outcomes were accrued over 3-month cycles, applying half-cycle corrections. The time horizon in the base case was 15 years; although a lifetime horizon could be justified given the chronic nature of the condition, we selected a more conservative approach for the base case because of the lack of evidence on long-term prognosis. Table 1 Utility by BCVA in treated eye Health state defined by BCVA category (letters; treated eye) RESTORE * Lloyd et al 12 † Brown et al 13 § Mean utility (SE) 1: 86–100 0.860 (0.034) ‡ 0.830 0.839 2: 76–85 0.860 (0.014) 0.750 0.839 3: 66–75 0.813 (0.012) 0.750 0.783 4: 56–65 0.802 (0.014) 0.715 0.783 5: 46–55 0.770 (0.018) 0.680 0.732 6: 36–45 0.760 (0.027) 0.680 0.681 7: 26–35 0.681 (0.053) 0.530 0.630 8: 0–25 0.547 (0.083) 0.340 0.579 * Utility scores were calculated based on EQ-5D scores in RESTORE; EQ-5D scores were converted to utilities using social tariffs measured in a UK population. 14 Mean utility for each BCVA state was calculated using a regression technique for repeated measurements at baseline, month 3, month 6 and month 12. Data from several measurement points were pooled to cover all possible health state transitions with a sufficient sample size. A possible trend effect in the pooled data was rejected (p<0.05). † Patients underwent a Snellen visual acuity (VA) assessment and were categorised based on the better-seeing eye. Some adjustments were made to published values in order to convert VA ranges in Lloyd et al (obtained in a population of patients with diabetic retinopathy) to health states as defined in the current model. § Utilities were elicited from patients with diabetic retinopathy. Patients underwent a Snellen VA assessment and were categorised based on the better-seeing eye. Some adjustments were made to published values in order to convert VA ranges to health states as defined in the current model. ‡ Restricted to being greater than or equal to the utility in health state 2. BCVA, best corrected visual acuity. The key inputs and assumptions are summarised in table 2. 
Patients were assumed to receive ranibizumab treatment in year 1 at the frequency observed in RESTORE. In year 2, patients were assumed to need fewer injections, as observed in the Diabetic Retinopathy Clinical Research Network (DRCR.net) protocol I study (which included patients with comparable baseline demographics to RESTORE).1 A proportionately smaller number of monitoring visits was therefore assumed in year 2. After year 2, laser therapy was assumed to be administered as required in all arms, with no further need for ranibizumab; the assumed number of monitoring visits was further reduced accordingly. Table 2 Key model inputs and assumptions Time period Model input Combination therapy Ranibizumab monotherapy Laser monotherapy Source Year 1 BCVA progression RESTORE trial data, adjusted for drop-out rates RESTORE trial data, adjusted for drop-out rates RESTORE trial data, adjusted for drop-out rates RESTORE data on file Treatment frequency 7 injections + 2 laser sessions, as in RESTORE; drop-outs continue in standard care (ie, laser therapy) 7 injections; drop-outs continue in standard care (ie, laser therapy) 2 laser sessions; drop-outs continue in standard care (ie, laser therapy) RESTORE data on file Monitoring visits 12 12 4 SmPC and expert interview (data on file) Adverse events Negligible Negligible Negligible RESTORE data on file Year 2 BCVA progression Equal rates of improvement and worsening (3% in 3 months) Equal rates of improvement and worsening (3% in 3 months) Equal rates of improvement and worsening (3% in 3 months) Supported by DRCR.net and RESTORE data on file Treatment frequency 2 injections + 1 laser session 3 injections (no laser) 1 laser session Supported by DRCR.net Monitoring visits 8 10 4 Assumption Year 3 BCVA progression Constant rates of change of BCVA with a majority of patients having a decline in BCVA Constant rates of change of BCVA with a majority of patients having a decline in BCVA Constant rates of change of BCVA with a majority of patients having a decline in BCVA Calibrated to WESDR data (Supplementary Methods) Treatment frequency No additional ranibizumab; laser therapy as required No additional ranibizumab; laser therapy as required Laser therapy as required Assumption Monitoring visits 4 4 4 Assumption Any year Cost of blindness When BCVA ≤35 letters is reached in better-seeing eye When BCVA ≤35 letters is reached in better-seeing eye When BCVA ≤35 letters is reached in better-seeing eye Adapted from costing approach by Meads et al 16 (Supplementary table 2) Patient-level changes in BCVA data in RESTORE were used to derive transition probabilities in year 1. After year 1, long-term changes in BCVA were simulated assuming categorisation into one of three possible outcomes: ≥10 letters improvement within 3 months (one health state up), ≥10 letters worsening within 3 months (one health state down) or no change exceeding 10 letters within 3 months. BCVA, best corrected visual acuity; DRCR, Diabetic Retinopathy Clinical Research Network; SmPC, summary of product characteristics; WESDR, Wisconsin Epidemiologic Study of Diabetic Retinopathy. 
The average BCVA achieved in year 1 was assumed to be maintained during year 2, as was observed in the DRCR.net protocol I study.1 After year 2, all arms of the model followed natural disease history based on 4-year health state transition outcomes modelled from the Wisconsin Epidemiologic Study of Diabetic Retinopathy (WESDR) reports.17–19 Transition probabilities were calibrated to adjust for the improvement in diabetes management since the WESDR reports (see Supplementary Methods), and predicted that around 30% of patients would be expected to exhibit a worsening in BCVA of at least 10 letters and 20% of patients would show an improvement of at least 10 letters over a 4-year time horizon (Supplementary table 1). Treatment discontinuation rates observed in RESTORE were applied to the model in year 1; it was assumed there would be no additional withdrawals from treatment in year 2. Adverse events were assumed to have a negligible impact on the cost-effectiveness of ranibizumab therapy based on the established safety profile of ranibizumab in clinical trials in DME and in wet age-related macular degeneration, an indication for which ranibizumab is also licensed.1\n20–22 Mortality was estimated by adjusting general UK population death rates according to the increased RR of death in patients with DME. Mulnier et al estimated an increased mortality (HR 1.93) in a UK type 2 diabetes population relative to patients without diabetes,23 while Hirai et al estimated an HR of 1.27 for death in patients with clinically significant macular oedema (CSME) and diabetes relative to diabetic patients without CSME.24 We calculated a 2.45 RR of death in a DME population by multiplying these two ratios. Utility scores were calculated based on patient-reported outcomes data from RESTORE (table 2), in which patients completed the EuroQoL (EQ-5D) questionnaire at baseline and months 3, 6 and 12. Individual EQ-5D health scores were converted into utility scores using preferences from a UK population survey;14 mean utility scores were calculated for each health state (table 1). As these states were defined by BCVA in the treated eye, of which 67.2% were the worse-seeing eye at baseline, this method established an association between utility and BCVA changes in the treated eye. Health state costs included the costs of treatment and monitoring (Supplementary tables 3–6), and the costs associated with blindness (Supplementary table 2). Treatment costs included the costs of ranibizumab (Novartis UK, personal communication) and its administration, laser therapy and investigative procedures. Monitoring costs, including consultation and procedure costs, were estimated from the UK National Health Service Reference Costs.25 Costs of blindness included those incurred by the UK National Health Service for items such as low-vision aids, low-vision rehabilitation, residential or home care, depression and hip fracture/replacement as listed in the costing study by Meads and Hyde.16 Where older cost estimates were used, these were inflated to 2010 prices using the Hospital and Community Health Services index.26 The cost of blindness would be incurred only in patients reaching health states with BCVA ≤35 letters (Snellen ≤6/60) in the better-seeing eye. However, as the study assesses treatment response according to the enrolled eye, the proportion of patients reaching this level within the time horizon of the model is therefore uncertain. 
As such, the base case model adjusts for the cost of blindness on the basis of treated eyes reaching the BCVA ≤35-letter threshold. As with other model parameters subject to uncertainty, deviations from this assumption were explored in sensitivity analyses (Supplementary table 7). The main outcome measure was the incremental cost-effectiveness ratio (ICER), expressed as the additional cost per quality-adjusted life-year (QALY) gained by one treatment over another. An annual 3.5% discount rate was applied for future costs and utilities, consistent with the standard UK approach. Univariate sensitivity analyses assessed the uncertainty around specific data sources by exploring the effects on the ICER of inputs, as shown in table 3. Probabilistic sensitivity analysis assessed the overall uncertainty about the ICER based on variations in individual input parameters; details of the applied distributions and results are provided in Supplementary table 7. Table 3 Cost-effectiveness: base case and sensitivity analyses * Assumption/parameter Base case Sensitivity analyses Incremental cost Incremental QALY ICER ICER (% change) Ranibizumab monotherapy versus laser monotherapy  Base case – – £4191 0.17 £24 028 –  Discount rate of future costs and benefits 3.50% 0%–5% £3593–£4383 0.16 to 0.16 £17 051–£27 042 –29% to +13%  Time horizon 15 years 10–20 years £4738–£3991 0.14 to 0.19 £33 139 to £21 343 +38% to –11%  Cost of blindness £6477 –25% to +25% £4868–£3515 0.17 to 0.17 £27 907–£20 150 +16% to –16%  Long-term progression of VA Declining Constant or increasing £4487–£4693 0.17 to 0.17 £26 198–£28 413 +9% to +18%  Total number of ranibizumab injections 10 –4 injections to +4 injections £2171–£6774 0.17 to 0.17 £12 446–38 836 –48% to +62%  Baseline age 63 years 58 years £3767 0.20 £19 259 –20%  Source of utilities RESTORE Lloyd et al 12 £4191 0.22 £19 238 –20% RESTORE Brown et al 13 £4191 0.19 £21 953 –9% Combination therapy versus laser monotherapy  Base case – – £4695 0.13 £36 106 –  Discounting of future costs and benefits 3.50% 0%–5% £4271–£4828 0.16 to 0.12 £26 957–£40 096 –25% to +11%  Time horizon 15 years 10–20 years £5133–£4507 0.10 to 0.13 £49 294–£34 135 +37% to –5%  Cost of blindness £6477 –25% to +25% £5050–4340 0.13 to 0.13 £38 833–£33 378 +8% to –8%  Long-term progression of VA Declining Constant or increasing £5091–£5276 0.13 to 0.12 £40 852–£44 071 +13% to +22%  Total number of ranibizumab injections 9 –4 injections to +4 injections £3165–£7260 0.13 to 0.13 £24 340–£55 828 –33% to +55%  Baseline age 63 years 58 years £4393 0.15 £29 952 –17%  Source of utilities RESTORE Lloyd et al 12 £4695 0.16 £28 778 –20% RESTORE Brown et al 13 £4695 0.16 £29 576 –18% * Incremental cost measures the additional cost of ranibizumab monotherapy or combination therapy compared with laser monotherapy in the modelled time horizon (15 years in base case). Incremental QALY measures the corresponding QALY gain when ranibizumab monotherapy or combination therapy is compared with laser monotherapy. The ICER is calculated by dividing the incremental cost by the incremental QALY. The ICER is interpreted as the cost of achieving an additional year of life in perfect health. ICER, incremental cost-effectiveness ratio; QALY, quality-adjusted life-year; VA, visual acuity. 
\n\nBODY.RESULTS:\nThe model predicted that after 1 year, a greater proportion of patients treated with ranibizumab monotherapy or combination therapy would have BCVA >65 letters (Snellen score >6/18) compared with patients treated with laser monotherapy (48% and 47%, respectively, versus 38% in the laser arm) (figure 2). After 15 years, the proportion with severe visual impairment in the treated eye (BCVA <35 letters, Snellen <6/60) was predicted to be 12% and 13% in the ranibizumab monotherapy and combination therapy groups, respectively, versus 19% in the laser group. Figure 2Modelled distribution by health states after (A) 1 and (B) 15 years. BCVA, best corrected visual acuity. Ranibizumab monotherapy was associated with an incremental gain of 0.17 QALY and cost of £4 191, corresponding to an ICER of £24 028 per QALY gained relative to laser monotherapy (table 3). Combination therapy provided an incremental gain of 0.13 QALY over laser monotherapy for an incremental cost of £4 695, leading to an ICER of £36 106 per QALY gained. Univariate sensitivity analyses showed that the model was stable and that ICERs were most sensitive to changes in the number of injections and time horizon (table 3). Using utilities elicited by Lloyd et al12 and Brown et al13 in patients with diabetic retinopathy led to greater QALY gains and lower ICERs for ranibizumab monotherapy or combination therapy relative to laser monotherapy. Probabilistic sensitivity analysis confirmed the model's robustness (Supplementary figure 1). Assuming a willingness-to-pay threshold of £30 000 per QALY gained, the probability that ranibizumab monotherapy would be cost-effective relative to laser monotherapy was estimated to be 64% (Supplementary figure 2); the corresponding probability for combination therapy relative to laser monotherapy was 42%.\n\nBODY.DISCUSSION:\nThis is the first study to evaluate the cost-effectiveness of an anti-VEGF agent for the treatment of patients with DME causing visual impairment. Our economic model, which was based on data from the RESTORE clinical trial, shows that ranibizumab monotherapy provides superior improvements in visual acuity and is cost-effective relative to the current standard of care, laser photocoagulation. Ranibizumab monotherapy was associated with an ICER of £24 028 per QALY gained relative to laser therapy alone, a value within the willingness-to-pay threshold of £30 000 per QALY gained that has generally been considered cost-effective in the UK. The results of our model may be applicable to a broad spectrum of patients with visual impairment due to DME, as the RESTORE trial showed consistent efficacy of ranibizumab therapy across patient subgroups based on a range of demographic and disease characteristics. The cost-effectiveness of combined ranibizumab and laser therapy was less favourable than that of ranibizumab monotherapy (ICER £36 106 per QALY gained relative to laser therapy alone). While this in part reflects higher treatment costs, combination therapy also provided a lower predicted QALY gain than ranibizumab monotherapy (0.13 vs 0.17 QALY over the 15-year time horizon in the base case). 
A possible explanation for these results is the observation in RESTORE that combination therapy provided smaller improvements in BCVA in patients who had received prior laser therapy than those who had not (4.7 vs 6.9 letters gained, respectively).11 In RESTORE, the proportions of patients who received prior laser therapy were 52% (ranibizumab monotherapy), 47% (combination therapy) and 43% (laser therapy). Sensitivity analyses showed that the model results were robust to reasonable alterations in inputs and assumptions; ICERs were particularly sensitive to changes in the number of ranibizumab injections and the time horizon of the model. The base case assumed an average of 10 ranibizumab injections over 2 years, based on data from the DRCR.net protocol I study.1 Ranibizumab monotherapy remained cost-effective (ICER below £30 000 per QALY gained) up to a total of 13 injections. Increasing the number of injections beyond 13 resulted in an ICER outside the generally accepted threshold, emphasising that additional injections beyond 2 years of treatment will be an important cost driver. However, the current model is conservative in that it includes incremental costs for additional injections beyond year 2, but assumes no incremental benefit. The possible need for re-treatment beyond 2 years remains speculative; forthcoming data from the 2- and 3-year RESTORE follow-up and the DRCR.net 5-year data will improve our understanding of the likely duration of treatment. Longer time horizons would be expected to lead to improved cost-effectiveness of ranibizumab, because the benefits of improved vision accrue over time whereas treatment costs are incurred immediately once treatment is initiated. We have identified only one previously reported cost-effectiveness study of interventions for DME. Sharma et al modelled the cost-effectiveness of laser therapy alone for DME,27 comparing early and deferred laser treatment with no treatment based on 3-year outcomes from the Early Treatment Diabetic Retinopathy Study.28 The model included health states defined by BCVA and applied a 40-year time horizon based on a population with a mean age of 47 years. Over this time horizon, laser treatment was predicted to provide a gain of 0.236 QALY and was considered highly cost-effective for DME relative to no treatment. Our results cannot be compared directly with the Sharma et al27 study because RESTORE did not include a 'no treatment' arm; moreover, our model applied a 15-year time horizon from a baseline age of 63 years, consistent with RESTORE. However, if a baseline age of 47 years and time horizon of 40 years were applied to our model, as in the Sharma et al27 model, the RESTORE model would predict a 0.26 QALY gain and an ICER of £10 412 for ranibizumab monotherapy relative to laser therapy alone. This suggests that the cost-effectiveness of ranibizumab would be higher in patients with DME who were younger than the average in RESTORE (63 years) because of their longer life expectancy. Several methodological considerations should be noted. While treatment effectiveness was based on 1-year data from RESTORE, we projected long-term changes in BCVA using the best available clinical evidence. For year 2, the DRCR.net protocol I study supported the assumption that BCVA at 1 year was maintained on average to 2 years.1 BCVA changes in years 3–15 were based on reported progression in the WESDR study, which demonstrated a gradual decline in vision over time in DME. 
The rate of decline in BCVA was reduced by adjustment using data from both the Diabetes Control and Complications Trial and UK Prospective Diabetes Study to account for a less intensive systemic diabetes management regimen in the past relative to current practice.29–32 Sensitivity analysis showed that these assumptions had only a minor impact on the model results. Utilities were estimated from the BCVA of treated eyes in RESTORE, irrespective of whether this was the better-seeing or worse-seeing eye. This approach links utility values directly to the eye receiving the intervention. The RESTORE study protocol required the worse-seeing eye to undergo intervention unless there was a medical contraindication; 32.8% of treated eyes were the better-seeing eye at baseline, and 37.3% were the better-seeing eye at 12 months. Using the better eye as reference is supported by quality of life studies in vision-related conditions, which have shown that the better eye is the major driver of overall quality of life and patient functioning.33 Indeed, patient-preference studies have shown that the utility gains associated with treating the worse-seeing eye are uncertain, even though good vision in two eyes confers better quality of life than good vision in only one eye.34 As expected, subdividing the elicited utilities in RESTORE by better-seeing or worse-seeing eye showed that patients being treated in the better-seeing eye reported lower utility at a given level of BCVA than those treated in the worse-seeing eye. Nevertheless, utility measured in patients treated in the worse-seeing eye demonstrated significant sensitivity to variation in the BCVA in the treated eye, comparable with that of patients treated in the better-seeing eye. Unfortunately, the small sample of RESTORE patients in each BCVA health state meant that the resulting utility functions were not sufficiently robust to allow separate cost-effectiveness analysis by better-seeing or worse-seeing eye. It should also be stated that the option of not treating visual impairment in a better-seeing eye is not an ethical stance. We could not find published utility estimates from specific populations with DME. We performed sensitivity analysis using utilities reported in two studies based on populations with diabetic retinopathy (which includes a wider population than DME).12\n13 Lloyd et al reported utilities elicited by the general population, while Brown et al reported utilities elicited by patients. This showed that applying the Lloyd et al12 utilities to the model greatly improved the cost-effectiveness of ranibizumab monotherapy, leading to a larger incremental gain of 0.22 QALY and a lower ICER of £19 238 per QALY gained relative to laser therapy alone. Applying the Brown et al13 utilities also increased the QALY gain with ranibizumab and led to an ICER of £21 953 per QALY gained for ranibizumab monotherapy versus laser therapy alone. The fact that the RESTORE utilities are less sensitive to BCVA decline may simply reflect the fact that a majority of the reference eyes in RESTORE were worse-seeing, while all reference eyes in the Lloyd et al12 and Brown et al13 studies were better-seeing. Limitations of this analysis should be considered. First, we modelled treatment of unilateral DME based on RESTORE data, but did not estimate the cost-effectiveness of treating bilateral DME. 
While 82.8% of patients in RESTORE had signs of bilateral DME at baseline, the proportion of patients who would have become eligible for treatment because of vision impairment in their fellow eye was not known. Treatment of both eyes may be relevant in many patients with bilateral DME, but there is a lack of evidence for the additional utility benefit of treating the fellow eye. Uncertainty also exists regarding the cost consequences of treating both eyes, where total cost is not likely to double given the possibility of achieving economies of scale from shared categorical spending, such as administrative and monitoring costs. An additional limitation was that the model assumed the cost of blindness would be incurred in patients reaching health states with BCVA ≤35 letters (Snellen ≤6/60) in the treated eye. There is also uncertainty inherent in working with published results of studies, as was necessary for incorporating the DRCR Network Protocol I results, as opposed to using patient-level data. While analysis using patient level data is clearly preferable, incorporation of findings from studies in addition to RESTORE, such as the highly comparable DRCR study, provide very useful data for answering critical questions relating to clinical practice in a broad spectrum of patients with DME. It should be noted, however, that some subgroups, such as patients with the poorest glycaemic control (high glycated haemoglobin), were excluded from these trials. In conclusion, the results of our economic model show that ranibizumab monotherapy is cost-effective relative to laser therapy alone in the treatment of DME causing visual impairment, while combined ranibizumab and laser therapy may be cost-effective depending on patient characteristics. The cost-effectiveness of ranibizumab monotherapy or combination treatment is expected to be higher in younger patients who have a longer life expectancy. These findings have important practical implications, given the high socio-economic burden of DME and the need for new, cost-effective treatments that reduce long-term progression to blindness. Ongoing studies, such as the RESTORE extension, will provide additional clarification of current uncertainties such as the need for ranibizumab injections after 2 years and the likelihood of recurrent DME.\n\nBODY.SUPPLEMENTARY MATERIAL:\nSupporting Statement Supporting Statement Supporting Statement Supporting Statement Supporting Statement Supporting Statement\n\n**Question:** Compared to ranibizumab combined with laser therapy what was the result of ranibizumab on ICER (incremental cost-effectiveness ratio)?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly decreased\n" } ]
significantly decreased
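A brief illustrative aside on the health-economic arithmetic quoted in the record above (id 822): the ICER is the incremental cost divided by the incremental QALY gain, future costs and QALYs are discounted at 3.5% per year in the article's base case, and the 2.45 relative risk of death is the product of the two published hazard ratios (1.93 × 1.27). The sketch below is a minimal, hypothetical illustration of those calculations — the function names are ours and the inputs are the rounded figures quoted in the abstract, so it does not reproduce the exact published ICER of £24 028, which was computed from unrounded model outputs.

```python
# Minimal illustrative sketch of the arithmetic described in the article above.
# All names are ours and inputs are rounded published figures, not the actual
# RESTORE Markov model outputs.

def icer(incremental_cost, incremental_qaly):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return incremental_cost / incremental_qaly

def present_value(amount, years, annual_rate=0.035):
    """Discount a cost or QALY accruing 'years' in the future at 3.5% per year."""
    return amount / (1 + annual_rate) ** years

# Ranibizumab monotherapy vs laser, using the rounded abstract figures:
print(round(icer(4191, 0.17)))           # 24653 (paper reports 24028 from unrounded outputs)
# Combination therapy vs laser:
print(round(icer(4695, 0.13)))           # 36115 (paper reports 36106)
# Relative risk of death applied to the DME cohort (1.93 x 1.27):
print(round(1.93 * 1.27, 2))             # 2.45
# One QALY gained 10 years from now is worth about 0.709 QALYs today at 3.5%:
print(round(present_value(1.0, 10), 3))  # 0.709
```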
830
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Behavioural recovery after treatment for varicose veins\n\n ABSTRACT.BACKGROUND:\nThe aim of this study was to assess behavioural recovery from the patient's perspective as a prespecified secondary outcome in a multicentre parallel‐group randomized clinical trial comparing ultrasound‐guided foam sclerotherapy (UGFS), endovenous laser ablation (EVLA) and surgery for the treatment of primary varicose veins.\n\nABSTRACT.METHODS:\nParticipants were recruited from 11 UK sites as part of the CLASS trial, a randomized trial of UGFS, EVLA or surgery for varicose veins. Patients were followed up 6 weeks after treatment and asked to complete the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This is a 15‐item instrument that covers eight activity behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven participation behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective.\n\nABSTRACT.RESULTS:\nA total of 798 participants were recruited. Both UGFS and EVLA resulted in a significantly quicker recovery compared with surgery for 13 of the 15 behaviours assessed. UGFS was superior to EVLA in terms of return to full‐time work (hazard ratio 1·43, 95 per cent c.i. 1·11 to 1·85), looking after children (1·45, 1·04 to 2·02) and walks of short (1·48, 1·19 to 1·84) and longer (1·32, 1·05 to 1·66) duration.\n\nABSTRACT.CONCLUSION:\nBoth UGFS and EVLA resulted in more rapid recovery than surgery, and UGFS was superior to EVLA for one‐quarter of the behaviours assessed. The BRAVVO questionnaire has the potential to provide important meaningful information to patients about their early recovery and what they may expect to be able to achieve after treatment.\n\nBODY.INTRODUCTION:\nMinimally invasive treatments for varicose veins such as ultrasound‐guided foam sclerotherapy (UGFS) and thermal ablation techniques have become widely used alternatives to surgery for the treatment of varicose veins. One of the advantages of these techniques is the reported quicker return to normal activities, particularly following UGFS1, 2, 3. However, it is unclear whether thermal ablation, in particular endovenous laser ablation (EVLA), is also associated with a clinically significant quicker return to normal activities compared with surgery; some studies4, 5 have shown an earlier return and others2, 6, 7, 8 no difference. Until recently, there was no standard means of assessing recovery from the patient's perspective. This led to the use of varying definitions such as return to 'normal activities', 'full activity', 'daily activity' or 'basic physical activities' and/or 'return to work' in previous studies. 
This lack of standardization led the authors to develop a 15‐item questionnaire to assess distinct aspects of normal activities that were identified as important by patients9 – the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This paper reports behavioural recovery results from a multicentre parallel‐group randomized clinical trial (CLASS, Comparison of LAser, Surgery and foam Sclerotherapy) that compared the clinical efficacy and cost‐effectiveness of three treatment modalities: UGFS, EVLA with delayed foam sclerotherapy to residual varicosities if required, and surgery. Behavioural recovery was one of the prespecified secondary outcomes of the CLASS trial. The clinical and cost‐effectiveness results have been reported elsewhere10, 11.\n\nBODY.METHODS:\nPatients were recruited from 11 centres in the UK between November 2008 and October 2012. This study (ISRCTN51995477) had research ethics committee and Medicines and Healthcare products Regulatory Authority approval. Eight centres randomized participants to one of three treatment options, and three centres offered only UGFS and surgery. Participants were randomized between the treatments with even allocation, using a minimization algorithm that included centre, age (less than 50 years, 50 years or more), sex, great saphenous vein (GSV) or small saphenous vein (SSV) reflux, and unilateral or bilateral disease. Inclusion criteria were: age over 18 years; primary unilateral or bilateral symptomatic varicose veins (Clinical Etiologic Anatomic Pathophysiological (CEAP) grade C2 or above); GSV and/or SSV involvement; and reflux exceeding 1 s on duplex ultrasonography. Exclusion criteria were: current deep vein thrombosis; acute superficial vein thrombosis; a GSV or SSV diameter smaller than 3 mm or larger than 15 mm; tortuous veins considered unsuitable for EVLA or stripping; and contraindications to UGFS or to general/regional anaesthesia that would be required for surgery.\n\nBODY.METHODS.TREATMENTS:\nThe treatments have been described in detail elsewhere9, 10. For UGFS, foam was produced using the Tessari technique12 using a ratio of 0·5 ml sodium tetradecyl sulphate to 1·5 ml air (3 per cent for GSV/SSV truncal veins, 1 per cent for varicosities; maximum 12 ml foam per session). EVLA of GSVs/SSVs was performed under local anaesthetic, and patients were offered UGFS to any residual varicosities at 6‐week follow‐up if required, with the exception of one centre that performed concurrent phlebectomies. Surgery in the form of proximal GSV/SSV ligation and stripping (all GSV) and concurrent phlebectomies was performed under general or regional anaesthetic as a day‐case procedure. Compression stockings were applied after all three treatments.\n\nBODY.METHODS.POST‐TREATMENT ACTIVITY:\nAll participants were given a study patient information leaflet (PIL), which recommended a return to all normal activities as soon as they were able, but that strenuous activity/contact sport should be avoided for 1–2 weeks. The PIL specifically stated that following EVLA or UGFS 'most people are able to return to work within 2–3 days of treatment, but some people go back the following day or even the same day', and that following surgery 'people can return to office or sedentary work after 2–3 days; and that most people will be back at work within a week after surgery to one leg and 2 weeks after surgery to both legs; but there is no reason to remain off work as long if it can be managed with reasonable comfort'. 
Participants undergoing UGFS or EVLA were advised to wear compression stocking for 10 days constantly (day and night). Those in the surgery group were advised that bandages would be removed the day after operation, following which they should wear a stocking for 10 days, but that it was reasonable to remove the stocking after 4 or 5 days, providing that they were active.\n\nBODY.METHODS.DATA COLLECTION:\nThe participants were asked to complete the BRAVVO questionnaire along with other study questionnaires (Aberdeen Varicose Vein Questionnaire, EQ‐5DTM (EuroQoL, Rotterdam, The Netherlands) and Short Form 36 (QualityMetric, Lincoln, Rhode Island, USA)) at the 6‐week follow‐up appointment. Participants who failed to attend the 6‐week appointment were sent the questionnaire to complete at home. The BRAVVO questionnaire was developed as an instrument to assess the activity and participation components of the World Health Organization International Classification of Disability and Function model13. Variation in activity and participation is not fully explained by impairment and so these constructs are important additional indicators of health outcome. An interview study involving 17 patients who had recently undergone varicose vein treatment was carried out to identify normal activities and 'milestone' behaviours to incorporate into the questionnaire. In addition to sampling from the three treatment options, diversity sampling was used in an attempt to gain a mix of sex, age and rural–urban location. Seventeen interview transcripts were content‐analysed in four stages to identify appropriate items to include in a questionnaire. Full details of this process have been published previously9. The BRAVVO questionnaire assesses the time taken for patients to return to performing 15 behaviours: eight 'activity' behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven 'participation' behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective9. Fig. 1 shows the question layout. Figure 1Question layoutBJS-10081-FIG-0001-c\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData from the BRAVVO questionnaire were analysed within an interval‐censored time‐to‐event framework using flexible parametric survival models14. For each behaviour item, each participant's response was converted into the number of days to return to that behaviour. If a participant indicated that return to the behaviour was on the day of the procedure, this was assumed to be interval‐censored between day 0 and day 1. If a participant indicated return to the behaviour was after a number of weeks, not days, this was assumed to be interval‐censored between the previous week and the week indicated. For example, if a participant reported 5 weeks, it was assumed that the return to the behaviour took place between 28 and 35 days. A participant who indicated that they had not returned to a behaviour that they usually performed was right‐censored at 42 days. Participants who indicated that they did not normally perform a specific behaviour were not included in analysis of that behaviour. No missing data were imputed. Data are reported as the number of days for 50 and 90 per cent of participants to return to each behaviour, estimated from the parametric survival models (the 50 per cent value represents the median time to return to this behaviour). 
Extrapolation beyond the 42‐day cut‐off was performed for behaviours where 90 per cent of participants had not returned to the behaviour by 42 days. Treatment effects are presented as hazard ratios with associated 95 per cent c.i. All analyses were carried out in Stata® 1215. Flexible parametric survival models were fitted using the stpm package16.\n\nBODY.RESULTS:\nSeven hundred and ninety‐eight participants were recruited, of whom 13 were ineligible (for example because they had recurrent veins or veins larger than 15 mm in diameter) after randomization and were considered postrandomization exclusions (Fig. \n2). The groups were well balanced in terms of demographic characteristics at baseline, but there was an increased incidence of deep venous reflux in the foam group compared with the surgery group (P = 0·005) (Table \n1). Of the 670 participants who completed the 6‐week questionnaire, 655 completed at least one of the BRAVVO questions. Completion rates were slightly lower for the questions about going out socially (74·8 per cent) and sporting activity (66·0 per cent), which may not have been relevant to all participants. For all behaviours, except wearing clothes that show the leg, going out socially and sporting activities, over 95 per cent of participants had returned to normal behaviour within 6 weeks of intervention. Figure 2CONSORT diagram for the trial. Reasons for postrandomization exclusion included: recurrent varicose veins and veins larger than 15 mm. Reasons for withdrawal from follow‐up included: patient decided not to proceed with treatment (and also declined follow‐up), declined follow‐up after treatment or did not wish to complete questionnaires. UGFS, ultrasound‐guided foam sclerotherapy; EVLA, endovenous laser ablationBJS-10081-FIG-0002-c Table 1 Demographic details at recruitment EVLA UGFS Surgery ( n  = 210) ( n  = 286) ( n  = 289) Age (years) * \n 49·7 (18–80) 49·0 (19–78) 49·2 (22–85) Sex ratio (F : M) 120 : 90 162 : 124 163 : 126 Body mass index (kg/m 2 ) * \n 27·0 (17–42) 27·1 (17–44) 27·7 (17–44) Unilateral disease 153 (72·9) 215 (75·2) 196 (67·8) Employment status Self‐employed 21 (10·2) 37 (13·0) 29 (10·3) Employed 120 (58·3) 169 (59·3) 179 (63·5) Other 65 (31·6) 79 (27·7) 74 (26·2) Unknown 4 1 7 Saphenous vein involvement Great saphenous 182 (86·7) 232 (81·1) 239 (82·7) Small saphenous 14 (6·7) 21 (7·3) 21 (7·3) Great and small saphenous 14 (6·7) 33 (11·5) 29 (10·0) Deep vein reflux 28 of 205 (13·7) 47 of 280 (16·8) 25 of 282 (8·9) CEAP classification C2, varicose veins over 3 mm 113 (54·1) 169 (59·1) 147 (51·2) C3, oedema 28 (13·4) 35 (12·2) 39 (13·6) C4, skin/subcutaneous changes 56 (26·8) 74 (25·9) 90 (31·4) C5/C6, healed/active venous ulcer 12 (5·7) 8 (2·8) 11 (3·8) Unknown 1 0 2 Values in parentheses are percentages unless indicated otherwise; * values are mean (range). EVLA, endovenous laser ablation; UGFS, ultrasound‐guided foam sclerotherapy; CEAP, Clinical Etiologic Anatomic Pathophysiologic. \n\nBODY.RESULTS.ULTRASOUND‐GUIDED FOAM SCLEROTHERAPY :\nParticipants randomized to UGFS recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n2\n). The two behaviours for which there was no evidence of a difference in the time to recover between the trial arms were 'having a bath or shower' and 'wearing clothes that show the legs'. 
In general, the median time to return to the activity behaviours was 5 days or less for those randomized to UGFS and up to 9 days for those randomized to surgery. In both groups, there was greater variation in the median time to return to the participation behaviours than the activity behaviours. Table 2 Behavioural recovery: ultrasound‐guided foam sclerotherapy versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n UGFS Surgery Activity items Bending the legs without discomfort 50 3·0 4·6 1·38 (1·14, 1·67) 90 14·1 21·3 Lifting heavy objects without discomfort 50 4·8 9·8 1·97 (1·59, 2·44) 90 16·9 34·5 Moving from standing to sitting without discomfort 50 1·9 3·7 1·63 (1·35, 1·97) 90 9·3 17·5 Standing still for a long time (> 15 min ) without discomfort 50 3·9 7·1 1·67 (1·36, 2·05) 90 15·8 28·7 Walking short distances (< 20 min ) without discomfort 50 1·9 4·4 2·00 (1·65, 2·42) 90 8·2 19·1 Walking long distances (> 20 min) 50 4·5 8·0 1·76 (1·45, 2·14) 90 15·2 27·1 Having a bath or shower 50 5·4 4·9 0·85 (0·70, 1·03) 90 11·4 10·3 Driving a car 50 4·1 7·0 1·78 (1·45, 2·19) 90 12·4 21·1 Participation items Doing housework 50 2·1 4·5 2·10 (1·72, 2·56) 90 7·3 15·7 Looking after children 50 1·2 3·5 2·20 (1·61, 3·00) 90 6·2 17·9 Wearing clothes that show the legs 50 12·4 12·8 1·03 (0·78, 1·35) 90 56·6 58·7 Partial return to normal work/employment 50 4·4 9·9 2·16 (1·72, 2·72) 90 15·4 34·2 Full return to normal work/employment 50 4·8 11·7 2·56 (2·05, 3·21) 90 14·9 36·2 Going out socially 50 7·1 9·3 1·29 (1·06, 1·57) 90 25·8 34·0 Sporting activity or exercise 50 15·7 21·8 1·33 (1·05, 1·68) 90 62·6 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.RESULTS.ENDOVENOUS LASER ABLATION :\nParticipants randomized to EVLA recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n3). Return to 'having a bath or shower' was quicker after surgery than after EVLA. There was no difference in time to return to the participation behaviour of 'wearing clothes that show the legs'. 
Table 3 Behavioural recovery: endovenous laser ablation versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n EVLA Surgery Activity items Bending the legs without discomfort 50 2·7 4·6 1·49 (1·19, 1·75) 90 12·6 21·3 Lifting heavy objects without discomfort 50 5·9 9·8 1·79 (1·39, 2·27) 90 20·5 34·5 Moving from standing to sitting without discomfort 50 2·2 3·7 1·56 (1·27, 1·96) 90 10·4 17·5 Standing still for a long time (> 15 min) without discomfort 50 4·8 7·1 1·41 (1·11, 1·79) 90 20·0 28·7 Walking short distances (< 20 min) without discomfort 50 3·0 4·4 1·30 (1·04, 1·61) 90 13·2 19·1 Walking long distances (> 20 min) 50 5·6 8·0 1·53 (1·06, 1·67) 90 19·8 27·1 Having a bath or shower 50 5·5 4·9 0·74 (0·59, 0·93) 90 12·8 10·3 Driving a car 50 4·4 7·0 1·82 (1·43, 2·33) 90 12·7 21·1 Participation items Doing housework 50 2·5 4·5 1·89 (1·49, 2·38) 90 8·4 15·7 Looking after children 50 1·9 3·5 1·61 (1·15, 2·27) 90 8·8 17·9 Wearing clothes that show the legs 50 14·6 12·8 0·97 (0·69, 1·35) 90 75·1 58·7 Partial return to normal work/employment 50 6·3 9·9 1·75 (1·33, 2·27) 90 21·1 34·2 Full return to normal work/employment 50 7·7 11·7 1·79 (1·37, 2·27) 90 23·5 36·2 Going out socially 50 6·9 9·3 1·41 (1·12, 1·75) 90 23·9 34·0 Sporting activity or exercise 50 14·2 21·8 1·47 (1·12, 1·92) 90 55·5 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. EVLA, endovenous laser ablation. There were no differences in the time taken to return to 11 of the 15 behaviours between participants randomized to EVLA and those randomized to UGFS (Table \n4). Return to 'walking short distances without discomfort', 'walking long distances', 'looking after children' and 'full return to normal work/employment' took longer for the EVLA group than the UGFS group. Following UGFS or EVLA only one‐third of the specific behaviours could be carried out by 50 per cent of participants by 3 days after treatment. 
Table 4 Behavioural recovery: endovenous laser ablation versus ultrasound‐guided foam sclerotherapy Proportion carrying out behaviour (%) Time until specified proportion of participants can carry out behaviour (days) * \n Hazard ratio † \n EVLA UGFS Activity items Bending the legs without discomfort 50 2·7 3·0 0·94 (0·75, 1·17) 90 12·6 14·1 Lifting heavy objects without discomfort 50 5·9 4·8 1·11 (0·87, 1·42) 90 20·5 16·9 Moving from standing to sitting without discomfort 50 2·2 1·9 1·12 (0·90, 1·40) 90 10·4 9·3 Standing still for a long time (> 15 min) without discomfort 50 4·8 3·9 1·14 (0·90, 1·44) 90 20·0 15·8 Walking short distances (< 20 min) without discomfort 50 3·0 1·9 1·48 (1·19, 1·84) 90 13·2 8·2 Walking long distances (> 20 min) 50 5·6 4·5 1·32 (1·05, 1·66) 90 19·8 15·2 Having a bath or shower 50 5·5 5·4 1·19 (0·96, 1·48) 90 12·8 11·4 Driving a car 50 4·4 4·1 0·95 (0·74, 1·21) 90 12·7 12·4 Participation items Doing housework 50 2·5 2·1 1·03 (0·82, 1·29) 90 8·4 7·3 Looking after children 50 1·9 1·2 1·45 (1·04, 2·02) 90 8·8 6·2 Wearing clothes that show the legs 50 14·6 12·4 1·17 (0·83, 1·64) 90 75·1 56·6 Partial return to normal work/employment 50 6·3 4·4 1·17 (0·89, 1·52) 90 21·1 15·4 Full return to normal work/employment 50 7·7 4·8 1·43 (1·11, 1·85) 90 23·5 14·9 Going out socially 50 6·9 7·1 0·88 (0·70, 1·10) 90 23·9 25·8 Sporting activity or exercise 50 14·2 15·7 0·80 (0·61, 1·04) 90 55·5 62·6 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the endovenous laser ablation (EVLA) arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.DISCUSSION:\nThis study showed that both UGFS and EVLA resulted in a more rapid recovery compared with surgery for 13 of the 15 behaviours. UGFS was superior to EVLA in terms of return to full time work, looking after children and walking (both short and long distances). Importantly, the specific behaviours assessed were shown to have a range of different recovery trajectories. Previous randomized clinical trials showed behavioural recovery to be more rapid following UGFS compared with surgery1, 2, but the benefit of EVLA over surgery was less clear2, 4, 8 . In this study, for all but two behaviours (wearing clothes that showed the legs and showering/bathing) the recovery was quicker following UGFS or EVLA compared with surgery. These findings may have arisen as a result of information contained in the study PIL, which recommended that compression hosiery was worn continuously for 10 days following UGFS or EVLA but for 4–5 days routinely after surgery. In the comparison between UGFS and EVLA, behavioural recovery was faster following UGFS for four of the 15 behaviours; there was no difference between the groups for the other behaviours. Two previous randomized trials2, 3 showed earlier return to 'normal activities' in patients undergoing UGFS compared with EVLA. Specifically, the present study showed a quicker return to full‐time work following UGFS, similar to the findings of Rasmussen and colleagues2. The median time taken to return to work following EVLA (7·7 days) was within the ranges reported2, 4, 5, 6, 7, 8. However, Rasmussen and colleagues2 reported earlier return to work after UGFS compared with the present study (median 2·9 versus 4·8 days respectively). 
A partial explanation of the difference between the two studies may be that, unlike the previous study, the present analysis did not correct for weekends. For other behaviours, the recalled recovery times following both UGFS and EVLA were longer than might be expected from the literature2, 3, 4, 5, 8. This may be explained by the timing of the questionnaire at 6 weeks, and thus it is the nature of the differences between treatment groups rather than the absolute timings taken to return to these activities that the authors wish to highlight in this paper. The extent of this overall delay in recovery is hard to justify, particularly in light of the standard information and advice given in the study PIL. There may have been a number of external influences affecting participants' recollection of their recovery, including misinformation and fear. Although attitudes to recovery and return to normal behaviours have changed in secondary care, this may not have filtered into primary care or 'public knowledge'. Fear of activity or fear of pain caused by activity has been documented following surgery for other conditions17, 18. It is possible that some people undergoing treatment for varicose veins experience similar fears, and this may limit or restrict their activity following treatment. With regard to return to work, there are clearly a number of additional factors that might play a role, such as a person's employment status (employed or self‐employed), the sickness benefits they are entitled to, the type of work they are employed to do, how long they are 'signed off' by the doctor, and the views of their employer on return to work after an operation. It should be noted that this study distinguished between partial and full return to work, and that no difference was noted in partial return to work following UGFS and EVLA. This finding may be of substantial importance to patients, their employers and the economy as a whole. The main strength of this study is that the behaviours investigated were based on systematic investigation of the recovery milestones that are important to patients following treatment for varicose veins. Hence, the findings are of personal importance from a patient perspective. Distinguishing between the behaviours that contribute to 'normal activity' helps build a profile of recovery that may be particularly useful for patients preparing for, or recovering from, treatment. Furthermore, the methodology used to develop the BRAVVO questionnaire could be used in other conditions to provide normative information about behavioural recovery that is relevant to patients. The BRAVVO questionnaire was pilot tested and found to be acceptable to patients, comprehensible and appropriate for self‐completion. Despite this, a potential weakness of the study is that the level of missing data in the BRAVVO questionnaire was higher for two of the questions. Further work to reformat or rephrase the questions or response options may help minimize levels of missing data. A further potential weakness is the choice of assessment time point (6 weeks after treatment). This may have compromised recall, particularly for behaviours that participants were able to return to a short time after treatment; however, any compromise in recall is likely to have affected the three treatment groups equally. Other study outcomes were assessed at 6 weeks, and behavioural recovery was assessed at the same time point to minimize participant burden. 
Further work is required to determine the optimal timing(s) of this questionnaire. Given that the median time to return to the behaviour was less than 14 days for 13 of the behaviours, and up to 22 days for the other two (wearing clothes that show the legs, sporting activity or exercise), the use of the questionnaire at approximately 2–3 weeks would seem appropriate.\n\n**Question:** Compared to surgery what was the result of Ultrasound‐guided foam sclerotherapy on Having a bath or shower?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
835
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Behavioural recovery after treatment for varicose veins\n\n ABSTRACT.BACKGROUND:\nThe aim of this study was to assess behavioural recovery from the patient's perspective as a prespecified secondary outcome in a multicentre parallel‐group randomized clinical trial comparing ultrasound‐guided foam sclerotherapy (UGFS), endovenous laser ablation (EVLA) and surgery for the treatment of primary varicose veins.\n\nABSTRACT.METHODS:\nParticipants were recruited from 11 UK sites as part of the CLASS trial, a randomized trial of UGFS, EVLA or surgery for varicose veins. Patients were followed up 6 weeks after treatment and asked to complete the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This is a 15‐item instrument that covers eight activity behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven participation behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective.\n\nABSTRACT.RESULTS:\nA total of 798 participants were recruited. Both UGFS and EVLA resulted in a significantly quicker recovery compared with surgery for 13 of the 15 behaviours assessed. UGFS was superior to EVLA in terms of return to full‐time work (hazard ratio 1·43, 95 per cent c.i. 1·11 to 1·85), looking after children (1·45, 1·04 to 2·02) and walks of short (1·48, 1·19 to 1·84) and longer (1·32, 1·05 to 1·66) duration.\n\nABSTRACT.CONCLUSION:\nBoth UGFS and EVLA resulted in more rapid recovery than surgery, and UGFS was superior to EVLA for one‐quarter of the behaviours assessed. The BRAVVO questionnaire has the potential to provide important meaningful information to patients about their early recovery and what they may expect to be able to achieve after treatment.\n\nBODY.INTRODUCTION:\nMinimally invasive treatments for varicose veins such as ultrasound‐guided foam sclerotherapy (UGFS) and thermal ablation techniques have become widely used alternatives to surgery for the treatment of varicose veins. One of the advantages of these techniques is the reported quicker return to normal activities, particularly following UGFS1, 2, 3. However, it is unclear whether thermal ablation, in particular endovenous laser ablation (EVLA), is also associated with a clinically significant quicker return to normal activities compared with surgery; some studies4, 5 have shown an earlier return and others2, 6, 7, 8 no difference. Until recently, there was no standard means of assessing recovery from the patient's perspective. This led to the use of varying definitions such as return to 'normal activities', 'full activity', 'daily activity' or 'basic physical activities' and/or 'return to work' in previous studies. 
This lack of standardization led the authors to develop a 15‐item questionnaire to assess distinct aspects of normal activities that were identified as important by patients9 – the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This paper reports behavioural recovery results from a multicentre parallel‐group randomized clinical trial (CLASS, Comparison of LAser, Surgery and foam Sclerotherapy) that compared the clinical efficacy and cost‐effectiveness of three treatment modalities: UGFS, EVLA with delayed foam sclerotherapy to residual varicosities if required, and surgery. Behavioural recovery was one of the prespecified secondary outcomes of the CLASS trial. The clinical and cost‐effectiveness results have been reported elsewhere10, 11.\n\nBODY.METHODS:\nPatients were recruited from 11 centres in the UK between November 2008 and October 2012. This study (ISRCTN51995477) had research ethics committee and Medicines and Healthcare products Regulatory Authority approval. Eight centres randomized participants to one of three treatment options, and three centres offered only UGFS and surgery. Participants were randomized between the treatments with even allocation, using a minimization algorithm that included centre, age (less than 50 years, 50 years or more), sex, great saphenous vein (GSV) or small saphenous vein (SSV) reflux, and unilateral or bilateral disease. Inclusion criteria were: age over 18 years; primary unilateral or bilateral symptomatic varicose veins (Clinical Etiologic Anatomic Pathophysiological (CEAP) grade C2 or above); GSV and/or SSV involvement; and reflux exceeding 1 s on duplex ultrasonography. Exclusion criteria were: current deep vein thrombosis; acute superficial vein thrombosis; a GSV or SSV diameter smaller than 3 mm or larger than 15 mm; tortuous veins considered unsuitable for EVLA or stripping; and contraindications to UGFS or to general/regional anaesthesia that would be required for surgery.\n\nBODY.METHODS.TREATMENTS:\nThe treatments have been described in detail elsewhere9, 10. For UGFS, foam was produced using the Tessari technique12 using a ratio of 0·5 ml sodium tetradecyl sulphate to 1·5 ml air (3 per cent for GSV/SSV truncal veins, 1 per cent for varicosities; maximum 12 ml foam per session). EVLA of GSVs/SSVs was performed under local anaesthetic, and patients were offered UGFS to any residual varicosities at 6‐week follow‐up if required, with the exception of one centre that performed concurrent phlebectomies. Surgery in the form of proximal GSV/SSV ligation and stripping (all GSV) and concurrent phlebectomies was performed under general or regional anaesthetic as a day‐case procedure. Compression stockings were applied after all three treatments.\n\nBODY.METHODS.POST‐TREATMENT ACTIVITY:\nAll participants were given a study patient information leaflet (PIL), which recommended a return to all normal activities as soon as they were able, but that strenuous activity/contact sport should be avoided for 1–2 weeks. The PIL specifically stated that following EVLA or UGFS 'most people are able to return to work within 2–3 days of treatment, but some people go back the following day or even the same day', and that following surgery 'people can return to office or sedentary work after 2–3 days; and that most people will be back at work within a week after surgery to one leg and 2 weeks after surgery to both legs; but there is no reason to remain off work as long if it can be managed with reasonable comfort'. 
Participants undergoing UGFS or EVLA were advised to wear compression stocking for 10 days constantly (day and night). Those in the surgery group were advised that bandages would be removed the day after operation, following which they should wear a stocking for 10 days, but that it was reasonable to remove the stocking after 4 or 5 days, providing that they were active.\n\nBODY.METHODS.DATA COLLECTION:\nThe participants were asked to complete the BRAVVO questionnaire along with other study questionnaires (Aberdeen Varicose Vein Questionnaire, EQ‐5DTM (EuroQoL, Rotterdam, The Netherlands) and Short Form 36 (QualityMetric, Lincoln, Rhode Island, USA)) at the 6‐week follow‐up appointment. Participants who failed to attend the 6‐week appointment were sent the questionnaire to complete at home. The BRAVVO questionnaire was developed as an instrument to assess the activity and participation components of the World Health Organization International Classification of Disability and Function model13. Variation in activity and participation is not fully explained by impairment and so these constructs are important additional indicators of health outcome. An interview study involving 17 patients who had recently undergone varicose vein treatment was carried out to identify normal activities and 'milestone' behaviours to incorporate into the questionnaire. In addition to sampling from the three treatment options, diversity sampling was used in an attempt to gain a mix of sex, age and rural–urban location. Seventeen interview transcripts were content‐analysed in four stages to identify appropriate items to include in a questionnaire. Full details of this process have been published previously9. The BRAVVO questionnaire assesses the time taken for patients to return to performing 15 behaviours: eight 'activity' behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven 'participation' behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective9. Fig. 1 shows the question layout. Figure 1Question layoutBJS-10081-FIG-0001-c\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData from the BRAVVO questionnaire were analysed within an interval‐censored time‐to‐event framework using flexible parametric survival models14. For each behaviour item, each participant's response was converted into the number of days to return to that behaviour. If a participant indicated that return to the behaviour was on the day of the procedure, this was assumed to be interval‐censored between day 0 and day 1. If a participant indicated return to the behaviour was after a number of weeks, not days, this was assumed to be interval‐censored between the previous week and the week indicated. For example, if a participant reported 5 weeks, it was assumed that the return to the behaviour took place between 28 and 35 days. A participant who indicated that they had not returned to a behaviour that they usually performed was right‐censored at 42 days. Participants who indicated that they did not normally perform a specific behaviour were not included in analysis of that behaviour. No missing data were imputed. Data are reported as the number of days for 50 and 90 per cent of participants to return to each behaviour, estimated from the parametric survival models (the 50 per cent value represents the median time to return to this behaviour). 
Extrapolation beyond the 42‐day cut‐off was performed for behaviours where 90 per cent of participants had not returned to the behaviour by 42 days. Treatment effects are presented as hazard ratios with associated 95 per cent c.i. All analyses were carried out in Stata® 1215. Flexible parametric survival models were fitted using the stpm package16.\n\nBODY.RESULTS:\nSeven hundred and ninety‐eight participants were recruited, of whom 13 were ineligible (for example because they had recurrent veins or veins larger than 15 mm in diameter) after randomization and were considered postrandomization exclusions (Fig. \n2). The groups were well balanced in terms of demographic characteristics at baseline, but there was an increased incidence of deep venous reflux in the foam group compared with the surgery group (P = 0·005) (Table \n1). Of the 670 participants who completed the 6‐week questionnaire, 655 completed at least one of the BRAVVO questions. Completion rates were slightly lower for the questions about going out socially (74·8 per cent) and sporting activity (66·0 per cent), which may not have been relevant to all participants. For all behaviours, except wearing clothes that show the leg, going out socially and sporting activities, over 95 per cent of participants had returned to normal behaviour within 6 weeks of intervention. Figure 2CONSORT diagram for the trial. Reasons for postrandomization exclusion included: recurrent varicose veins and veins larger than 15 mm. Reasons for withdrawal from follow‐up included: patient decided not to proceed with treatment (and also declined follow‐up), declined follow‐up after treatment or did not wish to complete questionnaires. UGFS, ultrasound‐guided foam sclerotherapy; EVLA, endovenous laser ablationBJS-10081-FIG-0002-c Table 1 Demographic details at recruitment EVLA UGFS Surgery ( n  = 210) ( n  = 286) ( n  = 289) Age (years) * \n 49·7 (18–80) 49·0 (19–78) 49·2 (22–85) Sex ratio (F : M) 120 : 90 162 : 124 163 : 126 Body mass index (kg/m 2 ) * \n 27·0 (17–42) 27·1 (17–44) 27·7 (17–44) Unilateral disease 153 (72·9) 215 (75·2) 196 (67·8) Employment status Self‐employed 21 (10·2) 37 (13·0) 29 (10·3) Employed 120 (58·3) 169 (59·3) 179 (63·5) Other 65 (31·6) 79 (27·7) 74 (26·2) Unknown 4 1 7 Saphenous vein involvement Great saphenous 182 (86·7) 232 (81·1) 239 (82·7) Small saphenous 14 (6·7) 21 (7·3) 21 (7·3) Great and small saphenous 14 (6·7) 33 (11·5) 29 (10·0) Deep vein reflux 28 of 205 (13·7) 47 of 280 (16·8) 25 of 282 (8·9) CEAP classification C2, varicose veins over 3 mm 113 (54·1) 169 (59·1) 147 (51·2) C3, oedema 28 (13·4) 35 (12·2) 39 (13·6) C4, skin/subcutaneous changes 56 (26·8) 74 (25·9) 90 (31·4) C5/C6, healed/active venous ulcer 12 (5·7) 8 (2·8) 11 (3·8) Unknown 1 0 2 Values in parentheses are percentages unless indicated otherwise; * values are mean (range). EVLA, endovenous laser ablation; UGFS, ultrasound‐guided foam sclerotherapy; CEAP, Clinical Etiologic Anatomic Pathophysiologic. \n\nBODY.RESULTS.ULTRASOUND‐GUIDED FOAM SCLEROTHERAPY :\nParticipants randomized to UGFS recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n2\n). The two behaviours for which there was no evidence of a difference in the time to recover between the trial arms were 'having a bath or shower' and 'wearing clothes that show the legs'. 
In general, the median time to return to the activity behaviours was 5 days or less for those randomized to UGFS and up to 9 days for those randomized to surgery. In both groups, there was greater variation in the median time to return to the participation behaviours than the activity behaviours. Table 2 Behavioural recovery: ultrasound‐guided foam sclerotherapy versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n UGFS Surgery Activity items Bending the legs without discomfort 50 3·0 4·6 1·38 (1·14, 1·67) 90 14·1 21·3 Lifting heavy objects without discomfort 50 4·8 9·8 1·97 (1·59, 2·44) 90 16·9 34·5 Moving from standing to sitting without discomfort 50 1·9 3·7 1·63 (1·35, 1·97) 90 9·3 17·5 Standing still for a long time (> 15 min ) without discomfort 50 3·9 7·1 1·67 (1·36, 2·05) 90 15·8 28·7 Walking short distances (< 20 min ) without discomfort 50 1·9 4·4 2·00 (1·65, 2·42) 90 8·2 19·1 Walking long distances (> 20 min) 50 4·5 8·0 1·76 (1·45, 2·14) 90 15·2 27·1 Having a bath or shower 50 5·4 4·9 0·85 (0·70, 1·03) 90 11·4 10·3 Driving a car 50 4·1 7·0 1·78 (1·45, 2·19) 90 12·4 21·1 Participation items Doing housework 50 2·1 4·5 2·10 (1·72, 2·56) 90 7·3 15·7 Looking after children 50 1·2 3·5 2·20 (1·61, 3·00) 90 6·2 17·9 Wearing clothes that show the legs 50 12·4 12·8 1·03 (0·78, 1·35) 90 56·6 58·7 Partial return to normal work/employment 50 4·4 9·9 2·16 (1·72, 2·72) 90 15·4 34·2 Full return to normal work/employment 50 4·8 11·7 2·56 (2·05, 3·21) 90 14·9 36·2 Going out socially 50 7·1 9·3 1·29 (1·06, 1·57) 90 25·8 34·0 Sporting activity or exercise 50 15·7 21·8 1·33 (1·05, 1·68) 90 62·6 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.RESULTS.ENDOVENOUS LASER ABLATION :\nParticipants randomized to EVLA recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n3). Return to 'having a bath or shower' was quicker after surgery than after EVLA. There was no difference in time to return to the participation behaviour of 'wearing clothes that show the legs'. 
Table 3 Behavioural recovery: endovenous laser ablation versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n EVLA Surgery Activity items Bending the legs without discomfort 50 2·7 4·6 1·49 (1·19, 1·75) 90 12·6 21·3 Lifting heavy objects without discomfort 50 5·9 9·8 1·79 (1·39, 2·27) 90 20·5 34·5 Moving from standing to sitting without discomfort 50 2·2 3·7 1·56 (1·27, 1·96) 90 10·4 17·5 Standing still for a long time (> 15 min) without discomfort 50 4·8 7·1 1·41 (1·11, 1·79) 90 20·0 28·7 Walking short distances (< 20 min) without discomfort 50 3·0 4·4 1·30 (1·04, 1·61) 90 13·2 19·1 Walking long distances (> 20 min) 50 5·6 8·0 1·53 (1·06, 1·67) 90 19·8 27·1 Having a bath or shower 50 5·5 4·9 0·74 (0·59, 0·93) 90 12·8 10·3 Driving a car 50 4·4 7·0 1·82 (1·43, 2·33) 90 12·7 21·1 Participation items Doing housework 50 2·5 4·5 1·89 (1·49, 2·38) 90 8·4 15·7 Looking after children 50 1·9 3·5 1·61 (1·15, 2·27) 90 8·8 17·9 Wearing clothes that show the legs 50 14·6 12·8 0·97 (0·69, 1·35) 90 75·1 58·7 Partial return to normal work/employment 50 6·3 9·9 1·75 (1·33, 2·27) 90 21·1 34·2 Full return to normal work/employment 50 7·7 11·7 1·79 (1·37, 2·27) 90 23·5 36·2 Going out socially 50 6·9 9·3 1·41 (1·12, 1·75) 90 23·9 34·0 Sporting activity or exercise 50 14·2 21·8 1·47 (1·12, 1·92) 90 55·5 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. EVLA, endovenous laser ablation. There were no differences in the time taken to return to 11 of the 15 behaviours between participants randomized to EVLA and those randomized to UGFS (Table \n4). Return to 'walking short distances without discomfort', 'walking long distances', 'looking after children' and 'full return to normal work/employment' took longer for the EVLA group than the UGFS group. Following UGFS or EVLA only one‐third of the specific behaviours could be carried out by 50 per cent of participants by 3 days after treatment. 
Table 4 Behavioural recovery: endovenous laser ablation versus ultrasound‐guided foam sclerotherapy Proportion carrying out behaviour (%) Time until specified proportion of participants can carry out behaviour (days) * \n Hazard ratio † \n EVLA UGFS Activity items Bending the legs without discomfort 50 2·7 3·0 0·94 (0·75, 1·17) 90 12·6 14·1 Lifting heavy objects without discomfort 50 5·9 4·8 1·11 (0·87, 1·42) 90 20·5 16·9 Moving from standing to sitting without discomfort 50 2·2 1·9 1·12 (0·90, 1·40) 90 10·4 9·3 Standing still for a long time (> 15 min) without discomfort 50 4·8 3·9 1·14 (0·90, 1·44) 90 20·0 15·8 Walking short distances (< 20 min) without discomfort 50 3·0 1·9 1·48 (1·19, 1·84) 90 13·2 8·2 Walking long distances (> 20 min) 50 5·6 4·5 1·32 (1·05, 1·66) 90 19·8 15·2 Having a bath or shower 50 5·5 5·4 1·19 (0·96, 1·48) 90 12·8 11·4 Driving a car 50 4·4 4·1 0·95 (0·74, 1·21) 90 12·7 12·4 Participation items Doing housework 50 2·5 2·1 1·03 (0·82, 1·29) 90 8·4 7·3 Looking after children 50 1·9 1·2 1·45 (1·04, 2·02) 90 8·8 6·2 Wearing clothes that show the legs 50 14·6 12·4 1·17 (0·83, 1·64) 90 75·1 56·6 Partial return to normal work/employment 50 6·3 4·4 1·17 (0·89, 1·52) 90 21·1 15·4 Full return to normal work/employment 50 7·7 4·8 1·43 (1·11, 1·85) 90 23·5 14·9 Going out socially 50 6·9 7·1 0·88 (0·70, 1·10) 90 23·9 25·8 Sporting activity or exercise 50 14·2 15·7 0·80 (0·61, 1·04) 90 55·5 62·6 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the endovenous laser ablation (EVLA) arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.DISCUSSION:\nThis study showed that both UGFS and EVLA resulted in a more rapid recovery compared with surgery for 13 of the 15 behaviours. UGFS was superior to EVLA in terms of return to full time work, looking after children and walking (both short and long distances). Importantly, the specific behaviours assessed were shown to have a range of different recovery trajectories. Previous randomized clinical trials showed behavioural recovery to be more rapid following UGFS compared with surgery1, 2, but the benefit of EVLA over surgery was less clear2, 4, 8 . In this study, for all but two behaviours (wearing clothes that showed the legs and showering/bathing) the recovery was quicker following UGFS or EVLA compared with surgery. These findings may have arisen as a result of information contained in the study PIL, which recommended that compression hosiery was worn continuously for 10 days following UGFS or EVLA but for 4–5 days routinely after surgery. In the comparison between UGFS and EVLA, behavioural recovery was faster following UGFS for four of the 15 behaviours; there was no difference between the groups for the other behaviours. Two previous randomized trials2, 3 showed earlier return to 'normal activities' in patients undergoing UGFS compared with EVLA. Specifically, the present study showed a quicker return to full‐time work following UGFS, similar to the findings of Rasmussen and colleagues2. The median time taken to return to work following EVLA (7·7 days) was within the ranges reported2, 4, 5, 6, 7, 8. However, Rasmussen and colleagues2 reported earlier return to work after UGFS compared with the present study (median 2·9 versus 4·8 days respectively). 
A partial explanation of the difference between the two studies may be that, unlike the previous study, the present analysis did not correct for weekends. For other behaviours, the recalled recovery times following both UGFS and EVLA were longer than might be expected from the literature2, 3, 4, 5, 8. This may be explained by the timing of the questionnaire at 6 weeks, and thus it is the nature of the differences between treatment groups rather than the absolute timings taken to return to these activities that the authors wish to highlight in this paper. The extent of this overall delay in recovery is hard to justify, particularly in light of the standard information and advice given in the study PIL. There may have been a number of external influences affecting participants' recollection of their recovery, including misinformation and fear. Although attitudes to recovery and return to normal behaviours have changed in secondary care, this may not have filtered into primary care or 'public knowledge'. Fear of activity or fear of pain caused by activity has been documented following surgery for other conditions17, 18. It is possible that some people undergoing treatment for varicose veins experience similar fears, and this may limit or restrict their activity following treatment. With regard to return to work, there are clearly a number of additional factors that might play a role, such as a person's employment status (employed or self‐employed), the sickness benefits they are entitled to, the type of work they are employed to do, how long they are 'signed off' by the doctor, and the views of their employer on return to work after an operation. It should be noted that this study distinguished between partial and full return to work, and that no difference was noted in partial return to work following UGFS and EVLA. This finding may be of substantial importance to patients, their employers and the economy as a whole. The main strength of this study is that the behaviours investigated were based on systematic investigation of the recovery milestones that are important to patients following treatment for varicose veins. Hence, the findings are of personal importance from a patient perspective. Distinguishing between the behaviours that contribute to 'normal activity' helps build a profile of recovery that may be particularly useful for patients preparing for, or recovering from, treatment. Furthermore, the methodology used to develop the BRAVVO questionnaire could be used in other conditions to provide normative information about behavioural recovery that is relevant to patients. The BRAVVO questionnaire was pilot tested and found to be acceptable to patients, comprehensible and appropriate for self‐completion. Despite this, a potential weakness of the study is that the level of missing data in the BRAVVO questionnaire was higher for two of the questions. Further work to reformat or rephrase the questions or response options may help minimize levels of missing data. A further potential weakness is the choice of assessment time point (6 weeks after treatment). This may have compromised recall, particularly for behaviours that participants were able to return to a short time after treatment; however, any compromise in recall is likely to have affected the three treatment groups equally. Other study outcomes were assessed at 6 weeks, and behavioural recovery was assessed at the same time point to minimize participant burden. 
Further work is required to determine the optimal timing(s) of this questionnaire. Given that the median time to return to the behaviour was less than 14 days for 13 of the behaviours, and up to 22 days for the other two (wearing clothes that show the legs, sporting activity or exercise), the use of the questionnaire at approximately 2–3 weeks would seem appropriate.\n\n**Question:** Compared to surgery what was the result of Ultrasound‐guided foam sclerotherapy on Partial return to normal work/employment?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ]
significantly increased
836
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Behavioural recovery after treatment for varicose veins\n\n ABSTRACT.BACKGROUND:\nThe aim of this study was to assess behavioural recovery from the patient's perspective as a prespecified secondary outcome in a multicentre parallel‐group randomized clinical trial comparing ultrasound‐guided foam sclerotherapy (UGFS), endovenous laser ablation (EVLA) and surgery for the treatment of primary varicose veins.\n\nABSTRACT.METHODS:\nParticipants were recruited from 11 UK sites as part of the CLASS trial, a randomized trial of UGFS, EVLA or surgery for varicose veins. Patients were followed up 6 weeks after treatment and asked to complete the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This is a 15‐item instrument that covers eight activity behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven participation behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective.\n\nABSTRACT.RESULTS:\nA total of 798 participants were recruited. Both UGFS and EVLA resulted in a significantly quicker recovery compared with surgery for 13 of the 15 behaviours assessed. UGFS was superior to EVLA in terms of return to full‐time work (hazard ratio 1·43, 95 per cent c.i. 1·11 to 1·85), looking after children (1·45, 1·04 to 2·02) and walks of short (1·48, 1·19 to 1·84) and longer (1·32, 1·05 to 1·66) duration.\n\nABSTRACT.CONCLUSION:\nBoth UGFS and EVLA resulted in more rapid recovery than surgery, and UGFS was superior to EVLA for one‐quarter of the behaviours assessed. The BRAVVO questionnaire has the potential to provide important meaningful information to patients about their early recovery and what they may expect to be able to achieve after treatment.\n\nBODY.INTRODUCTION:\nMinimally invasive treatments for varicose veins such as ultrasound‐guided foam sclerotherapy (UGFS) and thermal ablation techniques have become widely used alternatives to surgery for the treatment of varicose veins. One of the advantages of these techniques is the reported quicker return to normal activities, particularly following UGFS1, 2, 3. However, it is unclear whether thermal ablation, in particular endovenous laser ablation (EVLA), is also associated with a clinically significant quicker return to normal activities compared with surgery; some studies4, 5 have shown an earlier return and others2, 6, 7, 8 no difference. Until recently, there was no standard means of assessing recovery from the patient's perspective. This led to the use of varying definitions such as return to 'normal activities', 'full activity', 'daily activity' or 'basic physical activities' and/or 'return to work' in previous studies. 
This lack of standardization led the authors to develop a 15‐item questionnaire to assess distinct aspects of normal activities that were identified as important by patients9 – the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This paper reports behavioural recovery results from a multicentre parallel‐group randomized clinical trial (CLASS, Comparison of LAser, Surgery and foam Sclerotherapy) that compared the clinical efficacy and cost‐effectiveness of three treatment modalities: UGFS, EVLA with delayed foam sclerotherapy to residual varicosities if required, and surgery. Behavioural recovery was one of the prespecified secondary outcomes of the CLASS trial. The clinical and cost‐effectiveness results have been reported elsewhere10, 11.\n\nBODY.METHODS:\nPatients were recruited from 11 centres in the UK between November 2008 and October 2012. This study (ISRCTN51995477) had research ethics committee and Medicines and Healthcare products Regulatory Authority approval. Eight centres randomized participants to one of three treatment options, and three centres offered only UGFS and surgery. Participants were randomized between the treatments with even allocation, using a minimization algorithm that included centre, age (less than 50 years, 50 years or more), sex, great saphenous vein (GSV) or small saphenous vein (SSV) reflux, and unilateral or bilateral disease. Inclusion criteria were: age over 18 years; primary unilateral or bilateral symptomatic varicose veins (Clinical Etiologic Anatomic Pathophysiological (CEAP) grade C2 or above); GSV and/or SSV involvement; and reflux exceeding 1 s on duplex ultrasonography. Exclusion criteria were: current deep vein thrombosis; acute superficial vein thrombosis; a GSV or SSV diameter smaller than 3 mm or larger than 15 mm; tortuous veins considered unsuitable for EVLA or stripping; and contraindications to UGFS or to general/regional anaesthesia that would be required for surgery.\n\nBODY.METHODS.TREATMENTS:\nThe treatments have been described in detail elsewhere9, 10. For UGFS, foam was produced using the Tessari technique12 using a ratio of 0·5 ml sodium tetradecyl sulphate to 1·5 ml air (3 per cent for GSV/SSV truncal veins, 1 per cent for varicosities; maximum 12 ml foam per session). EVLA of GSVs/SSVs was performed under local anaesthetic, and patients were offered UGFS to any residual varicosities at 6‐week follow‐up if required, with the exception of one centre that performed concurrent phlebectomies. Surgery in the form of proximal GSV/SSV ligation and stripping (all GSV) and concurrent phlebectomies was performed under general or regional anaesthetic as a day‐case procedure. Compression stockings were applied after all three treatments.\n\nBODY.METHODS.POST‐TREATMENT ACTIVITY:\nAll participants were given a study patient information leaflet (PIL), which recommended a return to all normal activities as soon as they were able, but that strenuous activity/contact sport should be avoided for 1–2 weeks. The PIL specifically stated that following EVLA or UGFS 'most people are able to return to work within 2–3 days of treatment, but some people go back the following day or even the same day', and that following surgery 'people can return to office or sedentary work after 2–3 days; and that most people will be back at work within a week after surgery to one leg and 2 weeks after surgery to both legs; but there is no reason to remain off work as long if it can be managed with reasonable comfort'. 
Participants undergoing UGFS or EVLA were advised to wear compression stocking for 10 days constantly (day and night). Those in the surgery group were advised that bandages would be removed the day after operation, following which they should wear a stocking for 10 days, but that it was reasonable to remove the stocking after 4 or 5 days, providing that they were active.\n\nBODY.METHODS.DATA COLLECTION:\nThe participants were asked to complete the BRAVVO questionnaire along with other study questionnaires (Aberdeen Varicose Vein Questionnaire, EQ‐5DTM (EuroQoL, Rotterdam, The Netherlands) and Short Form 36 (QualityMetric, Lincoln, Rhode Island, USA)) at the 6‐week follow‐up appointment. Participants who failed to attend the 6‐week appointment were sent the questionnaire to complete at home. The BRAVVO questionnaire was developed as an instrument to assess the activity and participation components of the World Health Organization International Classification of Disability and Function model13. Variation in activity and participation is not fully explained by impairment and so these constructs are important additional indicators of health outcome. An interview study involving 17 patients who had recently undergone varicose vein treatment was carried out to identify normal activities and 'milestone' behaviours to incorporate into the questionnaire. In addition to sampling from the three treatment options, diversity sampling was used in an attempt to gain a mix of sex, age and rural–urban location. Seventeen interview transcripts were content‐analysed in four stages to identify appropriate items to include in a questionnaire. Full details of this process have been published previously9. The BRAVVO questionnaire assesses the time taken for patients to return to performing 15 behaviours: eight 'activity' behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven 'participation' behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective9. Fig. 1 shows the question layout. Figure 1Question layoutBJS-10081-FIG-0001-c\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData from the BRAVVO questionnaire were analysed within an interval‐censored time‐to‐event framework using flexible parametric survival models14. For each behaviour item, each participant's response was converted into the number of days to return to that behaviour. If a participant indicated that return to the behaviour was on the day of the procedure, this was assumed to be interval‐censored between day 0 and day 1. If a participant indicated return to the behaviour was after a number of weeks, not days, this was assumed to be interval‐censored between the previous week and the week indicated. For example, if a participant reported 5 weeks, it was assumed that the return to the behaviour took place between 28 and 35 days. A participant who indicated that they had not returned to a behaviour that they usually performed was right‐censored at 42 days. Participants who indicated that they did not normally perform a specific behaviour were not included in analysis of that behaviour. No missing data were imputed. Data are reported as the number of days for 50 and 90 per cent of participants to return to each behaviour, estimated from the parametric survival models (the 50 per cent value represents the median time to return to this behaviour). 
Extrapolation beyond the 42‐day cut‐off was performed for behaviours where 90 per cent of participants had not returned to the behaviour by 42 days. Treatment effects are presented as hazard ratios with associated 95 per cent c.i. All analyses were carried out in Stata® 1215. Flexible parametric survival models were fitted using the stpm package16.\n\nBODY.RESULTS:\nSeven hundred and ninety‐eight participants were recruited, of whom 13 were ineligible (for example because they had recurrent veins or veins larger than 15 mm in diameter) after randomization and were considered postrandomization exclusions (Fig. \n2). The groups were well balanced in terms of demographic characteristics at baseline, but there was an increased incidence of deep venous reflux in the foam group compared with the surgery group (P = 0·005) (Table \n1). Of the 670 participants who completed the 6‐week questionnaire, 655 completed at least one of the BRAVVO questions. Completion rates were slightly lower for the questions about going out socially (74·8 per cent) and sporting activity (66·0 per cent), which may not have been relevant to all participants. For all behaviours, except wearing clothes that show the leg, going out socially and sporting activities, over 95 per cent of participants had returned to normal behaviour within 6 weeks of intervention. Figure 2CONSORT diagram for the trial. Reasons for postrandomization exclusion included: recurrent varicose veins and veins larger than 15 mm. Reasons for withdrawal from follow‐up included: patient decided not to proceed with treatment (and also declined follow‐up), declined follow‐up after treatment or did not wish to complete questionnaires. UGFS, ultrasound‐guided foam sclerotherapy; EVLA, endovenous laser ablationBJS-10081-FIG-0002-c Table 1 Demographic details at recruitment EVLA UGFS Surgery ( n  = 210) ( n  = 286) ( n  = 289) Age (years) * \n 49·7 (18–80) 49·0 (19–78) 49·2 (22–85) Sex ratio (F : M) 120 : 90 162 : 124 163 : 126 Body mass index (kg/m 2 ) * \n 27·0 (17–42) 27·1 (17–44) 27·7 (17–44) Unilateral disease 153 (72·9) 215 (75·2) 196 (67·8) Employment status Self‐employed 21 (10·2) 37 (13·0) 29 (10·3) Employed 120 (58·3) 169 (59·3) 179 (63·5) Other 65 (31·6) 79 (27·7) 74 (26·2) Unknown 4 1 7 Saphenous vein involvement Great saphenous 182 (86·7) 232 (81·1) 239 (82·7) Small saphenous 14 (6·7) 21 (7·3) 21 (7·3) Great and small saphenous 14 (6·7) 33 (11·5) 29 (10·0) Deep vein reflux 28 of 205 (13·7) 47 of 280 (16·8) 25 of 282 (8·9) CEAP classification C2, varicose veins over 3 mm 113 (54·1) 169 (59·1) 147 (51·2) C3, oedema 28 (13·4) 35 (12·2) 39 (13·6) C4, skin/subcutaneous changes 56 (26·8) 74 (25·9) 90 (31·4) C5/C6, healed/active venous ulcer 12 (5·7) 8 (2·8) 11 (3·8) Unknown 1 0 2 Values in parentheses are percentages unless indicated otherwise; * values are mean (range). EVLA, endovenous laser ablation; UGFS, ultrasound‐guided foam sclerotherapy; CEAP, Clinical Etiologic Anatomic Pathophysiologic. \n\nBODY.RESULTS.ULTRASOUND‐GUIDED FOAM SCLEROTHERAPY :\nParticipants randomized to UGFS recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n2\n). The two behaviours for which there was no evidence of a difference in the time to recover between the trial arms were 'having a bath or shower' and 'wearing clothes that show the legs'. 
In general, the median time to return to the activity behaviours was 5 days or less for those randomized to UGFS and up to 9 days for those randomized to surgery. In both groups, there was greater variation in the median time to return to the participation behaviours than the activity behaviours. Table 2 Behavioural recovery: ultrasound‐guided foam sclerotherapy versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n UGFS Surgery Activity items Bending the legs without discomfort 50 3·0 4·6 1·38 (1·14, 1·67) 90 14·1 21·3 Lifting heavy objects without discomfort 50 4·8 9·8 1·97 (1·59, 2·44) 90 16·9 34·5 Moving from standing to sitting without discomfort 50 1·9 3·7 1·63 (1·35, 1·97) 90 9·3 17·5 Standing still for a long time (> 15 min ) without discomfort 50 3·9 7·1 1·67 (1·36, 2·05) 90 15·8 28·7 Walking short distances (< 20 min ) without discomfort 50 1·9 4·4 2·00 (1·65, 2·42) 90 8·2 19·1 Walking long distances (> 20 min) 50 4·5 8·0 1·76 (1·45, 2·14) 90 15·2 27·1 Having a bath or shower 50 5·4 4·9 0·85 (0·70, 1·03) 90 11·4 10·3 Driving a car 50 4·1 7·0 1·78 (1·45, 2·19) 90 12·4 21·1 Participation items Doing housework 50 2·1 4·5 2·10 (1·72, 2·56) 90 7·3 15·7 Looking after children 50 1·2 3·5 2·20 (1·61, 3·00) 90 6·2 17·9 Wearing clothes that show the legs 50 12·4 12·8 1·03 (0·78, 1·35) 90 56·6 58·7 Partial return to normal work/employment 50 4·4 9·9 2·16 (1·72, 2·72) 90 15·4 34·2 Full return to normal work/employment 50 4·8 11·7 2·56 (2·05, 3·21) 90 14·9 36·2 Going out socially 50 7·1 9·3 1·29 (1·06, 1·57) 90 25·8 34·0 Sporting activity or exercise 50 15·7 21·8 1·33 (1·05, 1·68) 90 62·6 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.RESULTS.ENDOVENOUS LASER ABLATION :\nParticipants randomized to EVLA recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n3). Return to 'having a bath or shower' was quicker after surgery than after EVLA. There was no difference in time to return to the participation behaviour of 'wearing clothes that show the legs'. 
Table 3 Behavioural recovery: endovenous laser ablation versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n EVLA Surgery Activity items Bending the legs without discomfort 50 2·7 4·6 1·49 (1·19, 1·75) 90 12·6 21·3 Lifting heavy objects without discomfort 50 5·9 9·8 1·79 (1·39, 2·27) 90 20·5 34·5 Moving from standing to sitting without discomfort 50 2·2 3·7 1·56 (1·27, 1·96) 90 10·4 17·5 Standing still for a long time (> 15 min) without discomfort 50 4·8 7·1 1·41 (1·11, 1·79) 90 20·0 28·7 Walking short distances (< 20 min) without discomfort 50 3·0 4·4 1·30 (1·04, 1·61) 90 13·2 19·1 Walking long distances (> 20 min) 50 5·6 8·0 1·53 (1·06, 1·67) 90 19·8 27·1 Having a bath or shower 50 5·5 4·9 0·74 (0·59, 0·93) 90 12·8 10·3 Driving a car 50 4·4 7·0 1·82 (1·43, 2·33) 90 12·7 21·1 Participation items Doing housework 50 2·5 4·5 1·89 (1·49, 2·38) 90 8·4 15·7 Looking after children 50 1·9 3·5 1·61 (1·15, 2·27) 90 8·8 17·9 Wearing clothes that show the legs 50 14·6 12·8 0·97 (0·69, 1·35) 90 75·1 58·7 Partial return to normal work/employment 50 6·3 9·9 1·75 (1·33, 2·27) 90 21·1 34·2 Full return to normal work/employment 50 7·7 11·7 1·79 (1·37, 2·27) 90 23·5 36·2 Going out socially 50 6·9 9·3 1·41 (1·12, 1·75) 90 23·9 34·0 Sporting activity or exercise 50 14·2 21·8 1·47 (1·12, 1·92) 90 55·5 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. EVLA, endovenous laser ablation. There were no differences in the time taken to return to 11 of the 15 behaviours between participants randomized to EVLA and those randomized to UGFS (Table \n4). Return to 'walking short distances without discomfort', 'walking long distances', 'looking after children' and 'full return to normal work/employment' took longer for the EVLA group than the UGFS group. Following UGFS or EVLA only one‐third of the specific behaviours could be carried out by 50 per cent of participants by 3 days after treatment. 
Table 4 Behavioural recovery: endovenous laser ablation versus ultrasound‐guided foam sclerotherapy Proportion carrying out behaviour (%) Time until specified proportion of participants can carry out behaviour (days) * \n Hazard ratio † \n EVLA UGFS Activity items Bending the legs without discomfort 50 2·7 3·0 0·94 (0·75, 1·17) 90 12·6 14·1 Lifting heavy objects without discomfort 50 5·9 4·8 1·11 (0·87, 1·42) 90 20·5 16·9 Moving from standing to sitting without discomfort 50 2·2 1·9 1·12 (0·90, 1·40) 90 10·4 9·3 Standing still for a long time (> 15 min) without discomfort 50 4·8 3·9 1·14 (0·90, 1·44) 90 20·0 15·8 Walking short distances (< 20 min) without discomfort 50 3·0 1·9 1·48 (1·19, 1·84) 90 13·2 8·2 Walking long distances (> 20 min) 50 5·6 4·5 1·32 (1·05, 1·66) 90 19·8 15·2 Having a bath or shower 50 5·5 5·4 1·19 (0·96, 1·48) 90 12·8 11·4 Driving a car 50 4·4 4·1 0·95 (0·74, 1·21) 90 12·7 12·4 Participation items Doing housework 50 2·5 2·1 1·03 (0·82, 1·29) 90 8·4 7·3 Looking after children 50 1·9 1·2 1·45 (1·04, 2·02) 90 8·8 6·2 Wearing clothes that show the legs 50 14·6 12·4 1·17 (0·83, 1·64) 90 75·1 56·6 Partial return to normal work/employment 50 6·3 4·4 1·17 (0·89, 1·52) 90 21·1 15·4 Full return to normal work/employment 50 7·7 4·8 1·43 (1·11, 1·85) 90 23·5 14·9 Going out socially 50 6·9 7·1 0·88 (0·70, 1·10) 90 23·9 25·8 Sporting activity or exercise 50 14·2 15·7 0·80 (0·61, 1·04) 90 55·5 62·6 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the endovenous laser ablation (EVLA) arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.DISCUSSION:\nThis study showed that both UGFS and EVLA resulted in a more rapid recovery compared with surgery for 13 of the 15 behaviours. UGFS was superior to EVLA in terms of return to full time work, looking after children and walking (both short and long distances). Importantly, the specific behaviours assessed were shown to have a range of different recovery trajectories. Previous randomized clinical trials showed behavioural recovery to be more rapid following UGFS compared with surgery1, 2, but the benefit of EVLA over surgery was less clear2, 4, 8 . In this study, for all but two behaviours (wearing clothes that showed the legs and showering/bathing) the recovery was quicker following UGFS or EVLA compared with surgery. These findings may have arisen as a result of information contained in the study PIL, which recommended that compression hosiery was worn continuously for 10 days following UGFS or EVLA but for 4–5 days routinely after surgery. In the comparison between UGFS and EVLA, behavioural recovery was faster following UGFS for four of the 15 behaviours; there was no difference between the groups for the other behaviours. Two previous randomized trials2, 3 showed earlier return to 'normal activities' in patients undergoing UGFS compared with EVLA. Specifically, the present study showed a quicker return to full‐time work following UGFS, similar to the findings of Rasmussen and colleagues2. The median time taken to return to work following EVLA (7·7 days) was within the ranges reported2, 4, 5, 6, 7, 8. However, Rasmussen and colleagues2 reported earlier return to work after UGFS compared with the present study (median 2·9 versus 4·8 days respectively). 
A partial explanation of the difference between the two studies may be that, unlike the previous study, the present analysis did not correct for weekends. For other behaviours, the recalled recovery times following both UGFS and EVLA were longer than might be expected from the literature2, 3, 4, 5, 8. This may be explained by the timing of the questionnaire at 6 weeks, and thus it is the nature of the differences between treatment groups rather than the absolute timings taken to return to these activities that the authors wish to highlight in this paper. The extent of this overall delay in recovery is hard to justify, particularly in light of the standard information and advice given in the study PIL. There may have been a number of external influences affecting participants' recollection of their recovery, including misinformation and fear. Although attitudes to recovery and return to normal behaviours have changed in secondary care, this may not have filtered into primary care or 'public knowledge'. Fear of activity or fear of pain caused by activity has been documented following surgery for other conditions17, 18. It is possible that some people undergoing treatment for varicose veins experience similar fears, and this may limit or restrict their activity following treatment. With regard to return to work, there are clearly a number of additional factors that might play a role, such as a person's employment status (employed or self‐employed), the sickness benefits they are entitled to, the type of work they are employed to do, how long they are 'signed off' by the doctor, and the views of their employer on return to work after an operation. It should be noted that this study distinguished between partial and full return to work, and that no difference was noted in partial return to work following UGFS and EVLA. This finding may be of substantial importance to patients, their employers and the economy as a whole. The main strength of this study is that the behaviours investigated were based on systematic investigation of the recovery milestones that are important to patients following treatment for varicose veins. Hence, the findings are of personal importance from a patient perspective. Distinguishing between the behaviours that contribute to 'normal activity' helps build a profile of recovery that may be particularly useful for patients preparing for, or recovering from, treatment. Furthermore, the methodology used to develop the BRAVVO questionnaire could be used in other conditions to provide normative information about behavioural recovery that is relevant to patients. The BRAVVO questionnaire was pilot tested and found to be acceptable to patients, comprehensible and appropriate for self‐completion. Despite this, a potential weakness of the study is that the level of missing data in the BRAVVO questionnaire was higher for two of the questions. Further work to reformat or rephrase the questions or response options may help minimize levels of missing data. A further potential weakness is the choice of assessment time point (6 weeks after treatment). This may have compromised recall, particularly for behaviours that participants were able to return to a short time after treatment; however, any compromise in recall is likely to have affected the three treatment groups equally. Other study outcomes were assessed at 6 weeks, and behavioural recovery was assessed at the same time point to minimize participant burden. 
Further work is required to determine the optimal timing(s) of this questionnaire. Given that the median time to return to the behaviour was less than 14 days for 13 of the behaviours, and up to 22 days for the other two (wearing clothes that show the legs, sporting activity or exercise), the use of the questionnaire at approximately 2–3 weeks would seem appropriate.\n\n**Question:** Compared to surgery what was the result of Ultrasound‐guided foam sclerotherapy on Full return to normal work/employment?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ]
significantly increased
837
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Behavioural recovery after treatment for varicose veins\n\n ABSTRACT.BACKGROUND:\nThe aim of this study was to assess behavioural recovery from the patient's perspective as a prespecified secondary outcome in a multicentre parallel‐group randomized clinical trial comparing ultrasound‐guided foam sclerotherapy (UGFS), endovenous laser ablation (EVLA) and surgery for the treatment of primary varicose veins.\n\nABSTRACT.METHODS:\nParticipants were recruited from 11 UK sites as part of the CLASS trial, a randomized trial of UGFS, EVLA or surgery for varicose veins. Patients were followed up 6 weeks after treatment and asked to complete the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This is a 15‐item instrument that covers eight activity behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven participation behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective.\n\nABSTRACT.RESULTS:\nA total of 798 participants were recruited. Both UGFS and EVLA resulted in a significantly quicker recovery compared with surgery for 13 of the 15 behaviours assessed. UGFS was superior to EVLA in terms of return to full‐time work (hazard ratio 1·43, 95 per cent c.i. 1·11 to 1·85), looking after children (1·45, 1·04 to 2·02) and walks of short (1·48, 1·19 to 1·84) and longer (1·32, 1·05 to 1·66) duration.\n\nABSTRACT.CONCLUSION:\nBoth UGFS and EVLA resulted in more rapid recovery than surgery, and UGFS was superior to EVLA for one‐quarter of the behaviours assessed. The BRAVVO questionnaire has the potential to provide important meaningful information to patients about their early recovery and what they may expect to be able to achieve after treatment.\n\nBODY.INTRODUCTION:\nMinimally invasive treatments for varicose veins such as ultrasound‐guided foam sclerotherapy (UGFS) and thermal ablation techniques have become widely used alternatives to surgery for the treatment of varicose veins. One of the advantages of these techniques is the reported quicker return to normal activities, particularly following UGFS1, 2, 3. However, it is unclear whether thermal ablation, in particular endovenous laser ablation (EVLA), is also associated with a clinically significant quicker return to normal activities compared with surgery; some studies4, 5 have shown an earlier return and others2, 6, 7, 8 no difference. Until recently, there was no standard means of assessing recovery from the patient's perspective. This led to the use of varying definitions such as return to 'normal activities', 'full activity', 'daily activity' or 'basic physical activities' and/or 'return to work' in previous studies. 
This lack of standardization led the authors to develop a 15‐item questionnaire to assess distinct aspects of normal activities that were identified as important by patients9 – the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This paper reports behavioural recovery results from a multicentre parallel‐group randomized clinical trial (CLASS, Comparison of LAser, Surgery and foam Sclerotherapy) that compared the clinical efficacy and cost‐effectiveness of three treatment modalities: UGFS, EVLA with delayed foam sclerotherapy to residual varicosities if required, and surgery. Behavioural recovery was one of the prespecified secondary outcomes of the CLASS trial. The clinical and cost‐effectiveness results have been reported elsewhere10, 11.\n\nBODY.METHODS:\nPatients were recruited from 11 centres in the UK between November 2008 and October 2012. This study (ISRCTN51995477) had research ethics committee and Medicines and Healthcare products Regulatory Authority approval. Eight centres randomized participants to one of three treatment options, and three centres offered only UGFS and surgery. Participants were randomized between the treatments with even allocation, using a minimization algorithm that included centre, age (less than 50 years, 50 years or more), sex, great saphenous vein (GSV) or small saphenous vein (SSV) reflux, and unilateral or bilateral disease. Inclusion criteria were: age over 18 years; primary unilateral or bilateral symptomatic varicose veins (Clinical Etiologic Anatomic Pathophysiological (CEAP) grade C2 or above); GSV and/or SSV involvement; and reflux exceeding 1 s on duplex ultrasonography. Exclusion criteria were: current deep vein thrombosis; acute superficial vein thrombosis; a GSV or SSV diameter smaller than 3 mm or larger than 15 mm; tortuous veins considered unsuitable for EVLA or stripping; and contraindications to UGFS or to general/regional anaesthesia that would be required for surgery.\n\nBODY.METHODS.TREATMENTS:\nThe treatments have been described in detail elsewhere9, 10. For UGFS, foam was produced using the Tessari technique12 using a ratio of 0·5 ml sodium tetradecyl sulphate to 1·5 ml air (3 per cent for GSV/SSV truncal veins, 1 per cent for varicosities; maximum 12 ml foam per session). EVLA of GSVs/SSVs was performed under local anaesthetic, and patients were offered UGFS to any residual varicosities at 6‐week follow‐up if required, with the exception of one centre that performed concurrent phlebectomies. Surgery in the form of proximal GSV/SSV ligation and stripping (all GSV) and concurrent phlebectomies was performed under general or regional anaesthetic as a day‐case procedure. Compression stockings were applied after all three treatments.\n\nBODY.METHODS.POST‐TREATMENT ACTIVITY:\nAll participants were given a study patient information leaflet (PIL), which recommended a return to all normal activities as soon as they were able, but that strenuous activity/contact sport should be avoided for 1–2 weeks. The PIL specifically stated that following EVLA or UGFS 'most people are able to return to work within 2–3 days of treatment, but some people go back the following day or even the same day', and that following surgery 'people can return to office or sedentary work after 2–3 days; and that most people will be back at work within a week after surgery to one leg and 2 weeks after surgery to both legs; but there is no reason to remain off work as long if it can be managed with reasonable comfort'. 
Participants undergoing UGFS or EVLA were advised to wear compression stocking for 10 days constantly (day and night). Those in the surgery group were advised that bandages would be removed the day after operation, following which they should wear a stocking for 10 days, but that it was reasonable to remove the stocking after 4 or 5 days, providing that they were active.\n\nBODY.METHODS.DATA COLLECTION:\nThe participants were asked to complete the BRAVVO questionnaire along with other study questionnaires (Aberdeen Varicose Vein Questionnaire, EQ‐5DTM (EuroQoL, Rotterdam, The Netherlands) and Short Form 36 (QualityMetric, Lincoln, Rhode Island, USA)) at the 6‐week follow‐up appointment. Participants who failed to attend the 6‐week appointment were sent the questionnaire to complete at home. The BRAVVO questionnaire was developed as an instrument to assess the activity and participation components of the World Health Organization International Classification of Disability and Function model13. Variation in activity and participation is not fully explained by impairment and so these constructs are important additional indicators of health outcome. An interview study involving 17 patients who had recently undergone varicose vein treatment was carried out to identify normal activities and 'milestone' behaviours to incorporate into the questionnaire. In addition to sampling from the three treatment options, diversity sampling was used in an attempt to gain a mix of sex, age and rural–urban location. Seventeen interview transcripts were content‐analysed in four stages to identify appropriate items to include in a questionnaire. Full details of this process have been published previously9. The BRAVVO questionnaire assesses the time taken for patients to return to performing 15 behaviours: eight 'activity' behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven 'participation' behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective9. Fig. 1 shows the question layout. Figure 1Question layoutBJS-10081-FIG-0001-c\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData from the BRAVVO questionnaire were analysed within an interval‐censored time‐to‐event framework using flexible parametric survival models14. For each behaviour item, each participant's response was converted into the number of days to return to that behaviour. If a participant indicated that return to the behaviour was on the day of the procedure, this was assumed to be interval‐censored between day 0 and day 1. If a participant indicated return to the behaviour was after a number of weeks, not days, this was assumed to be interval‐censored between the previous week and the week indicated. For example, if a participant reported 5 weeks, it was assumed that the return to the behaviour took place between 28 and 35 days. A participant who indicated that they had not returned to a behaviour that they usually performed was right‐censored at 42 days. Participants who indicated that they did not normally perform a specific behaviour were not included in analysis of that behaviour. No missing data were imputed. Data are reported as the number of days for 50 and 90 per cent of participants to return to each behaviour, estimated from the parametric survival models (the 50 per cent value represents the median time to return to this behaviour). 
Extrapolation beyond the 42‐day cut‐off was performed for behaviours where 90 per cent of participants had not returned to the behaviour by 42 days. Treatment effects are presented as hazard ratios with associated 95 per cent c.i. All analyses were carried out in Stata® 1215. Flexible parametric survival models were fitted using the stpm package16.\n\nBODY.RESULTS:\nSeven hundred and ninety‐eight participants were recruited, of whom 13 were ineligible (for example because they had recurrent veins or veins larger than 15 mm in diameter) after randomization and were considered postrandomization exclusions (Fig. \n2). The groups were well balanced in terms of demographic characteristics at baseline, but there was an increased incidence of deep venous reflux in the foam group compared with the surgery group (P = 0·005) (Table \n1). Of the 670 participants who completed the 6‐week questionnaire, 655 completed at least one of the BRAVVO questions. Completion rates were slightly lower for the questions about going out socially (74·8 per cent) and sporting activity (66·0 per cent), which may not have been relevant to all participants. For all behaviours, except wearing clothes that show the leg, going out socially and sporting activities, over 95 per cent of participants had returned to normal behaviour within 6 weeks of intervention. Figure 2CONSORT diagram for the trial. Reasons for postrandomization exclusion included: recurrent varicose veins and veins larger than 15 mm. Reasons for withdrawal from follow‐up included: patient decided not to proceed with treatment (and also declined follow‐up), declined follow‐up after treatment or did not wish to complete questionnaires. UGFS, ultrasound‐guided foam sclerotherapy; EVLA, endovenous laser ablationBJS-10081-FIG-0002-c Table 1 Demographic details at recruitment EVLA UGFS Surgery ( n  = 210) ( n  = 286) ( n  = 289) Age (years) * \n 49·7 (18–80) 49·0 (19–78) 49·2 (22–85) Sex ratio (F : M) 120 : 90 162 : 124 163 : 126 Body mass index (kg/m 2 ) * \n 27·0 (17–42) 27·1 (17–44) 27·7 (17–44) Unilateral disease 153 (72·9) 215 (75·2) 196 (67·8) Employment status Self‐employed 21 (10·2) 37 (13·0) 29 (10·3) Employed 120 (58·3) 169 (59·3) 179 (63·5) Other 65 (31·6) 79 (27·7) 74 (26·2) Unknown 4 1 7 Saphenous vein involvement Great saphenous 182 (86·7) 232 (81·1) 239 (82·7) Small saphenous 14 (6·7) 21 (7·3) 21 (7·3) Great and small saphenous 14 (6·7) 33 (11·5) 29 (10·0) Deep vein reflux 28 of 205 (13·7) 47 of 280 (16·8) 25 of 282 (8·9) CEAP classification C2, varicose veins over 3 mm 113 (54·1) 169 (59·1) 147 (51·2) C3, oedema 28 (13·4) 35 (12·2) 39 (13·6) C4, skin/subcutaneous changes 56 (26·8) 74 (25·9) 90 (31·4) C5/C6, healed/active venous ulcer 12 (5·7) 8 (2·8) 11 (3·8) Unknown 1 0 2 Values in parentheses are percentages unless indicated otherwise; * values are mean (range). EVLA, endovenous laser ablation; UGFS, ultrasound‐guided foam sclerotherapy; CEAP, Clinical Etiologic Anatomic Pathophysiologic. \n\nBODY.RESULTS.ULTRASOUND‐GUIDED FOAM SCLEROTHERAPY :\nParticipants randomized to UGFS recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n2\n). The two behaviours for which there was no evidence of a difference in the time to recover between the trial arms were 'having a bath or shower' and 'wearing clothes that show the legs'. 
In general, the median time to return to the activity behaviours was 5 days or less for those randomized to UGFS and up to 9 days for those randomized to surgery. In both groups, there was greater variation in the median time to return to the participation behaviours than the activity behaviours. Table 2 Behavioural recovery: ultrasound‐guided foam sclerotherapy versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n UGFS Surgery Activity items Bending the legs without discomfort 50 3·0 4·6 1·38 (1·14, 1·67) 90 14·1 21·3 Lifting heavy objects without discomfort 50 4·8 9·8 1·97 (1·59, 2·44) 90 16·9 34·5 Moving from standing to sitting without discomfort 50 1·9 3·7 1·63 (1·35, 1·97) 90 9·3 17·5 Standing still for a long time (> 15 min ) without discomfort 50 3·9 7·1 1·67 (1·36, 2·05) 90 15·8 28·7 Walking short distances (< 20 min ) without discomfort 50 1·9 4·4 2·00 (1·65, 2·42) 90 8·2 19·1 Walking long distances (> 20 min) 50 4·5 8·0 1·76 (1·45, 2·14) 90 15·2 27·1 Having a bath or shower 50 5·4 4·9 0·85 (0·70, 1·03) 90 11·4 10·3 Driving a car 50 4·1 7·0 1·78 (1·45, 2·19) 90 12·4 21·1 Participation items Doing housework 50 2·1 4·5 2·10 (1·72, 2·56) 90 7·3 15·7 Looking after children 50 1·2 3·5 2·20 (1·61, 3·00) 90 6·2 17·9 Wearing clothes that show the legs 50 12·4 12·8 1·03 (0·78, 1·35) 90 56·6 58·7 Partial return to normal work/employment 50 4·4 9·9 2·16 (1·72, 2·72) 90 15·4 34·2 Full return to normal work/employment 50 4·8 11·7 2·56 (2·05, 3·21) 90 14·9 36·2 Going out socially 50 7·1 9·3 1·29 (1·06, 1·57) 90 25·8 34·0 Sporting activity or exercise 50 15·7 21·8 1·33 (1·05, 1·68) 90 62·6 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.RESULTS.ENDOVENOUS LASER ABLATION :\nParticipants randomized to EVLA recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n3). Return to 'having a bath or shower' was quicker after surgery than after EVLA. There was no difference in time to return to the participation behaviour of 'wearing clothes that show the legs'. 
Table 3 Behavioural recovery: endovenous laser ablation versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n EVLA Surgery Activity items Bending the legs without discomfort 50 2·7 4·6 1·49 (1·19, 1·75) 90 12·6 21·3 Lifting heavy objects without discomfort 50 5·9 9·8 1·79 (1·39, 2·27) 90 20·5 34·5 Moving from standing to sitting without discomfort 50 2·2 3·7 1·56 (1·27, 1·96) 90 10·4 17·5 Standing still for a long time (> 15 min) without discomfort 50 4·8 7·1 1·41 (1·11, 1·79) 90 20·0 28·7 Walking short distances (< 20 min) without discomfort 50 3·0 4·4 1·30 (1·04, 1·61) 90 13·2 19·1 Walking long distances (> 20 min) 50 5·6 8·0 1·53 (1·06, 1·67) 90 19·8 27·1 Having a bath or shower 50 5·5 4·9 0·74 (0·59, 0·93) 90 12·8 10·3 Driving a car 50 4·4 7·0 1·82 (1·43, 2·33) 90 12·7 21·1 Participation items Doing housework 50 2·5 4·5 1·89 (1·49, 2·38) 90 8·4 15·7 Looking after children 50 1·9 3·5 1·61 (1·15, 2·27) 90 8·8 17·9 Wearing clothes that show the legs 50 14·6 12·8 0·97 (0·69, 1·35) 90 75·1 58·7 Partial return to normal work/employment 50 6·3 9·9 1·75 (1·33, 2·27) 90 21·1 34·2 Full return to normal work/employment 50 7·7 11·7 1·79 (1·37, 2·27) 90 23·5 36·2 Going out socially 50 6·9 9·3 1·41 (1·12, 1·75) 90 23·9 34·0 Sporting activity or exercise 50 14·2 21·8 1·47 (1·12, 1·92) 90 55·5 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. EVLA, endovenous laser ablation. There were no differences in the time taken to return to 11 of the 15 behaviours between participants randomized to EVLA and those randomized to UGFS (Table \n4). Return to 'walking short distances without discomfort', 'walking long distances', 'looking after children' and 'full return to normal work/employment' took longer for the EVLA group than the UGFS group. Following UGFS or EVLA only one‐third of the specific behaviours could be carried out by 50 per cent of participants by 3 days after treatment. 
Table 4 Behavioural recovery: endovenous laser ablation versus ultrasound‐guided foam sclerotherapy Proportion carrying out behaviour (%) Time until specified proportion of participants can carry out behaviour (days) * \n Hazard ratio † \n EVLA UGFS Activity items Bending the legs without discomfort 50 2·7 3·0 0·94 (0·75, 1·17) 90 12·6 14·1 Lifting heavy objects without discomfort 50 5·9 4·8 1·11 (0·87, 1·42) 90 20·5 16·9 Moving from standing to sitting without discomfort 50 2·2 1·9 1·12 (0·90, 1·40) 90 10·4 9·3 Standing still for a long time (> 15 min) without discomfort 50 4·8 3·9 1·14 (0·90, 1·44) 90 20·0 15·8 Walking short distances (< 20 min) without discomfort 50 3·0 1·9 1·48 (1·19, 1·84) 90 13·2 8·2 Walking long distances (> 20 min) 50 5·6 4·5 1·32 (1·05, 1·66) 90 19·8 15·2 Having a bath or shower 50 5·5 5·4 1·19 (0·96, 1·48) 90 12·8 11·4 Driving a car 50 4·4 4·1 0·95 (0·74, 1·21) 90 12·7 12·4 Participation items Doing housework 50 2·5 2·1 1·03 (0·82, 1·29) 90 8·4 7·3 Looking after children 50 1·9 1·2 1·45 (1·04, 2·02) 90 8·8 6·2 Wearing clothes that show the legs 50 14·6 12·4 1·17 (0·83, 1·64) 90 75·1 56·6 Partial return to normal work/employment 50 6·3 4·4 1·17 (0·89, 1·52) 90 21·1 15·4 Full return to normal work/employment 50 7·7 4·8 1·43 (1·11, 1·85) 90 23·5 14·9 Going out socially 50 6·9 7·1 0·88 (0·70, 1·10) 90 23·9 25·8 Sporting activity or exercise 50 14·2 15·7 0·80 (0·61, 1·04) 90 55·5 62·6 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the endovenous laser ablation (EVLA) arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.DISCUSSION:\nThis study showed that both UGFS and EVLA resulted in a more rapid recovery compared with surgery for 13 of the 15 behaviours. UGFS was superior to EVLA in terms of return to full time work, looking after children and walking (both short and long distances). Importantly, the specific behaviours assessed were shown to have a range of different recovery trajectories. Previous randomized clinical trials showed behavioural recovery to be more rapid following UGFS compared with surgery1, 2, but the benefit of EVLA over surgery was less clear2, 4, 8 . In this study, for all but two behaviours (wearing clothes that showed the legs and showering/bathing) the recovery was quicker following UGFS or EVLA compared with surgery. These findings may have arisen as a result of information contained in the study PIL, which recommended that compression hosiery was worn continuously for 10 days following UGFS or EVLA but for 4–5 days routinely after surgery. In the comparison between UGFS and EVLA, behavioural recovery was faster following UGFS for four of the 15 behaviours; there was no difference between the groups for the other behaviours. Two previous randomized trials2, 3 showed earlier return to 'normal activities' in patients undergoing UGFS compared with EVLA. Specifically, the present study showed a quicker return to full‐time work following UGFS, similar to the findings of Rasmussen and colleagues2. The median time taken to return to work following EVLA (7·7 days) was within the ranges reported2, 4, 5, 6, 7, 8. However, Rasmussen and colleagues2 reported earlier return to work after UGFS compared with the present study (median 2·9 versus 4·8 days respectively). 
A partial explanation of the difference between the two studies may be that, unlike the previous study, the present analysis did not correct for weekends. For other behaviours, the recalled recovery times following both UGFS and EVLA were longer than might be expected from the literature2, 3, 4, 5, 8. This may be explained by the timing of the questionnaire at 6 weeks, and thus it is the nature of the differences between treatment groups rather than the absolute timings taken to return to these activities that the authors wish to highlight in this paper. The extent of this overall delay in recovery is hard to justify, particularly in light of the standard information and advice given in the study PIL. There may have been a number of external influences affecting participants' recollection of their recovery, including misinformation and fear. Although attitudes to recovery and return to normal behaviours have changed in secondary care, this may not have filtered into primary care or 'public knowledge'. Fear of activity or fear of pain caused by activity has been documented following surgery for other conditions17, 18. It is possible that some people undergoing treatment for varicose veins experience similar fears, and this may limit or restrict their activity following treatment. With regard to return to work, there are clearly a number of additional factors that might play a role, such as a person's employment status (employed or self‐employed), the sickness benefits they are entitled to, the type of work they are employed to do, how long they are 'signed off' by the doctor, and the views of their employer on return to work after an operation. It should be noted that this study distinguished between partial and full return to work, and that no difference was noted in partial return to work following UGFS and EVLA. This finding may be of substantial importance to patients, their employers and the economy as a whole. The main strength of this study is that the behaviours investigated were based on systematic investigation of the recovery milestones that are important to patients following treatment for varicose veins. Hence, the findings are of personal importance from a patient perspective. Distinguishing between the behaviours that contribute to 'normal activity' helps build a profile of recovery that may be particularly useful for patients preparing for, or recovering from, treatment. Furthermore, the methodology used to develop the BRAVVO questionnaire could be used in other conditions to provide normative information about behavioural recovery that is relevant to patients. The BRAVVO questionnaire was pilot tested and found to be acceptable to patients, comprehensible and appropriate for self‐completion. Despite this, a potential weakness of the study is that the level of missing data in the BRAVVO questionnaire was higher for two of the questions. Further work to reformat or rephrase the questions or response options may help minimize levels of missing data. A further potential weakness is the choice of assessment time point (6 weeks after treatment). This may have compromised recall, particularly for behaviours that participants were able to return to a short time after treatment; however, any compromise in recall is likely to have affected the three treatment groups equally. Other study outcomes were assessed at 6 weeks, and behavioural recovery was assessed at the same time point to minimize participant burden. 
Further work is required to determine the optimal timing(s) of this questionnaire. Given that the median time to return to the behaviour was less than 14 days for 13 of the behaviours, and up to 22 days for the other two (wearing clothes that show the legs, sporting activity or exercise), the use of the questionnaire at approximately 2–3 weeks would seem appropriate.\n\n**Question:** Compared to surgery what was the result of Ultrasound‐guided foam sclerotherapy on Going out socially?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ]
significantly increased
838
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Behavioural recovery after treatment for varicose veins\n\n ABSTRACT.BACKGROUND:\nThe aim of this study was to assess behavioural recovery from the patient's perspective as a prespecified secondary outcome in a multicentre parallel‐group randomized clinical trial comparing ultrasound‐guided foam sclerotherapy (UGFS), endovenous laser ablation (EVLA) and surgery for the treatment of primary varicose veins.\n\nABSTRACT.METHODS:\nParticipants were recruited from 11 UK sites as part of the CLASS trial, a randomized trial of UGFS, EVLA or surgery for varicose veins. Patients were followed up 6 weeks after treatment and asked to complete the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This is a 15‐item instrument that covers eight activity behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven participation behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective.\n\nABSTRACT.RESULTS:\nA total of 798 participants were recruited. Both UGFS and EVLA resulted in a significantly quicker recovery compared with surgery for 13 of the 15 behaviours assessed. UGFS was superior to EVLA in terms of return to full‐time work (hazard ratio 1·43, 95 per cent c.i. 1·11 to 1·85), looking after children (1·45, 1·04 to 2·02) and walks of short (1·48, 1·19 to 1·84) and longer (1·32, 1·05 to 1·66) duration.\n\nABSTRACT.CONCLUSION:\nBoth UGFS and EVLA resulted in more rapid recovery than surgery, and UGFS was superior to EVLA for one‐quarter of the behaviours assessed. The BRAVVO questionnaire has the potential to provide important meaningful information to patients about their early recovery and what they may expect to be able to achieve after treatment.\n\nBODY.INTRODUCTION:\nMinimally invasive treatments for varicose veins such as ultrasound‐guided foam sclerotherapy (UGFS) and thermal ablation techniques have become widely used alternatives to surgery for the treatment of varicose veins. One of the advantages of these techniques is the reported quicker return to normal activities, particularly following UGFS1, 2, 3. However, it is unclear whether thermal ablation, in particular endovenous laser ablation (EVLA), is also associated with a clinically significant quicker return to normal activities compared with surgery; some studies4, 5 have shown an earlier return and others2, 6, 7, 8 no difference. Until recently, there was no standard means of assessing recovery from the patient's perspective. This led to the use of varying definitions such as return to 'normal activities', 'full activity', 'daily activity' or 'basic physical activities' and/or 'return to work' in previous studies. 
This lack of standardization led the authors to develop a 15‐item questionnaire to assess distinct aspects of normal activities that were identified as important by patients9 – the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This paper reports behavioural recovery results from a multicentre parallel‐group randomized clinical trial (CLASS, Comparison of LAser, Surgery and foam Sclerotherapy) that compared the clinical efficacy and cost‐effectiveness of three treatment modalities: UGFS, EVLA with delayed foam sclerotherapy to residual varicosities if required, and surgery. Behavioural recovery was one of the prespecified secondary outcomes of the CLASS trial. The clinical and cost‐effectiveness results have been reported elsewhere10, 11.\n\nBODY.METHODS:\nPatients were recruited from 11 centres in the UK between November 2008 and October 2012. This study (ISRCTN51995477) had research ethics committee and Medicines and Healthcare products Regulatory Authority approval. Eight centres randomized participants to one of three treatment options, and three centres offered only UGFS and surgery. Participants were randomized between the treatments with even allocation, using a minimization algorithm that included centre, age (less than 50 years, 50 years or more), sex, great saphenous vein (GSV) or small saphenous vein (SSV) reflux, and unilateral or bilateral disease. Inclusion criteria were: age over 18 years; primary unilateral or bilateral symptomatic varicose veins (Clinical Etiologic Anatomic Pathophysiological (CEAP) grade C2 or above); GSV and/or SSV involvement; and reflux exceeding 1 s on duplex ultrasonography. Exclusion criteria were: current deep vein thrombosis; acute superficial vein thrombosis; a GSV or SSV diameter smaller than 3 mm or larger than 15 mm; tortuous veins considered unsuitable for EVLA or stripping; and contraindications to UGFS or to general/regional anaesthesia that would be required for surgery.\n\nBODY.METHODS.TREATMENTS:\nThe treatments have been described in detail elsewhere9, 10. For UGFS, foam was produced using the Tessari technique12 using a ratio of 0·5 ml sodium tetradecyl sulphate to 1·5 ml air (3 per cent for GSV/SSV truncal veins, 1 per cent for varicosities; maximum 12 ml foam per session). EVLA of GSVs/SSVs was performed under local anaesthetic, and patients were offered UGFS to any residual varicosities at 6‐week follow‐up if required, with the exception of one centre that performed concurrent phlebectomies. Surgery in the form of proximal GSV/SSV ligation and stripping (all GSV) and concurrent phlebectomies was performed under general or regional anaesthetic as a day‐case procedure. Compression stockings were applied after all three treatments.\n\nBODY.METHODS.POST‐TREATMENT ACTIVITY:\nAll participants were given a study patient information leaflet (PIL), which recommended a return to all normal activities as soon as they were able, but that strenuous activity/contact sport should be avoided for 1–2 weeks. The PIL specifically stated that following EVLA or UGFS 'most people are able to return to work within 2–3 days of treatment, but some people go back the following day or even the same day', and that following surgery 'people can return to office or sedentary work after 2–3 days; and that most people will be back at work within a week after surgery to one leg and 2 weeks after surgery to both legs; but there is no reason to remain off work as long if it can be managed with reasonable comfort'. 
Participants undergoing UGFS or EVLA were advised to wear compression stocking for 10 days constantly (day and night). Those in the surgery group were advised that bandages would be removed the day after operation, following which they should wear a stocking for 10 days, but that it was reasonable to remove the stocking after 4 or 5 days, providing that they were active.\n\nBODY.METHODS.DATA COLLECTION:\nThe participants were asked to complete the BRAVVO questionnaire along with other study questionnaires (Aberdeen Varicose Vein Questionnaire, EQ‐5DTM (EuroQoL, Rotterdam, The Netherlands) and Short Form 36 (QualityMetric, Lincoln, Rhode Island, USA)) at the 6‐week follow‐up appointment. Participants who failed to attend the 6‐week appointment were sent the questionnaire to complete at home. The BRAVVO questionnaire was developed as an instrument to assess the activity and participation components of the World Health Organization International Classification of Disability and Function model13. Variation in activity and participation is not fully explained by impairment and so these constructs are important additional indicators of health outcome. An interview study involving 17 patients who had recently undergone varicose vein treatment was carried out to identify normal activities and 'milestone' behaviours to incorporate into the questionnaire. In addition to sampling from the three treatment options, diversity sampling was used in an attempt to gain a mix of sex, age and rural–urban location. Seventeen interview transcripts were content‐analysed in four stages to identify appropriate items to include in a questionnaire. Full details of this process have been published previously9. The BRAVVO questionnaire assesses the time taken for patients to return to performing 15 behaviours: eight 'activity' behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven 'participation' behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective9. Fig. 1 shows the question layout. Figure 1Question layoutBJS-10081-FIG-0001-c\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData from the BRAVVO questionnaire were analysed within an interval‐censored time‐to‐event framework using flexible parametric survival models14. For each behaviour item, each participant's response was converted into the number of days to return to that behaviour. If a participant indicated that return to the behaviour was on the day of the procedure, this was assumed to be interval‐censored between day 0 and day 1. If a participant indicated return to the behaviour was after a number of weeks, not days, this was assumed to be interval‐censored between the previous week and the week indicated. For example, if a participant reported 5 weeks, it was assumed that the return to the behaviour took place between 28 and 35 days. A participant who indicated that they had not returned to a behaviour that they usually performed was right‐censored at 42 days. Participants who indicated that they did not normally perform a specific behaviour were not included in analysis of that behaviour. No missing data were imputed. Data are reported as the number of days for 50 and 90 per cent of participants to return to each behaviour, estimated from the parametric survival models (the 50 per cent value represents the median time to return to this behaviour). 
Extrapolation beyond the 42‐day cut‐off was performed for behaviours where 90 per cent of participants had not returned to the behaviour by 42 days. Treatment effects are presented as hazard ratios with associated 95 per cent c.i. All analyses were carried out in Stata® 1215. Flexible parametric survival models were fitted using the stpm package16.\n\nBODY.RESULTS:\nSeven hundred and ninety‐eight participants were recruited, of whom 13 were ineligible (for example because they had recurrent veins or veins larger than 15 mm in diameter) after randomization and were considered postrandomization exclusions (Fig. \n2). The groups were well balanced in terms of demographic characteristics at baseline, but there was an increased incidence of deep venous reflux in the foam group compared with the surgery group (P = 0·005) (Table \n1). Of the 670 participants who completed the 6‐week questionnaire, 655 completed at least one of the BRAVVO questions. Completion rates were slightly lower for the questions about going out socially (74·8 per cent) and sporting activity (66·0 per cent), which may not have been relevant to all participants. For all behaviours, except wearing clothes that show the leg, going out socially and sporting activities, over 95 per cent of participants had returned to normal behaviour within 6 weeks of intervention. Figure 2CONSORT diagram for the trial. Reasons for postrandomization exclusion included: recurrent varicose veins and veins larger than 15 mm. Reasons for withdrawal from follow‐up included: patient decided not to proceed with treatment (and also declined follow‐up), declined follow‐up after treatment or did not wish to complete questionnaires. UGFS, ultrasound‐guided foam sclerotherapy; EVLA, endovenous laser ablationBJS-10081-FIG-0002-c Table 1 Demographic details at recruitment EVLA UGFS Surgery ( n  = 210) ( n  = 286) ( n  = 289) Age (years) * \n 49·7 (18–80) 49·0 (19–78) 49·2 (22–85) Sex ratio (F : M) 120 : 90 162 : 124 163 : 126 Body mass index (kg/m 2 ) * \n 27·0 (17–42) 27·1 (17–44) 27·7 (17–44) Unilateral disease 153 (72·9) 215 (75·2) 196 (67·8) Employment status Self‐employed 21 (10·2) 37 (13·0) 29 (10·3) Employed 120 (58·3) 169 (59·3) 179 (63·5) Other 65 (31·6) 79 (27·7) 74 (26·2) Unknown 4 1 7 Saphenous vein involvement Great saphenous 182 (86·7) 232 (81·1) 239 (82·7) Small saphenous 14 (6·7) 21 (7·3) 21 (7·3) Great and small saphenous 14 (6·7) 33 (11·5) 29 (10·0) Deep vein reflux 28 of 205 (13·7) 47 of 280 (16·8) 25 of 282 (8·9) CEAP classification C2, varicose veins over 3 mm 113 (54·1) 169 (59·1) 147 (51·2) C3, oedema 28 (13·4) 35 (12·2) 39 (13·6) C4, skin/subcutaneous changes 56 (26·8) 74 (25·9) 90 (31·4) C5/C6, healed/active venous ulcer 12 (5·7) 8 (2·8) 11 (3·8) Unknown 1 0 2 Values in parentheses are percentages unless indicated otherwise; * values are mean (range). EVLA, endovenous laser ablation; UGFS, ultrasound‐guided foam sclerotherapy; CEAP, Clinical Etiologic Anatomic Pathophysiologic. \n\nBODY.RESULTS.ULTRASOUND‐GUIDED FOAM SCLEROTHERAPY :\nParticipants randomized to UGFS recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n2\n). The two behaviours for which there was no evidence of a difference in the time to recover between the trial arms were 'having a bath or shower' and 'wearing clothes that show the legs'. 
In general, the median time to return to the activity behaviours was 5 days or less for those randomized to UGFS and up to 9 days for those randomized to surgery. In both groups, there was greater variation in the median time to return to the participation behaviours than the activity behaviours. Table 2 Behavioural recovery: ultrasound‐guided foam sclerotherapy versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n UGFS Surgery Activity items Bending the legs without discomfort 50 3·0 4·6 1·38 (1·14, 1·67) 90 14·1 21·3 Lifting heavy objects without discomfort 50 4·8 9·8 1·97 (1·59, 2·44) 90 16·9 34·5 Moving from standing to sitting without discomfort 50 1·9 3·7 1·63 (1·35, 1·97) 90 9·3 17·5 Standing still for a long time (> 15 min ) without discomfort 50 3·9 7·1 1·67 (1·36, 2·05) 90 15·8 28·7 Walking short distances (< 20 min ) without discomfort 50 1·9 4·4 2·00 (1·65, 2·42) 90 8·2 19·1 Walking long distances (> 20 min) 50 4·5 8·0 1·76 (1·45, 2·14) 90 15·2 27·1 Having a bath or shower 50 5·4 4·9 0·85 (0·70, 1·03) 90 11·4 10·3 Driving a car 50 4·1 7·0 1·78 (1·45, 2·19) 90 12·4 21·1 Participation items Doing housework 50 2·1 4·5 2·10 (1·72, 2·56) 90 7·3 15·7 Looking after children 50 1·2 3·5 2·20 (1·61, 3·00) 90 6·2 17·9 Wearing clothes that show the legs 50 12·4 12·8 1·03 (0·78, 1·35) 90 56·6 58·7 Partial return to normal work/employment 50 4·4 9·9 2·16 (1·72, 2·72) 90 15·4 34·2 Full return to normal work/employment 50 4·8 11·7 2·56 (2·05, 3·21) 90 14·9 36·2 Going out socially 50 7·1 9·3 1·29 (1·06, 1·57) 90 25·8 34·0 Sporting activity or exercise 50 15·7 21·8 1·33 (1·05, 1·68) 90 62·6 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.RESULTS.ENDOVENOUS LASER ABLATION :\nParticipants randomized to EVLA recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n3). Return to 'having a bath or shower' was quicker after surgery than after EVLA. There was no difference in time to return to the participation behaviour of 'wearing clothes that show the legs'. 
Table 3 Behavioural recovery: endovenous laser ablation versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n EVLA Surgery Activity items Bending the legs without discomfort 50 2·7 4·6 1·49 (1·19, 1·75) 90 12·6 21·3 Lifting heavy objects without discomfort 50 5·9 9·8 1·79 (1·39, 2·27) 90 20·5 34·5 Moving from standing to sitting without discomfort 50 2·2 3·7 1·56 (1·27, 1·96) 90 10·4 17·5 Standing still for a long time (> 15 min) without discomfort 50 4·8 7·1 1·41 (1·11, 1·79) 90 20·0 28·7 Walking short distances (< 20 min) without discomfort 50 3·0 4·4 1·30 (1·04, 1·61) 90 13·2 19·1 Walking long distances (> 20 min) 50 5·6 8·0 1·53 (1·06, 1·67) 90 19·8 27·1 Having a bath or shower 50 5·5 4·9 0·74 (0·59, 0·93) 90 12·8 10·3 Driving a car 50 4·4 7·0 1·82 (1·43, 2·33) 90 12·7 21·1 Participation items Doing housework 50 2·5 4·5 1·89 (1·49, 2·38) 90 8·4 15·7 Looking after children 50 1·9 3·5 1·61 (1·15, 2·27) 90 8·8 17·9 Wearing clothes that show the legs 50 14·6 12·8 0·97 (0·69, 1·35) 90 75·1 58·7 Partial return to normal work/employment 50 6·3 9·9 1·75 (1·33, 2·27) 90 21·1 34·2 Full return to normal work/employment 50 7·7 11·7 1·79 (1·37, 2·27) 90 23·5 36·2 Going out socially 50 6·9 9·3 1·41 (1·12, 1·75) 90 23·9 34·0 Sporting activity or exercise 50 14·2 21·8 1·47 (1·12, 1·92) 90 55·5 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. EVLA, endovenous laser ablation. There were no differences in the time taken to return to 11 of the 15 behaviours between participants randomized to EVLA and those randomized to UGFS (Table \n4). Return to 'walking short distances without discomfort', 'walking long distances', 'looking after children' and 'full return to normal work/employment' took longer for the EVLA group than the UGFS group. Following UGFS or EVLA only one‐third of the specific behaviours could be carried out by 50 per cent of participants by 3 days after treatment. 
Table 4 Behavioural recovery: endovenous laser ablation versus ultrasound‐guided foam sclerotherapy Proportion carrying out behaviour (%) Time until specified proportion of participants can carry out behaviour (days) * \n Hazard ratio † \n EVLA UGFS Activity items Bending the legs without discomfort 50 2·7 3·0 0·94 (0·75, 1·17) 90 12·6 14·1 Lifting heavy objects without discomfort 50 5·9 4·8 1·11 (0·87, 1·42) 90 20·5 16·9 Moving from standing to sitting without discomfort 50 2·2 1·9 1·12 (0·90, 1·40) 90 10·4 9·3 Standing still for a long time (> 15 min) without discomfort 50 4·8 3·9 1·14 (0·90, 1·44) 90 20·0 15·8 Walking short distances (< 20 min) without discomfort 50 3·0 1·9 1·48 (1·19, 1·84) 90 13·2 8·2 Walking long distances (> 20 min) 50 5·6 4·5 1·32 (1·05, 1·66) 90 19·8 15·2 Having a bath or shower 50 5·5 5·4 1·19 (0·96, 1·48) 90 12·8 11·4 Driving a car 50 4·4 4·1 0·95 (0·74, 1·21) 90 12·7 12·4 Participation items Doing housework 50 2·5 2·1 1·03 (0·82, 1·29) 90 8·4 7·3 Looking after children 50 1·9 1·2 1·45 (1·04, 2·02) 90 8·8 6·2 Wearing clothes that show the legs 50 14·6 12·4 1·17 (0·83, 1·64) 90 75·1 56·6 Partial return to normal work/employment 50 6·3 4·4 1·17 (0·89, 1·52) 90 21·1 15·4 Full return to normal work/employment 50 7·7 4·8 1·43 (1·11, 1·85) 90 23·5 14·9 Going out socially 50 6·9 7·1 0·88 (0·70, 1·10) 90 23·9 25·8 Sporting activity or exercise 50 14·2 15·7 0·80 (0·61, 1·04) 90 55·5 62·6 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the endovenous laser ablation (EVLA) arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.DISCUSSION:\nThis study showed that both UGFS and EVLA resulted in a more rapid recovery compared with surgery for 13 of the 15 behaviours. UGFS was superior to EVLA in terms of return to full time work, looking after children and walking (both short and long distances). Importantly, the specific behaviours assessed were shown to have a range of different recovery trajectories. Previous randomized clinical trials showed behavioural recovery to be more rapid following UGFS compared with surgery1, 2, but the benefit of EVLA over surgery was less clear2, 4, 8 . In this study, for all but two behaviours (wearing clothes that showed the legs and showering/bathing) the recovery was quicker following UGFS or EVLA compared with surgery. These findings may have arisen as a result of information contained in the study PIL, which recommended that compression hosiery was worn continuously for 10 days following UGFS or EVLA but for 4–5 days routinely after surgery. In the comparison between UGFS and EVLA, behavioural recovery was faster following UGFS for four of the 15 behaviours; there was no difference between the groups for the other behaviours. Two previous randomized trials2, 3 showed earlier return to 'normal activities' in patients undergoing UGFS compared with EVLA. Specifically, the present study showed a quicker return to full‐time work following UGFS, similar to the findings of Rasmussen and colleagues2. The median time taken to return to work following EVLA (7·7 days) was within the ranges reported2, 4, 5, 6, 7, 8. However, Rasmussen and colleagues2 reported earlier return to work after UGFS compared with the present study (median 2·9 versus 4·8 days respectively). 
A partial explanation of the difference between the two studies may be that, unlike the previous study, the present analysis did not correct for weekends. For other behaviours, the recalled recovery times following both UGFS and EVLA were longer than might be expected from the literature2, 3, 4, 5, 8. This may be explained by the timing of the questionnaire at 6 weeks, and thus it is the nature of the differences between treatment groups rather than the absolute timings taken to return to these activities that the authors wish to highlight in this paper. The extent of this overall delay in recovery is hard to justify, particularly in light of the standard information and advice given in the study PIL. There may have been a number of external influences affecting participants' recollection of their recovery, including misinformation and fear. Although attitudes to recovery and return to normal behaviours have changed in secondary care, this may not have filtered into primary care or 'public knowledge'. Fear of activity or fear of pain caused by activity has been documented following surgery for other conditions17, 18. It is possible that some people undergoing treatment for varicose veins experience similar fears, and this may limit or restrict their activity following treatment. With regard to return to work, there are clearly a number of additional factors that might play a role, such as a person's employment status (employed or self‐employed), the sickness benefits they are entitled to, the type of work they are employed to do, how long they are 'signed off' by the doctor, and the views of their employer on return to work after an operation. It should be noted that this study distinguished between partial and full return to work, and that no difference was noted in partial return to work following UGFS and EVLA. This finding may be of substantial importance to patients, their employers and the economy as a whole. The main strength of this study is that the behaviours investigated were based on systematic investigation of the recovery milestones that are important to patients following treatment for varicose veins. Hence, the findings are of personal importance from a patient perspective. Distinguishing between the behaviours that contribute to 'normal activity' helps build a profile of recovery that may be particularly useful for patients preparing for, or recovering from, treatment. Furthermore, the methodology used to develop the BRAVVO questionnaire could be used in other conditions to provide normative information about behavioural recovery that is relevant to patients. The BRAVVO questionnaire was pilot tested and found to be acceptable to patients, comprehensible and appropriate for self‐completion. Despite this, a potential weakness of the study is that the level of missing data in the BRAVVO questionnaire was higher for two of the questions. Further work to reformat or rephrase the questions or response options may help minimize levels of missing data. A further potential weakness is the choice of assessment time point (6 weeks after treatment). This may have compromised recall, particularly for behaviours that participants were able to return to a short time after treatment; however, any compromise in recall is likely to have affected the three treatment groups equally. Other study outcomes were assessed at 6 weeks, and behavioural recovery was assessed at the same time point to minimize participant burden. 
Further work is required to determine the optimal timing(s) of this questionnaire. Given that the median time to return to the behaviour was less than 14 days for 13 of the behaviours, and up to 22 days for the other two (wearing clothes that show the legs, sporting activity or exercise), the use of the questionnaire at approximately 2–3 weeks would seem appropriate.\n\n**Question:** Compared to surgery what was the result of Ultrasound‐guided foam sclerotherapy on Sporting activity or exercise?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ]
significantly increased
862
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Statin efficacy in the treatment of hepatitis C genotype I\n\n ABSTRACT.BACKGROUND:\nLipid metabolism is one of the hepatitis C virus (HCV) life cycle steps. Statins can reduce cholesterol level and finally can decrease HCV replication. Thus, we assessed the effect of Statins in combination with standard antiviral treatment on hyperlipidemic genotype I HCV infected patients.\n\nABSTRACT.MATERIALS AND METHODS:\nThis study was a prospective clinical trial. 40 patients were selected from those referred to educational and Therapeutic Centers of Isfahan University of Medical Sciences from 2009 to 2010 with confirmed HCV viremia. All patients received Peg-interferon-a2a and ribavirin. 20 hyperlipidemic Patients received 20 mg atorvastatin nightly for 3 months and placebo was prescribed for 20 normolipidemic HCV infected patients as a control group. Liver enzymes and complete blood count were checked monthly and thyroid stimulating hormone was checked every 3 months. We also performed quantitative HCV-ribonucleic acid (RNA) test in 12th week of therapy, at the end of treatment and 6 months after therapy for all samples.\n\nABSTRACT.RESULTS:\nWe didn't find any significant differences in the mean of HCV-RNA numbers between statin and placebo groups in 12th week of treatment, in the end of treatment and 6 months after treatment (P > 0.05).\n\nABSTRACT.CONCLUSION:\nAtorvastatin has no effect on the mean of HCV viral load when we added it to standard treatment for hepatitis C infection. Further studies are necessary to examine the possible antiviral properties of statins and their potential role as adjuncts to standard HCV therapy.\n\nBODY.INTRODUCTION:\nHepatitis C virus (HCV) is an important human pathogen, not only because of its high prevalence and worldwide burden, but also because of the potentially serious complications of persistent HCV infection. These complications include cirrhosis, hepatocellular carcinoma, and end-stage liver disease necessitating liver transplantation. The incidence rates for all of these complications are expected to rise in the near future.[12] It is estimated that 130-170 million people have chronic HCV (about 3% of the world population).[34] More than 3,00,000 people die every year from hepatitis C related liver diseases.[5] There is no vaccination for HCV, so the treatment is very important. The standard medication of HCV (pegylated interferon (IFN) alpha plus ribavirin [RBV]) in HCV genotype I infected patients have shown 40-50% sustained virological response (SVR),[3] so we need more effective treatment method. Lipid metabolism is one of the HCV life cycle steps. HCV forms a replication complex in lipid raft membrane that is full of cholesterol and sphingolipids.[67] 3-hydroxy-3 methyl-glutaryl Coenzyme A inhibitors (statins) can reduce cholesterol level and finally can decrease HCV replication. 
Hence, statin therapy can be introduced as a complementary treatment. In vitro examinations have found that some statins, especially fluvastatin and atorvastatin, can inhibit HCV replication,[8] although statins should be used with caution in advanced end-stage liver disease because of decompensation risk.[9] Recently, a beneficial effect of statin use among patients with HCV-related liver disease has been suggested. In vitro studies show that high concentrations of statins disrupt HCV replication through depletion of isoprenoid geranylgeranyl pyrophosphate.[1011] Statins may thus have antiviral effects through mechanisms not related to lipid metabolism.[1213] The low-density lipoprotein (LDL) receptor and the high-density lipoprotein scavenger receptor B1 putatively facilitate HCV entry into hepatocytes. Complex host proteins are found to be closely associated with HCV nonstructural proteins. The process which links these host and HCV proteins is termed prenylation. Statin agents, which block the formation of the lipid precursors for prenylation, could theoretically interfere with viral replication.[141516] Several human studies have assessed the effect of statins in hepatitis C treatment, but their results differ. O'Leary et al. have shown no reduction in HCV-ribonucleic acid (RNA) titers with 20 mg atorvastatin.[17] However, another study indicated a non-dose-dependent reduction in HCV-RNA in 50% of patients.[18] The purpose of this study was to determine the effect of statin therapy on the response to standard antiviral treatment in hyperlipidemic genotype I HCV patients.\n\nBODY.MATERIALS AND METHODS.SELECTION OF PATIENT GROUPS:\nOur study was a prospective clinical trial conducted in genotype I HCV patients who had been referred to educational and therapeutic centers of Isfahan University of Medical Sciences between 2009 and 2010. All patients were tested to confirm HCV viremia by the Cobas Amplicor HCV Monitor test, version 2 (Roche Diagnostics, Mannheim, Germany). We selected our patients using simple random sampling. Subjects in the statin group were hyperlipidemic patients according to the WHO definition of hyperlipidemia, i.e. an LDL cholesterol level of more than 160 mg/dl, more than 130 mg/dl for patients with one risk factor, and more than 100 mg/dl for patients with diabetes mellitus or two or more risk factors. Patients in the placebo group were selected from HCV patients without hyperlipidemia. This study protocol was approved by the Institutional Review Board and Ethics Committee of Isfahan University of Medical Sciences (research project number: 309079). Patients were included in the study if they were naïve to HCV treatment, more than 18 years old, and had no clinical sign of decompensated cirrhosis. All cases affected by other HCV genotypes or by chronic liver diseases such as hepatitis B virus, HIV, Wilson disease, hemochromatosis, autoimmune liver disease, primary biliary cirrhosis, secondary biliary cirrhosis, obstructive liver disease, or viral infections that affect liver function, and those with a history of alcohol drinking or drug-related steatosis and hepatotoxicity, were excluded from our study. Patients were excluded from the study if the level of their liver enzymes (alanine transaminase and aspartate aminotransferase) had risen more than 3-fold during statin treatment. Patients were examined clinically before antiviral treatment. 
Fasting blood sugar, plasma lipids, liver enzyme tests, and liver and abdominal sonography (Hitachi, Tokyo, Japan) were performed twice (1 month apart) before treatment for all patients. We also checked their blood pressure, waist size, and body mass index (BMI) according to the related guideline.[19] All patients received 180 μg Peg-IFN-a2a weekly and 1000, 1200 or 1400 mg RBV for weights <75 kg, 75-100 kg, and more than 100 kg, respectively (Pegasys Co.). The statin group received 20 mg atorvastatin (Farabi Co., Tehran, Iran) nightly for 3 months, and placebo was prescribed for the placebo group on the same schedule as atorvastatin in the statin group. Liver enzymes and complete blood count were checked monthly and thyroid-stimulating hormone was checked every 3 months. We also performed a quantitative HCV-RNA test for all samples to identify patients with an undetectable test in the 12th week of therapy (early virological response [EVR]), at the end of therapy (end-of-treatment virological response) and 6 months after therapy (SVR). We analyzed our data using SPSS version 16 (SPSS Inc., Chicago, IL, USA) and considered P < 0.05 as significant; the Chi-square test was applied for comparison of categorical parameters and Student's t-test for comparison of continuous parameters.\n\nBODY.RESULT:\nOur study included 40 HCV-positive patients (20 hyperlipidemic and 20 normolipidemic patients). Our results showed that there were significant differences in the mean BMI, triglyceride, LDL, and cholesterol between the statin and placebo groups [Table 1]. Table 1 Demographic and lipid characteristics in the statin and placebo groups The evaluation of viral load before statin therapy did not show a significant difference between statin and placebo; we also did not find any significant differences in the mean HCV-RNA numbers between the statin and placebo groups in the 12th week of statin therapy (P > 0.05), at the end of treatment (P > 0.05) or 6 months after treatment (P > 0.05) [Table 2]. Our findings also showed that EVR in the statin and placebo groups was 75% and 70%, respectively, and SVR was 95% in both groups. The ALT level was higher in the statin group before treatment [Table 3]. Although both AST and ALT levels rose after treatment, we did not find any between-group difference in AST and ALT levels after treatment. Table 2 Comparison of viral load before treatment and after the start of treatment between the statin and placebo groups Table 3 Comparison of transaminases in the statin and placebo groups before and after treatment \n\nBODY.DISCUSSION:\nIn the current study, we did not find any significant differences in the mean hepatitis C viral load between the statin and placebo groups in the 12th week of statin therapy, at the end of treatment or 6 months after treatment. Our findings also showed that the early response to treatment (EVR) in the statin and placebo groups was 70% and 75%, respectively, and the sustained response to treatment (SVR) was 95% in both groups. 
A pilot study of 31 HCV-infected veterans who were given fluvastatin 20-320 mg/day for 2-12 weeks with weekly monitoring of HCV-RNA and liver tests reported modest reductions of viral load.[20] Furthermore, a pharmacoepidemiologic study found that the use of lovastatin was associated with a 40-50% lower incidence of moderate as well as severe liver injury among patients with preexisting liver disease.[21] Even though prior individual studies examined important aspects of the association between statins and lowering the severity of liver disease in HCV-infected patients, these studies either did not adjust for histological severity of liver disease, had generally short follow-up, or did not use placebo subjects. Clearly, more information is needed about the possible beneficial effect of statins in HCV-infected patients.[22] The results of other in vivo studies have been the same as our findings in this study. A study done by O'Leary et al. (2007) on 10 samples showed that atorvastatin does not inhibit HCV-RNA replication in vivo.[17] Forde et al., in a cross-sectional study (2009), confirmed that statins do not have any apparent effect on HCV-RNA replication in vivo.[23] Unlike in vivo studies, the results of in vitro research have shown that statins can decrease HCV-RNA replication.[3] Ikeda et al. evaluated the effect of statins on HCV-RNA replication in OR6 cells.[24] Fluvastatin exhibited the strongest anti-HCV activity, while atorvastatin and simvastatin showed moderate inhibitory effects. The combination of IFN and the statins exhibited strong inhibitory effects on HCV-RNA replication.[2526] There are some reasons for the differences between in vitro and in vivo studies. As noted above, various factors such as obesity, insulin resistance and liver steatosis affect the antiviral treatment response in the human body,[1516] so more studies are needed to examine the effect of statins on HCV in relation to confounding factors. On the other hand, statin concentrations in cell cultures are higher than their concentrations in plasma. Studies have shown that serum levels of such agents after prolonged therapy are significantly lower than the statin concentrations that were used in replicon systems.[27] In our study, the response rate of patients was higher than in Western studies; there are similar reports showing a high response rate to combination therapy in Iranian patients, which may be due to a lower viral load in comparison to Western patients.[2829] In summary, in our study statins did not have any significant effect on HCV viral load or virologic response. A limitation of this study was the small sample size, and further studies are necessary to examine the possible antiviral properties of statins and their potential role as adjuncts to standard HCV therapy.\n\nBODY.AUTHORS’ CONTRIBUTION:\nAll authors contributed to designing and conducting the study. Ash, AB, SS, SSh, and Nkh collected the data and MM, MKh, MT, Ash, and BA did the analysis. All authors assisted in preparing the first draft of the manuscript or revising it critically for important intellectual content. All authors have read and approved the content of the manuscript and are accountable for all aspects of the work.\n\n**Question:** Compared to placebo what was the result of 20 mg atorvastatin nightly for 3 months on mean of HCV-RNA numbers in 12 weeks of therapy?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. 
Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
875
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: How to avoid discontinuation of antihypertensive treatment. The experience in São Paulo, Brazil\n\n ABSTRACT.OBJECTIVES:\nTo evaluate the importance of providing guidelines to patients via active telephone calls for blood pressure control and for preventing the discontinuation of treatment among hypertensive patients.\n\nABSTRACT.INTRODUCTION:\nMany reasons exist for non-adherence to medical regimens, and one of the strategies employed to improve treatment compliance is the use of active telephone calls.\n\nABSTRACT.METHODS:\nHypertensive patients (n = 354) who could receive telephone calls to remind them of their medical appointments and receive instruction about hypertension were distributed into two groups: a) \"uncomplicated\" – hypertensive patients with no other concurrent diseases and b) \"complicated\" - severe hypertensive patients (mean diastolic ≥110 mmHg with or without medication) or patients with comorbidities. All patients, except those excluded (n = 44), were open-block randomized to follow two treatment regimens (\"traditional\" or \"current\") and to receive or not receive telephone calls (\"phone calls\" and \"no phone calls\" groups, respectively).\n\nABSTRACT.RESULTS:\nSignificantly fewer patients in the \"phone calls\" group discontinued treatment compared to those in the \"no phone calls\" group (4 vs. 30; p<0.0094). There was no difference in the percentage of patients with controlled blood pressure in the \"phone calls\" group and \"no phone calls\" group or in the \"traditional\" and \"current\" groups. 
The percentage of patients with controlled blood pressure (<140/90 mmHg) was increased at the end of the treatment (74%), reaching 80% in the \"uncomplicated\" group and 67% in the \"complicated\" group (p<0.000001).\n\nABSTRACT.CONCLUSION:\nGuidance to patients via active telephone calls is an efficient strategy for preventing the discontinuation of antihypertensive treatment.\n\nBODY.INTRODUCTION:\nMost patients with hypertension do not benefit from treatment.1 Data from NHANES/NCHS 1999–2004 showed that 71.8% of individuals were aware of their condition, 61.4% were under current treatment and 35.1% had it under control.2 Although the percentage of individuals with controlled blood pressure in Brazil is unknown, we estimate it to be low because the percentage of patients with their high blood pressure under control is around 30-35% at outpatient clinics that specialize in hypertension, and these clinics do not represent the national reality.3 Many reasons exist for non-adherence to medical regimens, including adverse drug effects, poor instructions, poor provider-patient relationship, poor memory, a patient's disagreement with the need for treatment or inability to afford medication.4,5 One of the strategies employed to improve treatment compliance is the use of active telephone calls6 to patients through which they are given guidance and treatment questions are answered. Using this strategy, several studies have shown an increase in treatment compliance, a higher percentage of blood pressure control6-8 and a decrease in mortality.9-11 Another strategy that aims at improving treatment compliance is the employment of two low-dose medications. Evidence suggests that the combination of two antihypertensive agents provides a higher percentage of blood pressure control due to complementary mechanisms of action. In addition, this results in better tolerability and consequently, better treatment compliance.12-14 The ASCOT study15 (Anglo-Scandinavian Cardiac Outcomes Trial) has indicated a significant reduction in all causes of mortality with the use of current medications for treating hypertension: calcium channel antagonists and angiotensin-converting enzyme inhibitors. It has also been demonstrated that a high incidence of adverse events may decrease treatment compliance. However, in the LIFE16 study, angiotensin receptor antagonists proved to be better at reducing the risks related to cardiovascular events and they resulted in a lower rate of treatment discontinuation when compared to a beta-blocker. In view of these data, we evaluated the importance of providing guidelines to patients via active telephone calls for blood pressure management and preventing treatment discontinuation in hypertensive patients. We used two treatment regimens with low-dose medications that were offered for free to avoid the influence of financial factors. We opted for one regimen called \"traditional\" based on diuretics and beta-blockers and another called \"current\" based on angiotensin II antagonists and calcium channel blockers.\n\nBODY.METHODS:\nThe study participants were selected at the Hypertension Unit, University of São Paulo General Hospital, Nephrology Division, University of São Paulo School of Medicine. The patients had essential hypertension and were able to receive telephone calls to be reminded of their medical appointments and be given guidance about hypertension. Patients were of both genders, from diverse ethnic backgrounds, over 18 years of age, and had body mass indices below 40 kg/m2. 
The patients were enrolled in the study after signing a free and informed consent form. The study was approved by the Ethics Committee of the Board of Directors of the University of São Paulo School of Medicine. Patients were excluded on several bases, including blood pressure <140/90 mmHg without antihypertensive medication, secondary hypertension, white-coat hypertension with systolic pressures ≥140 mmHg and/or diastolic pressures ≥90 mmHg at the doctor's office and awake mean systolic pressures <135 mmHg or awake mean diastolic pressures <85 mmHg without antihypertensive medication, malignant hypertension and patients with a previous history of hypersensitivity reaction to the study medications. Other exclusion criteria were as follows: pregnant women or nursing mothers, the presence of liver dysfunction evidenced by the patient's clinical history or by one of the liver function tests (levels twice the normal values for alkaline phosphatase, total bilirubin, aspartate aminotransferase), patients with clinical conditions that might interfere with the total conformity with the study or those who might have an increased risk for participating in the study, patients with a history of alcoholism, drug abuse or mental disorders that might invalidate the free and informed consent or limit the patient's ability to meet the protocol rules and patients who had participated in any other studies involving investigational drugs or drugs already marketed within the previous month before enrollment in this study or concomitantly with this study.\n\nBODY.METHODS.BLOOD PRESSURE MEASUREMENT:\nBlood pressure and heart rate measurements were performed on the right upper limb on sitting patients five times by the nursing staff using an appropriately sized cuff with a validated automatic oscillometric device (Dixtal, DX2710, São Paulo, Brazil).17 The mean of the last two measurements was calculated and recorded if the difference between these measurements was less than 4 mmHg. If after the five measurements the difference between the last two was greater than 4 mmHg, the measurement was repeated until the difference between the two measurements was less than 4 mmHg. During the study, blood pressure measurements were always performed in the afternoon by the same nursing staff using the same device. The diagnosis of hypertension was made when the mean values of the last two measurements were as follows: systolic pressure ≥140 mmHg and/or diastolic pressure ≥90 mmHg with or without medication at the initial visit (Visit 0). Patients who were receiving antihypertensive medication at the initial visit and had a systolic pressure <140 mmHg or a diastolic pressure <90 mmHg were re-evaluated eight weeks after discontinuation of their medication and introduction of placebo; these patients were included in the study when the mean values included a systolic pressure ≥140 mmHg and/or a diastolic pressure ≥90 mmHg. Controlled blood pressure was defined in two levels as (with the patient in the seated position) a systolic/diastolic pressure less than 140/90 or 120/80 mmHg. All of the patients underwent Ambulatory Blood Pressure Monitoring (ABPM), which was performed with a validated oscillometric device (SpaceLabs 90207, SpaceLabs Inc, Richmond, WA, USA),18,19 five weeks after the start of the placebo to eliminate cases of patients with white-coat hypertension and, in patients who did not receive placebo, to identify the white-coat effect. 
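As an implementation aside, the repeat-and-average rule described in the blood pressure measurement section above can be written as a short routine. The sketch below is purely illustrative and is not the investigators' software: the function name, the simulated readings, and the handling of a difference of exactly 4 mmHg are assumptions introduced here, not details from the trial.

```python
# Minimal sketch of the measurement rule described above: take five readings, then keep
# measuring until the last two readings differ by less than 4 mmHg, and record their mean.
# Illustrative only; device readings are supplied by the caller (here, canned values).
from typing import Callable, List

def recorded_pressure(take_reading: Callable[[], float],
                      initial_readings: int = 5,
                      max_diff_mmhg: float = 4.0) -> float:
    """Return the value to record: the mean of the last two readings once they differ
    by less than max_diff_mmhg (a difference of exactly 4 mmHg triggers a repeat here,
    an assumption, since the paper leaves that case unspecified)."""
    readings: List[float] = [take_reading() for _ in range(initial_readings)]
    # Keep measuring until the last two readings agree within the tolerance.
    while abs(readings[-1] - readings[-2]) >= max_diff_mmhg:
        readings.append(take_reading())
    return (readings[-1] + readings[-2]) / 2.0

# Hypothetical usage with canned systolic values standing in for the oscillometric device
if __name__ == "__main__":
    simulated = iter([152.0, 148.0, 147.0, 150.0, 143.0, 141.0])
    print(recorded_pressure(lambda: next(simulated)))  # prints 142.0
```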
According to clinical characteristics, the patients were assigned to two groups: a) \"uncomplicated\"– hypertensive patients without complications and without other concurrent diseases and b) \"complicated\"- patients with severe hypertension (mean diastolic pressure ≥110 mmHg with or without medication) or comorbidities such as diabetes mellitus, renal failure (serum creatinine >1.4 mg/dL), coronary insufficiency, congestive heart failure or prior history of cerebrovascular accident.\n\nBODY.METHODS.PATIENT RANDOMIZATION:\nAll patients, from both the complicated and the uncomplicated groups, were open-block randomized to receive active telephone calls (\"phone calls\" group) or not to receive telephone calls (\"no phone calls\" group) and to follow two treatment regimens, \"traditional\" or \"current\". Thus, after the first visit and randomization, patients from the \"phone calls\" group were invited to enroll by telephone in a program called \"Biosintética Assistance\", supported by Biosintética Laboratory. The patients who subscribed started receiving active telephone calls from appropriately trained operators and also started receiving magazines with health-related information, which were sent periodically by mail. There were six contacts by telephone during the study. During phone calls, the patient was reminded to attend the next visit, and he/she was educated about hypertension and any necessary clarifications about his/her treatment. All patients randomized to the \"phone calls\" group were invited to attend occasional informative lectures with the participation of a multidisciplinary team. At the initial stage of treatment (following eight weeks of treatment with placebo), the \"uncomplicated\" group received one of the following treatment regimens: a) \"traditional\" treatment with 6.25 mg 2x/day hydrochlorothiazide and 25 mg 2x/day atenolol; b) \"current\" treatment with 25 mg 2x/day losartan and 2.5 mg 2x/day amlodipine. If the blood pressure could not be controlled during the visits, the medication doses were doubled or another antihypertensive was added. The \"complicated\" group did not undergo the treatment period with placebo and was randomized to receive either \"traditional\" or \"current\" drug regimens similar to the ones administered to the \"uncomplicated\" group, though the specifics of each condition were carefully considered. The addition of other antihypertensive agents in the \"uncomplicated\" group and the specificities of the patient regimens in the \"complicated\" group were performed according to the guidelines from the V Brazilian Guidelines on Arterial Hypertension.3 All patients were instructed to take their medication every day at 7:00 AM and 7:00 PM, with a variation of up to one hour. A 12-month supply of the necessary medication was supplied to the patients by the physician at the end of the visit at no cost to eliminate the financial factor in this analysis; patients were given enough medication to last between visits. Patients were instructed to bring the remaining pills to their subsequent visits, at which time they were counted by the nursing staff without the patients' knowledge of this procedure. Doctors' visits, preceded by the nursing staff visit, took place every eight weeks for 56 weeks and included measurements of blood pressure, heart rate and weight. The weight was checked with the patient wearing light clothes standing barefoot on a scale (model 2096PP, Toledo do Brasil, São Paulo, Brazil). 
Study withdrawal was characterized by non-attendance to appointments for up to three months after the scheduled date. Patients who returned to the medical clinic within three months after the scheduled date were allowed to continue in the study and be evaluated in an unscheduled visit. The tests performed during the placebo treatment and after 40 weeks of active treatments included the following: fasting glucose, urea, creatinine, total cholesterol, fractions of cholesterol, triglycerides, uric acid, total bilirubin, creatine phosphokinase (CPK), sodium (Na+), potassium (K+), hemoglobin, thyroid-stimulating hormone (TSH), alkaline phosphatase, aspartate aminotransferase (AST), alanine aminotranferase (ALT) and urinary excretion of sodium in 24 h.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData were analyzed using an analysis of variance (ANOVA) according to the factor of controlled or uncontrolled hypertension. P<0.05 was considered statistically significant.\n\nBODY.RESULTS:\nThe study included 398 patients. Of those, 44 were excluded for the following reasons: a) lack of enrollment in the Biosintética Assistance program (n = 17); b) body mass index >40 kg/m2 (n = 6); c) secondary hypertension (n = 4); d) white-coat hypertension (n = 3); e) white-coat normotension (\"masked hypertension\") (n = 3); f) alcohol use (n = 2); g) classification error (n = 4); h) refusal to do ABPM (n = 2); i) loss of blood pressure measurements from the first visit (n = 1); j) pregnancy (n = 1); k) death (n = 1). A total of 354 patients were evaluated. There were no statistically significant differences in age, gender, skin color, body mass index, blood pressure or heart rate between the groups classified as: \"complicated\" (n = 175) and \"uncomplicated\" (n = 179); \"traditional\" (n = 176) and \"current\" (n = 178); \"phone calls\" (n = 108) and \"no phone calls\" (n = 246) (Table 1). In addition, no differences were seen in marital status, education or occupation.\n\nBODY.RESULTS.A) CONTROL OF BLOOD PRESSURE:\nA marked reduction in blood pressure was seen in patients from both the \"complicated\" and \"uncomplicated\" groups (Table 2). The blood pressure was reduced by 19±10/20±14 mmHg in the \"uncomplicated\" group and by 18±14/22±17 mmHg (systolic/diastolic) in the \"complicated\" group (p>0.05; n = 354). The \"phone calls\" and \"no phone calls\" groups showed significantly lower blood pressures at the end of treatment (p< 0.00001). The \"traditional\" and \"current\" groups showed significantly lower blood pressures at the end of treatment (Figure 1). The percentage of patients with controlled blood pressure (<140/90 mmHg) was also high at the end of treatment (74%). On the other hand, the percentage of patients with blood pressure reduced to <120/80 mmHg was only 29%. There was no difference in the percentage of patients with controlled blood pressure between the \"phone calls\" and \"no phone calls\" groups or in the reduction of blood pressure and the percentage of patients with controlled blood pressure among the \"traditional\" and \"current\" treatments. However, patients in the \"phone calls\" group (80%) had better blood pressure control than those in the \"no phone calls\" group (71%), though the difference was not statistically significant. The \"uncomplicated\" and \"complicated\" groups had statistically significant differences in the percentage of patients with controlled blood pressure (<140/90 mmHg) (80% vs. 67%, respectively; p<0.000001). 
At the next to last study visit (visit 7), 90% and 66% of patients had blood pressure measurements <140/90 mmHg in the \"uncomplicated\" and \"complicated\" groups, respectively. However, there was no difference in the percentage of patients with blood pressure <120/80 mmHg (31% on the final visit in both groups, Table 3). Among patients with a blood pressure <140/90 mmHg at the final visit, only 3% had received only one type of antihypertensive medication; most patients (56%) received 2 (34%) or 3 (22%) types of antihypertensive medication (Table 4).\n\nBODY.RESULTS.B) TREATMENT DISCONTINUATION:\nA significantly lower number of patients in the \"phone calls\" group quit the treatment compared to the \"no phone calls\" group (4 vs. 30, respectively; Figure 2). There was no difference in the percentage discontinuation in the \"complicated\" and \"uncomplicated\" groups or in the \"current\" and \"traditional\" treatment regimens (p>0.05).\n\nBODY.RESULTS.C) TABLET COUNT:\nPatients in the groups: \"complicated\" and \"uncomplicated\"; \"traditional\" and \"current\"; \"phone calls\" and \"no phone calls\" did not show statistically significant differences in medication intake, as verified by counting the tablets returned at the visits. Compliance was over 85% at all visits. However, patients in the current + uncomplicated + phone call group had the highest rate of compliance (93%). The lowest rate of compliance occurred in the traditional + uncomplicated + no phone call group (85%) (p>0.05).\n\nBODY.RESULTS.D) BODY WEIGHT:\nBlood pressure reductions occurred in the presence of increased body weight (baseline visit: 73±14 kg; final visit: 74±14 kg, p = 0.0008, Table 5).\n\nBODY.RESULTS.E) ADVERSE EVENTS AND LABORATORY TESTS:\nMost of the adverse events that occurred during the study were of light or moderate intensity. Patients in the \"traditional\" group showed significantly more depression symptoms (5 vs. 0) and coughing (30 vs. 4) when compared with patients in the \"current\" treatment group. On the other hand, patients in the \"current\" group had a greater number of complaints of dizziness when compared to patients in the \"traditional\" treatment group (63 vs. 42, respectively). Compared to patients in the \"uncomplicated\" group, patients in the \"complicated\" group presented a larger number the following symptoms: abdominal pain (19 vs. 8), sleepiness (32 vs. 10), cough (23 vs. 11), weakness (23 vs. 11) and blurred vision (6 vs. 0). Conversely, compared to patients in the \"complicated\" group, patients in the \"uncomplicated\" group presented a larger number of complaints such as pain (74 vs. 53) and headache (80 vs. 59). Patients in the \"phone calls\" group reported a 34% rate of symptoms, while a 66% rate of symptoms was reported by patients in the \"no phone calls\" group. The most frequently (>10%) found adverse events were as follows: headache (62.1%), unspecific pain (58.2%), dizziness (45.8%), edema (40.1%), fatigue (15.3%), sleepiness (14.4%), cough (13.3%), precordial pain (13.3%), weakness (11.9%), tachycardia (11.3%), insomnia (10.5%) and paresthesia (10.5%). There were 26 (8%) severe adverse events: death (5), unstable angina (4), acute myocardial infarction (1), transient ischemic attack (1), hypertensive encephalopathy (1), breast cancer (3), bronchospastic crisis (1), cholecystopathy (1), diabetic decompensation (1), syncope (1), nephrectomy (2), trauma (2), peritoneal bypass (1), diarrhea (1) and gastrectomy (1). 
The results of the laboratory tests performed in the beginning and end of the study are shown in Table 6. There was an increase in the 24-hour urinary sodium excretion from the beginning to the end of the study (120±46 vs. 129±45 mEq/L, respectively; p = 0.000036)\n\nBODY.DISCUSSION:\nAdherence to a medication regimen is generally defined as the extent to which patients take medications as prescribed by their health care providers.20 This can be classified into three aspects of the individual's behavior regarding his/her health: 1) take the medication correctly, 2) follow the professionals' instructions related to diets and lifestyle changes and 3) attend medical visits.21 The discontinuation of medication is considered the most serious type of noncompliance with treatment. In our study, the \"phone calls\" group showed a significantly lower percentage of patients who discontinued treatment compared with the \"no phone calls\" group. A meta-analysis of randomized studies22 suggested that treatment discontinuation is effectively avoided by contacting the patient using letters, telephone calls or e-mails. Márques Contreras et al.6 evaluated the efficacy of telephone and e-mail intervention for therapeutic compliance among 538 patients with mild to moderate hypertension and verified that the group that received phone calls showed higher compliance rates (96.2%) than the group that was only contacted by e-mail (91.3%) or the group that did not receive any intervention (69.2%). Similar data were found in a controlled study recently conducted by Bosworth et al.7 in which the self-reported medication adherence was shown to increase by 9% in the group that had received behavioral and educational intervention (319 hypertensive subjects) through telephone calls versus 1% in the group that had not received intervention (n = 317). Friedman et al.23, through telephone calls associated with usual medical care for six months, compared hypertensive patients who received their usual medical treatment with those who used a monitoring and counseling system. Participants with compliance below 80% of the prescribed medication prior to enrollment in the study showed better compliance with the treatment when the telephone monitoring system was used than did patients who did not use the system (36% vs. 26%, respectively). On the other hand, participants with a compliance level above 80% prior to enrollment in the study did not show any change in compliance, and compliance was comparable in users and non-users of the telephone monitoring system. In our study, regardless of the group to which they were randomized, patients showed compliance rates above 85% at all visits. However, we do not have data regarding medication compliance before enrollment in the study. It is important to point out that treatment compliance is directly related to mortality. A study conducted by Wu et al.9 randomized 442 non-compliant patients who took five or more medications for chronic illnesses to either receive or not receive telephone counseling. The objective was to investigate the influence of this advice for treatment compliance and patient mortality. The study verified that telephone counseling was associated with a 41% reduction of death risk after two years. These authors used scores to classify the levels of compliance of 1011 patients, and they showed that the lower the level of compliance, the higher the risk of mortality in the long term. 
The influence of the initial selection of antihypertensive medication with regard to compliance with treatment was evaluated by Monane et al.24 The authors verified that the use of newer antihypertensive agents, such as angiotensin converting enzyme inhibitors and calcium channel antagonists, was associated with a compliance ≥80% when compared with thiazide diuretics in patients with cardiac morbidities and multiple visits to physicians. This may have been the result of better tolerability to newer classes of antihypertensive agents. However, in our study, there was no difference in tolerability between the \"traditional\" and \"current\" groups, which could explain the similar compliance rates in both groups. On the other hand, just as in Monane et al.,24 we found that fewer patients in the \"complicated\" group had their blood pressure under control at the final visit (67%), whereas at the next to last study visit, 90% of patients in the \"uncomplicated\" group and only 66% in the \"complicated\" group had blood pressure levels below 140/90 mmHg. This may be due to comorbidities and the use of other concurrent treatments. The groups: \"complicated\" and \"uncomplicated\"; \"traditional\" and \"current\"; \"phone calls\" and \"no phone calls\" showed significant reductions in blood pressure from the randomization visit to the final visit, as well as a high percentage of patients with good blood pressure control (BP <140/90 mmHg) in the total group (74%), regardless of an increase in body weight throughout the study. In our study, the high percentage of patients with controlled blood pressure at the end of treatment can be explained by the simple drug regimen, the simple drug acquisition provided by the physician at no cost at the end of each visit, a fixed team of physicians and nurses throughout the study period and the easy access to team members in cases of unexpected occurrences by means of unscheduled visits. We observed that at the next-to-last study visit, the \"uncomplicated\", \"current\", \"traditional\", \"phone calls\" and \"no phone calls\" groups had more patients with controlled blood pressure when compared to the last visit. It is possible that during the last visit the patients were worried about continuing the treatment and receiving the medications. Therefore, guidance provided to patients by means of active telephone calls, brochures and group workshops with healthcare professionals is an efficient strategy for reducing treatment discontinuation, the most severe type of noncompliance with treatment.\n\nBODY.DISCUSSION.PERSPECTIVES:\nIn an attempt to reduce the determinants of non-adherence to antihypertensive treatment in a developing country, we demonstrated that better blood pressure control was obtained through doctor's visits every two months with the same physician, preceded by nursing staff visits, with patients receiving all medication necessary for their treatment; this benefit was independent of comorbidities and the type of treatment used. 
Guidance provided to the patients through active telephone calls, brochures, group workshops with healthcare professionals and donation of all the required medications significantly reduced the treatment discontinuation rates.\n\nBODY.DISCUSSION.SOURCES OF FUNDING:\nThis study received support from \"Biosintetica Assistance\" and Biosintética Laboratory, which donated all of the required medication.\n\n**Question:** Compared to \"no phone calls\" group what was the result of \"phone calls\" group on percentage of patients with controlled blood pressure?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
879
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: How quick is soon? Early response to continuous positive airway pressure: a randomized controlled trial\n\n ABSTRACT:\n\n\nBODY.INTRODUCTION:\nNumerous studies have confirmed that using non-invasive continuous positive airway pressure (nCPAP) for chronic obstructive pulmonary disease and congestive heart failure improves the respiratory rate, heart rate (HR), and work of breathing. We hypothesize that early application of nCPAP with concomitant medical therapy to patients with acute undifferentiated shortness of breath (SOB) will improve objective measures of respiratory distress. Specifically, early application of nCPAP can improve the tidal volume (TV), end-tidal carbon dioxide (EtCO2) and Rapid Shallow Breathing Index (RSBI), and reduce intubations over the standard of treatment alone in 15 minutes or less.\n\nBODY.METHODS:\nFifty-two patients were randomized equally to either CPAP + standard of care (nCPAP group) or to standard of care (standard group) for acute undifferentiated SOB. nCPAP was applied for 15 minutes. Subject enrollment was randomized and demographic data were recorded upon enrollment. Volumetric measures were obtained by breathing through the EtCO2/Flow sensor for 30 seconds at 5-minute intervals along with vital signs. Inclusion criteria were adults >18 years, with acute respiratory distress, admitted to the resuscitation room of the emergency department, respiratory rate > 25 bpm, SpO2 > 75%, Glasgow Coma Score > 8, HR > 60/min, and systolic blood pressure > 90 mmHg. Exclusion criteria were respiratory arrest and/or cardiac arrest, suspected pulmonary embolism, pneumothorax, myocardial infarction, temperature >38.5°C, or refusal to participate.\n\nBODY.RESULTS:\nAll tests were two-sided and assessed at the 0.05 type-I error rate. The gender distribution was equal for both groups. There was no difference in baseline characteristics except for age, HR and diastolic blood pressure (P < 0.05). Subjects in the nCPAP group had a greater improvement for various parameters compared with the standard group including TV (0.8 l, 0.3 l), EtCO2 (30 mmHg, 38 mmHg) and RSBI (39, 150), respectively. The nCPAP group also had a shorter hospital and ICU length of stay compared with the standard group (4 vs 5 days, and 2 vs 3 days, respectively). Finally, the rate of intubations was higher in the standard group (n = 8, n = 3) than the nCPAP group (P < 0.01).\n\nBODY.CONCLUSION:\nThe early application of nCPAP in patients with acute undifferentiated SOB improves their volumetric parameters and vital signs in as early as 5 minutes. 
This pilot study provides objective support for the notion that early application of nCPAP can lead to measurable improvement in TV, EtCO2, RSBI and reductions in intubations.\n\n**Question:** Compared to standard group what was the result of CPAP + standard of care (nCPAP group) on hospital and ICU length of stay?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly decreased\n" } ]
significantly decreased
894
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Belatacept-Based Immunosuppression in \n\n ABSTRACT:\nThis exploratory phase II study evaluated the safety and efficacy of belatacept in de novo adult liver transplant recipients. Patients were randomized (N = 260) to one of the following immunosuppressive regimens: (i) basiliximab + belatacept high dose [HD] + mycophenolate mofetil (MMF), (ii) belatacept HD + MMF, (iii) belatacept low dose [LD] + MMF, (iv) tacrolimus + MMF, or (v) tacrolimus alone. All received corticosteroids. Demographic characteristics were similar among groups. The proportion of patients who met the primary end point (composite of acute rejection, graft loss, death by month 6) was higher in the belatacept groups (42–48%) versus tacrolimus groups (15–38%), with the highest number of deaths and grafts losses in the belatacept LD group. By month 12, the proportion surviving with a functioning graft was higher with tacrolimus + MMF (93%) and lower with belatacept LD (67%) versus other groups (90%: basiliximab + belatacept HD; 83%: belatacept HD; 88%: tacrolimus). Mean calculated GFR was 15–34 mL/min higher in belatacept-treated patients at 1 year. Two cases of posttransplant lymphoproliferative disease and one case of progressive multifocal leukoencephalopathy occurred in belatacept-treated patients. Follow-up beyond month 12 revealed an increase in death and graft loss in another belatacept group (belatacept HD), after which the study was terminated.\n\nBODY.INTRODUCTION:\nLiver transplantation is a life-saving procedure for patients with end-stage liver disease (ESLD) that improves overall survival and quality of life (1,2). Liver transplant (LT) recipients are at increased risk for cardiovascular disease, and chronic kidney disease (CKD; (3–7)). Stage 4 or 5 CKD has been reported in ∼18% of LT recipients by 5 years posttransplant (4) and is associated with increased morbidity and mortality (4–7). Data from 1997 to 2008 in the Organ Procurement Transplant Network/United Network for Organ Sharing (OPTN/UNOS) database showed a threefold increased rate of kidney transplantation in patients who previously received LT (8). While the causes of CKD in LT patients are multifactorial, including pretransplant and peritransplant factors like hepatitis C virus (HCV), diabetes and hepatorenal syndrome, calcineurin inhibitors (CNIs) appear to be significant contributing factors (4,7,9–11). In the OPTN/UNOS analysis, ∼50% of LT patients who received a kidney transplant had a diagnosis consistent with CNI toxicity (8). Thus, there is a need for immunosuppressive regimens that provide efficacy while avoiding the nephrotoxic, cardiovascular and metabolic risks of CNIs in LT recipients. Belatacept, a selective costimulation blocker (12), is designed to provide effective immunosuppression and avoid both renal and nonrenal toxicities associated with CNIs. 
Results from two phase III clinical trials in kidney transplant recipients found that belatacept-based immunosuppression was associated with similar rates of patient and graft survival, significantly better renal function and an improved cardiovascular/metabolic risk profile versus cyclosporine-based therapy ≤3 years after transplantation (13–17). Higher rates and grades of acute rejection (AR) were observed in belatacept-treated patients receiving standard-criteria donor kidneys (13,15). Posttransplant lymphoproliferative disease (PTLD) with central nervous system (CNS) involvement, primarily in patients seronegative for Epstein–Barr virus (EBV) at the time of transplant and progressive multifocal leukoencephalopathy (PML) were the most serious adverse events (AEs) reported in belatacept-treated patients (13,14,18). The objective of this exploratory, phase II study was to evaluate the efficacy and safety of belatacept in adult recipients of first LTs from a deceased donor. To identify an optimal immunosuppressive regimen in LT recipients, three belatacept regimens were studied and compared with two tacrolimus regimens. In addition, a follow-up long-term extension (LTE) study (i.e. ≥12 months) was conducted to assess longer-term safety and tolerability. The LTE study was terminated in 2011 (when patients were ∼2 years posttransplant) because of worse patient and graft survival in two of the three belatacept treatment groups. Both the 12-month results and those from the LTE are reported herein.\n\nBODY.MATERIALS AND METHODS.STUDY DESIGN:\nThis was a randomized, partially blinded, active-controlled, parallel group, multicenter, phase II clinical trial in adult recipients of first LTs (ClinicalTrials.gov: NCT00555321). Three belatacept regimens were studied, representing a stepwise decrease in the level of overall immunosuppression, and compared with the approved immunosuppressive regimen (tacrolimus alone) and the most widely used immunosuppressive regimen (tacrolimus + mycophenolate mofetil [MMF]) in LT. This exploratory study intended to assess the comparability of a belatacept-based regimen with a tacrolimus-based regimen in terms of AR, graft loss and death. Immediately before transplantation, patients were randomized to one of the following treatment groups: (i) basiliximab + belatacept high dose (HD) + MMF; (ii) belatacept HD + MMF; (iii) belatacept low dose (LD) + MMF; (iv) tacrolimus + MMF or (v) tacrolimus alone. Subjects were randomized in a 1:1:1:1:1 ratio using an interactive voice response system with centralized randomization and stratified by HCV status (yes or no) in blocks of five. All patients received corticosteroid therapy for the first 3 months. The trial was fully blinded to patients and study personnel with respect to belatacept dosing regimen (HD or LD) and basiliximab assignment (through the use of placebo infusions), open-label to belatacept or tacrolimus treatment and open-label between the two tacrolimus groups. The study duration was 12 months with an LTE (study was initiated January 22, 2008 and completed May 2, 2011). An external Data Monitoring Committee (DMC) comprising a chair and four members (specialty physicians and one statistician) reviewed emerging safety and efficacy data on a regular basis. The study was conducted in accordance with the Declaration of Helsinki and was consistent with International Conference on Harmonisation Good Clinical Practice and applicable regulatory requirements. 
The study protocol and amendments were reviewed and approved by the institutional review board/independent ethics committee for each site before initiation of the study (see Table S1 for enrollment by site).\n\nBODY.MATERIALS AND METHODS.PATIENTS AND INTERVENTIONS:\nThe study population included adults of 18–70 years of age, who were recipients of first LTs from a deceased donor (see Table 1 for additional inclusion and exclusion criteria). Belatacept was administered via intravenous (IV) infusion. Table 1 Study inclusion and exclusion criteria Inclusion criteria Exclusion criteria Adults of 18–70 years of age, who were recipients of first LTs from a deceased donor Donation after cardiac death Informed consent from all patients Living-donor recipients Additional inclusion criteria Split liver recipients Reliable IV access Recipients of organs ABO compatible donor-recipient pairs From donors <12 years or >65 years of age Patients with hepatocellular carcinoma meeting Milan criteria (one nodule ≤5 cm in diameter or three or fewer nodules, with none >3 cm in diameter) With anticipated cold ischemia time >14 h From donors who were positive for HBV or HCV when recipients were negative for HBV or HCV, respectively From donors with known human immunodeficiency virus infection Patients receiving dialysis before LT for ≥2 consecutive weeks before enrollment or who were anticipated to have prolonged dialysis posttransplant Patients with known intrinsic kidney disease (e.g. a urine protein/creatinine ratio >150 mg/g or presence of an abnormal number of red blood cells or granular casts in urine) AND a calculated GFR (cGFR) <40 mL/min/1.73 m 2 body surface area (modified MDRD) within 1 month of enrollment Patients with acute liver failure, hypercoagulable state or malignancy within the previous 5 years (except for nonmelanoma skin cancer cured by local resection or hepatocellular carcinoma as defined above) Patients who were seronegative for Epstein–Barr virus (subsequent study amendment) HBV, hepatitis B virus; HCV, hepatitis C virus; IV, intravenous; LT, liver transplant. \n\nBODY.MATERIALS AND METHODS.PATIENTS AND INTERVENTIONS.HD REGIMENS (GROUPS 1 AND 2):\nBelatacept 10 mg/kg was given on days 1 (day of transplant), 3 and 5, and weeks 2, 4, 6, 8, 10, 12, 16, 20 and 24 for the first 6 months and then 5 mg/kg every 4 weeks from months 7–12.\n\nBODY.MATERIALS AND METHODS.PATIENTS AND INTERVENTIONS.LD REGIMEN (GROUP 3):\nBelatacept 10 mg/kg was given on days 1, 3 and 5, and weeks 2, 4, 8 and 12 for the first 3 months and then 5 mg/kg every 4 weeks from months 3–12. The belatacept doses in the present study were different from those studied in renal transplant recipients in that an additional belatacept dose was administered on day 3 in all belatacept treatment groups, and one additional dose was allowed for excessive bleeding (>3L) or ascites loss (>4L) during the first 2 weeks following LT. Eight patients in the basiliximab + belatacept HD + MMF group, 9 patients in the belatacept HD + MMF group and 10 patients in the belatacept LD + MMF group received an additional dose of belatacept for bleeding or ascites. Tacrolimus (groups 4 and 5) was administered orally at an initial dose of 0.10–0.15 mg/kg/day after transplantation; doses were adjusted to achieve target trough concentrations between 6 and 12 ng/mL. Basiliximab induction (20 mg IV) was given to patients in group 1 only (belatacept HD regimen) on days 1 and 5. 
Patients in groups 1 through 4 received MMF at 2 g/day orally, which was later amended to 1 g/day after a case of PML was reported in a patient receiving belatacept HD + MMF. At that time, all but two had received ≥6 months of study treatment. All patients received tapered corticosteroids (Figure 1). Figure 1Patient disposition and dosing. All patients received corticosteroids on days 1–5, which was tapered to ≤10 mg/day by day 30 and ≤5 mg/day by day 90. Thereafter, withdrawal of corticosteroids was at the discretion of the investigator. HD, high dose; LD, low dose; MMF, mycophenolate mofetil. All patients received antiviral prophylaxis for cytomegalovirus (CMV) and herpes simplex virus for ≥3 months posttransplant and for 3 months upon administration of T cell–depleting agents. Additionally, all patients were to receive a 6-month course of prophylaxis for Pneumocystis jiroveci pneumonia.\n\nBODY.MATERIALS AND METHODS.OUTCOMES:\nThe primary end point was the composite incidence of AR, graft loss and death at 6 months after transplantation. AR was included in the composite end point if it was clinically suspected and proven via biopsy. All biopsies for suspected AR were assessed by a central histopathologist blinded to treatment assignment using the Banff schema for grading of LT rejection and rejection activity index for staging (19). Graft loss was defined as impairment of liver function that resulted in patient death or re-transplantation. Secondary end points included the incidence, severity, treatment and outcome of AR by 12 months; graft loss and death by 12 months; change in renal function over time as determined by measured GFR (mGFR) and calculated GFR (cGFR; MDRD methodology); incidence of HCV recurrence by 12 months, defined as histologic confirmation on liver biopsy scored by the Ishak (modified Knodell) system, a score of ≥5 of 18 on modified histological activity index grading, and a fibrosis score of ≥2 of 6 (20). Incidence rates of cardiovascular and metabolic comorbidities (i.e. posttransplant diabetes mellitus, dyslipidemia and hypertension) and the overall safety of the belatacept-based regimens were also evaluated. Liver allograft biopsies (confirmed by a central pathology laboratory) were obtained at baseline and at 12 months to assess histology. Biopsies were also performed in patients who met protocol-specified criteria for clinical suspicion of AR or HCV recurrence.\n\nBODY.MATERIALS AND METHODS.STATISTICAL ANALYSES:\nA statistical analysis plan was prepared prior to unblinding this study. The purpose of the study was to provide initial clinical experience regarding the efficacy and safety of belatacept in LT recipients and identify a belatacept-based regimen with an acceptable composite rate of AR, graft loss and death. Therefore, statistical testing was not prespecified and the study was not powered to demonstrate a difference between treatment groups. A calculated sample size of 50 patients per treatment group was determined to provide initial data regarding safety and efficacy. If the observed primary end point (composite incidence of AR, graft loss and death by 6 months) for a belatacept-based group was 40%, it was estimated that the two-sided 95% confidence interval (CI) of this incidence would extend from 26.4% to 53.6%, which would be within the general range observed in large LT studies that used CNI-based immunosuppression (21,22). Analyses were performed on the intent-to-treat population (i.e. those who were randomized and received a transplant). 
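As a quick arithmetic check of the interval quoted in the sample-size paragraph above: the protocol does not name the interval method, but the reported bounds are reproduced exactly by the standard normal-approximation (Wald) interval for a proportion with n = 50 per group and an observed rate of 40%, so that is presumably what was used.

$$
\hat{p} \pm z_{0.975}\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}
= 0.40 \pm 1.96\sqrt{\frac{0.40 \times 0.60}{50}}
\approx 0.40 \pm 0.136,
$$

i.e. approximately 26.4% to 53.6%, matching the stated two-sided 95% CI.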
Most analyses were descriptive in nature, using point estimates and 95% CIs. mGFR was assessed via iothalamate at months 2 (baseline) and 12. cGFR was assessed pretransplant and at 1, 2, 3, 6 and 12 months.\n\nBODY.RESULTS:\nA total of 260 patients at 39 centers worldwide were randomized, of whom 250 received a transplant and were treated. A total of 164 patients completed 12 months of treatment (Figure 1). Demographic and baseline characteristics were generally similar across treatment groups (Table 2). The mean age was 54 years, and 46% of patients were HCV-positive. The primary reasons for discontinuing treatment, as defined by the study investigators, were AEs and lack of efficacy in the belatacept groups and AEs in the tacrolimus groups.

Table 2. Demographic characteristics of recipients by treatment group

| Characteristic | Basiliximab + belatacept HD + MMF (n = 50) | Belatacept HD + MMF (n = 48) | Belatacept LD + MMF (n = 49) | Tac + MMF (n = 53) | Tac (n = 50) |
| --- | --- | --- | --- | --- | --- |
| Mean age, years | 54.0 | 53.4 | 55.2 | 53.0 | 54.7 |
| Sex, male, % | 78 | 71 | 63 | 87 | 84 |
| Race: White, % | 88 | 83 | 94 | 93 | 86 |
| Race: Black, % | 8 | 6 | 2 | 6 | 4 |
| Cause of ESLD: noncholestatic cirrhosis, % | 76 | 69 | 67 | 74 | 82 |
| HCV-positive, % | 46 | 48 | 43 | 47 | 48 |
| Mean MELD score | 22.6 | 21.1 | 20.6 | 24.3 | 21.6 |
| Hepatorenal syndrome, % | 10 | 10 | 10 | 13 | 6 |

ESLD, end-stage liver disease; HCV, hepatitis C virus; HD, high dose; LD, low dose; MELD, Model for End-Stage Liver Disease; MMF, mycophenolate mofetil; Tac, tacrolimus.

An imbalance in the frequency of deaths and graft losses in the belatacept LD group early in the study was noted by the DMC. Based on the DMC recommendation, randomization to the belatacept LD group was stopped. Study enrollment, however, was nearly complete (247/250 patients), and patients were permitted to remain on the belatacept LD regimen at the discretion of the investigator. Randomization into the other four treatment groups continued.
Table 3 Outcomes: composite end point (death, graft loss and AR) by 6 months (primary end point) and 12 months and AR up to month 12 Composite end point Basiliximab + belatacept HD + MMF Belatacept HD + MMF Belatacept LD + MMF Tac + MMF Tac (n = 50) (n = 50) (n = 48) (n = 49) (n = 53) 6 months (primary end point), n (%) 24 (48.0) 20 (41.7) 23 (46.9) 8 (15.1) 19 (38.0)  Difference from Tac + MMF, % (95% CI) 32.9 (16.1–49.8) 26.6 (9.6–43.5) 31.8 (14.8–48.5) — —  Difference from Tac, % (95% CI) 10.0 (−8.7–29.6) 3.7 (−15.3–23.2) 8.9 (−9.8–28.4) — —   AR, n 20 15 15 5 15   Death, n 4 4 6 1 3   Graft loss, n 2 2 6 4 3   Survival with a functioning graft, n (%) 45 (90.0) 43 (89.6) 38 (77.6) 49 (92.5) 45 (90.0)   (95% CI) (81.7–98.3) (80.9–98.2) (65.9–89.2) (85.3–99.6) (81.7–98.3) 12 months, n (%) 26 (52.0) 23 (47.9) 26 (53.1) 10 (18.9) 20 (40.0)  AR, n 22 16 16 7 15  Death, n 4 7 10 1 4  Graft loss, n 2 2 8 4 4  Survival with a functioning graft, n (%) 45 (90.0) 40 (83.3) 33 (67.3) 49 (92.5) 44 (88.0)  (95% CI) (81.7–98.3) (72.8–93.9) (54.2–80.5) (85.3–99.6) (79.0–97.0) AR (centrally read) up to month 12, n (%)  Patients with AR 22 (44.0) 16 (33.3) 16 (32.7) 7 (13.2) 15 (30.0)  Treated, n/N (%) 12/22 (54.5) 12/16 (75.0) 8/16 (50.0) 5/7 (71.4) 12/15 (80.0)   Corticosteroids only, n (%) 12 (24.0) 7 (14.6) 8 (16.3) 4 (7.5) 10 (20.0)   Initial lymphocyte-depleting therapy, n (%) 0 (0) 3 (14.6) 0 (0) 0 (0) 0 (0) Grade of AR, n (%)  I 15 (30.0) 7 (14.6) 7 (14.3) 6 (11.3) 7 (14.0)  II 7 (14.0) 8 (16.7) 8 (16.3) 1 (1.9) 6 (12.0)  III 0 (0) 1 (2.1) 1 (2.0) 0 (0) 2 (4.0) Death or graft loss in patients with AR, n 1 1 6 1 1 AR, acute rejection; CI, confidence interval; HD, high dose, LD, low dose; MMF, mycophenolate mofetil; Tac, tacrolimus. \n\nBODY.RESULTS.OUTCOMES: COMPOSITE END POINT, DEATH, GRAFT LOSS AND AR.DEATH:\nAt the 6-month primary analysis, a higher number of deaths was observed in patients treated with belatacept (basiliximab + belatacept HD [n = 4], belatacept HD [n = 4], belatacept LD [n = 6]) than in those treated with tacrolimus (tacrolimus + MMF [n = 1], tacrolimus [n = 3]). A similar pattern was also observed at 12 months posttransplant (Table 3). The causes of death reported in more than one patient included multisystem organ failure (n = 7), sepsis (n = 7), gastrointestinal bleeding (n = 2) and myocardial infarction (n = 2). In two cases, the cause of death was unknown (Table 4). Table 4 Causes of death Treatment group Age/sex Reported cause of death Study day Basiliximab + belatacept HD + MMF 57/M Sepsis 18 61/M Myocardial infarction 23 49/M Multiple system organ failure 25 56/M Sepsis 127 Belatacept HD + MMF 43/M Sepsis 24 37/M Sepsis 84 62/F Multiple system organ failure 111 47/M Gastrointestinal bleed 147 62/F Unknown 274 58/M Unknown 278 51/M PML 322 Belatacept LD + MMF 65/M Multiple system organ failure 2 59/F Colon perforation 8 48/M Acute hepatic failure 16 63/F Pulmonary failure 21 53/M Sepsis 65 50/M Gunshot injury 117 58/F Multiple system organ failure 202 55/M Multiple system organ failure 208 65/F Multiple system organ failure 339 51/M PTLD 364 Tacrolimus + MMF 54/M Sepsis 168 Tacrolimus 49/M Myocardial infraction 34 52/M Multiple system organ failure 63 46/M Gastrointestinal bleed 68 49/M Sepsis 286 F, female; HD, high dose, LD, low dose; M, male; MMF, mycophenolate mofetil; PML, progressive multifocal leukoencephalopathy; PTLD, posttransplant lymphoproliferative disease. 
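Table 3 reports each belatacept arm's difference from the tacrolimus arms with a 95% CI. As a rough cross-check, a Wald interval for a difference of two independent proportions reproduces the basiliximab + belatacept HD versus tacrolimus + MMF comparison to within rounding; the study's exact interval method is not stated, so this sketch is only illustrative.

```python
from statistics import NormalDist

def wald_diff_ci(x1, n1, x2, n2, level=0.95):
    # Normal-approximation CI for the difference of two independent proportions.
    p1, p2 = x1 / n1, x2 / n2
    z = NormalDist().inv_cdf(0.5 + level / 2)
    se = (p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2) ** 0.5
    d = p1 - p2
    return d, d - z * se, d + z * se

# Composite end point by month 6: 24/50 (basiliximab + belatacept HD) vs. 8/53 (Tac + MMF)
d, lo, hi = wald_diff_ci(24, 50, 8, 53)
# Prints: difference 32.9% (95% CI 16.0% to 49.8%); Table 3 reports 32.9 (16.1-49.8)
print(f'difference {d:.1%} (95% CI {lo:.1%} to {hi:.1%})')
```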
\n\nBODY.RESULTS.OUTCOMES: COMPOSITE END POINT, DEATH, GRAFT LOSS AND AR.GRAFT LOSS:\nBy month 6, numerically fewer cases of death-censored graft loss were reported in the belatacept HD groups (n = 1 each) than in the belatacept LD group (n = 5) and tacrolimus groups (tacrolimus + MMF [n = 3]; tacrolimus [n = 2]). At month 12, a similar pattern was observed. Across treatment groups, ∼50% of graft losses occurred within the first month posttransplant, the main causes of which were primary nonfunction (n = 5) and arterial thrombosis (n = 4).\n\nBODY.RESULTS.OUTCOMES: COMPOSITE END POINT, DEATH, GRAFT LOSS AND AR.ACUTE REJECTION:\nOverall, AR (centrally read) was more common in the belatacept groups than in the tacrolimus groups (Table 3). At the month 6 primary analysis, the percent of patients with AR in the basiliximab + belatacept HD, belatacept HD and belatacept LD groups was 40%, 31% and 31%, respectively, versus 9% and 30% for the tacrolimus + MMF and tacrolimus groups, respectively. Most episodes of AR (57–94%) occurred early after transplant (i.e. by month 3) and were mild to moderate in severity (Banff grade I or II). Among patients with AR, ∼50% received treatment for AR in the basiliximab + belatacept HD and belatacept LD groups, while 70–80% received treatment in the other three groups. The majority of patients who were treated for AR received corticosteroids only.\n\nBODY.RESULTS.OUTCOMES: COMPOSITE END POINT, DEATH, GRAFT LOSS AND AR.ANALYSIS BY HCV STATUS:\nIn HCV-positive patients, the percent with AR by month 6 in the belatacept groups ranged from 33% to 39% versus 12% and 38% for the tacrolimus + MMF and tacrolimus groups, respectively. In patients who were HCV-negative, the percent with AR by month 6 in the belatacept groups was 29–41% versus 7% and 23% for the tacrolimus + MMF and tacrolimus groups, respectively.\n\nBODY.RESULTS.SAFETY:\nSerious AEs, including serious infections and malignancies, occurred with a similar frequency across all treatment groups (Table 5). The most commonly reported serious AEs across all groups included pneumonia, sepsis, biliary strictures, cholangitis, pyrexia and acute renal failure. The most common serious infections resulted from CMV, fungal and mycobacterial pathogens. One fatal case of PML occurred ∼6 months after transplantation in a patient receiving belatacept HD and higher than recommended doses of MMF (3–4 g/day for 7.5 weeks). 
Table 5 Adverse events and events of interest at 12 months Events Patients Basiliximab + belatacept HD + MMF Belatacept HD + MMF Belatacept LD + MMF Tac + MMF Tac (n = 50) (n = 48) (n = 49) (n = 53) (n = 50) HCV at baseline, n 23 23 21 25 24 HCV recurrence, 1 n (%) 14 (60.9) 7 (30.4) 6 (28.6) 13 (52.0) 9 (37.5) Serious adverse events, n (%) 28 (56.0) 29 (60.4) 37 (75.5) 40 (75.5) 35 (70.0)  Infections and infestations 11 (22.0) 12 (25.0) 13 (26.5) 12 (22.6) 12 (24.0) Malignancies, n 1 0 2 2 2  PTLD 1 2 0 1 0 0 All infections and infestations, n (%) 32 (64.0) 39 (81.3) 30 (61.2) 31 (58.5) 29 (58.0)  Bacterial 5 (10.0) 11 (22.9) 11 (22.4) 6 (11.3) 13 (26.0)  Fungal infections 6 (12.0) 9 (18.8) 14 (28.6) 6 (11.3) 5 (10.0)  Viral infections 3 10 (20.0) 11 (22.9) 14 (28.6) 9 (17.0) 7 (14.0)  CMV 5 (10.0) 4 (8.3) 10 (20.4) 4 (7.5) 1 (2.0)  Herpes 3 (6.0) 3 (6.3) 4 (8.2) 3 (5.7) 2 (4.0) Adverse events of nervous system disorders, n (%) 23 (46.0) 19 (39.6) 15 (30.6) 34 (64.2) 34 (68.0)  Headache 10 (20.0) 8 (16.7) 5 (10.2) 14 (26.4) 14 (28.0)  Tremor 2 (4.0) 2 (4.2) 4 (8.2) 17 (32.1) 13 (26.0) Adverse events of renal and urinary disorders, n (%)  Renal failure 1 (2.0) 4 (8.3) 3 (6.1) 5 (9.4) 14 (28.0)  Acute renal failure 2 (4.0) 5 (10.4) 2 (4.1) 13 (24.5) 8 (16.0)  Renal impairment 0 (0) 1 (2.1) 0 (0) 1 (1.9) 8 (16.0) CMV, cytomegalovirus; HAI, histological activity index; HCV, hepatitis C virus; HD, high dose, LD, low dose; MMF, mycophenolate mofetil; PML, progressive multifocal leukoencephalopathy; PTLD, posttransplant lymphoproliferative disease; Tac, tacrolimus. 1 HCV recurrence confirmed histologically by central pathologist; modified HAI grading score ≥5/18 and fibrosis score ≥2. 2 PTLD case occurred after month 12. 3 One case of PML in belatacept HD + MMF. Malignancies were reported in 2–4% of belatacept patients and in 4% of patients receiving tacrolimus-based treatments. Two cases of PTLD were reported in belatacept-treated patients (one in the belatacept LD group at 11 months posttransplant and one in the basiliximab + belatacept HD group after month 12), both of which involved the liver. Neither case involved the CNS; both patients were EBV-seropositive at the time of transplantation. One patient died as a result of PTLD; the other was treated with chemotherapy and is alive with a functioning graft. Overall, most infections were mild or moderate in severity (Table 5); urinary tract infections were the most common. Patients in the belatacept LD group had numerically higher rates of viral and fungal infections (Table 5), most of which were nonserious. A similar proportion of patients across all treatment groups experienced at least one AE. AEs related to neurotoxicity (i.e. headache and tremor) occurred less frequently in patients receiving belatacept-based regimens versus tacrolimus-based regimens (Table 5).\n\nBODY.RESULTS.HCV RECURRENCE:\nApproximately 72% (153/212) of patients with a functioning graft at month 12 had biopsies available for evaluation. Recurrence of HCV among patients who were HCV-positive at baseline was higher in the basiliximab + belatacept HD (61%) and tacrolimus + MMF (52%) groups versus the other treatment groups (29–38%) (Table 5).\n\nBODY.RESULTS.RENAL FUNCTION:\nMean cGFR at baseline was 66–80 mL/min. The differences in cGFR between the belatacept-treated patients and tacrolimus-treated patients were observed as early as month 1 and persisted through month 12 (15–34 mL/min/1.73 m2 higher in each belatacept group vs. 
the tacrolimus groups at month 12) (Figure 2). By month 12 in the intent-to-treat analysis, mean mGFR was 89–93 mL/min in the belatacept HD groups and 71–75 mL/min in the belatacept LD and tacrolimus groups. An on-treatment analysis revealed similar findings at 12 months (mean cGFR was 88 mL/min for basiliximab + belatacept HD; 100 mL/min for belatacept HD; 94 mL/min for belatacept LD; 67 mL/min for tacrolimus + MMF and 62 mL/min for tacrolimus). The proportion of patients with >10 mL/min improvement in cGFR from baseline to month 12 was 47–65% in the belatacept groups versus 12–27% in the tacrolimus groups. Figure 2Mean calculated GFR (MDRD methodology) over time (intent-to-treat analysis, as observed data with no imputation for missing values). All calculated GFR >200 were truncated at 200 mL/min. HD, high dose; LD, low dose; MMF, mycophenolate mofetil.\n\nBODY.RESULTS.CARDIOVASCULAR/METABOLIC PROFILE AT 12 MONTHS:\nSystolic and diastolic blood pressures at 12 months were lower among patients receiving belatacept (121–127 mmHg and 75–77 mmHg, respectively) versus those receiving tacrolimus-based regimens (137–138 mmHg and 80 mmHg, respectively). Serum lipids increased from baseline in all treatment groups, with triglyceride levels increasing more in the tacrolimus groups and LDL levels increasing more in the belatacept groups (data not shown). The incidence of new-onset diabetes mellitus (i.e. ≥30 days of treated diabetes in patients without a diagnosis of diabetes before randomization) was somewhat lower in the belatacept HD and LD groups (16% and 14%, respectively) versus patients receiving basiliximab + belatacept HD (36%), tacrolimus + MMF (24%), and tacrolimus (38%).\n\nBODY.RESULTS.PHARMACOKINETICS:\nThe pharmacokinetics of belatacept in LT recipients was similar to those in kidney transplantation recipients receiving a similar dosing regimen (23). The pattern of minimum belatacept concentration values with the HD and LD regimens in the early posttransplant period were consistent with the differences between the two regimens. Steady-state trough serum concentrations of belatacept in the maintenance phase (during 5-mg/kg dosing) were similar between the HD and LD regimens. Mean serum trough concentrations of belatacept and tacrolimus, and mean daily MMF and steroid doses are shown in Figure 3. Some of the patients who received additional belatacept to compensate for fluid loss went on to experience death or graft loss, but results were mixed and potentially confounded by intra- and postoperative complications. In the overall study population, there was no association between trough belatacept levels and death or graft loss. Figure 3Serum trough levels and mean daily dosing at specified time points. Concentrations of belatacept in human serum were determined using a validated enzyme-linked immunosorbent assay (ELISA) by PPD (Richmond, VA). (A) Mean serum trough levels of belatacept. (B) Mean daily dose of MMF. (C) Mean serum trough levels of tacrolimus. (D) Mean daily dose of steroids. HD, high dose; LD, low dose; M, month; MMF, mycophenolate mofetil; W, week.\n\nBODY.RESULTS.DONOR-SPECIFIC ANTIBODIES:\nAll treatment groups had patients who tested positive for donor-specific antibodies (DSA) at baseline (Table 6). By 12 months after transplantation, the number of patients with DSA decreased in two belatacept groups (HD and LD), remained the same in two groups (basiliximab + belatacept HD and tacrolimus), and increased in one group (tacrolimus + MMF) (Table 6). 
There were few cases of de novo DSA in all groups. Table 6 DSA: number of patients with detectable antidonor HLA antibodies Patients with DSA/patients in analysis, n (%) Basiliximab + belatacept HD + MMF Belatacept HD + MMF Belatacept LD + MMF Tac + MMF Tac Baseline DSA (pretransplant) 5/48 (10) 5/46 (11) 8/48 (17) 3/50 (6) 4/49 (8) Total DSA by month 12 5/47 (11) 3/47 (6) 4/43 (9) 8/52 (15) 4/45 (9) De novo DSA 1 by month 12 3/47 (6) 1/47 (2) 1/43 (2) 6/52 (12) 3/45 (7) DSA, donor-specific antibodies; HD, high dose, LD, low dose; MMF, mycophenolate mofetil; Tac, tacrolimus. 1 De novo DSA was defined as appearance of antibody to a new HLA specificity following transplantation. \n\nBODY.RESULTS.LONG-TERM EXTENSION:\nOf the 164 patients who completed 1 year of treatment, 145 entered the LTE phase (Figure 4). During the LTE, additional deaths and graft losses were noted in the belatacept HD group (n = 4). Because of the cumulative number of deaths and graft losses in two of the three belatacept groups relative to the tacrolimus groups, the DMC recommended termination of the LTE study. Although a causal relationship to belatacept could not be clearly established, it could also not be rejected. All patients who were currently on belatacept were switched to standard-of-care immunosuppression. Figure 4Patient disposition in LTE. HD, high dose; LD, low dose; LTE, long-term extension; MMF, mycophenolate mofetil. Based on these findings, a comprehensive assessment of the deaths observed in belatacept-treated patients was conducted, and a series of post hoc analyses were performed. Factors evaluated included demographic characteristics, donor age, baseline disease status, mean Model for End-Stage Liver Disease score, and belatacept trough concentrations. Deaths and graft losses were not evenly distributed among the centers. No demographic or disease-related risk factors were predictive of death associated with belatacept, and similarly, no clear explanation for the higher rates of death observed in the belatacept HD and LD groups was identified. Death and graft loss in the subset of patients who were HCV-positive at the time of transplantation were numerically higher in the belatacept groups at month 6 but comparable with the tacrolimus groups at month 12.\n\nBODY.DISCUSSION:\nThis phase II, randomized, multicenter study was the first clinical trial to explore the use of belatacept in LT recipients. The primary objective was to evaluate the safety and efficacy of belatacept relative to tacrolimus, as reflected by the incidence of AR, graft loss and death in de novo LT recipients. At the conclusion of this study, two of three belatacept groups had higher rates of death and graft loss relative to the standard-of-care control group tacrolimus + MMF. All three belatacept groups also had higher rates of AR, and there was an increase in viral and fungal infections, the majority of which were not serious. Two cases of PTLD and one case of PML were reported in belatacept-treated patients. The two belatacept HD groups had substantially better mGFR than the remaining three groups, while all three belatacept groups had better cGFR. There was also evidence of fewer neurotoxicity events in the belatacept groups. A high rate of discontinuation was observed in the belatacept treatment groups, such that by the time of the 12-month assessment, ∼40% of patients had discontinued assigned belatacept therapy. 
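The renal-function comparisons above rest on GFR calculated by the MDRD methodology, but the equation itself is not printed in the text. The sketch below therefore assumes the widely used four-variable MDRD formula; the study's actual implementation and creatinine calibration may differ, and the example patient values are hypothetical.

```python
def mdrd_egfr(scr_mg_dl, age_years, female, black):
    # Assumed four-variable MDRD form (IDMS-traceable constant 175):
    # 175 * Scr^-1.154 * age^-0.203 * 0.742 (if female) * 1.212 (if Black),
    # with serum creatinine in mg/dL and the result in mL/min/1.73 m^2.
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Hypothetical recipient: 54-year-old non-Black male, serum creatinine 1.1 mg/dL
print(round(mdrd_egfr(1.1, 54, female=False, black=False)))   # ~70 mL/min/1.73 m^2
```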
In contrast to the findings in renal transplant recipients, which showed comparable patient and graft survival relative to the CNI cyclosporine (13–17), this exploratory study in LT recipients demonstrated increased rates of death and graft loss in two of three belatacept groups compared with tacrolimus (different CNI comparator). Multiple factors may account for differences in survival outcomes between kidney transplant and LT recipients, including organ-specific aspects of alloimmune responses (24,25). In the present study, it is notable that basiliximab did not lead to a lower rate of rejection versus placebo in the two belatacept HD groups. There are several hypotheses, including the role of CD28-independent T cells and/or inhibition of regulatory pathways that could explain this observation (26). Further studies are required to understand the mechanism and contribution of these factors to AR in the setting of belatacept therapy. Important nonimmunologic differences include the degree of illness at the time of transplantation, extent of surgical trauma and massive perioperative fluid shifts. Doses of belatacept in this study were higher than those used in the kidney transplant trials, with the assumption that perioperative fluid shifts and fluid losses would necessitate additional belatacept. There was no association between trough belatacept levels and death or graft loss. This was expected, given that there is no exposure–response relationship in the kidney transplant setting, such that therapeutic drug monitoring is not needed with belatacept treatment. While preservation of renal function was a key finding in the pivotal phase III clinical trials of belatacept in renal transplant recipients, the magnitude of the renal function difference in the current study highlights the capacity for native kidneys to recover substantial function following LT in the absence of CNI exposure. The tendency toward more viral and fungal infections observed in belatacept-treated patients raises the possibility of overimmunosuppression in some patients, as observed with regimens that include MMF (27). This finding is somewhat paradoxical, however, given the higher rates of AR in the belatacept groups, which would suggest underimmunosuppression or immunomodulatory effects not yet understood in the context of LT. It is also possible that the management of AR placed patients at risk for subsequent infectious complications. The standard treatment for rejection may have been more intense than was needed in belatacept-treated patients, suggesting that overimmunosuppression may be part of the explanation for the infectious complications. The limited number of patients and events preclude the ability to definitively interpret the relative contributions of overimmunosuppression versus underimmunosuppression. LT recipients are an inherently complex population, with diverse and serious underlying medical concerns that have the potential to adversely affect posttransplant outcomes. Distinguishing the contribution of belatacept alone or an interaction between belatacept and such factors in the context of this phase II study is not feasible. An additional limitation of the study includes the open-label design (belatacept vs. tacrolimus assignments). 
Findings from this phase II study did not allow for the identification of a safe and effective dose or regimen for further development of belatacept that would fulfill the substantial need for nonnephrotoxic immunosuppressive therapy in LT recipients.\n\n**Question:** Compared to tacrolimus groups what was the result of belatacept groups on the proportion of patients who met the primary end point (composite of acute rejection, graft loss, death by month 6)?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ]
significantly increased
899
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Evaluation of the functional efficacy of an antioxidative probiotic in healthy volunteers\n\n ABSTRACT.BACKGROUND:\nIn persons without clinical symptom it is difficult to assess an impact of probiotics regarding its effect on health. We evaluated the functional efficacy of the probiotic Lactobacillus fermentum ME-3 in healthy volunteers by measuring the influence of two different formulations on intestinal lactoflora, fecal recovery of the probiotic strain and oxidative stress markers of blood and urine after 3 weeks consumption.\n\nABSTRACT.METHODS:\nTwo 3-week healthy volunteer trials were performed. Open placebo controlled (OPC) study participants (n = 21) consumed either goat milk or by L. fermentum ME-3 fermented goat milk (daily dose 11.8 log CFU (Colony Forming Units). Double blind randomised placebo controlled (DBRP) study participants (n = 24) received either capsules with L. fermentum ME-3 (daily of dose 9.2 CFU) or placebo capsules. The faecal lactoflora composition, faecal ME-3 recovery, effect of the consumption on intestinal lactoflora, and oxidative stress markers of blood (total antioxidative activity; total antioxidative status and glutathione red-ox ratio) was measured.\n\nABSTRACT.RESULTS:\nME-3 was well tolerated and a significant increase in total faecal lactobacilli yet no predominance of ME-3 was detected in all study groups. Faecal recovery of ME-3 was documented by molecular methods only in fermented milk group, however the significant improvement of blood TAA (Total Antioxidative Activity) and TAS (Total Antioxidative Status) indices was seen both in case of fermented goat milk and capsules\", yet glutathione re-ox ratio values decreased only in case of fermented by ME-3 goat milk.\n\nABSTRACT.CONCLUSION:\nThe functional efficacy of both consumed formulations of an antioxidative probiotic L. fermentum ME-3 is proved by the increase of the intestinal lactobacilli counts providing putative defence against enteric infections and by reduction of the oxidative stress indices of blood and urine of healthy volunteers. In non-diseased host the probiotic health claims can be assessed by improvement of some measurable laboratory indices of well-established physiological functions of host, e.g. markers of antioxidative defence system.\n\nBODY.BACKGROUND:\nProbiotics are defined as live microbial food supplements, which beneficially influence human health [1,2]. Widely accepted probiotics contain different lactic acid producing bacteria of human origin: bifidobacteria, lactobacilli or enterococci. Nowadays the concept of functional foods, incl. probiotic food and dietary supplements implies to their ability to beneficially influence body functions in order to improve the state of well-being and health and reduce the risk of disease [2,3]. 
The important areas of human physiology that are relevant to functional food science according to ILSI (International Life Science Institute) and FUFOSE (The European Commission Concerted Action on Functional Food Science in Europe) are, besides others, the modulation of basic metabolic processes and defence against high-grade oxidative stress [4,5]. Human nutrition is clearly associated with oxidative metabolism, which, besides the production of energy, is involved in a number of vital functions of the host. For example, under physiological conditions the reactive species (including peroxyl radicals, nitric oxide radical, superoxide anion) play a crucial role in the primary immune defense of the human body by phagocytic cells against harmful microorganisms [6,7]. On the other hand, a prolonged excess of reactive species is highly damaging for the host biomolecules and cells, resulting in dysbalance of the functional antioxidative network of the organism and leading to substantial escalation of pathological inflammation [8]. To our knowledge, no systematic studies have been performed to prove the functional efficacy of different formulations of a probiotic on the antioxidative defence system of a healthy human. In our previous study, Lactobacillus fermentum ME-3 (DSM 14241) [9-11] expressed strong antimicrobial activity against Gram-positive and Gram-negative entero- and uropathogens [12,13]. The cells and cell lysate of L. fermentum ME-3 possessed substantial antioxidative potency [14]. In an animal experiment ME-3 suppressed the excessive oxidative stress reaction caused by Salmonella infection in intestinal mucosa and thus improved the gut mucosal antioxidative status [15]. The antioxidative effect of L. fermentum ME-3 on human body oxidative stress markers was confirmed by our pilot study with fermented goat milk [16]. The aim of the present study was to evaluate the functional efficacy of the probiotic strain L. fermentum ME-3 in the human gastrointestinal tract (GIT) of healthy volunteers. The faecal recovery and the effect of two different formulations on total faecal lactoflora and oxidative stress markers of blood and urine were compared after 3 weeks of consumption.\n\nBODY.METHODS.FORMULATIONS:\nThe efficacy of two different formulations (experimental fermented goat milk and probiotic capsules) on the human body oxidative stress markers was evaluated. Lactobacillus fermentum ME-3, a probiotic strain of healthy human intestinal origin [17], has been identified by biochemical and molecular methods [9]. The patent application has been submitted to the Estonian Patent Agency (Application No. 0356/01PV) as well as to the International Bureau of World Intellectual Property Organization (WIPO) (Application No. WO03002131) [11]. L. fermentum ME-3 was used as freeze-dried powder in capsulated form and in fermented milk.\n\nBODY.METHODS.FORMULATIONS.CAPSULES:\nGelatine coated capsules were manufactured by the Tallinn Pharmaceutical Company. The freshly prepared probiotic capsules contained 9.0 logs CFU of L. fermentum ME-3 per capsule in addition to 250 mg of saccharose and microcellulose. Identical placebo capsules contained only saccharose and microcellulose. All capsules were stored at +4°C.\n\nBODY.METHODS.FORMULATIONS.SURVIVAL OF ME-3 IN CAPSULE:\nSurvival of ME-3 in capsule was monitored during 12 months at +4°C. The content of one capsule was dissolved aseptically in 2 ml of 0.9% NaCl solution. The suspension was vortexed and serially diluted, and 0.1 ml was seeded on de Man-Rogosa-Sharpe (MRS) agar medium (OXOID, U.K.) 
and incubated 48 hours at 37°C microaerobically (10% CO2). The number of colonies was counted and the viable cell count in capsule was calculated.\n\nBODY.METHODS.FORMULATIONS.EXPERIMENTAL FERMENTED MILK:\nThree different lots of experimental fermented goat milk was prepared for the 3-week trial with healthy volunteers in order to establish the health effects of ME-3 consumption. The study group was supplied with fresh product once a week. Experimental fermented milk was prepared as described previously [16] by combining the probiotic strain with two supportive lactobacilli cultures L. plantarum LB-4 and L. buchneri S-15. L. buchneri strain S1-5 decreased the specific taste of the goat milk. L. plantarum LB-4 was included as a strong producer of exopolysaccharides, which gives the fermented milk a cream-like consistence and delightful acidity. The goat milk was inoculated with 2% mixture of Lactobacillus strains and incubated at 37°C for 24 hours. Theproduct, ready to use, was cooled and stored at 4°C.\n\nBODY.METHODS.FORMULATIONS.SURVIVAL OF L. FERMENTUM ME-3 IN FERMENTED GOAT MILK:\nTo measure the viable cell count of ME-3 in fermented goat milk, samples were taken at the end of fermentation (before cooling the product), and after 24 h, 32 h, 48 h and 7 days from the preparation, when the product was stored at 4°C. The amount of 0.5 ml of the fermented milk was serially diluted in saline and plated on MRS agar medium and incubated for 48 h at 37°C in microaerobic conditions.\n\nBODY.METHODS.DESIGN OF HUMAN VOLUNTEER TRIALS:\nTwo healthy volunteer (n 45) trials, particularly open placebo controlled (OPC) study and double blind randomized placebo controlled (DBRP) study were carried on to evaluate the functional efficacy of L. fermentum ME-3 in the human body. The inclusion criteria included the wish to participate, no known health problems, and no medical conditions requiring drug therapy, no other yogurts or no special diets. The subjects with a history of GIT disease, food allergy and acute infection, use of any antimicrobial agent within the last month or use of any regular concomitant medication including antioxidant vitamins and anti-inflammatory non-steroidal drugs were excluded. The members of the trial were daily questioned about their general welfare, intestinal function (general welfare, gut gas production, stool frequency) and putative adverse effects. The withdrawal criteria from the trials included acute infections during the study. Reasons for dropout were the unwillingness to proceed with the study or relocation to new area. The blood samples (6 ml) from the antecubital vein, faecal and urine samples were collected before and at the end of all clinical trials. Participants of all trials gave informed consent to the study protocols approved by the Ethical Committee of Tartu University.\n\nBODY.METHODS.DESIGN OF HUMAN VOLUNTEER TRIALS.OPEN PLACEBO CONTROLLED FERMENTED GOAT MILK TRIAL:\nThe study participants were 5 men and 16 women, mean age 50 years (range 35–60). During three weeks of the trial the study group (3 males and 13 females) consumed daily 150 ml fermented goat milk. The daily dose of probiotic Lactobacillus strain was 11.2 to 11.8 log CFU per person. The control group (1 male and 4 females) consumed the same dose of fresh goat milk.\n\nBODY.METHODS.DESIGN OF HUMAN VOLUNTEER TRIALS.PROBIOTIC CAPSULE TRIAL:\nA DBRP study (ISRCTN 53164826) was carried out as follows. 
The study group consisted of 15 men and 9 women, mean age 52 years (range 40–60) allocated according to their wish to participate and randomly divided by an independent person and computer program for two groups. The study group members (8 males and 4 females) took three probiotic containing capsules (8.4 log CFU per capsule) two times daily (the daily dose 9.2 log CFU) during three weeks. The placebo group (7 males and 5 females) received identical capsules without the probiotic strain. Faecal samples of all participants to assess change in faecal lactoflora and the persistence of the ingested probiotic strain were collected before and at the end of trial. Several laboratory indices of blood and urine were measured before and after the consumption of ME-3. Here we report on changes in human body oxidative stress markers as total antioxidative activity (TAA), total antioxidative status (TAS) and glutathione red-ox ratio (GSH/GSSG) from blood serum.\n\nBODY.METHODS.MICROBIOLOGICAL ANALYSES OF FAECES:\nThe total count of lactobacilli and the count of L. fermentum were evaluated in faecal samples. The faecal samples were collected at day 0 and 21 in both trials. Samples were kept at -80°C before analyzed. Serial dilutions (10-2–10-9) of the weighed faecal samples were prepared with phosphate buffer (pH 7.2) and 0.05 ml of aliquots was seeded onto MRS agar medium [17]. The plates were incubated at 37°C for 4 days microaerobically in 10% CO2 environment (incubator IG 150, Jouan, France). The catalase negative colonies were selected on the basis of typical for LAB colony morphology, cells microscopy and Gram staining. The count of Lactobacillus species was expressed in log10 colony forming units per gram faeces (log10 CFU/g) and percentage (relative share) in the total count of lactobacilli. The detection level of lactobacilli was a 3.0 log CFU/g faeces. The relative amount of L. fermentum, colonizing the gastrointestinal tract of persons in the study groups was expressed as a proportion of the total count (%), using the Bioquant program [18]). The program gives output data for every microorganism as an absolute count (log10 CFU/g) and their percentage in the total count with its normal values.\n\nBODY.METHODS.AP-PCR TYPING:\nThe putative ME-3 isolates were typed by arbitrarily primed polymerase chain reaction (AP-PCR). Genomic DNA was extracted from 24 h old cultures, cultivated on MRS agar microaerobically with the QIAamp DNA Mini Kit 50 (QIAGEN GmbH., Hilden, Germany) according to the manufacturers instructions. AP-PCR typing was done with two primers: ERIC1R (5'-ATGTAAGCTCCT GGGGATTCAC-3') and ERIC2 (5'-AAGTAAGTGACTGGGGTGAGCG -3') (DNA Technology A/S, Aarhus, Denmark). A 30 μl volume of reaction mixture consisted of 10 × PCR buffer (Fermentas, Vilnius, Lithuania), 2.5 mM MgCl2 (Fermentas, Vilnius, Lithuania), 200 μM deoxynucleoside triphosphate mixture (dATP, dGTP, dTTP and dCTP, Amersham Pharmacia Biotech, Freiburg, Germany) 0, 60 μg of each primer and 2.5 U Taq DNA Polymerase (Fermentas, Vilnius, Lithuania,) and 5 μl of extracted DNA according to Matsumiya et al. [19]. The PCR mixture was subjected to thermal cycling 35 cycles of denaturation at 95°C for 1 min, annealing at 35°C for 1 min, and extension at 74°C for 2 min, with a final extension at 74°C for 5 min with the PTC-200 thermal cycler (Eppendorf AG, Hamburg, Germany). 
The PCR products were separated by electrophoresis in a horizontal 2% agarose gel containing 0.1 μl/ml ethidium bromide in Tris-acetic acid-EDTA (TAE) buffer (40 mM Tris, 20 mM boric acid, 1 mM EDTA, pH 8.3) (Bio-Rad Laboratories, Hercules, USA) at constant voltage of 120 V. A 1 kb ladder (GeneRuler, Fermentas, Vilnius, Lithuania) was used as a base pair size marker. The banding patterns of isolates were visualized with UV light and compared with that of L. fermentum ME-3 strain.\n\nBODY.METHODS.MEASUREMENT OF HUMAN BODY OXIDATIVE STRESS STATUS:\nBlood serum was analysed for total antioxidative activity TAA, total antioxidative status TAS and glutathione red/ox ratio (oxidized glutathione and reduced glutathione, GSSG/GSH). TAA of the serum was assessed by the linolenic acid test (LA-test) described previously [16]. This test evaluates the ability of the sample to inhibit lipid peroxidation. TAS of the serum was measured with a commercially available kit (TAS, Randox Laboratories Ltd. Ardmore, UK) as described elsewhere [16], water-soluble vitamin E (Trolox) serving as a standard. This method is based on the inhibition of the absorbance of the ferrylmyoglobin radicals of 2,2'-azinobis-ethylbenzothiazoline 6-sulfonate (ABTS+) generated by activation of metmyoglobin peroxidase with H2O2. The cellular oxidative stress markers as total glutathione and oxidized glutathione were measured using the method of Griffith [20] as described elsewhere [16]. The glutathione content was calculated on the basis of a standard curve generated with known concentration of glutathione. Amount of GSH (μg/ml) was calculated as a difference between the total glutathione and GSSG (total glutathione – GSSG). The glutathione red/ox ratio was expressed as GSSG/GSH.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nThe computer program Sigma Stat for Windows 2.0 (Jandel Corporation, USA) was applied. The counts of faecal lactoflora were compared by using Student's t-test and Mann-Whitney rank sum test. Changes in oxidative stress markers of blood sera (TAA, TAS and glutathione red-ox ratio) were evaluated by Student's t-test, paired t-test and Mann-Whitney rank sum test. The choice of tests was made automatically according to the distribution of the data. Both microbial and biochemical markers were given as mean and standard deviation. One-way ANOVA test was performed to compare the effect of different formulation on TAA, TAS and faecal lactoflora parameters. Differences were considered statistically significant if the value was p < 0.05.\n\nBODY.RESULTS.SURVIVAL OF ME-3 IN FORMULATIONS:\nIn capsule after approximately 1-log drop after one week from the production of the capsules, the viable count of the probiotic strain remained stable at the level of 8.4 log CFU per capsule. Additional results have shown at +4°C the stability of the freeze-dried capsulated culture at least 17 months from the production. In fermented goat-milk the cell count of the probiotic strain varied insignificantly from 9.0 to 9.7 log CFU/ml from one preparation to the other. The viable count of ME-3 in the fermented goat milk was found to remain stable at least during 7 days of storage at 4°C.\n\nBODY.RESULTS.HUMAN VOLUNTEER TRIALS:\nNo dropouts were registered during volunteer trials, yet one participant was withdrawn from the probiotic capsule trial due to acute respiratory viral infection. 
Besides, no adverse effects on general welfare or changes in GI functionality were observed during the trial.\n\nBODY.RESULTS.HUMAN VOLUNTEER TRIALS.CHANGES IN TOTAL LAB COUNT:\nThe consumption of both ME-3 fermented milk and ME-3 capsule significantly increased the total count of lactobacilli in faeces as compared to the initial levels (Fig. 1). In contrast, in the group of volunteers consuming non-fermented goat-milk there was even a decrease in total LAB counts during the 3-week trial, and no changes were found in the capsule placebo group. Figure 1. Increase of total fecal counts of lactobacilli in healthy volunteers consuming ME-3 in fermented goat milk and probiotic capsule. 1 – Day 0, 2 – Day 21 Significantly different from pre-treatment values (Student's t-test): * p < 0.05; Significantly different from control (Student's t-test): ‡ p = 0.01\n\nBODY.RESULTS.HUMAN VOLUNTEER TRIALS.RECOVERY OF THE PROBIOTIC STRAIN:\nIn the goat milk group L. fermentum as a species appeared in fecal samples of all individuals (n = 16) after consumption of fermented goat milk (Table 1). The AP-PCR confirmed the recovery of ME-3 in the faeces of all study group members (Fig. 2). However, in different trials the administration of the ME-3 strain did not lead to the predominance of Lactobacillus fermentum species (Table 1). In the probiotic capsule trial the strain ME-3 was not detectable among the L. fermentum isolates by AP-PCR. Table 1 Changes in fecal recovery of L. fermentum during healthy human volunteer trials L. fermentum Groups * Prevalence (%) † Count (log10) ‡ Proportion (%) Day 0 Day 21 Day 0 Day 21 Day 0 Day 21 Goat milk trial, ME-3 (n = 16) 25 (4/16) 100 (16/16) 7.0 ± 0.7 7.3 ± 1.4** 21 13 Control (n = 5) - 20 (1/5) - 3.6 - 28 Capsule trial, ME-3 (n = 11) 16.7 (2/12) 33.3 (2/12) 4.3 ± 0.5 5.8 ± 1.6 4 9 Placebo (n = 12) 25 (3/12) 16.7 (2/12) 6.3 ± 2.5 8.0 ± 1.6 11 19 * Percentage of subjects with fecal L. fermentum inside the group ** Significantly different from the pre-treatment values (paired t-test): p < 0.001 † Median value ± SD ‡ Proportion of L. fermentum among fecal LAB Figure 2. Confirmation of the survival of L. fermentum ME-3 in the GIT in subjects receiving ME-3 fermented goat milk by AP-PCR in a horizontal 2% agarose gel. a) From the left: M – DNA 1 kb Ladder, Line 1 – L. brevis ATCC 14869, Line 2 – L. buchneri ATCC 4005, Line 3 – L. reuteri DSM 20016, Line 4 – L. fermentum ATCC 14931, Line 5 – L. fermentum ME-3 b) From the left: M – molecular weight marker, Line 1 ...16 – ME-3 like profiles from feces of goat milk trial study group participants\n\nBODY.RESULTS.HUMAN VOLUNTEER TRIALS.ANTIOXIDATIVE HEALTH EFFECT OF ME-3:\nThe positive effect on the blood oxidative stress markers TAA and TAS was seen in the case of both formulations (Fig. 3). In particular, the additive increase in the goat milk group was 6% and 9%, respectively, as compared to the control group, but only 4% for TAA and 2.5% for TAS in the probiotic capsule study group as compared to placebo. Figure 3. Effect of ME-3 consumption in fermented goat milk and capsules on human blood oxidative stress markers a) TAA (%) and b) TAS (mmol/l). 1 – Day 0, 2 – Day 21 Significantly different from pre-treatment values: *p < 0.05 (paired t-test); **p ≤ 0.01 (Student's t-test and paired t-test); ME-3 goat milk effect different from the effect of the ME-3 in capsule-form (ANOVA): ‡p ≤ 0.001 The effect of goat-milk consumption on the TAA and TAS values was significantly higher (p < 0.001) than that of the consumption of the capsulated probiotic (Fig. 3). 
The decrease of the glutathione red-ox ratio was significant in both groups: the study group (from 0.15 ± 0.01 to 0.11 ± 0.04 μg/ml, p < 0.01) and the control group (from 0.14 ± 0.03 to 0.11 ± 0.02 μg/ml, p < 0.01) in the goat milk trial. When the probiotic was consumed in capsulated form, no significant decrease was noticed in the glutathione red-ox ratio.\n\nBODY.DISCUSSION:\nOur aim was to evaluate the functional efficacy of the antimicrobial and antioxidative probiotic L. fermentum ME-3 in a normal population with variable food intake. First of all, the safety of the L. fermentum strain ME-3 was confirmed, as no adverse side effects were registered in volunteers. Even relatively high (>10^11 CFU) doses of consumed ME-3 had no negative impact on the hosts' general well-being. Lactobacillus fermentum as a species, used in various food applications, has a well-established history of safe use and is evaluated as GRAS according to the Food and Drug Administration of the USA [21]. Second, a clear improvement of laboratory indices of the antioxidative defence system of a healthy host was documented, using both formulations, i.e. goat-milk fermented by L. fermentum ME-3 and probiotic capsules. This effect was simultaneous with the increase of intestinal lactoflora of healthy volunteers even without the necessity for faecal recovery of the strain. In the human population, persons without clinical symptoms still have a quite different health status, including stability, capacity and potency of antioxidative defence to counteract sufficiently the adverse effects caused by oxidative stress [7]. If a probiotic is able to exhibit a positive functionality on oxidative stress-related indices, it helps both to stabilize and promote the potency of the whole body antioxidative defence system in subclinical situations without disease symptoms. That in turn may have an impact on lowering the risk of atherosclerotic damage of blood vessels associated with several cardiovascular and neurodegenerative diseases [22-24]. In our study of healthy volunteers for validation of the antioxidative functionality of the probiotic, four well-known oxidative stress markers of blood were chosen [25-27]. The state of the lipid fraction (including also LDL) in the antioxidative defence system of the blood is evaluated by TAA. TAS on the other hand reflects more the antioxidativity of the water-soluble fraction of the human blood. Among the measured blood sera markers both the TAA and TAS values were also improved in the two different study groups. However, a significantly lower improvement of TAA and TAS values was found for the capsule than for the fermented goat-milk, for which the recovery of the strain was confirmed by AP-PCR. Similarly, the reduction of the glutathione red-ox ratio was detected after the consumption of goat-milk fermented by ME-3 but not with the capsule. The crucial non-enzymatic cellular antioxidant is GSH [28], present in the millimolar range mainly in the red blood cells, liver, pancreas, kidneys, spleen, eyes, lungs and intestinal cells [29]. The oxidized form of glutathione becomes toxic even at low concentrations, and therefore in the cells the glutathione red-ox ratio is kept as low as possible. In the case of inflammation this balance is shifted towards the oxidized form, indicating non-physiological intracellular oxidative stress. Thus, our study shows that there is a good association between the mode of formulation of the probiotic and the expression of its functional properties inside the healthy host. 
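The glutathione results discussed above follow the definition given in the Methods: GSH is obtained as total glutathione minus GSSG, and the red-ox ratio is expressed as GSSG/GSH. A minimal sketch of that arithmetic; the input concentrations are hypothetical and only chosen to be consistent with the reported ratio shift, not taken from the study's raw data.

```python
def glutathione_redox_ratio(total_glutathione, gssg):
    # GSH is derived as total glutathione minus GSSG (same units); ratio = GSSG/GSH.
    gsh = total_glutathione - gssg
    if gsh <= 0:
        raise ValueError('GSSG must be smaller than total glutathione')
    return gssg / gsh

# Hypothetical concentrations chosen to land near the reported ratio shift (~0.15 -> ~0.11)
print(round(glutathione_redox_ratio(total_glutathione=9.2, gssg=1.2), 2))    # 0.15
print(round(glutathione_redox_ratio(total_glutathione=10.0, gssg=1.0), 2))   # 0.11
```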
The antioxidative potential of the food supplement containing ME-3 was excellent, as reisolates of the strain from the capsule expressed significantly higher TAA in comparison with the base values of the strain in vitro (data not shown). Unexpectedly, the shifts in the antioxidativity markers in blood serum of participants of the probiotic capsule trial were less pronounced in comparison with ME-3 fermented goat milk. A likely explanation for the more pronounced positive shifts in oxidative stress markers of volunteers of the fermented goat milk trial is the synergistic effect of the probiotic and the substrate. Milk is not just a carrier for the probiotic Lactobacillus strain, but contains natural \"lactogenic\" factors like lactose, minerals, vitamins and other components that enhance the metabolic activity of the ingested probiotic strain in the GIT. Both fermented goat milk and goat milk elevated the values of TAA and TAS. Goat milk contains different biomolecules (e.g. casomorphins, lactorphins, casokinins, etc.) having certain antioxidative properties, which can contribute to consumers' plasma antioxidative capacity [30-36]. This was proved by some antioxidative effect also in persons consuming non-fermented goat milk. However, the elevation of these indices was remarkably more pronounced in the fermented goat milk group, thus the goat milk fermentation with L. fermentum ME-3 results in an additive increase of total antioxidativity. Therefore, the provisional FAO regulations [37] suggesting the need for health claims by specified formulations of probiotic seem to be of the utmost importance. Additionally, in our study with experimental fermented milk the average daily dose of L. fermentum ME-3 (11.5 log CFU) was clearly higher than that of the capsule (max 9.5 log CFU). It is possible that this dose exceeds the amount of bacteria necessary for interacting with the intestinal mucosa and the unattached lactobacilli are excreted with faeces. The finding of Saxelin and colleagues confirmed that the faecal recovery of the probiotic strain started from the consumption of more than 9.0 log CFU daily doses of capsulated LGG [38]. To our surprise, in the present study a similar dose did not result in faecal recovery of the strain. It is possible that the ME-3 strain germinated mainly in some upper parts of the intestinal tract where advantageous conditions for survival and metabolic activity of probiotic lactobacilli were present. Using molecular tools, Marteau et al. showed that lactobacilli accounting for only 7% of the faecal microflora formed up to 30% of the microbial communities in the human colon [39]. If administered in lower quantities, as in the case of the capsule trial, ME-3 did not reach the detectable level in faecal samples. Yet, its presence in the gut was proved by the positive antioxidative health effect in blood but not in urine. Therefore it is understandable that the higher load of metabolically active probiotic bacteria in goat-milk resulted also in their faecal recovery and the highest impact on the oxidative stress indices. Moreover, in our study the positive impact of ME-3 consumption on the host lactoflora was proved by the increase of faecal lactobacilli counts in all participants of the human volunteer studies. In experimental settings high counts of intestinal lactobacilli have been shown to be an important defensive factor against enteric infections [40,41]. 
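The dose comparison above is easier to follow when the log CFU figures are converted back from per-serving values. The sketch below reconstructs the stated daily doses (about 9.2 log CFU from six capsules of 8.4 log CFU each, and about 11.2–11.8 log CFU from 150 ml of milk fermented to 9.0–9.7 log CFU/ml); the per-serving figures are taken from the Methods, and the arithmetic is the only addition.

```python
from math import log10

def total_log_cfu(log_cfu_per_unit, units):
    # Combine identical portions: log10(units * 10**log_cfu_per_unit)
    return log_cfu_per_unit + log10(units)

# Capsule trial: 3 capsules twice daily at 8.4 log CFU per capsule
print(round(total_log_cfu(8.4, 6), 1))      # 9.2 log CFU/day
# Fermented goat milk: 150 ml/day at 9.0-9.7 log CFU per ml
print(round(total_log_cfu(9.0, 150), 1))    # 11.2 log CFU/day (lower bound)
print(round(total_log_cfu(9.7, 150), 1))    # 11.9 log CFU/day (paper rounds the upper bound to 11.8)
```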
Though up to now the period of consumption of probiotics has not been defined, the 3-week ingestion of fermented goat-milk and capsule seemed enough for reaching the aims. It is important to mention that after consumption of ME-3, a strain with high antagonistic activity, neither the species nor the strain predominated among total lactoflora. This shows a well-granted microbial balance inside the gut, which cannot be disturbed by high load of probiotic bacteria. Apparently, the interconnected advanced metabolism of large gut microbiota keeps the proportions of different species quite stable. Some other investigators have obtained similar results showing the proportional increase of different microbial populations (bifidobacteria, coliforms) after administration of Lactobacillus sp. probiotic [42,43]. Thus, the functional efficacy of different formulations of anti-infectious and antioxidative probiotic L. fermentum ME-3 were proved both by the increase of the lactobacilli counts providing putative defence against infectious agents in gut and by reduction of the oxidative stress indices of blood and urine of healthy volunteers. Further, studies evaluating the efficacy of ME-3 as adjunct to conventional therapy in patients with atherosclerotic damages and a high-grade oxidative stress are ongoing.\n\nBODY.CONCLUSION:\nIn non-diseased host, the probiotic health claims can be assessed by improvement of some measurable laboratory indices of well-established physiological functions of organism. In our case, the possibility for augmentation of the antioxidative defence system by the probiotic L. fermentum ME-3 in normal population can be proposed.\n\nBODY.COMPETING INTERESTS:\nMarika Mikelsaar, Mihkel Zilmer, Tiiu Kullisaar, Heidi Annuk (Hynes) and Epp Songisepp are sharing the Estonian patent application: no. EE 2001 00356 29.06.01 and International Patent application: no. WO03002131.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nEpp Songisepp, and Pirje Hütt have been in charge of the microbiological analysis. The former has been also in charge of analyzing the results and writing of the manuscript. Jaak Kals was responsible for performance and management of the volunteer trials. Tiiu Kullisaar has been in charge of the biochemical analysis and writing the manuscript. Mihkel Zilmer has conducted the biochemical estimations and writing the manuscript. Reet Mändar has been in charge of the molecular analysis and revising the manuscript. Marika Mikelsaar is the main conductor of the L. fermentum ME-3 research; for this paper she has been in charge of the clinical trial design and writing the manuscript.\n\n**Question:** Compared to goat milk what was the result of L. fermentum ME-3 fermented goat milk on glutathione red-ox ratio?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
902
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Efficacy and Safety of a Lidocaine Gel in Patients from 6 Months up to 8 Years with Acute Painful Sites in the Oral Cavity: A Randomized, Placebo-Controlled, Double-Blind, Comparative Study\n\n ABSTRACT:\nLidocaine is a well-accepted topical anaesthetic, also used in minors to treat painful conditions on mucosal membranes. This randomized, double-blind, placebo-controlled study (registered prospectively as EudraCT number 2011-005336-25) was designed to generate efficacy and safety data for a lidocaine gel (2%) in younger children with painful conditions in the oral cavity. One hundred sixty-one children were included in two subgroups: 4–8 years, average age 6.4 years, treated with verum or placebo and 6 months–<4 years, average age 1.8 years, treated only with verum. Pain reduction was measured from the time prior to administration to 10 or 30 minutes after. In addition, adverse events and local tolerability were evaluated. In group I, pain was reduced significantly after treatment with verum compared to placebo at both time points. In group II, the individual pain rating shift showed statistically significant lower pain after treatment. Only seven out of 161 patients reported an adverse event but none were classified as being related to the study medication. The local tolerability was assessed as very good in over 97% of cases. For painful sites in the oral cavity, a 2% lidocaine gel is a meaningful tool for short-term treatment in the paediatric population.\n\nBODY.1. INTRODUCTION:\nThe use of medicinal products in younger children, even if approved and accepted for decades in adults and older children, often takes place with some uncertainties on the parents' or physician's site. Younger patients not only are smaller and have less weight but also have at least partly different pharmacokinetic characteristics [1]. Although for topically applied and locally acting medicinal products the potential negative consequences of use are considered less important, controlled in-use-data is highly appreciated. Clinical trials according to accepted rules represent the highest standard of clinical evaluation to prove efficacy and safety but are difficult to perform in younger patients for various reasons [2]. Ethical aspects are more critical and also the design of a study is more demanding, for example, choice of reliable efficacy criteria, especially if laboratory tests or noninvasive measurements are not suitable. Pain perception shows a high degree of individual variation in intensity, onset, and degree of toleration [3]. In patients not reliably able to specify pain perception, due to their age, it can be very difficult to measure whether a pain medication was successful, especially when the pain is mostly moderate and local. 
Lidocaine is a well-established and approved local anaesthetic alone and in combination used in a variety of topically applied preparations, mostly as medicinal product [4]. It is also used to treat painful oral conditions in children, like aphthous ulcers, blisters, teething, or gingivitis. The objectives of this randomized, double-blind, placebo-controlled, phase IV study were to determine the efficacy and safety of a gel with 2% lidocaine in children from 6 months up to 8 years with painful conditions in the oral cavity as a meaningful tool for short-term treatment in the paediatric population. The verum preparation used is available in Germany for short-term, symptomatic treatment of pain on the oral mucosa, on the gums, or on lips. The dosing recommendation for children is up to 4 times daily as a pea size piece of gel (approximately 0.2 g gel or 4 mg lidocaine hydrochloride).\n\nBODY.2. PATIENTS AND METHODS:\nThis randomized, double-blind, placebo-controlled, single-centre, phase IV clinical trial (EudraCT number 2011-005336-25) was initiated after approval by the responsible ethics committee of the Medical Chamber of the German Federal State of Thuringia and by the German National Health Authority (BfArM). The study was conducted in accordance with the World Medical Association Declaration of Helsinki 2013, the Guidelines of ICH Good Clinical Practice (GCP) as well as the requirements of German national drug and data protection laws. All patients and their parents or legal guardians were given sufficient time and necessary information to consider the benefits and risks of involving the children in the clinical trial before written informed consent was obtained and before assent was sought from the child. Two different age groups (I: 4–8 years and II: 6 months–<4 years) were included. Children with painful site/s in the oral cavity aged 4 years or older were randomly assigned to be treated either with verum or with placebo, whereas children younger than 4 years were only treated with verum.\n\nBODY.2. PATIENTS AND METHODS.2.1. STUDY MEDICATION, DOSAGE, AND ADMINISTRATION:\nThe 2% lidocaine gel (commercially available as Dynexan Mundgel, Kreussler Pharma, Germany) and the placebo preparation (identical composition without lidocaine, both products filled in identical 20 mg tubes to guarantee blindness) were locally applied once by or under supervision of a member of the study staff to the painful site/s as a pea size amount of gel (0.2 g gel) corresponding to 4 mg lidocaine hydrochloride in case of verum. The whole study took place at a clinical trial facility, specialized in conducting clinical studies with children. Different patient recruitment arrangements, like advertisements in the local newspaper, referrals by paediatricians, and distribution of flyers, were performed to enlarge the number of suitable patients, after obtaining approval by the ethics committee and health authority.\n\nBODY.2. PATIENTS AND METHODS.2.2. PAIN MEASUREMENT:\nThe painful sites were defined as open breaks of the oral mucosa, lips, or tongue as a symptom of a variety of mild diseases, or trauma, for example, bites, aphthous ulcer, mouth blisters, cold sores, partial gingivitis, hand-foot-mouth disease, herpes simplex, teething, and trauma from dental braces. The primary efficacy variable was the pain reduction from T1 (prior to administration) to T2 (10 ± 5 min after application) as assessed by the Wong-Baker FACES Pain Rating Scale (Figure 1). 
This scale is a self-report measure used to assess the intensity of children's pain. The rating scale was used, because of its suitability for children from 3 years of age on as well as its appropriateness in pre- or nonverbal children. Children were instructed to indicate their pain by pointing to one of the faces. If the child was not old enough and/or not able to use this scale, the assessment of the children's pain by the parents on the same scale was used instead. A secondary efficacy variable was pain reduction from T1 (prior to administration) to T3 (30 ± 10 min after application, Wong-Baker FACES Pain Rating Scale). This \"mixed\" analysis of children's and parents' assessments was performed to consider the child's assessment as often as possible according to the protocol and statistical analysis plan (SAP). To investigate whether this approach was reliable, an additional analysis was performed with assessments of the same person (either child, if available from T1 and T2 (or T3), or parent, if the child's assessment was missing at one or two points of time). Additionally, assessment of satisfaction (5-point verbal rating scale with \"very unsatisfied,\" \"somewhat unsatisfied,\" \"slightly satisfied,\" \"satisfied,\" and \"very satisfied\") and assessment of local tolerability by the investigator (\"poor,\" \"moderate,\" \"good,\" and \"very good\") were evaluated.\n\nBODY.2. PATIENTS AND METHODS.2.3. SAMPLE SIZE ESTIMATION:\nSample size calculation was performed within a parametric test scenario. In age group I, a sample number of 64 subjects per treatment arm was considered sufficient to obtain reliable data (assumptions: difference to be detected: one scale point, alpha-level: 0.05, power: 80%, and standard deviation: two scale points). For age group II, a sample size of 32 patients was chosen. In order to recruit 160 evaluable patients, about 222 subjects were planned to be screened, due to an assumed screening failure rate of 15% and a drop-out rate of 15%.\n\nBODY.2. PATIENTS AND METHODS.2.4. STATISTICAL ANALYSES:\nThe statistical analysis was conducted in compliance with the ICH E9 Note for Guidance on Statistical Principles for Clinical Trials. The efficacy variables were analysed by using the Mann-Whitney U test (differences between groups) and the Wilcoxon test (differences within groups, changes over time), whereby the null hypothesis of no difference between both groups was tested. To evaluate the robustness of results, additional analyses were applied (ANCOVA with baseline values as covariable and sensitivity analyses with several variants of assessments, e.g., using parents' instead of children's assessments). All analyses were performed using Statistical Analysis System software, version 9.2 or higher (SAS Institute Inc., Cary, NC, USA).\n\nBODY.3. RESULTS:\nStudy data were collected between May 21st, 2012, and June 14th, 2014, from 129 patients (age group I) and additional 32 patients (age group II). Within age group I, 63 patients were treated with 2% lidocaine gel (verum) and 66 patients with placebo. No patient dropped out of the study (Figure 2).\n\nBODY.3. RESULTS.3.1. BASELINE PATIENT CHARACTERISTICS:\nIn age group I, 54 male (verum N = 30, placebo N = 24) and 75 female subjects (verum N = 33, placebo N = 42) were included. In age group II, both genders were equally distributed (16 male and 16 female patients). The average age in group I was 6.4 ± 1.4 years, ranging from 4.0 to 8.9 years. 
In age group II, the mean average age was 1.8 ± 0.9 years, ranging from 6 months to 4.0 years. Except for one Black and one Latvian-Korean patient, all other patients were Caucasians. In age group I aphthous ulcer (36.0%) and in age group II teething (63.6%) were the main cause of painful sites (Table 1). The mean average size of the painful sites was 19.9 ± 15.7 mm2, ranging from 5 mm2 to 150 mm2. Gingiva and/or oral mucosa were affected in the majority of cases (age group I: 92.7%, age group II: 81.6%), whereas lips or tongue were involved in the remaining cases. The parents in age group I often reported grouching (20.2%), changes in behaviour (24.1%), and poor feeding (26.0%), whereas in age group II weeping (14.2%), crying (13.5%), grouching (18.4%), and sleepiness (18.4%) were often reported.\n\nBODY.3. RESULTS.3.2. PAIN ASSESSMENT:\nThe self-rating by the children of age group I could be used in 107 cases (verum N = 53, placebo N = 54) at T1 (prior to administration) and in 109 cases (verum N = 57, placebo N = 52) at T2 (10 ± 5 min after application). The assessments by parents were used in 22 cases at T1 and in 20 cases at T2, because of children's lack of ability to rate their pain properly or showing signs of unwillingness or lack of concentration. For the secondary efficacy parameter (change of pain rating from T1 to T3 (30 ± 10 min after application)), the self-rating by the children was analysed in 108 patients (verum N = 56, placebo N = 52) at T3. The assessments by parents were used for 21 patients at T3.\n\nBODY.3. RESULTS.3.3. EFFICACY RESULTS:\nOn average (median) pain has been reduced by 2.0/5 (verum) versus 1.2/5 (placebo) (Table 3). Nonparametric analysis (Mann-Whitney U test) of treatment related difference in pain assessment yielded a statistical significance (p < 0.001) of the observed effect in favour of the 2% lidocaine gel (verum). A similar result (p < 0.002) was achieved also for the measurement of pain reduction from T1 to T3. These results were reassured also if the assessment of the same person (either child or parent) was evaluated at both endpoints. The children's satisfaction as assessed by parents on a 5-point verbal rating scale one hour after administration tends to yield a higher proportion of verum patients showing satisfaction, although observed differences compared to placebo arm did not reach a statistical significance at α = 0.05 level (χ\n2-Test including age group II subjects: p value = 0.060; χ\n2-Test age group I only: p value = 0.259). Age group II (aged <4 years) patients were all treated with lidocaine 2% gel (verum). In order to assess the efficacy, the individual shift in pain rating had been evaluated. There were no patients with a worsening of pain, 4 patients (12.5%) without a change, 13 patients (40.6%) with an improvement of one, 7 patients (21.9%) of two, 5 patients (15.6%) of three, and 3 patients (9.4%) of four categories in pain rating on T2 compared to T1 (Table 3). This improvement in pain rating was even more pronounced at T3, with 16 patients (50%) indicating \"No Hurt,\" 13 patients (40.6%) indicating \"Hurts Little Bit,\" and 3 patients (9.4%) indicating \"Hurts Little More\" (Table 2). The observed ratings consequently showed statistically significant reduction of pain (Wilcoxon signed rank test results; T2: p value < 0.0001 and T3: p value < 0.0001).\n\nBODY.3. RESULTS.3.4. 
SAFETY EVALUATION:\nSeven out of 161 patients (4.4%) reported in total 7 adverse events (worsening of hand-mouth-foot disease, stomach ache, minimal lesion of the upper lip after a fall, bronchitis, single loose bowels, stabbing pain mouth due to aggravating hand-mouth-foot disease, and laceration of the gingiva due to a fall). Two adverse events were registered in each treatment arm in age group I and three adverse events in age group II after verum application (no placebo patients treated). None of these adverse events were assessed as related to study drug. For most patients (post study examination: 97.5%; one day after the study day: 98.1%) the local tolerability has been rated as \"very good.\"\n\nBODY.4. DISCUSSION:\nIn general, clinical studies with younger patients are more difficult to conduct not only because of the regulatory situation, but also because of the lower motivation of the children and their parents to participate and less ability of the young patients to understand the study principles [2, 5]. For minor to moderate and usually short-lasting pain situations, the psychological strain is weak, which affects the willingness to participate in a placebo-controlled clinical study, because the benefit seems not to be substantially relevant. Pain perception in children as a primary efficacy criterion is ambitious and the choice of a suitable method is important [6–8]. Most children aged 4-5 years or older can provide meaningful self-reports of pain intensity, depending on training and provision with age-appropriate tools. Children younger than 4-5 years have a tendency to use only the extremes of a scale. Other response sets or biases include giving the same answer to all questions and responding in an upward or downward sequence to successive questions. Pain rating may be affected by their perception of the consequences of the rating, for example, reward for being brave and fear of getting a syringe. Many other cognitive, social, and cultural factors influence pain ratings (e.g., previous experiences, temperament, presence of an audience, and family and community norms for pain expressions) making them idiosyncratic and difficult to interpret [6]. The assessment of efficacy criteria in studies investigating moderate pain relies on the self-assessment of pain. Patient independent criteria, like lab test or ultrasound, cannot be used. Topically applied locally acting medicinal products are more difficult to investigate than systemically acting products. Self-healing properties and placebo effects are presumably higher. For local skin or mucous membrane defects, the placebo formulations by themselves may have a healing or protecting effect. To solve these and other obstacles, we performed this study in a clinical trial facility and not in private practices, made sufficient patient enrolment possible with a variety of recruitment arrangements, excluded younger patients from the possibility to receive a placebo, and used an accepted pain rating scale for children, the Wong-Baker FACES Pain Rating Scale [9]. The participating children were trained and asked by the blinded study personnel in a language the child is familiar with. Additionally, to substitute and fortify the child assessment, the parents or legal guardians were asked to indicate on the same scale, how they consider the child's pain. More than 80% of children in group I (4–8 years) were able to rate pain by themselves whereas for the remaining children self-reporting could not be used. 
It is interesting that on average children in this age group rated their pain slightly lower than their parents. However, it should be realized that comparing the child's face with the expression on the faces picture is indirect rating and may be sensitive to bias. As expected and anticipated in age group II mostly only the parents' assessment of how they consider the level of the children's pain before and after administration could be evaluated. These children were too young to allow the use of the pain scale. The medicinal product used in this study is authorized and on the market in Germany and France for many years. In Germany, it can be applied to children without age restriction. In a recently published clinical study in patients between 6 years and 15 years with pain because of clamp placement, oral trauma, or aphthous ulcers, the identical medicinal product (also due to regulatory circumstances called a cream in France) showed significant superiority compared to the respective placebo formulation [10]. In the present study, significant superiority could be measured in children even down to 4 years of age. Additionally descriptive efficacy and safety data in children between 6 months and below 4 years were obtained, including patients with teething pain, one of the major sources of oral pain in children up to 2 years of age. Lidocaine alone or in combination with another local anaesthetic is a well-known product for local pain reduction [11, 12]. In different clinical studies conducted with children, the efficacy of topically applied lidocaine could be demonstrated [13–16]. But not in every study could the expected positive outcome be achieved, like in a recent study with children from 6 months up to 8 years suffering from acute infectious ulcerative mouth conditions analysing the effect of a 2% lidocaine viscous solution in comparison to a placebo on the amount of ingested fluid within 60 minutes of administration [17]. At least such an efficacy criterion seems questionable to judge the effectiveness of lidocaine. Seven out of 161 subjects (4.4%) reported in total 7 adverse events (in age group I 2 adverse events in the verum and the placebo arm each, in age group II 3 adverse events after verum treatment and no patients treated with placebo). None of these adverse events were assessed as study drug related, clearly demonstrating the safety of the product.\n\nBODY.5. CONCLUSIONS:\nThe 2% lidocaine gel, as successfully tested in our clinical trial in comparison to a placebo gel with the same composition without lidocaine, is an adequate tool to reduce or prevent pain on the oral mucosa or gingivae, like bites, aphthous ulcers, mouth blisters, cold sores, gingivitis, hand-foot-mouth disease, dental braces, or teething in younger children.\n\n**Question:** Compared to placebo what was the result of lidocaine gel (2%) on pain in the paediatric population?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly decreased\n" } ]
significantly decreased
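The lidocaine row above states its sample-size assumptions explicitly (detect a 1-point difference on the Wong-Baker scale, SD of 2 scale points, two-sided alpha of 0.05, 80% power, giving 64 subjects per treatment arm). A minimal stdlib-Python sketch of the usual normal-approximation formula is given below for orientation only; the function name is illustrative, and the trial's own statistical analysis plan may have applied a t-based or rounding adjustment, which would explain landing on 64 rather than the roughly 63 this approximation yields.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta: float, sd: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Normal-approximation sample size per arm for comparing two means."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance level
    z_power = z.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_power) * sd / delta) ** 2)

# Assumptions quoted from the study: 1-point difference, SD = 2 scale points,
# alpha 0.05 (two-sided), power 80%.
print(n_per_arm(delta=1.0, sd=2.0))  # ~63 per arm, in line with the reported 64
```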
904
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A dose-ranging study of the bronchodilator effects of abediterol (LAS100977), a long-acting β\n\n ABSTRACT.BACKGROUND:\nLong-acting β2-adrenergic agonists (LABAs) are recommended in combination with inhaled corticosteroids (ICSs) for asthma management. Abediterol is a novel, selective, potent, once-daily LABA in development for treatment of asthma and chronic obstructive pulmonary disease. This study aimed to determine abediterol doses with similar peak bronchodilatory effect to salbutamol 400 μg, and duration of action compatible with once-daily dosing in patients with persistent, stable asthma.\n\nABSTRACT.METHODS:\nThis was a Phase II, randomized, double-blind, double-dummy, crossover, placebo-controlled, dose-ranging study (ClinicalTrials.gov NCT01425801) in 62 patients with mild-to-moderate asthma who were also receiving an ICS. Patients received single doses of abediterol 0.313, 0.625, 1.25, or 2.5 μg, salbutamol 400 μg, or placebo in the morning. Spirometry was performed up to 36 h post-dose; safety and tolerability were assessed throughout the study. The primary endpoint was change from baseline in peak forced expiratory volume in 1 s (FEV1). Additional endpoints included trough FEV1, normalized area under the FEV1 curve (FEV1 AUC) up to 24 h post-dose, and peak and trough forced vital capacity (FVC).\n\nABSTRACT.RESULTS:\nAbediterol produced dose-dependent improvements in peak FEV1 from baseline compared with placebo, from 0.274 (95% CI 0.221, 0.327) to 0.405 L (95% CI 0.353, 0.458) for abediterol 0.313 to 2.5 μg, respectively (p < 0.0001 all doses). Abediterol 0.625, 1.25, and 2.5 μg had similar magnitude of peak FEV1 effect to salbutamol. Dose-dependent changes from baseline in trough FEV1 versus placebo were 0.219 (95% CI 0.136, 0.302) to 0.400 L (95% CI 0.317, 0.483) for abediterol 0.313 to 2.5 μg, respectively (p < 0.0001). All abediterol doses achieved significant improvements versus placebo in FEV1 AUC 0–6, 0–12, and 0–24 h, and peak and trough FVC (p < 0.05). Less than 10% of patients experienced treatment-related adverse events for each dose of abediterol; most were mild to moderate in intensity and the most common were headache and nasopharyngitis. There were no clinically relevant changes in heart rate.\n\nABSTRACT.CONCLUSIONS:\nAbediterol 0.625–2.5 μg provided dose-dependent, clinically and statistically significant bronchodilation versus placebo in patients with asthma, with a peak effect similar to salbutamol and duration of action compatible with once-daily dosing. 
All doses of abediterol were well tolerated.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nThe online version of this article (doi:10.1186/1471-2466-14-176) contains supplementary material, which is available to authorized users.\n\nBODY.BACKGROUND:\nThe use of β2-adrenergic agonist bronchodilators in combination with inhaled corticosteroids (ICSs) to manage the symptoms of asthma is recommended by Global Initiative for Asthma (GINA) guidelines [1]. Currently licensed bronchodilators include salbutamol, a short-acting β2-adrenergic agonist (SABA) administered as required [2], salmeterol and formoterol, two long-acting β2-adrenergic agonists (LABAs) administered twice daily in combination with an ICS [3, 4], and the recently licensed vilanterol, a LABA administered once daily in fixed-dose combination with the ICS fluticasone furoate [5]. There is ongoing interest to develop once-daily medications to further simplify treatment regimens and improve patient compliance [6]. Abediterol is a novel, once-daily LABA in clinical development as a fixed-dose combination with an ICS for the treatment of asthma and chronic obstructive pulmonary disease (COPD) [7]. Abediterol displays high affinity for β2-adrenoceptors, plus a higher functional selectivity for β2-adrenoceptors over β1-adrenoceptors than formoterol, indacaterol, vilanterol and olodaterol in cellular models [7–9]. The potency of abediterol is also higher than salmeterol and indacaterol in isolated human bronchi and, in addition to these two LABAs, vilanterol and olodaterol in animal models [7, 10]. Abediterol has a rapid onset and prolonged duration of action [7, 10]. A Phase II clinical trial in patients with asthma provided results that were consistent with the preclinical data; abediterol resulted in significant improvements in lung function at 5 min post-dose that were greater than improvements seen with salmeterol, and significant bronchodilation was sustained up to 24 h post-dose [11]. These characteristics make abediterol an ideal candidate for once-daily dosing. Moreover, abediterol has a favorable cardiovascular safety and tolerability profile: in anesthetized dogs, abediterol had a lesser effect on heart rate compared with salmeterol, formoterol, and indacaterol at concentrations that produced comparable bronchodilation [7, 12]. In early phase clinical trials, doses of abediterol ≤10 μg were safe and well tolerated and did not demonstrate any clinically relevant effects on heart rate in healthy subjects [13], patients with asthma [14], or patients with COPD [15]. Drug regulatory authorities have concerns that excessively high LABA doses may cause unacceptable side effects [16]. The trough forced expiratory volume in 1 s (FEV1), measured 24 h after dosing, is an important endpoint to assess once-daily LABA efficacy [16]. The maximum effect achieved soon after dosing, otherwise known as the 'peak' effect, is also of interest for efficacy and safety reasons. A very high peak effect may indicate high levels of acute drug exposure that could predispose patients to β-agonist-mediated side effects, such as cardiovascular adverse events, when receiving chronic treatment [17]. Moreover, the ideal pharmacologic characteristic of a once-daily LABA is a low 'peak to trough ratio', ensuring relatively stable lung function throughout the day [18]. The peak effect of salbutamol is well characterized, and causes acute bronchodilation without safety concerns. 
The bronchodilator response following single-dose administration of salbutamol (4 x 100 μg) is commonly used as one of the diagnostic tests for asthma, as recommended by a joint European Respiratory Society (ERS)/American Thoracic Society (ATS) taskforce [19]. This effect is, therefore, a reasonable benchmark for a novel LABA to achieve and this degree of acute bronchodilation is considered clinically meaningful. A major regulatory agency recommended that the peak effects of abediterol should be compared to those of salbutamol to guide dose selection for further clinical trials in order to avoid a dose that induces a much greater degree of bronchodilation than is necessary for clinical effectiveness, which could increase the potential for safety concerns during chronic treatment. We report a placebo-controlled, dose-ranging, crossover study comparing the acute bronchodilator effects of abediterol to salbutamol in patients with persistent stable asthma. The primary aim of the study was to investigate the peak bronchodilatory effect of abediterol versus placebo and to confirm it was in the same range as that of salbutamol. The full profile of lung function over 24 h, including the trough FEV1, was also measured. The goal of this study was to enable the selection of abediterol doses for future long-term clinical trials, based on a peak effect similar to that of salbutamol 400 μg metered dose inhaler (MDI), coupled with a trough effect compatible with a 24-h duration of action. This trial design was a specific recommendation by a major regulatory agency.\n\nBODY.METHODS.STUDY DESIGN:\nThis was a Phase IIa, randomized, double-blind, double-dummy, 6-way crossover, single-dose, multicenter, dose-ranging study conducted between August 2011 and January 2012. The study was conducted in two sites in the UK and seven in Germany. The study protocol was approved by an independent ethics committee in each country (see Additional file 1) and complied with the Declaration of Helsinki and the International Conference on Harmonisation and Good Clinical Practice guidelines. Patients provided written, informed consent. Following a screening visit and a 12–16 day run-in period to assess clinical stability, participants were randomized to 1 of 6 treatment sequences according to a balanced Williams design for crossover studies (1:1:1:1:1:1). Block randomization was performed by the Sponsor, using a computer-generated schedule to assign a treatment sequence to each patient randomization number. The block size and randomization schedule were not communicated to investigators. Patients received abediterol 0.313, 0.625, 1.25, or 2.5 μg, salbutamol 400 μg (administered as four inhalations with 100 μg per inhalation), or placebo (at 09:00 h ±1 h) at each visit, with a washout period of 7–14 days. A follow-up telephone call to assess safety was made approximately 2 weeks following the final treatment visit or premature discontinuation. Abediterol and placebo to abediterol were administered via a multidose dry powder inhaler (Genuair®). Salbutamol and placebo to salbutamol were administered via a pressurized MDI (VentolinTM EvohalerTM). In order to preserve study blinding, independent, trained personnel prepared the medication, delivered the device to patients, and collected the device following inhalation.\n\nBODY.METHODS.PATIENTS:\nEligible participants were men or women aged 18–70 years with a clinical diagnosis of persistent asthma (according to GINA guidelines) for ≥6 months prior to screening. 
Eligible patients had an FEV1 >60% and ≤85% of the predicted normal value, with an FEV1 reversibility ≥12%, and an absolute increase in FEV1 of ≥200 mL over their baseline value following inhalation of salbutamol 400 μg at screening. Comparable FEV1 measurements were required at screening and pre-dose at each visit in order to ensure there were no carry-over effects from previous treatment periods (limit of variability ±20%). At enrollment, all patients were using only an ICS to control their asthma, and had been on a stable dose and regimen for at least 4 weeks prior to screening, up to the equivalent of 1600 μg/day of beclometasone dipropionate, and had not previously been exposed to abediterol. Exclusion criteria included: a smoking history within the past 6 months or >10 pack-years; presence or history of relevant pulmonary disease or any other clinically relevant disease or abnormality as judged by the investigator; difficult-to-treat asthma; any respiratory tract infection ≤6 weeks prior to screening; hospitalization or emergency room treatment for asthma ≤3 months prior to screening; treatment with β2-antagonists; and positive laboratory tests for illicit drugs. The use of anticholinergics, SABAs (other than salbutamol), LABAs, continuous oral or parenteral corticosteroids, methyl-xanthines, cromolyn sodium, nedocromil, leukotriene modifiers, anti-immunoglobulin E, β1-blocking agents, and other investigational drugs was prohibited during the study. Patients using prohibited medications underwent up to 6 weeks' washout before the screening visit. Salbutamol pressurized MDI (100 μg/puff) was the only reliever medication permitted and was used on an as-needed basis and avoided 6 h prior to and during a treatment visit unless absolutely necessary.\n\nBODY.METHODS.EFFICACY ASSESSMENTS:\nPulmonary function tests were conducted at screening, pre-dose, and at 0.25, 0.5, 1, 2, 3, 4, 6, 8, 12, 14, 23, 24, and 36 h post-dose at each treatment visit using standardized spirometers at each investigational site (eResearchTechnology, Philadelphia, PA, USA). Prior to the first spirometry measure, the technician demonstrated the procedure and allowed the patient two practice attempts. At each study time point, the maneuvers were repeated until three technically adequate measurements were made according to ERS/ATS guidelines for acceptability and repeatability [20], with a maximum of eight attempts allowed.\n\nBODY.METHODS.SAFETY AND TOLERABILITY ASSESSMENTS:\nTreatment-emergent adverse events (TEAEs), including serious adverse events, were recorded throughout the study and follow-up period. A physical examination was performed at screening and at 36 h post-dose following the last treatment period (Visit 6), and blood pressure and 12-lead electrocardiogram measures were made at screening, pre-dose, and at 0.75, 2, 6, 24 h post-dose for each treatment period, and 36 h post-dose at Visit 6. Clinical laboratory tests (blood chemistry, hematology, urinalysis, and a serum pregnancy test) were performed at screening and at 36 h post-dose at Visit 6. Blood glucose and serum potassium levels were assessed at screening, pre-dose, and at 4, 24 h post-dose for each treatment period, and 36 h post-dose at Visit 6.\n\nBODY.METHODS.STATISTICS:\nThe primary efficacy endpoint was change from baseline in peak FEV1 (defined as the highest FEV1 value observed within 4 h of the administration of the investigational medicinal product [IMP]) on Day 1. 
Secondary endpoints included change from baseline in peak forced vital capacity (FVC) on Day 1, trough FEV1 (defined as the highest average of the maneuvers performed at 23–24 h post IMP administration) and FVC on Day 2, change from baseline in FEV1 and FVC at each time point up to 36 h post-dose, and change from baseline in normalized FEV1 area under the curve (AUC; defined as the area between baseline FEV1 and the FEV1 curve, from 0 to × hours divided by × hours) at 0–6, 0–12, and 0–24 h post-dose. In accordance with the GINA guidelines, a change in FEV1 of 200 mL was considered clinically relevant [1]. A population of 48 patients was necessary to achieve 80% power to detect a 150 mL change from baseline in peak FEV1 for any dose of abediterol compared with placebo. Assuming a 15% withdrawal rate, the protocol required at least 54 patients to be randomized. All reported efficacy assessments were analyzed in the intention-to-treat population, defined as all randomized patients who took at least one dose of the IMP (placebo, abediterol, or salbutamol) and had at least one baseline and one post-dose FEV1 value for at least one treatment visit. Safety analyses are reported in the safety population, which consisted of all randomized patients who took at least one dose of the IMP. All statistical analyses were performed using Statistical Analysis System (SAS Institute Inc., Cary, NC, USA) version 9.1.3 software. Efficacy variables were analyzed using an analysis of covariance model for crossover designs, with fixed-effect factors of sequence, treatment, and period, patient within sequence as a random effect, and baseline FEV1 or FVC as a covariate, as appropriate. Between-treatments comparisons were performed by means of contrasts on the treatment factor. The difference between treatments was estimated by the difference between the least squares means. Statistical comparisons between active treatments and placebo were two-sided hypothesis tests with a significance level of 0.05.\n\nBODY.RESULTS.STUDY POPULATION:\nA total of 115 patients were screened, with 62 patients randomized (40/62 [64.5%] male, mean age 40.9 ± 11.2 years). Patient demographics are presented in Table 1. A diagram of patient flow is presented in Figure 1. All 62 patients were included in the intention-to-treat and safety populations. One protocol violation occurred when a patient was inadvertently given the treatment scheduled for Visit 6 at Visit 5, and vice versa. Data for this patient were analyzed according to the treatments received. Four patients withdrew or were discontinued from the study: two due to inability to comply with the study schedule, one who did not fulfill stability criteria, and one for unknown reasons (patient did not return for the premature discontinuation visit).Table 1\nPatient demographics at baseline: safety population\nPatients (N = 62)Age, mean no. years (SD)40.9 (11.2)Male gender, n (%)40 (64.5)Race, n (%) White58 (93.5) Other4 (6.5)Cigarette smoking status, n (%) Former smoker21 (33.9) Never smoked41 (66.1)Smoking history, mean no. pack-years (SD)4.6 (3.3)Asthma duration, mean no. 
years (SD)25.5 (13.2)Asthma severity, n (%) 60% ≤ FEV1 < 80% of predicted46 (74.2) FEV1 ≥80% of predicted16 (25.8)Bronchial reversibilitya, mean % (SD)21.3 (8.2)FEV1 absolute reversibilityb, mean L (SD)0.568 (0.261)\nacalculated as 100 x (FEV1 [post-bronchodilator] – FEV1 [pre-bronchodilator]/FEV1 [pre-bronchodilator]); bcalculated as FEV1 (post-bronchodilator) – FEV1 (pre-bronchodilator).FEV1, forced expiratory volume in 1 s; SD, standard deviation.Figure 1\nDiagram of patient flow.\n\n\nBODY.RESULTS.EFFICACY:\nMean baseline FEV1 values were similar across all treatment periods, ranging from 2.63 L to 2.69 L. The full profile of lung function after each treatment is shown in Figure 2. The onset of action of abediterol was rapid, with significant bronchodilation compared with placebo observed at the first study time point (15 min) post-dose (p < 0.0001 for all doses of abediterol versus placebo, Figure 2a). All doses of abediterol produced significantly greater improvements in peak FEV1 from baseline compared with placebo (p < 0.0001 for all doses, Table 2), and the magnitude of effect of abediterol 0.625, 1.25, and 2.5 μg was not significantly different from salbutamol 400 μg (Figure 3, Table 3). Least squares means differences in change from baseline in peak FEV1 versus placebo were 0.274, 0.322, 0.371, and 0.405 L for abediterol 0.313, 0.625, 1.25, and 2.5 μg, respectively, and 0.353 L for salbutamol 400 μg (Table 3). Median time to peak FEV1 was 3 h post-dose for abediterol (all doses), 2 h post-dose for placebo, and 1 h post-dose for salbutamol 400 μg.Figure 2\nChange from baseline in FEV\n1\nover time. Data are presented as least squares means with 95% confidence intervals: a) p < 0.0001 versus placebo for salbutamol and all doses of abediterol at all time points; b) p < 0.0001 for all doses of abediterol 0–24 h post-dose and abediterol 1.25 and 2.5 μg 36 h post-dose versus placebo, p = 0.002 and p = 0.01 for abediterol 0.313 and 0.625 μg, respectively, at 36 h post-dose versus placebo. FEV1, forced expiratory volume in 1 s. Table 2 \n Change from baseline in pulmonary function parameters (L): intention-to-treat population \n Placebo N = 59 Salbutamol 400 μg N = 58 Abediterol 0.313 μg N = 60 Abediterol 0.625 μg N = 60 Abediterol 1.25 μg N = 60 Abediterol 2.5 μg N = 61 Peak FEV 1 \n 0.202 (0.045) 0.555**** (0.045) 0.477**** †† (0.045) 0.524**** (0.045) 0.573**** (0.045) 0.608**** (0.045) Trough FEV 1 \n a \n -0.054 (0.039) -0.076 (0.039) 0.166**** †††† (0.039) 0.205**** †††† (0.039) 0.278**** †††† (0.039) 0.346**** †††† (0.039) Normalized FEV 1 AUC 0–6 \n 0.088 (0.039) 0.347**** (0.039) 0.354**** (0.038) 0.403**** † (0.038) 0.461**** †††† (0.038) 0.496**** †††† (0.038) Normalized FEV 1 AUC 0–12 \n 0.063 (0.040) 0.219**** (0.040) 0.348**** †††† (0.039) 0.393**** †††† (0.039) 0.455**** †††† (0.039) 0.496**** †††† (0.040) Normalized FEV 1 AUC 0–24 \n 0.007 (0.039) 0.100** (0.039) 0.282**** †††† (0.039) 0.321**** †††† (0.038) 0.390**** †††† (0.038) 0.446**** †††† (0.039) Peak FVC 0.222 (0.034) 0.361**** (0.034) 0.317** (0.034) 0.329*** (0.034) 0.361**** (0.034) 0.380**** (0.034) Trough FVC a \n -0.028 (0.035) -0.065 (0.035) 0.094** †††† (0.035) 0.050* †† (0.035) 0.111*** †††† (0.035) 0.120*** †††† (0.035) Data are least squares means (standard error). \n a trough values are the mean of the 23 and 24 h post-dose measurements. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001 versus placebo; † p < 0.05, †† p < 0.01, †††† p < 0.0001 versus salbutamol 400 μg (analysis of covariance). 
AUC, area under the FEV 1 curve; FEV 1 , forced expiratory volume in 1 s; FVC, forced vital capacity. Figure 3\nChange from baseline in peak FEV\n1\nat Day 1. Data are presented as least squares means ± standard error: p < 0.0001 versus placebo for all doses of abediterol and salbutamol 400 μg; **p < 0.01, ***p < 0.001, ****p < 0.0001 (analysis of covariance). FEV1, forced expiratory volume in 1 s. Table 3 \n Comparison of peak FEV \n 1 \n for abediterol versus placebo and abediterol versus salbutamol \n Least–squares mean difference (standard error) 95% confidence interval Abediterol 2.5 μg vs. placebo 0.405 (0.027) 0.353, 0.458 Abediterol 1.25 μg vs. placebo 0.371 (0.027) 0.318, 0.424 Abediterol 0.625 μg vs. placebo 0.322 (0.027) 0.269, 0.375 Abediterol 0.313 μg vs. placebo 0.274 (0.027) 0.221, 0.327 Salbutamol 400 μg vs. placebo 0.353 (0.027) 0.299, 0.406 Abediterol 2.5 μg vs. salbutamol 400 μg 0.0529 (0.0270) -0.0002, 0.1061 Abediterol 1.25 μg vs. salbutamol 400 μg 0.0181 (0.0269) -0.0350, 0.0711 Abediterol 0.625 μg vs. salbutamol 400 μg -0.0308 (0.0269) -0.839, 0.0223 Abediterol 0.313 μg vs. salbutamol 400 μg -0.0783 (0.0269) -0.1313, -0.0253 Data included for comparisons of all doses of abediterol with placebo and with salbutamol. Statistically significant increases in FEV1 were sustained at 24 h post-dose compared with placebo (p < 0.0001 for all doses of abediterol 0–24 h post-dose, Figure 2b). Abediterol significantly improved trough FEV1 from baseline compared with placebo (p < 0.0001 for all doses; Figure 4). Least squares means differences in change from baseline in trough FEV1 versus placebo were 0.219 L (95% CI 0.136, 0.302), 0.259 L (95% CI 0.176, 0.342), 0.332 L (95% CI 0.249, 0.415), and 0.400 L (95% CI 0.317, 0.483) for abediterol 0.313, 0.625, 1.25, and 2.5 μg, respectively (Table 2). The effects of salbutamol on FEV1 were not sustained after 6 h post-dose, as expected for a SABA (Figure 2b).Figure 4\nChange from baseline in trough FEV\n1\nat Day 2. Data are presented as least squares means ± standard error: p < 0.0001 versus placebo for all doses of abediterol; **p < 0.01, ***p < 0.001, ****p < 0.0001 (analysis of covariance). FEV1, forced expiratory volume in 1 s. Change from baseline in normalized FEV1 AUC0–6, FEV1 AUC0–12, and FEV1 AUC0–24 was significantly higher for all doses of abediterol versus placebo (p < 0.0001 for all, Table 2), with dose-related effects observed. The change from baseline in normalized FEV1 AUC0–6 was significantly greater for abediterol 0.625–2.5 μg versus salbutamol 400 μg (p < 0.05, Table 2).\n\nBODY.RESULTS.EFFICACY.FVC:\nMean baseline FVC values were comparable across treatment periods, ranging from 4.23 L to 4.30 L. Abediterol produced significantly greater change from baseline in peak FVC compared with placebo (p < 0.01 for all doses, Table 2). The change from baseline in peak FVC was similar to that observed with salbutamol. Additionally, abediterol significantly improved trough FVC from baseline compared with placebo (p < 0.05 for all doses, Table 2). All doses of abediterol significantly (p < 0.05) increased FVC versus placebo up to 24 h post-dose except for the 0.625 μg dose at 12 and 23 h.\n\nBODY.RESULTS.SAFETY AND TOLERABILITY:\nThe percentage of patients experiencing TEAEs was similar across all treatment groups (Table 4). Most TEAEs were mild to moderate in intensity, no patient discontinued the study due to a TEAE, and there were no serious adverse events or deaths. 
The most common TEAEs were headache and nasopharyngitis, reported by 41.9 and 27.4% of patients, respectively. The percentage of patients that experienced at least one treatment-related TEAE was <10% for all doses of abediterol, and was similar to that of salbutamol and placebo. No clinically significant changes in serum glucose or potassium concentrations were observed over time in any of the treatments groups (Table 5). Minor variations in heart rate over time were observed for all treatment groups, including placebo; none were considered to be dose related or clinically relevant (Table 5). There were no clinically relevant changes in clinical laboratory tests, blood pressure, or electrocardiogram parameters.Table 4\nTEAEs occurring in ≥2% of patients in any treatment group: safety population\nNumber (%) of patients reporting TEAEsPlacebo N = 59Salbutamol 400 μg N = 58Abediterol 0.313 μg N = 60Abediterol 0.625 μg N = 60Abediterol 1.25 μg N = 60Abediterol 2.5 μg N = 61Total N = 62Any TEAE16 (27.1)12 (20.7)18 (30.0)16 (26.7)18 (30.0)23 (37.7)51 (82.3) Headache7 (11.9)6 (10.3)4 (6.7)7 (11.7)8 (13.3)7 (11.5)26 (41.9) Nasopharyngitis1 (1.7)08 (13.3)3 (5.0)4 (6.7)2 (3.3)17 (27.4) Chest discomfort2 (3.4)01 (1.7)002 (3.3)5 (8.1) Wheezing1 (1.7)2 (3.4)1 (1.7)01 (1.7)04 (6.5) Oropharyngeal pain01 (1.7)01 (1.7)03 (4.9)4 (6.5) Cough2 (3.4)001 (1.7)003 (4.8)TEAE, treatment-emergent adverse event.Table 5\nChange in clinical safety parameters from pre-dose values following treatment: safety population\nPlacebo N = 59Salbutamol 400 μg N = 58Abediterol 0.313 μg N = 60Abediterol 0.625 μg N = 60Abediterol 1.25 μg N = 60Abediterol 2.5 μg N = 61Serum glucose (mmol/L) 4 hours post-dose-0.37 (0.56)-0.24 (0.55)-0.19 (0.71)-0.27 (0.70)-0.16 (0.94)-0.23 (0.74) 24 hours post-dose0.01 (0.38)0.00 (0.33)0.13 (0.51)0.06 (0.62)0.11 (0.45)0.15 (0.59)Serum potassium (mmol/L) 4 hours post-dose-0.01 (0.36)-0.09 (0.38)-0.01 (0.35)0.10 (0.37)0.02 (0.36)-0.04 (0.35) 24 hours post-dose-0.02 (0.37)0.04 (0.34)0.05 (0.33)0.15 (0.32)0.04 (0.28)0.05 (0.38)Heart rate (bpm) 0.75 h post-dose-0.6 (9.1)2.0 (10.1)-0.7 (6.1)-0.8 (7.5)-1.5 (9.0)-0.8 (6.2) 2 h post-dose-3.4 (8.7)-1.0 (8.0)-2.0 (6.6)-3.4 (8.1)-4.3 (8.4)-3.2 (6.7) 6 h post-dose5.1 (10.2)7.2 (7.7)6.2 (7.1)6.2 (8.0)6.4 (9.8)5.3 (7.2) 24 h post-dose-2.1 (9.8)-1.9 (8.0)-1.6 (6.7)-0.8 (8.4)-0.9 (8.7)-0.6 (7.6)Data are mean (standard deviation).bpm, beats per minute.\n\nBODY.DISCUSSION:\nIn this study, abediterol 0.313, 0.625, 1.25, and 2.5 μg provided clinically and statistically significant improvements in peak bronchodilation versus placebo in patients with persistent, stable asthma. Abediterol doses ≥0.625 μg also caused bronchodilation compatible with once a day administration, as improvements in trough FEV1 >250 mL compared with placebo were observed. The peak bronchodilatory effect of abediterol was similar to that of salbutamol 400 μg for all doses ≥0.625 μg. A mean increase in peak FEV1 >200 mL is considered clinically relevant in asthma [1]. This magnitude of change was exceeded for all doses of abediterol, including the lowest dose, 0.313 μg. The primary objective of this study was to define abediterol doses that caused peak bronchodilation similar to salbutamol, the most widely prescribed SABA [21]. High doses of β2-agonists may increase the degree of bronchodilation but also can cause excessive systemic side effects through systemic absorption [22]. 
However, the current study demonstrates that the doses of abediterol from 0.625 to 2.5 μg induce acute bronchodilation similar to salbutamol, and therefore treatment with higher doses may not be warranted. Significant bronchodilation compared with placebo was achieved with a range of abediterol doses from 0.313 to 2.5 μg, consistent with the high potency of abediterol demonstrated in preclinical studies that suggested a potential for significant bronchodilation in vivo[7]. Abediterol has higher potency than formoterol, salmeterol, indacaterol, olodaterol, and vilanterol in anesthetized guinea pigs [7, 10] as well as higher potency than salmeterol and indacaterol with comparable potency to formoterol in isolated human bronchi [7]. Based on the results of this preliminary dose-ranging study and the guidelines published by Chowdhury et al. [16], the clinically effective dose in asthma is considered most likely to be in the range of 0.625–2.5 μg. Significant bronchodilation compared with placebo was observed for salbutamol and all doses of abediterol at the first-assessed time point (15 min) post-dose; this indicates a rapid onset of action for both salbutamol and abediterol. Salmeterol and formoterol, both twice-daily LABAs, are widely used as maintenance therapy in COPD and asthma due to their longer durations of action compared with salbutamol [6, 18]. In clinical studies in patients with asthma, the onset of action of formoterol 24 μg is consistently faster than that of salmeterol 50 μg, with a median onset of action of 3.6 min compared with 31.0 min in one study [23, 24]. In vitro experiments in isolated human bronchi suggest that the onset of action of abediterol is faster than salmeterol and is not significantly different from formoterol and indacaterol [7]. The prolonged duration of action of abediterol, with significant improvements in trough FEV1, normalized FEV1 AUC0–24, and trough FVC relative to placebo, and significant bronchodilation maintained up to 36 h post-dose is consistent with the results of previous clinical studies in healthy subjects and patients with COPD or asthma [11, 13, 25]. For example, a recent Phase II clinical study in patients with COPD demonstrated that abediterol 2.5, 5, and 10 μg achieved >200 mL increases in FEV1 at 24 h, and that this bronchodilatory effect was greater than that of indacaterol 150 μg, and the duration of action was similar [25]. Taken together, these results are compatible with a once-daily dosing regimen in asthma and COPD. The convenience of once-daily dosing is likely to improve compliance in patients with asthma, leading to better patient outcomes [26, 27], as shown in various studies, including that of Wells et al. whereby once-daily ICS led to improved compliance in patients with asthma compared with ICS administered multiple times daily [28]. LABA monotherapy treatment is contra-indicated in asthma; co-administration with an ICS is mandated, preferably as a fixed-dose combination to reduce the risk of monotherapy administration. Development of once-daily LABAs for use in fixed-dose combinations with once-daily ICS is important for simplifying treatment regimens and, therefore, helping improve patient compliance and general asthma management. The safety and tolerability profile of abediterol in this study is comparable to that reported in other studies in the abediterol development program [11–13, 25]. 
Single doses of abediterol 0.313–2.5 μg were safe and well tolerated, and the TEAEs were consistent with the known safety profile of β2-agonists.\n\nBODY.CONCLUSION:\nAbediterol 0.625–2.5 μg demonstrated a peak bronchodilatory response similar to that of a high dose of salbutamol (400 μg) at very low doses, and showed a clear dose–response relationship. Abediterol is a promising new, highly potent, LABA with a sustained duration of action and a favorable safety and tolerability profile. These results suggest that further studies are warranted to further evaluate the efficacy and safety of abediterol in combination with ICS in both asthma and COPD.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nAdditional file 1:\nIndependent Ethics Committees (IEC).\n(DOCX 17 KB)\n\n**Question:** Compared to salbutamol what was the result of Abediterol on peak FEV1?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
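The "no significant difference" label for the peak FEV1 comparison above can be read off Table 3 of the abediterol article: for the 0.625–2.5 μg doses versus salbutamol 400 μg, the 95% confidence intervals of the least-squares mean differences all span zero, while the 0.313 μg dose sits significantly below salbutamol. The short sketch below rebuilds approximate intervals from the reported point estimates and standard errors using a plain normal quantile; the trial itself used ANCOVA-based intervals, so the bounds differ slightly from the published ones, and the dictionary layout and function name are illustrative only.

```python
from statistics import NormalDist

def approx_ci(estimate: float, se: float, level: float = 0.95) -> tuple[float, float]:
    """Normal-approximation confidence interval for a reported mean difference."""
    z = NormalDist().inv_cdf(0.5 + level / 2)
    return estimate - z * se, estimate + z * se

# Least-squares mean differences vs. salbutamol 400 ug in peak FEV1 (L), with
# standard errors, as reported in Table 3 of the abediterol article.
vs_salbutamol = {
    "abediterol 2.5 ug":   (0.0529, 0.0270),
    "abediterol 1.25 ug":  (0.0181, 0.0269),
    "abediterol 0.625 ug": (-0.0308, 0.0269),
    "abediterol 0.313 ug": (-0.0783, 0.0269),
}
for dose, (est, se) in vs_salbutamol.items():
    lo, hi = approx_ci(est, se)
    print(f"{dose}: {est:+.4f} L, ~95% CI ({lo:+.4f}, {hi:+.4f}), includes zero: {lo < 0 < hi}")
```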
913
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: The efficacy of resiniferatoxin in prevention of catheter related bladder discomfort in patients after TURP - a pilot, randomized, open study\n\n ABSTRACT.BACKGROUND:\nResiniferatoxin (RTX) has been shown to have variable efficacy in the treatment of intractable lower urinary tract symptoms and symptoms associated with neurogenic detrusor overactivity. It has also been used successfully in treating overactive bladder (OAB). In order to evaluate the potential and efficiency of RTX in treatment of catheter related bladder discomfort (CRBD) over post-operative period of transurethral resection of the prostate (TURP), we conducted the current pilot, randomized open study.\n\nABSTRACT.METHODS:\nThe study was comprised of 48 patients undergoing TURP in a single institute from September 2007 to September 2008. Patients were randomly divided into Group-RTX and Group-control. CRBD was classified into 4 degrees: degree 0, 1, 2, and 3. Patients with the most severed CRBD (degree-3) were treated with tolterodine.\n\nABSTRACT.RESULTS:\nThere were no statistical difference of patients' age, prostate volume, IPSS score, storage score and void score in IPSS, and QOL score between the two study groups(P=0.213, 0.356, 0.471, 0.554, 0.055 and 0.380, respectively). RTX pre-treatment reduced incidence of CRBD. 25% of the patients in RTX group had CRBD, at 6 hours/1 day and 3 days after TURP, significantly lower than the control group in which 75% of the patients during the same period (P=0.001). During the three days post-operative period, RTX also reduced severity of CRBD, compared with the control group. No patient in Group RTX sufferred CRBD of degree 3, a clear contrast to the control group in which 7 patients had degree 3 CRBD, during three days post-operative. Finally, less patients in RTX group required tolterodine, compared with control, P=0.009. Interestingly, the difference of CRBD incidence between Group RTX (50%) and control (75%) diminished 5 days after TURP, P=0.135.\n\nABSTRACT.CONCLUSIONS:\nPretreatment with intravesical resiniferatoxin significantly reduces the incidence and severity of catheter related bladder discomfort in patients after TURP.\n\nBODY.INTRODUCTION:\nPatients of benign prostatic hyperplasia undergoing transurethral resection of the prostate (TUR-P) often have urinary bladder catheterized. The catheter is frequently left in situ for the purpose of postoperative bladder drainage. As results of catheterization and lesions in proximal part of the urethra or bladder after TUR-P, patients often complain of bladder discomfort (an urge to void or discomfort in the suprapubic region) due to irritation from the catheterization over postoperative period. Severe symptoms of catheter related bladder discomfort (CRBD) are suprapubic pain and problems with outflow from the urethral catheter, conditions so-called bladder spasm. 
It is a result of lesions in the proximal part of the urethra or bladder due to the presence of indwelling urethral catheter (1). These symptoms are similar to symptoms of overactive bladder (OAB) (2). Muscarinic receptor antagonists oxybutynin and tolterodine which have been used successfully in the management of overactive bladder were also reported to be able to reduce catheter related bladder discomfort (3-7). According to the previous reports, these antagonists can reduce the incidence and severity of bladder discomfort in about half of the patients. However, the adverse events of muscarinic receptor antagonists (such as dry mouth, constipation, dysuria, and dyspepsia) have prevented them from being used as a routine management for CRBD (8-11). A number of investigators have tried other intervention methods, such as using intravesical diamorphine, oral opioids, or i.v. ketamine, to prevent CRBD (5,12). All of them reduce the incidence and severity of CRBD to some degree. However, the anaethetic nature of these drugs rendered them being less popular. As a potent capsaicin analogue, resiniferatoxin (RTX) has the capacity to excite and then to desensitize type C primary afferent fibers, which in turn leads to desensitization of bladder (13-16). RTX has been shown to have variable efficacy in the treatment of intractable lower urinary tract symptoms (LUTS) and symptoms associated with neurogenic detrusor overactivity (17,18). It has also been used successfully in treating OAB (14-19). However, RTX's efficacy on CRBD remains unknown. In order to evaluate the potential and efficiency of RTX in treatment of CRBD over post-operative period of TURP, we conducted the current pilot, randomized open study.\n\nBODY.METHODS.PATIENTS AND TREATMENT:\nApproval was received from the Institute's ethics committee and written informed consent was obtained from all the patients. The study was designed to be a pilot, randomized, open investigation. The study consisted of 48 consecutive patients who had undergone standard monopolar TURP at our institute from September 2007 to September 2008. Patients' prostate volume was measured by transrectal ultrasound (TRUS). Patients were also identified as bladder outlet obstruction according to results of urodynamics. Eligible patients were randomly divided. Twenty-four patients in RTX group (Group I) received, intravesically, 40 mL of 20 nM RTX (Hainan Newway Pharmaceutical Co., LTD, P.R. China) 20 hours before operation. Patients were asked to hold the drug for 30 minutes. Before introducing RTX, patients received intravesical instillation of 20 mL 2% lidocaine solution and 10 mL 0.75% bupivacaine solution, in order to prevent the temporary pain of bladder and urethra (20). Twenty-four patients in control Group (Group II) received no treatment before operation. After TUR-P, patients were indwelled with a three-way 22 Fr Foley catheter with 50 mL balloon, which was discharged 5 days after operation. All patients' age, prostates volume, IPSS (International Prostate Symptom Score), Storage Score in IPSS, Void Score in IPSS, and quality of life (QOL) score were collected. 
In Group I, the average age was 72+8.33 years (range from 56 to 88 years), the average prostates volume 64+19.06 mL (range from 32 to 92 mL), the average international prostate symptom score (IPSS) 22.7+3.16 (range from 16 to 28), the average Storage Score in IPSS 13.1+2.66 (range from 8 to 17), the average Void Score in IPSS 9.6+1.88 (range from 5 to 13), the average QOL score 4.2+0.88 (range from 2 to 6). In Group II, the average age was 69+6.93 years (range from 54 to 86 years), prostates volume 69+17.46 mL (range from 34 to 97 mL), IPSS 23.5+3.57 (range from 16 to 31), Storage Score in IPSS 12.6+2.66 (range from 8 to 17), Void Score in IPSS 10.7+2.21 (range from 6 to 14) QOL score 4.4+1.06 (range from 2 to 6).\n\nBODY.METHODS.ASSESSMENT OF CRBD:\nCatheter related bladder discomfort (CRBD) was assessed by the investigators at 6 hours, 1, 3 and 5 days after TURP. Severity of bladder discomfort was recorded as degree 0 (did not complain of any bladder discomfort), degree 1 (mild, reported by the patient only on questioning and does not need treatment), degree 2 (moderate, reported by the patient on their own, such as an urge to pass urine or discomfort in the suprapubic region, and does not need treatment), and degree 3 (severe, bladder spasm, reported by the patient without questioning, accompanied by behavioral responses) (4,5). All patients of degree 3 received tolterodine.\n\nBODY.METHODS.STATISTICS:\nThe Independent samples t test and Fisher's exact test were used for statistical analyses of the different age, prostates volume, IPSS score and severity of CRBD between the Group I and Group II. SPSS 16.0 software was used to perform all statistical computations.\n\nBODY.RESULTS.GENERAL INFORMATION OF PATIENTS:\nThere was no statistical difference between Group I and Group II in patients' age, prostates volume, IPSS score, Storage Score in IPSS, Void Score in IPSS, and QOL, P values were 0.213, 0.356, 0.471, 0.554, 0.055, and 0.380, respectively.\n\nBODY.RESULTS.EFFECT OF RTX ON CRBD:\nDuring the first three days of post-operative period, the incidence of patients' CRBD was lower in Group I than that in Group II (25% vs. 75%, P=0.001, Fisher's exact test). On the fifth day after surgery, although the incidence of CRBD was still lower in Group I than that in Group II (50% vs. 75%), there was no significant difference between them (P=0.135, Fisher's exact test). The incidence and severity of patients' CRBD was detailed in Tables 1 and 2. Table 1 Incidence of CRBD in Group I and II after TURP Group 6 h* 1 d* 3 d* 5 d I II I II I II I II Incidence of CRBD 25% (6/24) 75% (18/24) 25% (6/24) 75% (18/24) 25% (6/24) 75% (18/24) 50% (12/24) 75% (18/24) * The asterisk indicates P<0.01 between Group I and Group II. Table 2 Severity of CRBD in Group I and II after TURP Group 6 h 1 d 3 d 5 d I II I II I II I II Degree 0 18 6 18 6 18 6 12 6 Degree 1 4 6 4 8 4 11 10 12 Degree 2 2 5 2 10 2 7 2 6 Degree 3 0 7* 0 0 0 0 0 0 *Received tolterodine. No patient in Group-1 required muscarinic receptor antagonist (tolterodine). However, seven patients with degree 3 CRBD in Group-II received tolterodine during the first day after operation. Degree 3 symptoms in all these patients in Group II were controlled, and the degree of CRBD in these patients were reduced to lower grade after the treatment in days to follow. The number of patients who received tolterodine was statistically different between the two groups, P=0.009. Table 2 summarizes the details of severity of patients' CRBD in Group I and II. 
The intravesical instillation RTX caused temporal pain of bladder and urethra which were relieved without treatment. No other obvious adverse events were otherwise observed.\n\nBODY.DISCUSSION:\nGenerally, the incidence of CRBD caused by catheter related bladder irritation during the postoperative period was about 55% (4,5). The incidence was much higher in patients after TUR-P (1). CRBD, especially bladder spasm not only results in severe suffering of the patients but also increase the chance of post-operative bleeding, clot forming, and may lead to a re-operation after TUR-P. Muscarinic receptor antagonists and anesthesia agents, such as oxybutynin, tolterodine, and ketamine have been used successfully for the management of CRBD (6,9-12). However, adverse event of these drugs, such as dry mouth, constipation, dyspepsia, dysuria, and urinary retention limited the use of these drugs post TUR-P (8-11). The post-TURP condition is therefore requires more efficient and safer management. In a reflex response, afferent input determines efferent output. Any approach which blocks the conduction of afferent nerve fibers may also be used to control CRBD. As a C fiber neurotoxin, resiniferatoxin (RTX) is an extract from the latex of Euphorbia resinifera. It is an ultrapotent capsaicin analog, and has a significant advantage of being less irritable. It binds to transient receptor potential vanilloid 1 (TRPV1), previously known as vanilloid receptor type 1 (VR1) (13,14,21). It has the capacity to excite and then to desensitize type C primary afferent fibers. Following the TRPV1 opening, the excitation leads to spike-like calcium currents. The transient high levels of intracellular calcium and other ions cause neuropeptide release (such as substance P) (22,23). The temporary bladder pain patients experienced during RTX treatment is known to be the result of detrusor contractions. In C afferent fibers, activation of bioelectricity will eventually stop and thus block bladder sensory transmission. The desensitization mechanisms by resiniferatoxin include: decreases in sensory input conveyed in bladder C-fibers, and decreases in the number of spinal neurons that become excited by bladder stimuli (24). There have been reported that RTX can be used to treat detrusor instability, BPH associated storage lower urinary tract symptoms, and overactive bladder (25). Thus, we hypothesized that intravesical administration of RTX may reduce the incidence and severity of CRBD after TUR-P. According the results of our study, the pre-operative administration of RTX intravesically could reduce both incidence and severity of CRBD during the first 3 days of post-operative period. The need for using antimuscarinic drugs was also reduced. In Group I, no patients had CRBD of degree 3. RTX was effective for detrusor instability, detrusor hyperreflexia, and alleviating bladder pain. In the US, most patients have indwelling catheter only 1-2 days after TURP. However, it may be companied severe lower urinary tract syndrome (LUTS). So, most patients usually indwelling catheter 3-5 days in China. Our result suggested that patients of RTX Group may not need catheter for 5 days either. However, patients will experience phasic detrusor contractions shortly after the intravesical instillation RTX. The detrusor contractions result in the patients having urge sensation to urinate. In addition, patients felt itching or warm in the lower abdomen during the initial period of RTX administration (26,27). 
All of these temporary discomforts can be prevented by intravesical administration lidocaine and bupivacaine solution. There was no patient suffering from severe adverse events in our study. In addition, in order to avoid intravesical instillation RTX related bladder discomfort overlapping with CRBD post-operation, we instilled RTX 20 hours before operation. Compared with instilling RTX half an hour before operation after patient received spinal anesthesia which we used in our preliminary study, the new procedure has the advantage of minimum waiting time prior and during anaesthesia, thus allowing less time to be spent in the operating theatre. On the other hand, the temporary discomfort related to RTX will have disappeared before the start of the operation. The 20-hour pre-operative period is therefore long enough for the temporary discomfort by RTX to diminish. The lowest dose to decrease bladder sensitivity and bladder pain was 10 nM. With an increase in dosage, significant improvement in symptoms and prolonged duration of improvement were obtained. In our study, with a dosage of RTX 20 nM, we achieved a significant reduction of incidence and severity of CRBD. The maximum effect was noted during the first three post-operative days. On the fifth day post-operative, the advantage of Group I over that of Group II no longer exists. Probably, with the recovery from lesions in proximal part of the urethra or bladder, the longer duration after TUR-P the more tolerable of patients become to indwelling urethral catheter. In additional, a portion of the patients in Group II had used tolterodine which would also reduce the incidence and severity of CRBD. Finally, we view that the dosage of RTX used in the present study was relatively low. A higher and safe dosage of RTX may reduce incidence and severity of CRBD more significantly and last longer time. On the other hand, it is proposed that intravesical instillation of RTX again on the third day after operation may be more effective for CRBD, and may be a useful adjunct for lower urinary tract symptoms post TUR-P. Clearly, more studies are necessary in testing different dosages and repeated intravesical instillation RTX.\n\nBODY.CONCLUSIONS:\nIn our pilot randomized open study, a single intravesical administration of 40 mL of 20 nM RTX before TUR-P reduced both incidence and severity of post-operative CRBD via desensitization of bladder. The procedure has no contraindication and does not have other adverse event caused by RTX. The temporary pain of bladder and urethra can be prevented by analgesia with intravesical instillation of lidocaine and bupivacaine solution. Therefore, the procedure can be considered as a safe pretreatment option for decreasing CRBD in patients after TUR-P. Whether using a higher dose of RTX or repeated intravesical instillation RTX may reduce incidence and severity of CRBD more significantly require further investigation.\n\n**Question:** Compared to no treatment before operation. what was the result of received, intravesically, 40 mL of 20 nM Resiniferatoxin 20 hours before operation. on catheter related bladder discomfort During the first three days of post-operative period?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly decreased\n" } ]
significantly decreased
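The Fisher's exact test values quoted in the record above (25% vs. 75% CRBD incidence over the first three days, P=0.001; 50% vs. 75% on day 5, P=0.135; tolterodine use, P=0.009) can be re-derived from the raw counts in Tables 1 and 2. A minimal sketch in Python, assuming SciPy is available; the script is illustrative and not part of the dataset:

```python
# Re-run the Fisher's exact tests from the 2x2 counts given in the record above
# (24 patients per group; counts taken from Tables 1 and 2).
from scipy.stats import fisher_exact

n = 24  # patients per group

# CRBD at 6 h / 1 day / 3 days: 6/24 in the RTX group vs. 18/24 in the control group
_, p_early = fisher_exact([[6, n - 6], [18, n - 18]])
print(f"first three days: p = {p_early:.3f}")  # ~0.001, matching the reported value

# CRBD on day 5: 12/24 vs. 18/24
_, p_day5 = fisher_exact([[12, n - 12], [18, n - 18]])
print(f"day 5:            p = {p_day5:.3f}")   # > 0.05, so no longer significant

# Tolterodine use for degree-3 CRBD: 0/24 vs. 7/24
_, p_tolt = fisher_exact([[0, n], [7, n - 7]])
print(f"tolterodine use:  p = {p_tolt:.3f}")   # ~0.009, matching the reported value
```

Rebuilding the tests from the raw counts in this way is a convenient sanity check when assigning or auditing gold labels for significance-classification rows like these.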
914
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: The efficacy of resiniferatoxin in prevention of catheter related bladder discomfort in patients after TURP - a pilot, randomized, open study\n\n ABSTRACT.BACKGROUND:\nResiniferatoxin (RTX) has been shown to have variable efficacy in the treatment of intractable lower urinary tract symptoms and symptoms associated with neurogenic detrusor overactivity. It has also been used successfully in treating overactive bladder (OAB). In order to evaluate the potential and efficiency of RTX in treatment of catheter related bladder discomfort (CRBD) over post-operative period of transurethral resection of the prostate (TURP), we conducted the current pilot, randomized open study.\n\nABSTRACT.METHODS:\nThe study was comprised of 48 patients undergoing TURP in a single institute from September 2007 to September 2008. Patients were randomly divided into Group-RTX and Group-control. CRBD was classified into 4 degrees: degree 0, 1, 2, and 3. Patients with the most severed CRBD (degree-3) were treated with tolterodine.\n\nABSTRACT.RESULTS:\nThere were no statistical difference of patients' age, prostate volume, IPSS score, storage score and void score in IPSS, and QOL score between the two study groups(P=0.213, 0.356, 0.471, 0.554, 0.055 and 0.380, respectively). RTX pre-treatment reduced incidence of CRBD. 25% of the patients in RTX group had CRBD, at 6 hours/1 day and 3 days after TURP, significantly lower than the control group in which 75% of the patients during the same period (P=0.001). During the three days post-operative period, RTX also reduced severity of CRBD, compared with the control group. No patient in Group RTX sufferred CRBD of degree 3, a clear contrast to the control group in which 7 patients had degree 3 CRBD, during three days post-operative. Finally, less patients in RTX group required tolterodine, compared with control, P=0.009. Interestingly, the difference of CRBD incidence between Group RTX (50%) and control (75%) diminished 5 days after TURP, P=0.135.\n\nABSTRACT.CONCLUSIONS:\nPretreatment with intravesical resiniferatoxin significantly reduces the incidence and severity of catheter related bladder discomfort in patients after TURP.\n\nBODY.INTRODUCTION:\nPatients of benign prostatic hyperplasia undergoing transurethral resection of the prostate (TUR-P) often have urinary bladder catheterized. The catheter is frequently left in situ for the purpose of postoperative bladder drainage. As results of catheterization and lesions in proximal part of the urethra or bladder after TUR-P, patients often complain of bladder discomfort (an urge to void or discomfort in the suprapubic region) due to irritation from the catheterization over postoperative period. Severe symptoms of catheter related bladder discomfort (CRBD) are suprapubic pain and problems with outflow from the urethral catheter, conditions so-called bladder spasm. 
It is a result of lesions in the proximal part of the urethra or bladder due to the presence of indwelling urethral catheter (1). These symptoms are similar to symptoms of overactive bladder (OAB) (2). Muscarinic receptor antagonists oxybutynin and tolterodine which have been used successfully in the management of overactive bladder were also reported to be able to reduce catheter related bladder discomfort (3-7). According to the previous reports, these antagonists can reduce the incidence and severity of bladder discomfort in about half of the patients. However, the adverse events of muscarinic receptor antagonists (such as dry mouth, constipation, dysuria, and dyspepsia) have prevented them from being used as a routine management for CRBD (8-11). A number of investigators have tried other intervention methods, such as using intravesical diamorphine, oral opioids, or i.v. ketamine, to prevent CRBD (5,12). All of them reduce the incidence and severity of CRBD to some degree. However, the anaethetic nature of these drugs rendered them being less popular. As a potent capsaicin analogue, resiniferatoxin (RTX) has the capacity to excite and then to desensitize type C primary afferent fibers, which in turn leads to desensitization of bladder (13-16). RTX has been shown to have variable efficacy in the treatment of intractable lower urinary tract symptoms (LUTS) and symptoms associated with neurogenic detrusor overactivity (17,18). It has also been used successfully in treating OAB (14-19). However, RTX's efficacy on CRBD remains unknown. In order to evaluate the potential and efficiency of RTX in treatment of CRBD over post-operative period of TURP, we conducted the current pilot, randomized open study.\n\nBODY.METHODS.PATIENTS AND TREATMENT:\nApproval was received from the Institute's ethics committee and written informed consent was obtained from all the patients. The study was designed to be a pilot, randomized, open investigation. The study consisted of 48 consecutive patients who had undergone standard monopolar TURP at our institute from September 2007 to September 2008. Patients' prostate volume was measured by transrectal ultrasound (TRUS). Patients were also identified as bladder outlet obstruction according to results of urodynamics. Eligible patients were randomly divided. Twenty-four patients in RTX group (Group I) received, intravesically, 40 mL of 20 nM RTX (Hainan Newway Pharmaceutical Co., LTD, P.R. China) 20 hours before operation. Patients were asked to hold the drug for 30 minutes. Before introducing RTX, patients received intravesical instillation of 20 mL 2% lidocaine solution and 10 mL 0.75% bupivacaine solution, in order to prevent the temporary pain of bladder and urethra (20). Twenty-four patients in control Group (Group II) received no treatment before operation. After TUR-P, patients were indwelled with a three-way 22 Fr Foley catheter with 50 mL balloon, which was discharged 5 days after operation. All patients' age, prostates volume, IPSS (International Prostate Symptom Score), Storage Score in IPSS, Void Score in IPSS, and quality of life (QOL) score were collected. 
In Group I, the average age was 72+8.33 years (range from 56 to 88 years), the average prostates volume 64+19.06 mL (range from 32 to 92 mL), the average international prostate symptom score (IPSS) 22.7+3.16 (range from 16 to 28), the average Storage Score in IPSS 13.1+2.66 (range from 8 to 17), the average Void Score in IPSS 9.6+1.88 (range from 5 to 13), the average QOL score 4.2+0.88 (range from 2 to 6). In Group II, the average age was 69+6.93 years (range from 54 to 86 years), prostates volume 69+17.46 mL (range from 34 to 97 mL), IPSS 23.5+3.57 (range from 16 to 31), Storage Score in IPSS 12.6+2.66 (range from 8 to 17), Void Score in IPSS 10.7+2.21 (range from 6 to 14) QOL score 4.4+1.06 (range from 2 to 6).\n\nBODY.METHODS.ASSESSMENT OF CRBD:\nCatheter related bladder discomfort (CRBD) was assessed by the investigators at 6 hours, 1, 3 and 5 days after TURP. Severity of bladder discomfort was recorded as degree 0 (did not complain of any bladder discomfort), degree 1 (mild, reported by the patient only on questioning and does not need treatment), degree 2 (moderate, reported by the patient on their own, such as an urge to pass urine or discomfort in the suprapubic region, and does not need treatment), and degree 3 (severe, bladder spasm, reported by the patient without questioning, accompanied by behavioral responses) (4,5). All patients of degree 3 received tolterodine.\n\nBODY.METHODS.STATISTICS:\nThe Independent samples t test and Fisher's exact test were used for statistical analyses of the different age, prostates volume, IPSS score and severity of CRBD between the Group I and Group II. SPSS 16.0 software was used to perform all statistical computations.\n\nBODY.RESULTS.GENERAL INFORMATION OF PATIENTS:\nThere was no statistical difference between Group I and Group II in patients' age, prostates volume, IPSS score, Storage Score in IPSS, Void Score in IPSS, and QOL, P values were 0.213, 0.356, 0.471, 0.554, 0.055, and 0.380, respectively.\n\nBODY.RESULTS.EFFECT OF RTX ON CRBD:\nDuring the first three days of post-operative period, the incidence of patients' CRBD was lower in Group I than that in Group II (25% vs. 75%, P=0.001, Fisher's exact test). On the fifth day after surgery, although the incidence of CRBD was still lower in Group I than that in Group II (50% vs. 75%), there was no significant difference between them (P=0.135, Fisher's exact test). The incidence and severity of patients' CRBD was detailed in Tables 1 and 2. Table 1 Incidence of CRBD in Group I and II after TURP Group 6 h* 1 d* 3 d* 5 d I II I II I II I II Incidence of CRBD 25% (6/24) 75% (18/24) 25% (6/24) 75% (18/24) 25% (6/24) 75% (18/24) 50% (12/24) 75% (18/24) * The asterisk indicates P<0.01 between Group I and Group II. Table 2 Severity of CRBD in Group I and II after TURP Group 6 h 1 d 3 d 5 d I II I II I II I II Degree 0 18 6 18 6 18 6 12 6 Degree 1 4 6 4 8 4 11 10 12 Degree 2 2 5 2 10 2 7 2 6 Degree 3 0 7* 0 0 0 0 0 0 *Received tolterodine. No patient in Group-1 required muscarinic receptor antagonist (tolterodine). However, seven patients with degree 3 CRBD in Group-II received tolterodine during the first day after operation. Degree 3 symptoms in all these patients in Group II were controlled, and the degree of CRBD in these patients were reduced to lower grade after the treatment in days to follow. The number of patients who received tolterodine was statistically different between the two groups, P=0.009. Table 2 summarizes the details of severity of patients' CRBD in Group I and II. 
The intravesical instillation RTX caused temporal pain of bladder and urethra which were relieved without treatment. No other obvious adverse events were otherwise observed.\n\nBODY.DISCUSSION:\nGenerally, the incidence of CRBD caused by catheter related bladder irritation during the postoperative period was about 55% (4,5). The incidence was much higher in patients after TUR-P (1). CRBD, especially bladder spasm not only results in severe suffering of the patients but also increase the chance of post-operative bleeding, clot forming, and may lead to a re-operation after TUR-P. Muscarinic receptor antagonists and anesthesia agents, such as oxybutynin, tolterodine, and ketamine have been used successfully for the management of CRBD (6,9-12). However, adverse event of these drugs, such as dry mouth, constipation, dyspepsia, dysuria, and urinary retention limited the use of these drugs post TUR-P (8-11). The post-TURP condition is therefore requires more efficient and safer management. In a reflex response, afferent input determines efferent output. Any approach which blocks the conduction of afferent nerve fibers may also be used to control CRBD. As a C fiber neurotoxin, resiniferatoxin (RTX) is an extract from the latex of Euphorbia resinifera. It is an ultrapotent capsaicin analog, and has a significant advantage of being less irritable. It binds to transient receptor potential vanilloid 1 (TRPV1), previously known as vanilloid receptor type 1 (VR1) (13,14,21). It has the capacity to excite and then to desensitize type C primary afferent fibers. Following the TRPV1 opening, the excitation leads to spike-like calcium currents. The transient high levels of intracellular calcium and other ions cause neuropeptide release (such as substance P) (22,23). The temporary bladder pain patients experienced during RTX treatment is known to be the result of detrusor contractions. In C afferent fibers, activation of bioelectricity will eventually stop and thus block bladder sensory transmission. The desensitization mechanisms by resiniferatoxin include: decreases in sensory input conveyed in bladder C-fibers, and decreases in the number of spinal neurons that become excited by bladder stimuli (24). There have been reported that RTX can be used to treat detrusor instability, BPH associated storage lower urinary tract symptoms, and overactive bladder (25). Thus, we hypothesized that intravesical administration of RTX may reduce the incidence and severity of CRBD after TUR-P. According the results of our study, the pre-operative administration of RTX intravesically could reduce both incidence and severity of CRBD during the first 3 days of post-operative period. The need for using antimuscarinic drugs was also reduced. In Group I, no patients had CRBD of degree 3. RTX was effective for detrusor instability, detrusor hyperreflexia, and alleviating bladder pain. In the US, most patients have indwelling catheter only 1-2 days after TURP. However, it may be companied severe lower urinary tract syndrome (LUTS). So, most patients usually indwelling catheter 3-5 days in China. Our result suggested that patients of RTX Group may not need catheter for 5 days either. However, patients will experience phasic detrusor contractions shortly after the intravesical instillation RTX. The detrusor contractions result in the patients having urge sensation to urinate. In addition, patients felt itching or warm in the lower abdomen during the initial period of RTX administration (26,27). 
All of these temporary discomforts can be prevented by intravesical administration lidocaine and bupivacaine solution. There was no patient suffering from severe adverse events in our study. In addition, in order to avoid intravesical instillation RTX related bladder discomfort overlapping with CRBD post-operation, we instilled RTX 20 hours before operation. Compared with instilling RTX half an hour before operation after patient received spinal anesthesia which we used in our preliminary study, the new procedure has the advantage of minimum waiting time prior and during anaesthesia, thus allowing less time to be spent in the operating theatre. On the other hand, the temporary discomfort related to RTX will have disappeared before the start of the operation. The 20-hour pre-operative period is therefore long enough for the temporary discomfort by RTX to diminish. The lowest dose to decrease bladder sensitivity and bladder pain was 10 nM. With an increase in dosage, significant improvement in symptoms and prolonged duration of improvement were obtained. In our study, with a dosage of RTX 20 nM, we achieved a significant reduction of incidence and severity of CRBD. The maximum effect was noted during the first three post-operative days. On the fifth day post-operative, the advantage of Group I over that of Group II no longer exists. Probably, with the recovery from lesions in proximal part of the urethra or bladder, the longer duration after TUR-P the more tolerable of patients become to indwelling urethral catheter. In additional, a portion of the patients in Group II had used tolterodine which would also reduce the incidence and severity of CRBD. Finally, we view that the dosage of RTX used in the present study was relatively low. A higher and safe dosage of RTX may reduce incidence and severity of CRBD more significantly and last longer time. On the other hand, it is proposed that intravesical instillation of RTX again on the third day after operation may be more effective for CRBD, and may be a useful adjunct for lower urinary tract symptoms post TUR-P. Clearly, more studies are necessary in testing different dosages and repeated intravesical instillation RTX.\n\nBODY.CONCLUSIONS:\nIn our pilot randomized open study, a single intravesical administration of 40 mL of 20 nM RTX before TUR-P reduced both incidence and severity of post-operative CRBD via desensitization of bladder. The procedure has no contraindication and does not have other adverse event caused by RTX. The temporary pain of bladder and urethra can be prevented by analgesia with intravesical instillation of lidocaine and bupivacaine solution. Therefore, the procedure can be considered as a safe pretreatment option for decreasing CRBD in patients after TUR-P. Whether using a higher dose of RTX or repeated intravesical instillation RTX may reduce incidence and severity of CRBD more significantly require further investigation.\n\n**Question:** Compared to no treatment before operation. what was the result of received, intravesically, 40 mL of 20 nM Resiniferatoxin 20 hours before operation. on required tolterodine?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly decreased\n" } ]
significantly decreased
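The record above also reports the baseline comparisons (independent-samples t test) as non-significant. With only the rounded means and standard deviations printed in the record, the test can be approximated from summary statistics; a short sketch assuming SciPy is installed (the small differences from the published p-values come from the rounding, and the script is not part of the dataset):

```python
# Approximate the reported baseline t tests from the rounded summary statistics
# in the record above (mean, SD, n = 24 per group).
from scipy.stats import ttest_ind_from_stats

n = 24

# Age: 72 +/- 8.33 (Group I) vs. 69 +/- 6.93 (Group II); reported p = 0.213
t, p = ttest_ind_from_stats(72, 8.33, n, 69, 6.93, n, equal_var=True)
print(f"age:             t = {t:5.2f}, p = {p:.2f}")

# Prostate volume: 64 +/- 19.06 vs. 69 +/- 17.46; reported p = 0.356
t, p = ttest_ind_from_stats(64, 19.06, n, 69, 17.46, n, equal_var=True)
print(f"prostate volume: t = {t:5.2f}, p = {p:.2f}")

# Both values stay well above 0.05, consistent with the reported baseline balance.
```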
915
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: The efficacy of resiniferatoxin in prevention of catheter related bladder discomfort in patients after TURP - a pilot, randomized, open study\n\n ABSTRACT.BACKGROUND:\nResiniferatoxin (RTX) has been shown to have variable efficacy in the treatment of intractable lower urinary tract symptoms and symptoms associated with neurogenic detrusor overactivity. It has also been used successfully in treating overactive bladder (OAB). In order to evaluate the potential and efficiency of RTX in treatment of catheter related bladder discomfort (CRBD) over post-operative period of transurethral resection of the prostate (TURP), we conducted the current pilot, randomized open study.\n\nABSTRACT.METHODS:\nThe study was comprised of 48 patients undergoing TURP in a single institute from September 2007 to September 2008. Patients were randomly divided into Group-RTX and Group-control. CRBD was classified into 4 degrees: degree 0, 1, 2, and 3. Patients with the most severed CRBD (degree-3) were treated with tolterodine.\n\nABSTRACT.RESULTS:\nThere were no statistical difference of patients' age, prostate volume, IPSS score, storage score and void score in IPSS, and QOL score between the two study groups(P=0.213, 0.356, 0.471, 0.554, 0.055 and 0.380, respectively). RTX pre-treatment reduced incidence of CRBD. 25% of the patients in RTX group had CRBD, at 6 hours/1 day and 3 days after TURP, significantly lower than the control group in which 75% of the patients during the same period (P=0.001). During the three days post-operative period, RTX also reduced severity of CRBD, compared with the control group. No patient in Group RTX sufferred CRBD of degree 3, a clear contrast to the control group in which 7 patients had degree 3 CRBD, during three days post-operative. Finally, less patients in RTX group required tolterodine, compared with control, P=0.009. Interestingly, the difference of CRBD incidence between Group RTX (50%) and control (75%) diminished 5 days after TURP, P=0.135.\n\nABSTRACT.CONCLUSIONS:\nPretreatment with intravesical resiniferatoxin significantly reduces the incidence and severity of catheter related bladder discomfort in patients after TURP.\n\nBODY.INTRODUCTION:\nPatients of benign prostatic hyperplasia undergoing transurethral resection of the prostate (TUR-P) often have urinary bladder catheterized. The catheter is frequently left in situ for the purpose of postoperative bladder drainage. As results of catheterization and lesions in proximal part of the urethra or bladder after TUR-P, patients often complain of bladder discomfort (an urge to void or discomfort in the suprapubic region) due to irritation from the catheterization over postoperative period. Severe symptoms of catheter related bladder discomfort (CRBD) are suprapubic pain and problems with outflow from the urethral catheter, conditions so-called bladder spasm. 
It is a result of lesions in the proximal part of the urethra or bladder due to the presence of indwelling urethral catheter (1). These symptoms are similar to symptoms of overactive bladder (OAB) (2). Muscarinic receptor antagonists oxybutynin and tolterodine which have been used successfully in the management of overactive bladder were also reported to be able to reduce catheter related bladder discomfort (3-7). According to the previous reports, these antagonists can reduce the incidence and severity of bladder discomfort in about half of the patients. However, the adverse events of muscarinic receptor antagonists (such as dry mouth, constipation, dysuria, and dyspepsia) have prevented them from being used as a routine management for CRBD (8-11). A number of investigators have tried other intervention methods, such as using intravesical diamorphine, oral opioids, or i.v. ketamine, to prevent CRBD (5,12). All of them reduce the incidence and severity of CRBD to some degree. However, the anaethetic nature of these drugs rendered them being less popular. As a potent capsaicin analogue, resiniferatoxin (RTX) has the capacity to excite and then to desensitize type C primary afferent fibers, which in turn leads to desensitization of bladder (13-16). RTX has been shown to have variable efficacy in the treatment of intractable lower urinary tract symptoms (LUTS) and symptoms associated with neurogenic detrusor overactivity (17,18). It has also been used successfully in treating OAB (14-19). However, RTX's efficacy on CRBD remains unknown. In order to evaluate the potential and efficiency of RTX in treatment of CRBD over post-operative period of TURP, we conducted the current pilot, randomized open study.\n\nBODY.METHODS.PATIENTS AND TREATMENT:\nApproval was received from the Institute's ethics committee and written informed consent was obtained from all the patients. The study was designed to be a pilot, randomized, open investigation. The study consisted of 48 consecutive patients who had undergone standard monopolar TURP at our institute from September 2007 to September 2008. Patients' prostate volume was measured by transrectal ultrasound (TRUS). Patients were also identified as bladder outlet obstruction according to results of urodynamics. Eligible patients were randomly divided. Twenty-four patients in RTX group (Group I) received, intravesically, 40 mL of 20 nM RTX (Hainan Newway Pharmaceutical Co., LTD, P.R. China) 20 hours before operation. Patients were asked to hold the drug for 30 minutes. Before introducing RTX, patients received intravesical instillation of 20 mL 2% lidocaine solution and 10 mL 0.75% bupivacaine solution, in order to prevent the temporary pain of bladder and urethra (20). Twenty-four patients in control Group (Group II) received no treatment before operation. After TUR-P, patients were indwelled with a three-way 22 Fr Foley catheter with 50 mL balloon, which was discharged 5 days after operation. All patients' age, prostates volume, IPSS (International Prostate Symptom Score), Storage Score in IPSS, Void Score in IPSS, and quality of life (QOL) score were collected. 
In Group I, the average age was 72+8.33 years (range from 56 to 88 years), the average prostates volume 64+19.06 mL (range from 32 to 92 mL), the average international prostate symptom score (IPSS) 22.7+3.16 (range from 16 to 28), the average Storage Score in IPSS 13.1+2.66 (range from 8 to 17), the average Void Score in IPSS 9.6+1.88 (range from 5 to 13), the average QOL score 4.2+0.88 (range from 2 to 6). In Group II, the average age was 69+6.93 years (range from 54 to 86 years), prostates volume 69+17.46 mL (range from 34 to 97 mL), IPSS 23.5+3.57 (range from 16 to 31), Storage Score in IPSS 12.6+2.66 (range from 8 to 17), Void Score in IPSS 10.7+2.21 (range from 6 to 14) QOL score 4.4+1.06 (range from 2 to 6).\n\nBODY.METHODS.ASSESSMENT OF CRBD:\nCatheter related bladder discomfort (CRBD) was assessed by the investigators at 6 hours, 1, 3 and 5 days after TURP. Severity of bladder discomfort was recorded as degree 0 (did not complain of any bladder discomfort), degree 1 (mild, reported by the patient only on questioning and does not need treatment), degree 2 (moderate, reported by the patient on their own, such as an urge to pass urine or discomfort in the suprapubic region, and does not need treatment), and degree 3 (severe, bladder spasm, reported by the patient without questioning, accompanied by behavioral responses) (4,5). All patients of degree 3 received tolterodine.\n\nBODY.METHODS.STATISTICS:\nThe Independent samples t test and Fisher's exact test were used for statistical analyses of the different age, prostates volume, IPSS score and severity of CRBD between the Group I and Group II. SPSS 16.0 software was used to perform all statistical computations.\n\nBODY.RESULTS.GENERAL INFORMATION OF PATIENTS:\nThere was no statistical difference between Group I and Group II in patients' age, prostates volume, IPSS score, Storage Score in IPSS, Void Score in IPSS, and QOL, P values were 0.213, 0.356, 0.471, 0.554, 0.055, and 0.380, respectively.\n\nBODY.RESULTS.EFFECT OF RTX ON CRBD:\nDuring the first three days of post-operative period, the incidence of patients' CRBD was lower in Group I than that in Group II (25% vs. 75%, P=0.001, Fisher's exact test). On the fifth day after surgery, although the incidence of CRBD was still lower in Group I than that in Group II (50% vs. 75%), there was no significant difference between them (P=0.135, Fisher's exact test). The incidence and severity of patients' CRBD was detailed in Tables 1 and 2. Table 1 Incidence of CRBD in Group I and II after TURP Group 6 h* 1 d* 3 d* 5 d I II I II I II I II Incidence of CRBD 25% (6/24) 75% (18/24) 25% (6/24) 75% (18/24) 25% (6/24) 75% (18/24) 50% (12/24) 75% (18/24) * The asterisk indicates P<0.01 between Group I and Group II. Table 2 Severity of CRBD in Group I and II after TURP Group 6 h 1 d 3 d 5 d I II I II I II I II Degree 0 18 6 18 6 18 6 12 6 Degree 1 4 6 4 8 4 11 10 12 Degree 2 2 5 2 10 2 7 2 6 Degree 3 0 7* 0 0 0 0 0 0 *Received tolterodine. No patient in Group-1 required muscarinic receptor antagonist (tolterodine). However, seven patients with degree 3 CRBD in Group-II received tolterodine during the first day after operation. Degree 3 symptoms in all these patients in Group II were controlled, and the degree of CRBD in these patients were reduced to lower grade after the treatment in days to follow. The number of patients who received tolterodine was statistically different between the two groups, P=0.009. Table 2 summarizes the details of severity of patients' CRBD in Group I and II. 
The intravesical instillation RTX caused temporal pain of bladder and urethra which were relieved without treatment. No other obvious adverse events were otherwise observed.\n\nBODY.DISCUSSION:\nGenerally, the incidence of CRBD caused by catheter related bladder irritation during the postoperative period was about 55% (4,5). The incidence was much higher in patients after TUR-P (1). CRBD, especially bladder spasm not only results in severe suffering of the patients but also increase the chance of post-operative bleeding, clot forming, and may lead to a re-operation after TUR-P. Muscarinic receptor antagonists and anesthesia agents, such as oxybutynin, tolterodine, and ketamine have been used successfully for the management of CRBD (6,9-12). However, adverse event of these drugs, such as dry mouth, constipation, dyspepsia, dysuria, and urinary retention limited the use of these drugs post TUR-P (8-11). The post-TURP condition is therefore requires more efficient and safer management. In a reflex response, afferent input determines efferent output. Any approach which blocks the conduction of afferent nerve fibers may also be used to control CRBD. As a C fiber neurotoxin, resiniferatoxin (RTX) is an extract from the latex of Euphorbia resinifera. It is an ultrapotent capsaicin analog, and has a significant advantage of being less irritable. It binds to transient receptor potential vanilloid 1 (TRPV1), previously known as vanilloid receptor type 1 (VR1) (13,14,21). It has the capacity to excite and then to desensitize type C primary afferent fibers. Following the TRPV1 opening, the excitation leads to spike-like calcium currents. The transient high levels of intracellular calcium and other ions cause neuropeptide release (such as substance P) (22,23). The temporary bladder pain patients experienced during RTX treatment is known to be the result of detrusor contractions. In C afferent fibers, activation of bioelectricity will eventually stop and thus block bladder sensory transmission. The desensitization mechanisms by resiniferatoxin include: decreases in sensory input conveyed in bladder C-fibers, and decreases in the number of spinal neurons that become excited by bladder stimuli (24). There have been reported that RTX can be used to treat detrusor instability, BPH associated storage lower urinary tract symptoms, and overactive bladder (25). Thus, we hypothesized that intravesical administration of RTX may reduce the incidence and severity of CRBD after TUR-P. According the results of our study, the pre-operative administration of RTX intravesically could reduce both incidence and severity of CRBD during the first 3 days of post-operative period. The need for using antimuscarinic drugs was also reduced. In Group I, no patients had CRBD of degree 3. RTX was effective for detrusor instability, detrusor hyperreflexia, and alleviating bladder pain. In the US, most patients have indwelling catheter only 1-2 days after TURP. However, it may be companied severe lower urinary tract syndrome (LUTS). So, most patients usually indwelling catheter 3-5 days in China. Our result suggested that patients of RTX Group may not need catheter for 5 days either. However, patients will experience phasic detrusor contractions shortly after the intravesical instillation RTX. The detrusor contractions result in the patients having urge sensation to urinate. In addition, patients felt itching or warm in the lower abdomen during the initial period of RTX administration (26,27). 
All of these temporary discomforts can be prevented by intravesical administration lidocaine and bupivacaine solution. There was no patient suffering from severe adverse events in our study. In addition, in order to avoid intravesical instillation RTX related bladder discomfort overlapping with CRBD post-operation, we instilled RTX 20 hours before operation. Compared with instilling RTX half an hour before operation after patient received spinal anesthesia which we used in our preliminary study, the new procedure has the advantage of minimum waiting time prior and during anaesthesia, thus allowing less time to be spent in the operating theatre. On the other hand, the temporary discomfort related to RTX will have disappeared before the start of the operation. The 20-hour pre-operative period is therefore long enough for the temporary discomfort by RTX to diminish. The lowest dose to decrease bladder sensitivity and bladder pain was 10 nM. With an increase in dosage, significant improvement in symptoms and prolonged duration of improvement were obtained. In our study, with a dosage of RTX 20 nM, we achieved a significant reduction of incidence and severity of CRBD. The maximum effect was noted during the first three post-operative days. On the fifth day post-operative, the advantage of Group I over that of Group II no longer exists. Probably, with the recovery from lesions in proximal part of the urethra or bladder, the longer duration after TUR-P the more tolerable of patients become to indwelling urethral catheter. In additional, a portion of the patients in Group II had used tolterodine which would also reduce the incidence and severity of CRBD. Finally, we view that the dosage of RTX used in the present study was relatively low. A higher and safe dosage of RTX may reduce incidence and severity of CRBD more significantly and last longer time. On the other hand, it is proposed that intravesical instillation of RTX again on the third day after operation may be more effective for CRBD, and may be a useful adjunct for lower urinary tract symptoms post TUR-P. Clearly, more studies are necessary in testing different dosages and repeated intravesical instillation RTX.\n\nBODY.CONCLUSIONS:\nIn our pilot randomized open study, a single intravesical administration of 40 mL of 20 nM RTX before TUR-P reduced both incidence and severity of post-operative CRBD via desensitization of bladder. The procedure has no contraindication and does not have other adverse event caused by RTX. The temporary pain of bladder and urethra can be prevented by analgesia with intravesical instillation of lidocaine and bupivacaine solution. Therefore, the procedure can be considered as a safe pretreatment option for decreasing CRBD in patients after TUR-P. Whether using a higher dose of RTX or repeated intravesical instillation RTX may reduce incidence and severity of CRBD more significantly require further investigation.\n\n**Question:** Compared to no treatment before operation. what was the result of received, intravesically, 40 mL of 20 nM Resiniferatoxin 20 hours before operation. on catheter related bladder discomfort On the fifth day after surgery?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
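Note that the stored assistant completions and Label values in these rows use past-tense strings ("significantly decreased", "no significant difference"), while the prompt asks for "significantly increase", "no significant difference" or "significantly decrease". Scoring model outputs against this dump therefore needs a small normalization step. A minimal sketch, assuming each record has been parsed into a Python dict with "completion" and "Label" fields matching the dump's columns; the loading step and helper names are illustrative, not part of the dataset:

```python
def canon(text: str) -> str | None:
    """Collapse an answer such as '**Answer:** significantly decreased' onto one of
    the three classes named in the prompt, tolerating the past-tense variants used
    in the completion/Label fields."""
    t = text.lower()
    if "no significant difference" in t:
        return "no significant difference"
    if "significantly increase" in t:   # also matches "significantly increased"
        return "significantly increase"
    if "significantly decrease" in t:   # also matches "significantly decreased"
        return "significantly decrease"
    return None


def agreement(rows: list[dict]) -> float:
    """Fraction of rows whose stored completion agrees with the Label field."""
    hits = 0
    for row in rows:
        predicted = canon(row["completion"][0]["content"])
        gold = canon(row["Label"])
        hits += int(predicted is not None and predicted == gold)
    return hits / len(rows)
```

For the three complete rows above, `canon` maps the stored completions to "significantly decrease", "significantly decrease" and "no significant difference", which agree with their Label values.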
917
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Haptoglobin Phenotype, Preeclampsia Risk and the Efficacy of Vitamin C and E Supplementation to Prevent Preeclampsia in a Racially Diverse Population\n\n ABSTRACT:\nHaptoglobin's (Hp) antioxidant and pro-angiogenic properties differ between the 1-1, 2-1, and 2-2 phenotypes. Hp phenotype affects cardiovascular disease risk and treatment response to antioxidant vitamins in some non-pregnant populations. We previously demonstrated that preeclampsia risk was doubled in white Hp 2-1 women, compared to Hp 1-1 women. Our objectives were to determine whether we could reproduce this finding in a larger cohort, and to determine whether Hp phenotype influences lack of efficacy of antioxidant vitamins in preventing preeclampsia and serious complications of pregnancy-associated hypertension (PAH). This is a secondary analysis of a randomized controlled trial in which 10,154 low-risk women received daily vitamin C and E, or placebo, from 9-16 weeks gestation until delivery. Hp phenotype was determined in the study prediction cohort (n = 2,393) and a case-control cohort (703 cases, 1,406 controls). The primary outcome was severe PAH, or mild or severe PAH with elevated liver enzymes, elevated serum creatinine, thrombocytopenia, eclampsia, fetal growth restriction, medically indicated preterm birth or perinatal death. Preeclampsia was a secondary outcome. Odds ratios were estimated by logistic regression. Sampling weights were used to reduce bias from an overrepresentation of women with preeclampsia or the primary outcome. There was no relationship between Hp phenotype and the primary outcome or preeclampsia in Hispanic, white/other or black women. Vitamin supplementation did not reduce the risk of the primary outcome or preeclampsia in women of any phenotype. Supplementation increased preeclampsia risk (odds ratio 3.30; 95% confidence interval 1.61–6.82, p<0.01) in Hispanic Hp 2-2 women. Hp phenotype does not influence preeclampsia risk, or identify a subset of women who may benefit from vitamin C and E supplementation to prevent preeclampsia.\n\nBODY.INTRODUCTION:\nPreeclampsia affects 2–7% of pregnancies [1], [2]. Diagnosed as new onset hypertension and proteinuria after 20 weeks gestation [1], [2], preeclampsia is a leading cause of maternal [3] and fetal [4] morbidity and mortality. Oxidative stress (including low antioxidant levels) and angiogenic imbalance are believed to contribute to preeclampsia [5], [6], [7]. No intervention has been proven to prevent preeclampsia. Vitamin C and E lowered preeclampsia incidence by 60% in a pilot randomized controlled trial (RCT) in high-risk women [8]. Unfortunately, this effect was not confirmed in subsequent RCTs involving high [9], [10], [11], [12], [13] or low-risk women [13], [14], [15]. 
Although the discordant effects in large and small populations are likely due to reduced power in small studies, differences might also be explained by increased heterogeneity in large multicenter studies obscuring a responsive subset of women. The phenotype of the antioxidant [16] and pro-angiogenic [17] protein haptoglobin (Hp) predicts cardiovascular disease risk, and responsiveness to Vitamin E [18], [19], [20] or C and E [21] in some non-pregnant populations. We examined whether Hp phenotype might influence preeclampsia risk, or identify a subset of women who responded to vitamin supplementation. Hp stimulates angiogenesis [17] and acts as a powerful antioxidant by clearing free hemoglobin following hemolysis [16]. Hp's two different α alleles (α1, α2) give rise to three genetically determined phenotypes (1-1, 2-1 and 2-2) [16]. Size and structural differences (Figure 1) lead to functional differences between phenotypes [16]. Hp 1-1 is a stronger antioxidant [16], whereas Hp 2-2 is more angiogenic [17]. Less is known about the function of Hp 2-1, a highly variable heterodimer which is not commercially available. Hp 2-1 individuals have no Hp 2-2, very little Hp 1-1, and large amounts of the 2-1 forms which are structurally very different from Hp 1-1 or 2-2 [16]. 10.1371/journal.pone.0060479.g001Figure 1Hp phenotypes by Native and SDS PAGE.Panel A: Hp phenotyping of Hb-supplemented serum by Native PAGE on a 6% gel. Hp 2-2 (Lane 1) is a heterodimer with a series of slow moving bands. Hp 1-1 (Lane 4) is a homodimer with one fast moving band. Hp 2-1 (Lanes 2, 3, 5) has a band between the Hb/Hp 1-1 and Hb/Hp 2-2 bands, and several slow-moving bands in the same region as the Hb/Hp 2-2 bands. Panel B: In the rare Hp 2-1 M phenotype, a modification of the Hp α2 allele leads to overproduction of the fast migrating Hp 1-1 band, relative to the slower migrating Hp 2-1 bands. This results in a stronger Hb/Hp 1-1 band, and weaker Hb/Hp 2-1 bands, than in Hp 2-1. Panel C: Hp phenotyping of serum by SDS PAGE on a 12% gel. The Hp β band is common to all phenotyes. The presence of the Hp 1 and 2 alleles is indicated by the α1 and α2 bands, respectively. We chose to examine Hp phenotype and preeclampsia for three reasons. First, Hp's antioxidant and pro-angiogenic properties depend on phenotype [17], [22], [23]. Oxidative stress and reduced angiogenesis contribute to preeclampsia [5], [6]. In our recent case-control study, Hp 2-1 was associated with increased preeclampsia risk in non-hispanic white women, but not in black women [24]. We sought to confirm these findings in a larger cohort while extending the existing literature to include Hispanics. Second, any potential effect of Hp phenotype on preeclampsia risk or treatment response would affect a significant number of women. Hp 1-1, 2-1 and 2-2 are all common in whites, blacks and Hispanics (prevalences of 16%-50%) [25]. Third, Hp phenotype predicts cardiovascular disease risk, and responsiveness to Vitamin E [18], [19], [20] or C and E [21] in other non-pregnant populations. Hp 2-2 diabetics are twice as likely to have a cardiovascular event as Hp 1-1 and 2-1 diabetics [18], [21], [26]. Vitamin E supplementation eliminates this increased risk in Hp 2-2 diabetics, but has no effect in Hp 1-1 or 2-1 diabetics [18], [19], [27]. In contrast, Vitamin C may be harmful in Hp 2-2 diabetics as oxidation of Vitamin C may exacerbate pre-existing oxidative stress [28]. 
In post-menopausal women with coronary artery disease, Vitamin C and E decreased coronary artery diameter in Hp 2-2 diabetic women [21]. This harmful effect was not universal, as Vitamin C and E benefited Hp 1-1 women by significantly increasing coronary artery diameter [21]. We sought to determine whether Hp phenotype was associated with serious complications of pregnancy-associated hypertension (PAH) or preeclampsia in white, black or Hispanic women. We also examined whether Hp phenotype might influence responsiveness to antioxidant vitamins in preventing preeclampsia. Based on our previous findings, we hypothesized that Hp 2-1 would be associated with increased preeclampsia risk in white women, but not in black women. We also posited that phenotype would affect treatment response although directionality was difficult to predict based upon data from non-pregnant populations.\n\nBODY.MATERIALS AND METHODS.POPULATION:\nThis was a secondary analysis of a Eunice Kennedy Shriver National Institute of Child Health and Human Development Maternal-Fetal Medicine Units Network double-blind, multicenter RCT (NCT 00135707) in which 10,154 nulliparous, low risk women received daily doses of 400 IU of vitamin E and 1000 mg of vitamin C, or placebo, from 9-16 weeks gestation until delivery. The trial was conducted at 16 centers in the US between 2003 and 2008. Full details of the study were reported previously [14]. The trial was conducted in accordance with the Declaration of Helsinki of the World Medical Association. All subjects provided written, informed consent before participating. Hp phenotyping was performed at the University of Pittsburgh (Institutional Review Board approval PRO10010368).\n\nBODY.MATERIALS AND METHODS.OUTCOMES:\nThe primary outcome was severe PAH, or mild or severe PAH with elevated liver enzymes (aspartate aminotransferase ≥100 U per liter), elevated serum creatinine (≥1.5 mg/dl), thrombocytopenia (<100,000 platelets/mm3), eclampsia, fetal growth restriction (<3rd centile adjusted for sex and race/ethnicity), medically indicated preterm birth before 32 weeks due to PAH, or fetal death after 20 weeks gestation, or neonatal death. Hypertension was diagnosed using blood pressure measurements obtained during or after 20 weeks gestation, excluding intraoperative and intrapartum systolic blood pressures. Severe PAH was defined as a systolic blood pressure of 160 mm Hg or more, or a diastolic blood pressure of 110 mm Hg or more on two occasions 2 to 240 hours apart, or a single severely elevated blood pressure measurement that led to treatment with antihypertensive medication. Mild PAH was defined as systolic pressure between 140 and 159 mm Hg or a diastolic pressure between 90 mm Hg and 109 mm Hg on two occasions 2 to 240 hours apart. Mild preeclampsia was defined as mild PAH with proteinuria documented within the 3 days preceding or following the elevated blood pressure measurement. Proteinuria was defined as protein excretion ≥300 mg/24 hours, a dipstick urinary protein ≥2+, or protein:creatinine ≥0.35 in the absence of a 24-hour urine sample. Early onset preeclampsia was defined as preeclampsia requiring delivery before 37 weeks gestation. 
Severe preeclampsia was defined as preeclampsia with either severe PAH or protein excretion 35 g/24 hours, or as mild PAH with oliguria (<500 ml urine/24 hours), pulmonary edema (confirmed by radiography), or thrombocytopenia (<100,000 platelets/mm3).\n\nBODY.MATERIALS AND METHODS.DESIGN:\nThe RCT included 10,154 women, of whom 2,434 were enrolled in a nested prediction cohort. The RCT design specified that secondary analyses requiring blood be conducted in the prediction cohort, as these women consented to extra blood collection for prediction and pathophysiology studies. If additional subjects were required, investigators could obtain samples from the 7,720 women not in the prediction cohort, in whom very little blood was collected. We needed samples from all white, black, and Hispanic women with preeclampsia in the RCT to detect a 2-fold increase in preeclampsia risk in Hp 2-1, compared to Hp 1-1, in each racial group (α = 0.05; Power: 83% white, 96% black, 98% Hispanic). We determined Hp phenotype in the prediction cohort; then added a case-control cohort composed of women who were not in the prediction cohort. The case-control cohort allowed us to capture all cases without phenotyping 5,500 additional controls. Cases in the case-control cohort included all women with preeclampsia, the primary outcome, or both (n = 703 cases) who were not in the prediction cohort. We selected 2 controls per case (n = 1,406) [29], matched for center and race/ethnicity (Figure 2). 10.1371/journal.pone.0060479.g002Figure 2Study Flow Chart.Subjects lost to follow-up include 183 women who miscarried, one subject whose data were removed at the patient's request, and one subject whose data were removed at the institutional review board's request.\n\nBODY.MATERIALS AND METHODS.HP PHENOTYPING:\nHp phenotyping was performed as described previously [24], [30]. Serum samples (5 μl) supplemented with human hemoglobin (Sigma-Aldrich, St. Louis, MO) were phenotyped by hemoglobin-Hp complex peroxidase activity on native polyacrylamide gel electrophoresis (PAGE). Hemolyzed samples and samples with low Hp concentrations were phenotyped by Western blot following sodium dodecyl sulfate (SDS) PAGE of 1 to 6 μl of serum. Proteins from native and SDS gels were transferred to a polyvinylidene fluoride (PVDF) membrane (Millipore, Billerica, MA). SDS PAGE was followed by Western blotting with primary (1∶5,000, Polyclonal Rabbit Anti-Human Haptoglobin, DakoCytomation, Carpinteria, CA) and secondary (1∶25,000, Goat anti-Rabbit IgG horseradish peroxidase, Millipore) antibodies. Membranes were stained for peroxidase activity (SuperSignal West Pico Chemiluminescent Substrate, Fisher Scientific, Pittsburgh, PA) and imaged (FlouroChem Q System, Cell Biosciences, Santa Clara, CA). Hp phenotypes were identified by banding patterns of the Hp-hemoglobin complexes (native PAGE, Figure 1) or the Hp α1 and α2 alleles (SDS PAGE).\n\nBODY.MATERIALS AND METHODS.PLASMA ASCORBATE:\nBlood was collected in vials containing metaphosphoric acid, and frozen at −80 C [31]. Plasma ascorbic acid was measured by high performance liquid chromatography in the Molecular Epidemiology and Biomarker Laboratory (University of Minnesota) as described previously [31].\n\nBODY.MATERIALS AND METHODS.STATISTICAL ANALYSIS:\nThe prediction and case-control cohorts were analyzed separately (statistical methods in Data S1); then pooled to maximize power. 
In the pooled analysis, sampling weights were applied to reduce potential bias arising from an overrepresentation of women with the primary outcome (but not preeclampsia) in the control group for preeclampsia analyses, and an oversampling of women with preeclampsia (but not the primary outcome) in the control group for primary outcome analyses. Patients from the prediction cohort were assigned a weight of one. Cases and controls were assigned a sample weight defined as the inverse of their probability of selection [32]. Sample weights included a post-stratification adjustment for race/ethnicity. Baseline characteristics were compared using the t-test for continuous variables and the Rao-Scott chi-square test [33] for categorical variables. Logistic regression models were constructed to assess the multivariable association between Hp phenotype and the incidence of each outcome, adjusting for variables that were associated with Hp phenotype in univariate analyses. Odds ratios and 95% confidence intervals were estimated from unconditional survey logistic regression models. Taylor series replicate weights were created to estimate standard errors and p values in the weighted analyses. Model covariates included randomization group, and demographic variables that differed between women with the Hp 1-1, 2-1 and 2-2 phenotypes (age, self-reported vitamin use, education level, diastolic blood pressure at screening). The weighted analysis gave similar results to an unweighted analysis (i.e. samples were not weighted by the post-stratification sample weights; data not shown). Results were considered to be statistically significant at α = 0.05 with no adjustment for multiple comparisons [34]. All p values are 2-sided. SAS Version 9.2 (SAS Institute Inc.) and STATA (StataCorp, 2011 Stata Statistical Software: Release 12, College Station, TX, StataCorp LP), using procedures appropriate for sample-weighted data, were used for all analyses.\n\nBODY.RESULTS.COHORTS:\nWe phenotyped 2,393 of the 2,434 women in the prediction cohort (Figure 2), excluding 40 women who miscarried, and one who had no samples collected. The case-control cohort included all remaining subjects who developed the primary outcome only (severe PAH, or mild or severe PAH with elevated liver enzymes, elevated serum creatinine, thrombocytopenia, eclampsia, fetal growth restriction, medically indicated preterm birth or perinatal death) (n = 190), preeclampsia only (n = 280), or the primary outcome and preeclampsia (n = 233), and 1,406 controls matched for center and race/ethnicity. Women in the prediction cohort entered the study at an earlier gestational age (11.4±1.1) than women in the case-control cohort (14.0±2.0). Results of separate analyses for the prediction (Data S1; Tables S1, S2) and case-control cohorts (Data S1; Tables S3, S4, S5, S6, S7) are presented in Data S1. Results for the pooled cohort are described below.\n\nBODY.RESULTS.SUBJECTS:\nThe pooled dataset included 1,008 Hp 1-1, 2,129 Hp 2-1 and 1,261 Hp 2-2 women. Ninety-six women had the rare 2-1 M phenotype, which results from a modification of the α2 allele [16]. These women were excluded from subsequent analyses. There were 1,840 white women, 1,392 black women, 1,188 Hispanic women, and 74 women of \"Other\" race.\n\nBODY.RESULTS.WEIGHTING:\nThe weighted sample included 2,201 Hp 1-1 women, 4,840 Hp 2-1 women, 2,760 Hp 2-2 women, and 146 Hp 2-1 M women. The weighted cohort accurately reproduced the demographic characteristics of the original sample (Table S8). 
Demographic characteristics in the weighted cohort were not significantly different between the placebo and vitamin groups.\n\nBODY.RESULTS.SUBJECT CHARACTERISTICS:\nAs expected, race/ethnicity differed between women with the Hp 1-1, 2-1, 2-2 and 2-1 M phenotypes (Table 1). The phenotype distribution for each race/ethnicity was similar to previous reports (Table 1, S9) [25]. Hp phenotype groups did not differ with respect to pre-pregnancy BMI, blood pressure at randomization, smoking, history of a previous pregnancy, or family history of preeclampsia (Table 1). Subsequent analyses were adjusted for characteristics that differed between the Hp 1-1, 2-1 and 2-2 groups (race/ethnicity, age, education, vitamin use). Analyses were also adjusted for diastolic blood pressure at randomization, which differed between phenotype groups in the prediction cohort (Table S10) but not in the pooled cohort. The white and other groups were pooled for race/ethnicity adjustments, as phenotype prevalence in the other group was most similar to whites (Table S9). 10.1371/journal.pone.0060479.t001 Table 1 Subject Characteristics for the Weighted Pooled Cohort by Hp phenotype. Subject Characteristics Hp 1-1 (n = 2,201) Hp 2-1 (n = 4,840) Hp 2-2 (n = 2,760) Hp 2-1 M (n = 161) p Age – years 22.4±4.9 23.6±5.1* 24.0±5.1* 20.2±3.7* ,†,‡ \n <0.01 Gestational age at randomization – week 13.4±2.1 13.3±2.0 13.4±2.1 13.0±2.5 0.48 Race or ethnicity - % within phenotype NA White 618 (28%) 2,142 (44%) 1,408 (51%) 3 (2%) Black 779 (35%) 1,059 (22%) 542 (20%) 132 (82%) Hispanic 779 (35%) 1,555 (32%) 724 (26%) 25 (6%) Other 25 (1%) 84 (2%) 86 (3%) 1 (<1%) Pre-pregnancy body mass index - kg/m 2 \n 25.3±6.1 24.9±5.4 25.0±5.6 26.6±8.7 0.20 Smoked during pregnancy – n (%) 352 (16%) 721 (15%) 515 (19%) 23 (15%) 0.12 Education - years 12.2±2.9 12.7±2.7* 13.1±2.6* ,† \n 12.0±2.0 †,‡ \n <0.01 Prenatal/multivitamin use prior to randomization – n (%) 1,517 (69%) 3,759 (78%)* 2,194 (79%)* 132 (82%)* <0.01 Previous pregnancy – n (%) 520 (24%) 1,084 (22%) 731 (26%) 33 (21%) 0.16 Family history of preeclampsia – n (%) 319 (14%) 634 (13%) 341 (12%) 18 (11%) 0.64 Blood pressure at entry (9-12 weeks) Systolic – mm Hg 108±10 108±10 108±10 110±9 0.23 Diastolic – mm Hg 65±8 65±8 65±8 64±9 0.25 Values are mean ± SD or n (%). Abbreviations: NA; not applicable. Statistical comparisons for race/ethnicity were not performed, as cases and controls were matched for race/ethnicity. Significant difference (p<0.05) from: *Hp 1-1, † Hp 2-1, ‡ Hp 2-2. \n\nBODY.RESULTS.PRIMARY OUTCOME AND PREECLAMPSIA RISK:\nHp phenotype was not significantly associated with the risk of the primary outcome, preeclampsia, severe preeclampsia, or early onset preeclampsia in white/other, Hispanic or black women (Table 2). Women of \"other\" race comprised 4% of the white/other group, and results were not different when these women were excluded (data not shown). The relationship between Hp phenotype on early onset preeclampsia approached statistical significance (p = 0.06) in Hispanics. 10.1371/journal.pone.0060479.t002 Table 2 Outcomes according to race/ethnicity and Hp phenotype in the weighted pooled cohort. 
| Race/ethnicity | Outcome | Weighted cases (n) | Hp 2-1† OR (95% CI) | Hp 2-2† OR (95% CI) |
| --- | --- | --- | --- | --- |
| White/Other | Primary Outcome* | 216 | 0.87 (0.56, 1.35) | 0.92 (0.58, 1.45) |
| White/Other | Preeclampsia | 222 | 0.98 (0.62, 1.54) | 1.18 (0.74, 1.88) |
| White/Other | Severe Preeclampsia | 90 | 1.30 (0.65, 2.62) | 1.31 (0.64, 2.69) |
| White/Other | Early Onset Preeclampsia | 58 | 0.96 (0.41, 2.22) | 1.40 (0.61, 3.22) |
| White/Other | Late Onset Preeclampsia | 163 | 0.98 (0.59, 1.64) | 1.10 (0.65, 1.87) |
| Black | Primary Outcome | 243 | 1.26 (0.90, 1.76) | 1.00 (0.66, 1.51) |
| Black | Preeclampsia | 237 | 1.03 (0.74, 1.45) | 0.92 (0.61, 1.38) |
| Black | Severe Preeclampsia | 115 | 1.14 (0.73, 1.80) | 0.88 (0.49, 1.55) |
| Black | Early Onset Preeclampsia | 74 | 1.18 (0.67, 2.07) | 0.95 (0.48, 1.88) |
| Black | Late Onset Preeclampsia | 163 | 0.97 (0.66, 1.44) | 0.90 (0.56, 1.46) |
| Hispanic | Primary Outcome | 116 | 0.79 (0.49, 1.29) | 0.85 (0.47, 1.54) |
| Hispanic | Preeclampsia | 217 | 0.77 (0.53, 1.12) | 0.74 (0.46, 1.19) |
| Hispanic | Severe Preeclampsia | 76 | 0.71 (0.40, 1.24) | 0.67 (0.32, 1.40) |
| Hispanic | Early Onset Preeclampsia | 42 | 0.41 (0.20, 0.86) | 0.53 (0.21, 1.37) |
| Hispanic | Late Onset Preeclampsia | 175 | 0.90 (0.60, 1.37) | 0.82 (0.49, 1.39) |

Hp 1-1 was the reference group for odds ratios. Hp phenotype was determined in 1,573 black women, 1,164 Hispanic women, and 1,432 women of white or other race. Each observation was weighted to reflect the pregnancy outcomes and racial distribution of the full cohort. *The primary outcome was severe PAH, or mild or severe PAH with elevated liver enzymes, elevated serum creatinine, thrombocytopenia, eclampsia, fetal growth restriction, medically indicated preterm birth or perinatal death. Abbreviations: OR, odds ratio; CI, confidence interval. †Adjusted for group (treatment vs. placebo), race/ethnicity, age, education, vitamin use and diastolic blood pressure at randomization.\n\nBODY.RESULTS.TREATMENT RESPONSE:\nVitamin C and E supplementation did not prevent the primary outcome or any form of preeclampsia among white/other, black or Hispanic women of any Hp phenotype after adjusting for vitamin use, age, education, and diastolic blood pressure at randomization (data not shown). In Hispanics, the effect of vitamin supplementation differed between Hp phenotype groups for preeclampsia (Hp phenotype*treatment group: p<0.01) and late onset preeclampsia (p = 0.02), but not for severe (p = 0.06) or early onset preeclampsia (p = 0.10). Vitamin supplementation significantly increased preeclampsia (odds ratio (OR) 3.30; 95% confidence interval (CI) 1.61-6.82, p<0.01, Table 3) and late onset preeclampsia risk (3.01; 1.38-6.57, p<0.01) in Hispanic Hp 2-2 women. This adverse effect was not seen in Hp 1-1 or 2-1 Hispanics (p>0.05), or in white/other or black women of any phenotype.

Table 3. Odds ratios for effect of treatment in Hispanic women in the weighted pooled cohort.

| Outcome | Phenotype | OR (95% CI)* | p* |
| --- | --- | --- | --- |
| Preeclampsia | Hp 1-1 | 0.71 (0.39, 1.29) | 0.27 |
| Preeclampsia | Hp 2-1 | 0.95 (0.61, 1.50) | 0.83 |
| Preeclampsia | Hp 2-2 | 3.30 (1.61, 6.82) | <0.01 |
| Late Onset Preeclampsia | Hp 1-1 | 0.70 (0.36, 1.35) | 0.29 |
| Late Onset Preeclampsia | Hp 2-1 | 1.02 (0.63, 1.66) | 0.92 |
| Late Onset Preeclampsia | Hp 2-2 | 3.01 (1.38, 6.57) | <0.01 |

*Adjusted for vitamin use, age, education and diastolic blood pressure at randomization. Increased preeclampsia risk in Hispanic Hp 2-2 women was specific to the large quantities of vitamins C and E in study supplements. Prenatal/multivitamin use at randomization was not associated with an increased risk of preeclampsia (2.17; 0.53-8.71, p = 0.27) or late onset preeclampsia (2.85; 0.57–14.36, p = 0.20) in Hispanic Hp 2-2 women receiving placebo. Low rates of prenatal/multivitamin use at randomization (43% in Hispanics vs.
91% in non-Hispanics, Table S10) are unlikely to explain why the adverse effect of supplementation in Hp 2-2 women was only seen in Hispanics. Hispanic 2-2 women who were not taking prenatal vitamins at randomization had a smaller increase in preeclampsia risk with supplementation than Hispanic 2-2 women who were taking prenatal vitamins at randomization (2.01; 0.80–5.01, p = 0.14 vs. 7.83; 2.01–30.53, p<0.01, Table 4).

Table 4. Odds ratios for treatment effect in Hispanic Hp 2-2 women according to prenatal/multivitamin use at randomization.

| Outcome | Vitamin use at randomization | OR (95% CI)* | p* |
| --- | --- | --- | --- |
| Preeclampsia | Yes | 7.83 (2.01, 30.53) | <0.01 |
| Preeclampsia | No | 2.01 (0.80, 5.01) | 0.14 |
| Late Onset Preeclampsia | Yes | 8.23 (1.64, 41.28) | 0.01 |
| Late Onset Preeclampsia | No | 1.91 (0.73, 5.00) | 0.18 |

*Adjusted for age, education and diastolic blood pressure at randomization.\n\nBODY.RESULTS.HP PHENOTYPE, PRENATAL/MULTIVITAMIN USE AT RANDOMIZATION, AND PLASMA ASCORBATE:\nHp phenotype influences plasma ascorbate concentration [35], providing a potential mechanism by which Hp phenotype might influence treatment response to vitamin C and E. We examined the relationship between Hp phenotype and plasma ascorbate in a pre-planned analysis. Plasma ascorbate was measured at the pre-randomization screening visit in 264 women with known Hp phenotypes. Plasma ascorbate was lower in women who were not taking a prenatal/multivitamin at randomization (n = 34) compared to those who were (n = 230) (p = 0.02 for main effect, Table 5). Statistical comparisons within phenotype groups were not conducted, as the number of women who were not taking prenatal/multivitamins at randomization within each phenotype group was very small. Hp phenotype was not significantly associated with plasma ascorbate (p = 0.73). The interaction between Hp phenotype and vitamin use at randomization was not significant (p = 0.32). Plasma ascorbate was only measured in six Hispanic Hp 2-2 women; therefore we could not determine whether supplementation had a different effect on plasma ascorbate in this group.

Table 5. Plasma ascorbate concentrations according to Hp phenotype and prenatal/multivitamin use at randomization.

| Hp phenotype | Not taking prenatal/multivitamins at randomization | Taking prenatal/multivitamins at randomization |
| --- | --- | --- |
| All | 8.4 (7.2, 9.5) (n = 34) | 9.9 (9.4, 10.3) (n = 230)* |
| 1-1 | 9.3 (7.4, 11.3) (n = 12) | 9.6 (8.6, 10.5) (n = 48) |
| 2-1 | 8.0 (6.3, 9.6) (n = 16) | 9.8 (9.2, 10.4) (n = 126) |
| 2-2 | 7.4 (4.7, 10.1) (n = 6) | 10.2 (9.3, 11.1) (n = 56) |

Plasma ascorbate at the pre-randomization study visit was measured in a subset of 264 patients with known Hp phenotype. The vitamin and placebo groups were pooled, as study treatment had not yet started. Women not taking a prenatal vitamin at randomization had lower plasma ascorbate than women taking prenatal/multivitamins at randomization (*p = 0.02). Statistical comparisons within phenotype group were not conducted due to the small number of women not taking prenatal/multivitamins at randomization. Values are mean (95% confidence interval) in mg/L.\n\nBODY.DISCUSSION:\nHp phenotype was not associated with the risk of the primary outcome (a composite of PAH and serious adverse outcomes in the mother, her fetus or neonate) or any type of preeclampsia in white, black or Hispanic women. Moreover, daily vitamin C and E supplementation did not prevent the primary outcome or preeclampsia in women of any Hp phenotype.
Supplementation increased the risk of preeclampsia and late onset preeclampsia by 3-fold in Hispanic Hp 2-2 women.\n\nBODY.DISCUSSION.HP PHENOTYPE AND THE RISK OF PREECLAMPSIA AND SERIOUS COMPLICATIONS OF PAH:\nWe found no relationship between Hp phenotype and the primary outcome or preeclampsia risk in white, black or Hispanic women. This study has two key strengths. First, it extends the existing literature by including large numbers of black and Hispanic women. Previous studies focused on predominantly white, non-Hispanic women [24],[36],[37],[38]. Studies examining the relationship between Hp phenotype and preeclampsia risk in different racial groups were recently recommended [39]. Second, this is the largest study of non-Hispanic white women completed to date, and was adequately powered to detect the difference that we observed in our previous case-control study [24]. This is particularly important, as the heterogeneous results and small sample sizes of previous European studies suggests that spurious findings may be common. At the time we initiated this investigation, two small, underpowered European studies had reported that preeclampsia risk was not different [38] or increased [36] in Hp 1-1 white women. Adequately powered studies [39], and studies in North American women, were needed. An Israeli study subsequently reported a 50% reduction in preeclampsia risk among Hp 1-1 women, compared to Hp 2-1 and 2-2 [24], [30], [37]. Our larger case-control study suggested that Hp 1-1 was only protective when compared to Hp 2-1, and not Hp 2-2 [24]. Unfortunately, we were unable to confirm this finding in the present cohort. However, we did confirm our previous preliminary data [24] suggesting that there was no relationship between Hp phenotype and preeclampsia risk in black women in a large adequately powered cohort. The trend (p = 0.06) towards an increased risk of early onset preeclampsia in Hp 2-1 and 2-2 Hispanics was likely a spurious result due to the small number of cases (n = 42).\n\nBODY.DISCUSSION.HP PHENOTYPE AND VITAMIN C AND E:\nAccumulating evidence suggests that preeclampsia and later life cardiovascular disease are intimately linked. These conditions share several risk factors (i.e. black race, obesity, diabetes) and pathophysiological processes (inflammation, oxidative stress, endothelial dysfunction, and impaired angiogenesis) [40], [41]. The American Heart Association recently recognized preeclampsia as a cardiovascular disease risk factor [41]. Hp phenotype affects cardiovascular disease risk and treatment response to antioxidant vitamins in some non-pregnant populations, particularly in diabetics [18], [19], [21], [26]. Uncomplicated pregnancy is an insulin resistant state [42], and gestational diabetes [43] and Type 1 diabetes [44] increase preeclampsia risk. Therefore, we examined the potential for Hp phenotype to influence the effectiveness of vitamin C and E in preventing preeclampsia and serious complications of PAH. Daily vitamin C and E did not prevent preeclampsia or serious complications of PAH in low risk primiparous women of any Hp phenotype. Supplementation actually increased preeclampsia risk by 3-fold in Hispanic Hp 2-2 women, however the mechanism requires further investigation. This effect was not explained by the strikingly lower prevalence of vitamin supplementation at randomization in Hispanics. Hispanic women have an increased risk of the metabolic syndrome and diabetes, conditions associated with oxidative stress [45]. 
Platelet volume, a marker of accelerated platelet turn over, was increased in Hispanic women compared to white or black women in the prediction cohort [46]. Accelerated platelet turnover could indicate low-grade intravascular coagulation and subsequent mild chronic hemolysis. The resulting oxidative stress would be worse in Hp 2-2 women due to the low antioxidant capacity of this phenotype. We speculate that supplementation may have exacerbated pre-existing oxidative stress, and therefore preeclampsia risk, in Hispanic Hp 2-2 women by increasing ascorbate radical formation. A definitive explanation will be sought in future studies. Supplementation increased the risk of late onset preeclampsia, but not early onset preeclampsia, in Hp 2-2 Hispanics. This could reflect pathophysiological differences between these conditions or lower power due to fewer cases of early onset preeclampsia. Several factors could explain the difference between our results and previous research in other populations [18], [19], [21], [26]. First, cardiovascular disease studies suggest that vitamin E alone has different effects from vitamin C and E combined [18], [19], [27]. Vitamin C and E can be beneficial or harmful, depending on Hp phenotype [27]. Coronary artery diameter increased following vitamin C and E supplementation in Hp 1-1 post-menopausal women with coronary artery disease. However, the pro-oxidant effects of vitamin C [28] combined with the reduced antioxidant capacity of Hp 2-2 may have contributed to accelerated coronary artery narrowing in diabetic Hp 2-2 post-menopausal women taking vitamins C and E [21]. Oxidative stress increases in pregnancy [40], suggesting that a similar mechanism could be active in Hp 2-2 pregnant women. In partial support of this hypothesis, we observed a 3-fold increase in preeclampsia risk in Hp 2-2 Hispanic women receiving vitamin C and E. The original RCT combined vitamins C and E, an approach used in cardiovascular disease prevention trials [47], [48], [49] that was successful in small clinical trials. Therefore, we cannot determine whether vitamin E or C alone would have been efficacious. However, the absence of a beneficial treatment effect in women of any Hp phenotype makes this possibility unlikely. Second, previous studies focused primarily on Type 2 diabetics [18], [19], [21], [26], however women with pre-existing diabetes were excluded from the current study. Insulin resistance increases during pregnancy, and 3% of women in the trial developed gestational diabetes [42]. The most extensively studied mechanism by which vitamin E reduces cardiovascular disease risk in Type 2 diabetes is by preventing the HDL dysfunction caused by cross-linking with glycosylated hemoglobin [28]. Hypothesizing that a similar mechanism might be active in Type 1 diabetes, members of our study team assessed the potential for Hp phenotype to modify the effectiveness of vitamin C and E in preventing preeclampsia in women with Type 1 diabetes [30]. Unfortunately, supplementation was not effective in preventing preeclampsia in women with Type 1 diabetes of any Hp phenotype [30]. Trials in Type 2 diabetics have not been conducted. Third, despite the similarities between preeclampsia and cardiovascular disease, there are also critical differences. Preeclampsia only occurs in pregnancy, and is proposed to result from an interaction between materials released by the placenta, and maternal constitutional characteristics that are largely cardiovascular risk factors [40]. 
It may be that transient exposure to insults in young women cannot be equated to years of chronic exposure to cardiovascular risk factors in older women.\n\nBODY.DISCUSSION.CONCLUSIONS:\nIn contrast to our hypothesis, our secondary analysis of a large RCT indicates that Hp phenotype does not influence preeclampsia risk, or identify a subset of nulligravidas for whom vitamins C and E would reduce preeclampsia risk in white, black or Hispanic women. Vitamins C and E supplementation may increase preeclampsia risk in Hispanic Hp 2-2 women, but due to the limitations of secondary analysis, this finding requires verification.\n\nBODY.SUPPORTING INFORMATION:\nData S1\nSupplemental Data. Supplemental data file containing methods and results for separate analyses of prediction cohort and case-control cohort, and an appendix listing members of the Eunice Kennedy Shriver National Institute of Child Health and Human Development Maternal-Fetal Medicine Units Network.(DOC)Click here for additional data file. Table S1\nPrediction cohort subject characteristics. Values are mean ± SD or n (% within phenotype). Significant difference (p<0.05) from: *Hp 1-1, †Hp 2-1, ‡Hp 2-2.(DOC)Click here for additional data file. Table S2\nOutcomes according to Hp phenotype in the prediction cohort. *Adjusted for treatment group (vitamins vs. placebo), age, race/ethnicity, education, vitamin use prior to randomization, and diastolic blood pressure at randomization. †The primary outcome was severe PAH, or mild or severe PAH with elevated liver enzymes, elevated serum creatinine, thrombocytopenia, eclamptic seizure, fetal growth restriction, medically indicated preterm birth or perinatal death.(DOC)Click here for additional data file. Table S3\nSubject Characteristics for Subjects not in the Prediction Cohort, and in the Weighted Case-Control Cohort. Values are mean ± SD or n (%).(DOC)Click here for additional data file. Table S4\nCase-control cohort weighted subject characteristics. Values are mean ± SD or n (%). Abbreviations: NA; not applicable. Statistical comparisons for race/ethnicity were not performed, as cases and controls were matched for race/ethnicity. Significant difference (p<0.05) from: *Hp 1-1, †Hp 2-1, ‡Hp 2-2.(DOC)Click here for additional data file. Table S5\nOutcomes according to Hp phenotype and race/ethnicity in the case-control cohort. Hp 1-1 was the reference group for odds ratios. Hp phenotype was determined in 769 black women, 579 Hispanic women, and 716 women of white or other race. Each observation was weighted to reflect the pregnancy outcomes and racial distribution of the full cohort, excluding women in the prediction study (3,170 white/other women, 1,839 black women, 2,459 Hispanic women). Abbreviations: OR, odds ratio; CI, confidence interval. *Adjusted for group (treatment vs. placebo), race/ethnicity, age, education, vitamin use and diastolic blood pressure at randomization †Significantly different from 1, p<0.05.(DOC)Click here for additional data file. Table S6\nInteraction between Hp phenotype and treatment in case-control cohort. *Adjusted for vitamin use, age, education and diastolic blood pressure at randomization(DOC)Click here for additional data file. Table S7\nOdds ratios for the effect of treatment in Hispanic women in the case-control cohort. *Adjusted for vitamin use, age, education and diastolic blood pressure at randomization(DOC)Click here for additional data file. 
Table S8\nSubject Characteristics for the Original Study and the Weighted Pooled Cohort Values are mean ± SD or n (%). *Values for the 9,969 women with known pregnancy outcomes in the original study. Women who were lost to follow-up (n = 183) were excluded. Weights for this analysis were based on race/ethnicity and pregnancy outcome, therefore these women were not eligible to be selected for the pooled cohort analysis.(DOC)Click here for additional data file. Table S9\nHp phenotype prevalence by race in the weighted pooled cohort Values are n (% within race).\n(DOC)Click here for additional data file. Table S10\nSubject Characteristics for Hispanic vs. non-Hispanic Women in the Weighted Pooled Cohort. Values are mean ± SD unless otherwise indicated.(DOC)Click here for additional data file.\n\n**Question:** Compared to placebo from 9-16 weeks gestation until delivery. what was the result of daily doses of 400 IU of vitamin E and 1000 mg of vitamin C from 9-16 weeks gestation until delivery. on risk of the primary outcome or preeclampsia?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
929
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Cluster Randomized Trial of Text Message Reminders to Retail Staff in Tanzanian Drug Shops Dispensing Artemether-Lumefantrine: Effect on Dispenser Knowledge and Patient Adherence\n\n ABSTRACT:\nArtemisinin combination therapies are available in private outlets, but patient adherence might be compromised by poor advice from dispensers. In this cluster randomized trial in drug shops in Tanzania, 42 of 82 selected shops were randomized to receive text message reminders about what advice to provide when dispensing artemether-lumefantrine (AL). Eligible patients purchasing AL at shops in both arms were followed up at home and questioned about each dose taken. Dispensers were interviewed regarding knowledge of AL dispensing practices and receipt of the malaria-related text messages. We interviewed 904 patients and 110 dispensers from 77 shops. Although there was some improvement in dispenser knowledge, there was no difference between arms in adherence measured as completion of all doses (intervention 68.3%, control 69.8%, p [adjusted] = 0.6), or as completion of each dose at the correct time (intervention 33.1%, control 32.6%, p [adjusted] = 0.9). Further studies on the potential of text messages to improve adherence are needed.\n\nBODY.INTRODUCTION:\nPatient adherence to treatment is an important step in ensuring the effectiveness of artemisinin-based combination therapies (ACTs) for malaria.1 Incomplete adherence to recommended treatment can result in poor clinical outcomes, undermine the effectiveness of case management as a tool for malaria control, and may contribute to the selection of drug-resistant malaria parasites.2,3 ACTs are first-line treatment of Plasmodium falciparum malaria in the public sector of most malaria-endemic countries, with patient adherence reported to range widely from 39% to 100%.4,5 Many patients seek care for malaria in the private retail sector.6–9 The proportion of private sector clients obtaining ACTs has increased over time as ACTs have become more widely known, and their price has fallen, particularly in settings where they have been subsidized by programs such as the Affordable Medicines Facility-malaria (AMFm).10 Although access to effective antimalarials in the private sector may have increased, drug sellers may not always provide patients with appropriate doses or advice, raising concerns about patient adherence, though evidence is very limited. Only five studies have specifically assessed patient adherence to antimalarials obtained in the private retail sector.11–15 Of these, ACTs were used only in one study by Cohen and others (2012)11 in Uganda, which reported 66% of patients seeking care from drug shops were adherent. As ACTs become more available in the private sector, it becomes increasingly important to understand patient adherence and the effects of interventions intended to improve adherence. 
Supporting interventions, such as shopkeeper training, have previously succeeded in increasing the proportion of patients who receive and complete the recommended dose of non-ACT antimalarials,13,16 but such interventions have yet to be tested on a national scale or applied to ACTs. Mobile phones are a promising tool for the delivery of healthcare interventions as coverage of mobile networks and handset ownership increases.17–19 Text messaging, the least expensive mobile phone function, has been used in malaria control settings for commodity monitoring, disease surveillance, and pharmacovigilance.20 In addition, a trial in public health facilities in Kenya21 showed that 6 months of text message reminders improved public health workers' management of pediatric malaria by 24% points immediately after the intervention, and the improvements were sustained for at least 6 months after the intervention was withdrawn. The text message reminders were well accepted by health workers,22 inexpensive, and cost-effective.23 Given the concerns over inadequate patient adherence to ACTs delivered through the private retail sector, and the potential benefit of text-message interventions to enhance adherence, we designed and completed a cluster randomized trial in southern Tanzania to assess the effect of text message reminders to drug shop workers on patient adherence to artemether-lumefantrine (AL). We also evaluated the effect of text messages on dispenser knowledge and advice. The private retail sector in Tanzania is an important source of treatment of malaria,24,25 and ACT availability in such outlets increased after the implementation of AMFm in 2010. Another key intervention in Tanzania's private sector has been the creation of accredited drug dispensing outlets (ADDOs) by the Tanzania Food and Drug Administration (TFDA) to improve regulation of drug shops and quality of medicines. ADDOs are drug shops that have been upgraded through a process of training and certification and are allowed to sell a limited number of prescription-only drugs, including some antibiotics and ACTs.26,27\n\nBODY.METHODS.STUDY SETTING.:\nThe study was conducted in Mtwara, a rural region in southeastern Tanzania with 35.5% of the population in the lowest national wealth quintile.28 Prevalence of malaria among children 6–59 months of age in Mtwara was 17.4% in the 2011–2012 HIV/AIDS and Malaria Indicator Survey29 and 23% in a survey of patients seeking treatment at private drug shops.30 AL has been recommended as the first-line treatment of malaria since 2004, although it was not available in public health facilities until 2006. The recommended treatment regimen is six doses of AL over 3 days, with 1–4 tablets (20 mg artemether/120 mg lumefantrine) per dose depending on the weight/age band. National guidelines state that the second dose should be taken 8 hours after the first dose, followed by the remaining doses morning and evening of the second and third days.31 In Mtwara, ADDO accreditation commenced in 2006, and officially only accredited drug stores are allowed to function. In reality, a large number of non-accredited shops exist because of a lack of training, unpaid fees, or administrative delays. 
These shops are tolerated by regulating authorities and considered \"prospective ADDOs.\" Before AMFm, ACTs were not commonly available in ADDOs in Mtwara, and dispenser training on ACTs was limited, but ACT availability significantly increased after AMFm implementation, with 88% of ADDOs stocking ACTs in Mtwara in August 2011.32 To support AMFm roll out, TFDA offered a 1 day refresher training that included treatment of malaria with ACTs to dispensers with previous nursing training in Mtwara in August 2011.\n\nBODY.METHODS.SAMPLE SIZE CALCULATIONS.:\nWe based the sample size for this two arm cluster randomized trial on data from the public sector in Tanzania, where patient adherence to AL was 65–98%(Khatib R, unpublished data).33–35 We assumed lower adherence to AL obtained at ADDOs in Mtwara (60%) and powered the study to detect a 15% point increase in the intervention arm. We wanted to recruit a small number of patients per cluster to reduce the potential bias caused by increasing community awareness of the study's objectives. Assuming a coefficient of variation of 0.25, 80% power, 5% significance, and 20% loss to follow-up, 13 patients from 36 outlets in each arm were required, a total of 468 patients per arm.\n\nBODY.METHODS.SELECTION OF STUDY ADDOS.:\nIn May 2012, we conducted a census of all drug shops in Mtwara (ADDOs or prospective ADDOs). Data were collected on the characteristics of owners and dispensers, ACT stocks and sales, and global positioning system (GPS) coordinates. ADDOs were excluded from the sampling frame if they had sold fewer than five antimalarial treatments in the previous week, no dispensers used a mobile phone, the shop was located on the border with Mozambique or was not accessible, or the owner refused to participate (Figure 1). The final sampling frame consisted of 131 ADDOs. The 82 ADDOs were selected sequentially at random, with any ADDOs within 400 m of a selected ADDO, or any ADDOs where staff from a selected ADDO also worked, removed from the sampling frame. The selected ADDOs were then stratified by location in urban or rural wards, and the intervention was randomly allocated to 29 of 57 urban ADDOs and 13 of 25 rural ADDOs. Figure 1.Consort-like style flow diagram of trial.\n\nBODY.METHODS.INTERVENTION DESIGN.:\nWe designed seven content messages about advice that dispensers should provide when dispensing AL (Figure 2). The messages were derived from the government refresher training booklet and reflected the recommended practices for dispensing AL. Messages were pilot tested in ∼20 ADDOs in a semi-urban district outside of Dar es Salaam. Dispensers at these ADDOs were sent each potential message in turn and asked to explain their understanding of the meaning and relevance of each message. Dispensers were also asked if they would find receipt of the messages helpful, how often they would like to read the messages, and whether complementary components such as quotes or questions would encourage reading. Phrasing and frequency of the messages were adjusted based on feedback received during the pilot. Figure 2.Content of text messages sent to dispensers in the intervention arm. Before sending the first message, the 42 ADDOs in Mtwara randomized to the intervention arm were visited to invite participation and collect an updated list of mobile numbers for all dispensers. Messages began in July 2012 and were sent in random order once per day Monday–Friday for the first 4 weeks, followed by once per day Monday, Wednesday, and Friday for the next 10 weeks. 
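The sample-size assumptions quoted in the earlier Methods subsection (60% adherence in the control arm, a 15-percentage-point increase, coefficient of variation 0.25, 80% power, 5% significance, 20% loss to follow-up) can be checked approximately with the standard formula for unmatched cluster randomized trials comparing two proportions. The sketch below is illustrative only; in particular, handling attrition by shrinking the effective cluster size is an assumption, not necessarily the authors' exact procedure.

```python
# Approximate reproduction of the cluster-randomised sample-size calculation.
from math import ceil
from scipy.stats import norm

def clusters_per_arm(p0, p1, m, k, alpha=0.05, power=0.80, loss=0.0):
    """Clusters per arm (Hayes & Bennett-style formula); m = patients recruited per cluster."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    m_eff = m * (1 - loss)                          # patients per cluster after attrition
    within = (p0 * (1 - p0) + p1 * (1 - p1)) / m_eff
    between = k ** 2 * (p0 ** 2 + p1 ** 2)          # between-cluster variation term
    return ceil(1 + z ** 2 * (within + between) / (p0 - p1) ** 2)

c = clusters_per_arm(p0=0.60, p1=0.75, m=13, k=0.25, loss=0.20)
print(c, c * 13)  # roughly 36 clusters and 468 registered patients per arm
```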
Messages were written in Swahili and scheduled in advance on an automated platform, with each message paired with a different complementary component each time to promote interest. Complementary components consisted of inspirational quotes or proverbs or, once per week, a quiz question on message content that earned correct respondents free air time (500 TSH or $.30). Over the 14-week period, 49 messages were sent to each of 60 dispensers, and detailed delivery reports were kept for each message.\n\nBODY.METHODS.DATA COLLECTION.:\nFrom September through November 2012, dispensers at ADDOs in the intervention and control arms were visited by study supervisors and given a standard introduction about study objectives. To limit patients' awareness of our primary interest in adherence, which could have led to a biased assessment, dispensers were told we were studying how patients chose to treat fever and would visit some, but not all, patients at their homes. They were asked to fill out a registration form for all patients purchasing any treatment of fever, including the day and time of their ADDO visits, the patients' names, the drugs purchased, and a description of where the patients lived. Dispensers were provided with blister packs of AL that they could then sell to patients needing treatment of malaria. Study staff visited ADDOs every day to check and collect registration forms for 1–3 weeks, or until 12–15 patients purchasing AL were registered if quicker. Eligible patients who obtained AL were identified from the registration forms and assigned patient identification numbers (recorded on follow-up forms). Patients were followed up ∼68–72 hours after their ADDO visit, according to a predefined schedule, and all attempts to locate and interview patients were recorded. Where written informed consent was given, patients or their caregivers were asked about demographic and socioeconomic characteristics, treatment-seeking history, illness symptoms, detailed information about each dose of AL taken and the advice provided by the ADDO dispenser. Blister packs were requested for a pill count, and blood samples were collected for a blood smear and a malaria rapid diagnostic test (mRDT) (Pf-specific from ICT Diagnostics, Cape Town, South Africa). Blood smears were stained in the field and transported to the Ifakara Health Institute, where they were double-read by two microscopists blinded to results from each other and the mRDT, with discrepant results read by a third microscopist. Adherence was defined in two ways.5 Patients were considered to have \"verified completed treatment\" if they reported taking all doses by the time of the follow-up visit and a pill count verified that no pills remained in the blister pack, if available. The second, more stringent definition included a time component, based on patient reports of the time each dose was taken using the Swahili times of day: \"alfajiri\" (early morning), \"asubuhi\" (morning), \"mchana\" (afternoon), \"jioni\" (evening), \"usiku\" (night), and \"usiku sana\" (late night). Patients were considered to have \"verified timely completion\" if they took the second dose at the Swahili time of day corresponding with 8 hours after the first dose, and then took each of the remaining doses at the Swahili time of day corresponding with 12 hours after the previous dose, verified by the absence of pills in the blister pack. 
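A minimal sketch of how these two adherence definitions might be operationalised is given below. The mapping of Swahili day periods to representative clock hours, the tolerance rule, and the function names are assumptions made for illustration; the study matched reported periods against expected dose times rather than exact clock times, and its actual coding rules may differ.

```python
# Illustrative classification of the two adherence definitions (assumed hour mapping).
SWAHILI_PERIOD_HOURS = {
    "alfajiri": 5,     # early morning
    "asubuhi": 9,      # morning
    "mchana": 14,      # afternoon
    "jioni": 19,       # evening
    "usiku": 22,       # night
    "usiku sana": 1,   # late night
}

def period_for_hour(hour):
    """Swahili period whose representative hour is closest on the 24-hour clock."""
    def wrap(a, b):
        d = abs(a - b) % 24
        return min(d, 24 - d)
    return min(SWAHILI_PERIOD_HOURS, key=lambda p: wrap(SWAHILI_PERIOD_HOURS[p], hour))

def verified_completed(reported_periods, pills_left):
    """All six doses reported taken, verified by an empty blister pack when available."""
    return len(reported_periods) == 6 and pills_left in (None, 0)

def verified_timely(first_dose_hour, reported_periods, pills_left):
    """Dose 2 about 8 h after dose 1, then every ~12 h, matched at the period level."""
    if not verified_completed(reported_periods, pills_left):
        return False
    expected = [period_for_hour((first_dose_hour + off) % 24)
                for off in (0, 8, 20, 32, 44, 56)]
    return list(reported_periods) == expected
```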
For ease of reading, we hereafter refer to these definitions as \"completed treatment\" and \"timely completion.\" After completion of patient interviews, dispensers working at the study ADDOs were interviewed on ADDO characteristics, demographics, receipt of study text messages, and advice they would give to patients when dispensing AL. To assess knowledge corresponding with message content, we used an open-ended question, e.g., \"I would like to ask about which advice you think you should provide to a person of any age taking treatment for malaria. For the following topics (e.g., \"when to take the second dose\" or \"what to do with the pills if the patient feels better,\" etc.) tell me if advice on this topic is important or not, and if so, what advice you would provide.\" Responses were recorded verbatim and evaluated by the study leader using predetermined criteria based on message content.\n\nBODY.METHODS.DATA ENTRY AND ANALYSIS.:\nAll patient and dispenser interview data were collected using personal digital assistants, and data extracted from study forms (census, registration, and follow-up forms) were double entered into Microsoft Access databases (Microsoft Corp., Redmond, WA). Data were analyzed in Stata 11.0 (Stata Corporation, College Station, TX). Primary outcomes were analyzed by intention-to-treat. Comparison of adherence was based on a t test of the proportion adherent in each cluster. A list of potentially important confounders was identified a priori consisting of ADDO accreditation certificate, number of customers purchasing ACTs in the previous week, dispenser medical qualification, dispenser training on ACTs, patient age, and patient education. Adjustment for variables on this list found to be unbalanced between arms was performed by fitting a logistic regression model to the individual data and performing analysis on the aggregated residuals, as described by Bennett and colleagues.37\n\nBODY.METHODS.ETHICS.:\nAll questionnaires, consent forms, and other study documents were translated into Swahili and piloted before use. Written informed consent was collected from dispensers before census, patient registration and interview, and from patients or their caregivers prior to interview. The study protocol was approved by the ethical review boards of Ifakara Health Institute and London School of Hygiene and Tropical Medicine. The Centers for Disease Control and Prevention (CDC) investigators provided technical assistance in design and analysis but were not engaged in data collection. The trial is registered with Current Controlled Trials, ISRCTN83765567.\n\nBODY.RESULTS:\nFigure 1 shows the trial profile. Of the 82 randomized ADDOs, 37 from the control arm and 40 from the intervention arm participated in the study. The number of registered patients eligible for follow-up was 537 in the control arm and 518 in the intervention arm, with ∼15% of patients in each arm lost to follow-up. Most outlets were in urban wards (70% in both arms), had a single dispenser, and had at least some ACTs in stock on the day of interview (Table 1). Of 51 dispensers in the control arm and 59 in the intervention arm, ∼80% in both arms were female and had a low-level medical qualification, mostly nurse assistants (Table 2). Though low in both arms, more ADDOs in the control arm compared with the intervention arm were able to show an accreditation certificate (43% versus 20%, respectively). 
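The unadjusted comparison described in the analysis subsection above, a t test on the proportion adherent in each cluster, can be sketched as follows; the column names are hypothetical, and the adjusted analysis on aggregated residuals from an individual-level logistic model (per Bennett and colleagues) is not reproduced here.

```python
# Sketch of the unadjusted cluster-level t test (hypothetical column names).
import pandas as pd
from scipy.stats import ttest_ind

def cluster_level_ttest(df):
    """df columns: 'addo_id', 'arm' ('intervention' or 'control'), 'adherent' (0/1)."""
    props = (df.groupby(["arm", "addo_id"])["adherent"]
               .mean()
               .reset_index(name="prop_adherent"))
    a = props.loc[props["arm"] == "intervention", "prop_adherent"]
    b = props.loc[props["arm"] == "control", "prop_adherent"]
    return ttest_ind(a, b, equal_var=True)
```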
However, the difference in the percentages of dispensers that had received training on ACTs was not as pronounced (69% in the control arm versus 60% in the intervention arm), though the median year of training was more recent in the control arm (2011 versus 2009). Characteristics of patients were well balanced between arms (Table 3). A high percentage of patients (36% in the control arm and 38% in the intervention arm) had sought care before attending the study ADDO, with most patients going to a kiosk or general shop and only 7% of patients in the control arm and 4% in the intervention arm going to a public health facility. Approximately 90% reported symptoms of fever or headache, and approximately half had upset stomachs or nausea. Based on an mRDT taken at the time of follow-up, 28% in the control arm and 25% in the intervention arm tested positive, with only 1.4% and 1.6%, respectively, testing positive by study blood smear. (Some degree of discrepancy is expected because of the persistence of HRP2 detected by the mRDT.) Seventy percent of dispensers received at least 75% of the text messages. The median percentage of messages received was 86%, with 20% of dispensers receiving no messages (Figure 3). Table 4 presents results of the dispenser interviews on knowledge of advice to provide patients when dispensing AL. Dispensers in the intervention arm reported slightly better knowledge of the correct AL regimen for adults in the intervention arm compared with the control arm (90% versus 78%; adjusted prevalence ratio [aPR] = 1.2 (95% confidence interval [CI]: 0.95, 1.5); p [adjusted] = 0.0748), though knowledge of the correct regimen for a child aged four weighing 20 kg was lower than for adults in both arms (75% versus 64%; aPR = 1.2 [0.85, 1.7]; p [adjusted] = 0.2). Dispenser knowledge was considerably higher in the intervention arm than the control arm on advice to take AL with fatty food (60% versus 20%; aPR = 3.4 [95% CI: 1.6, 7.1]; p [adjusted] < 0.0001) and to continue to take AL if minor side effects occur (68% versus 43%; aPR = 1.6 [95% CI: 1.0, 2.4]; p [adjusted] = 0.0188). However, surprisingly high dispenser knowledge (87–99%) was recorded in both arms on advice to complete treatment even if feeling better, advice to return to the ADDO or go to a health facility if the condition worsens, and advice to take the second dose after 8 hours (Table 4). Knowledge on advising patients to take a replacement dose in case of vomiting within half an hour of taking a dose was lower (55% versus 50%), with no difference observed between arms. Figure 3.Percentage of text messages received by dispensers in the intervention arm. Table 5 shows that ∼60% of patients in both arms reported being told how to take AL correctly, with similar percentages reporting being told to take the second dose after 8 hours and to complete treatment even if feeling better, indicating that dispensers were providing some advice even in the absence of the intervention. However, no differences were found between control and intervention arms for any piece of advice. Less than 5% of patients in both arms reported being told about vomiting, side effects, or taking AL with fatty food, and < 10% in both arms took the first dose of AL at the ADDO. There was no difference in patient adherence between arms (Table 6). 
Completed treatment was 70% in the control arm and 68% in the intervention arm (adjusted risk ratio [aRR] = 0.96 [95% CI: 0.82, 1.1]; p [adjusted] = 0.6), with a similar percentage of patients adherent to each of the four age-appropriate blister packs and no important differences between arms. The mean number of doses taken by non-adherent patients was four in both arms, and the most common reported reasons for non-adherence included planning to take the medication later, forgetting to take the tablets, feeling better, and other reasons (Figure 4). Timely completion was much lower, with 33% of patients in both arms taking all doses at appropriate times. A per protocol analysis, excluding patients attending ADDOs where at least one dispenser did not receive any messages, had no impact on results. Figure 4.Reasons given by patients/caretakers for not completing treatment.\n\nBODY.DISCUSSION:\nWe have reported results from a cluster randomized trial of a text message intervention directed at drug shop dispensers to improve patient adherence to ACTs in Tanzania. The intervention increased dispenser knowledge of some components of advice to provide patients when dispensing AL, but knowledge of other components was already very high in the absence of the intervention. The improvements in knowledge did not translate into an increase in information patients reported receiving, even though patients commonly reported receiving some advice. There was no difference in adherence of patients to the ACT regimen between the arms. Adherence by completed treatment was < 70% to ACTs obtained from ADDOs, comparable to the 66% adherent by the same definition in the study by Cohen and others (2012)11 in the private retail sector in Uganda. However, timely completion was only 33% in our study, indicating that even patients who complete treatment may do so with poor adherence to the recommended schedule. Both of these results are comparable to reported adherence to ACTs obtained from public health facilities, where studies under real life conditions have found adherence of 64–77% for completed treatment verified by pill count and 39–75% for timely completion verified by pill count,5 including one study from Tanzania.35 Two other studies from public health facilities in mainland Tanzania using different definitions and methods have reported higher adherence (88.3% and 90%).33,34 Although the text message intervention targeting dispensers was not effective at improving patient adherence, there was a marked increase in dispenser knowledge of advising patients to take AL with fatty foods or milk and to continue AL even if minor side effects occurred. However, knowledge in both arms was surprisingly high, particularly on advising patients to take the second dose after 8 hours, to complete treatment even if feeling better, and to seek further care if the condition worsens. This could reflect the recent ADDO trainings in Mtwara, raising the possibility that the intervention's impact could have been different in the absence of recent training. Knowledge did not necessarily result in the provision of advice, even though some advice was provided. For example, 98% of dispensers in both arms knew it was important to advise patients to complete treatment even if feeling better, but only 60% of patients reported receiving this advice. 
Other advice was much less commonly provided, with < 5% of patients in both arms reporting being advised on what to do in case of minor side-effects or vomiting, even though dispenser knowledge of this advice was much higher. This may be because the dispensers did not deem the advice helpful to their business or to the patients, as it could heighten a negative perception about the effects of their products. Dispensers may have also perceived that clients were in a hurry or not receptive to advice. Alternatively, patients or caretakers may not have recalled the advice given to them several days before. Exit interviews or mystery shopper surveys may have been useful in assessing whether advice was communicated, but these methods also have limitations, such as greater potential for a Hawthorne effect and ethical challenges. We intentionally avoided telling dispensers that the purpose of our study was to improve patients' adherence. Although dispensers receiving text messages were aware that the content focused on advising patients about the correct use of AL, we did not mention the objective to dispensers in either arm to avoid this information being relayed to patients, who might then change their behavior because they expected their adherence to be monitored.5 However, if this intervention were to be scaled up outside the study context, one might include greater emphasis on adherence and its value in communications with dispensers, which might in turn increase the likelihood that they would provide appropriate advice. The evaluation of patient adherence relied on self-reported data from patients or their caregivers, which may be subject to recall and social desirability bias. We inspected blister packs for the 80% of patients who could provide them and identified only 10 patients (1%) that had reported completing all doses but had pills remaining. On the other hand, patients may have removed pills from the packaging to consume at a later time. Even though Swahili times of day, based on sunrise and sunset, were used to assess timely completion, patients or their caregivers may not have remembered when each dose was taken or may have provided the expected responses to avoid being seen as negligent. A similar text message intervention targeting health workers in public health facilities in Kenya found significant improvements in health worker case management of pediatric malaria.21 The primary outcome measure by Zurovac and colleagues included completion of four treatment tasks (e.g., prescribing AL) and at least four of six dispensing and counseling tasks, of which the biggest improvements were seen in giving the first dose at the health facility and advising patients to take the second dose after 8 hours, take each dose after a meal, and what to do in case of vomiting. Although we found strong evidence in the intervention arm of improved dispenser knowledge of advice to take each dose with a fatty meal, we recorded high dispenser knowledge in both arms of advice to take the second dose after 8 hours and no difference between arms in knowledge of advice on what to do in case of vomiting. We also recorded < 10% of patients in either arm taking the first dose of AL at the ADDO, even though drinking water was available at many ADDOs. 
The contrasts between our findings and those of Zurovac and colleagues could reflect the private drug shop setting, as we found patients' relatives often seek care at ADDOs on behalf of patients, in contrast to public health facilities where patients themselves must be present for a clinical exam. Health workers in public health facilities may also be more accustomed to taking on advisory roles than dispensers in private drug shops and less concerned with making a profit.37 Interventions involving training of dispensers in the private retail sector, although limited in number, have improved dispenser knowledge across a range of diseases and settings, but the impact of improved knowledge on dispenser and patient behavior has been mixed.37,38 Even fewer studies have reported effects of an intervention targeted at retail dispensers on patient adherence to antimalarial drugs. One study from 1998–2001 in Kilifi, Kenya found that trained shopkeepers were willing to take on an advisory role, resulting in both increases in advice and the proportion of patients taking adequate doses of chloroquine and sulfadoxine pyrimethamine or sulfadoxine-pyrimethamine.14 Although receiving instructions has been associated with patient adherence to antimalarials in several studies in the public and private sectors,11,39,40 other factors might also influence patient adherence, including patient education, higher socioeconomic status, treatment-seeking behavior, understanding the instructions, knowledge and perceptions of the illness or of the drug, and satisfaction with information received or with the drug.5 The private retail sector is likely to continue to be an important source of treatment of malaria and there is a need to maximize patient adherence to ACTs. Given the effectiveness of text message reminders on health worker case management in Kenya and the low cost of this intervention, there is potential for further evaluations of text message interventions targeted at dispensers in the private retail sector to improve dispenser knowledge, advice provided, and patient adherence, particularly in settings where dispensers have not received recent training on malaria. Such interventions should ensure message content addresses gaps in dispenser knowledge and would benefit from additional research on dispenser readiness to provide advice, and client receptivity to their advice. There should also be further consideration of how best to design the evaluation so that dispensers are motivated to communicate the importance of adherence without biasing study results. The double gap between dispenser knowledge and providing advice and then patients receiving advice and being adherent may also call for other interventions to enhance adherence. Text message reminders to patients have been shown to be a low-cost approach to improve patient adherence to antiretroviral therapy for HIV41–43 and have been used in two recent studies to increase patient adherence to malaria test results and treatment (Goldberg J, unpublished data).44 However, scaling up a text message intervention targeted at malaria patients in Tanzania would require an increase in personal mobile phone use among patients most at risk of malaria,45 as only about half of the households in rural areas own a mobile phone,29 and the phone may be shared among household members. In contrast, nearly all dispensers censused in Mtwara regularly used at least one mobile phone. 
Other interventions that have been shown to improve patient adherence to antimalarial drugs include packaging and community education.46 ACTs are now mostly available in factory packaged unit dose packs blister packs with illustrated instructions, therefore additional room for improvement may be limited. One possible modification could be improved instructions in local languages. Community education through communication campaigns could be helpful in emphasizing the importance of taking all doses, but it may be challenging to communicate the details of when and how to take each dose to the general population. Finally, the introduction of mRDTs in the private sector might have positive implications for patient adherence, especially if also combined with dispenser advice.5\n\nBODY.CONCLUSION:\nText message reminders improved some aspects of dispenser knowledge of advice to provide to patients when dispensing AL in the private sector. However, patients in the intervention arm were not more likely to report receiving improved advice and did not have higher adherence than patients in the control arm. Adherence to AL among patients in both arms was suboptimal, highlighting the need for studies evaluating other interventions to improve adherence to ACTs obtained in the private retail sector.\n\n**Question:** Compared to no text message what was the result of receive text message reminders about what advice to provide when dispensing artemether-lumefantrine (AL). on patient adherence?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
930
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Cluster Randomized Trial of Text Message Reminders to Retail Staff in Tanzanian Drug Shops Dispensing Artemether-Lumefantrine: Effect on Dispenser Knowledge and Patient Adherence\n\n ABSTRACT:\nArtemisinin combination therapies are available in private outlets, but patient adherence might be compromised by poor advice from dispensers. In this cluster randomized trial in drug shops in Tanzania, 42 of 82 selected shops were randomized to receive text message reminders about what advice to provide when dispensing artemether-lumefantrine (AL). Eligible patients purchasing AL at shops in both arms were followed up at home and questioned about each dose taken. Dispensers were interviewed regarding knowledge of AL dispensing practices and receipt of the malaria-related text messages. We interviewed 904 patients and 110 dispensers from 77 shops. Although there was some improvement in dispenser knowledge, there was no difference between arms in adherence measured as completion of all doses (intervention 68.3%, control 69.8%, p [adjusted] = 0.6), or as completion of each dose at the correct time (intervention 33.1%, control 32.6%, p [adjusted] = 0.9). Further studies on the potential of text messages to improve adherence are needed.\n\nBODY.INTRODUCTION:\nPatient adherence to treatment is an important step in ensuring the effectiveness of artemisinin-based combination therapies (ACTs) for malaria.1 Incomplete adherence to recommended treatment can result in poor clinical outcomes, undermine the effectiveness of case management as a tool for malaria control, and may contribute to the selection of drug-resistant malaria parasites.2,3 ACTs are first-line treatment of Plasmodium falciparum malaria in the public sector of most malaria-endemic countries, with patient adherence reported to range widely from 39% to 100%.4,5 Many patients seek care for malaria in the private retail sector.6–9 The proportion of private sector clients obtaining ACTs has increased over time as ACTs have become more widely known, and their price has fallen, particularly in settings where they have been subsidized by programs such as the Affordable Medicines Facility-malaria (AMFm).10 Although access to effective antimalarials in the private sector may have increased, drug sellers may not always provide patients with appropriate doses or advice, raising concerns about patient adherence, though evidence is very limited. Only five studies have specifically assessed patient adherence to antimalarials obtained in the private retail sector.11–15 Of these, ACTs were used only in one study by Cohen and others (2012)11 in Uganda, which reported 66% of patients seeking care from drug shops were adherent. As ACTs become more available in the private sector, it becomes increasingly important to understand patient adherence and the effects of interventions intended to improve adherence. 
Supporting interventions, such as shopkeeper training, have previously succeeded in increasing the proportion of patients who receive and complete the recommended dose of non-ACT antimalarials,13,16 but such interventions have yet to be tested on a national scale or applied to ACTs. Mobile phones are a promising tool for the delivery of healthcare interventions as coverage of mobile networks and handset ownership increases.17–19 Text messaging, the least expensive mobile phone function, has been used in malaria control settings for commodity monitoring, disease surveillance, and pharmacovigilance.20 In addition, a trial in public health facilities in Kenya21 showed that 6 months of text message reminders improved public health workers' management of pediatric malaria by 24% points immediately after the intervention, and the improvements were sustained for at least 6 months after the intervention was withdrawn. The text message reminders were well accepted by health workers,22 inexpensive, and cost-effective.23 Given the concerns over inadequate patient adherence to ACTs delivered through the private retail sector, and the potential benefit of text-message interventions to enhance adherence, we designed and completed a cluster randomized trial in southern Tanzania to assess the effect of text message reminders to drug shop workers on patient adherence to artemether-lumefantrine (AL). We also evaluated the effect of text messages on dispenser knowledge and advice. The private retail sector in Tanzania is an important source of treatment of malaria,24,25 and ACT availability in such outlets increased after the implementation of AMFm in 2010. Another key intervention in Tanzania's private sector has been the creation of accredited drug dispensing outlets (ADDOs) by the Tanzania Food and Drug Administration (TFDA) to improve regulation of drug shops and quality of medicines. ADDOs are drug shops that have been upgraded through a process of training and certification and are allowed to sell a limited number of prescription-only drugs, including some antibiotics and ACTs.26,27\n\nBODY.METHODS.STUDY SETTING.:\nThe study was conducted in Mtwara, a rural region in southeastern Tanzania with 35.5% of the population in the lowest national wealth quintile.28 Prevalence of malaria among children 6–59 months of age in Mtwara was 17.4% in the 2011–2012 HIV/AIDS and Malaria Indicator Survey29 and 23% in a survey of patients seeking treatment at private drug shops.30 AL has been recommended as the first-line treatment of malaria since 2004, although it was not available in public health facilities until 2006. The recommended treatment regimen is six doses of AL over 3 days, with 1–4 tablets (20 mg artemether/120 mg lumefantrine) per dose depending on the weight/age band. National guidelines state that the second dose should be taken 8 hours after the first dose, followed by the remaining doses morning and evening of the second and third days.31 In Mtwara, ADDO accreditation commenced in 2006, and officially only accredited drug stores are allowed to function. In reality, a large number of non-accredited shops exist because of a lack of training, unpaid fees, or administrative delays. 
These shops are tolerated by regulating authorities and considered \"prospective ADDOs.\" Before AMFm, ACTs were not commonly available in ADDOs in Mtwara, and dispenser training on ACTs was limited, but ACT availability significantly increased after AMFm implementation, with 88% of ADDOs stocking ACTs in Mtwara in August 2011.32 To support AMFm roll out, TFDA offered a 1 day refresher training that included treatment of malaria with ACTs to dispensers with previous nursing training in Mtwara in August 2011.\n\nBODY.METHODS.SAMPLE SIZE CALCULATIONS.:\nWe based the sample size for this two arm cluster randomized trial on data from the public sector in Tanzania, where patient adherence to AL was 65–98%(Khatib R, unpublished data).33–35 We assumed lower adherence to AL obtained at ADDOs in Mtwara (60%) and powered the study to detect a 15% point increase in the intervention arm. We wanted to recruit a small number of patients per cluster to reduce the potential bias caused by increasing community awareness of the study's objectives. Assuming a coefficient of variation of 0.25, 80% power, 5% significance, and 20% loss to follow-up, 13 patients from 36 outlets in each arm were required, a total of 468 patients per arm.\n\nBODY.METHODS.SELECTION OF STUDY ADDOS.:\nIn May 2012, we conducted a census of all drug shops in Mtwara (ADDOs or prospective ADDOs). Data were collected on the characteristics of owners and dispensers, ACT stocks and sales, and global positioning system (GPS) coordinates. ADDOs were excluded from the sampling frame if they had sold fewer than five antimalarial treatments in the previous week, no dispensers used a mobile phone, the shop was located on the border with Mozambique or was not accessible, or the owner refused to participate (Figure 1). The final sampling frame consisted of 131 ADDOs. The 82 ADDOs were selected sequentially at random, with any ADDOs within 400 m of a selected ADDO, or any ADDOs where staff from a selected ADDO also worked, removed from the sampling frame. The selected ADDOs were then stratified by location in urban or rural wards, and the intervention was randomly allocated to 29 of 57 urban ADDOs and 13 of 25 rural ADDOs. Figure 1.Consort-like style flow diagram of trial.\n\nBODY.METHODS.INTERVENTION DESIGN.:\nWe designed seven content messages about advice that dispensers should provide when dispensing AL (Figure 2). The messages were derived from the government refresher training booklet and reflected the recommended practices for dispensing AL. Messages were pilot tested in ∼20 ADDOs in a semi-urban district outside of Dar es Salaam. Dispensers at these ADDOs were sent each potential message in turn and asked to explain their understanding of the meaning and relevance of each message. Dispensers were also asked if they would find receipt of the messages helpful, how often they would like to read the messages, and whether complementary components such as quotes or questions would encourage reading. Phrasing and frequency of the messages were adjusted based on feedback received during the pilot. Figure 2.Content of text messages sent to dispensers in the intervention arm. Before sending the first message, the 42 ADDOs in Mtwara randomized to the intervention arm were visited to invite participation and collect an updated list of mobile numbers for all dispensers. Messages began in July 2012 and were sent in random order once per day Monday–Friday for the first 4 weeks, followed by once per day Monday, Wednesday, and Friday for the next 10 weeks. 
Messages were written in Swahili and scheduled in advance on an automated platform, with each message paired with a different complementary component each time to promote interest. Complementary components consisted of inspirational quotes or proverbs or, once per week, a quiz question on message content that earned correct respondents free air time (500 TSH or $.30). Over the 14-week period, 49 messages were sent to each of 60 dispensers, and detailed delivery reports were kept for each message.\n\nBODY.METHODS.DATA COLLECTION.:\nFrom September through November 2012, dispensers at ADDOs in the intervention and control arms were visited by study supervisors and given a standard introduction about study objectives. To limit patients' awareness of our primary interest in adherence, which could have led to a biased assessment, dispensers were told we were studying how patients chose to treat fever and would visit some, but not all, patients at their homes. They were asked to fill out a registration form for all patients purchasing any treatment of fever, including the day and time of their ADDO visits, the patients' names, the drugs purchased, and a description of where the patients lived. Dispensers were provided with blister packs of AL that they could then sell to patients needing treatment of malaria. Study staff visited ADDOs every day to check and collect registration forms for 1–3 weeks, or until 12–15 patients purchasing AL were registered if quicker. Eligible patients who obtained AL were identified from the registration forms and assigned patient identification numbers (recorded on follow-up forms). Patients were followed up ∼68–72 hours after their ADDO visit, according to a predefined schedule, and all attempts to locate and interview patients were recorded. Where written informed consent was given, patients or their caregivers were asked about demographic and socioeconomic characteristics, treatment-seeking history, illness symptoms, detailed information about each dose of AL taken and the advice provided by the ADDO dispenser. Blister packs were requested for a pill count, and blood samples were collected for a blood smear and a malaria rapid diagnostic test (mRDT) (Pf-specific from ICT Diagnostics, Cape Town, South Africa). Blood smears were stained in the field and transported to the Ifakara Health Institute, where they were double-read by two microscopists blinded to results from each other and the mRDT, with discrepant results read by a third microscopist. Adherence was defined in two ways.5 Patients were considered to have \"verified completed treatment\" if they reported taking all doses by the time of the follow-up visit and a pill count verified that no pills remained in the blister pack, if available. The second, more stringent definition included a time component, based on patient reports of the time each dose was taken using the Swahili times of day: \"alfajiri\" (early morning), \"asubuhi\" (morning), \"mchana\" (afternoon), \"jioni\" (evening), \"usiku\" (night), and \"usiku sana\" (late night). Patients were considered to have \"verified timely completion\" if they took the second dose at the Swahili time of day corresponding with 8 hours after the first dose, and then took each of the remaining doses at the Swahili time of day corresponding with 12 hours after the previous dose, verified by the absence of pills in the blister pack. 
For ease of reading, we hereafter refer to these definitions as \"completed treatment\" and \"timely completion.\" After completion of patient interviews, dispensers working at the study ADDOs were interviewed on ADDO characteristics, demographics, receipt of study text messages, and advice they would give to patients when dispensing AL. To assess knowledge corresponding with message content, we used an open-ended question, e.g., \"I would like to ask about which advice you think you should provide to a person of any age taking treatment for malaria. For the following topics (e.g., \"when to take the second dose\" or \"what to do with the pills if the patient feels better,\" etc.) tell me if advice on this topic is important or not, and if so, what advice you would provide.\" Responses were recorded verbatim and evaluated by the study leader using predetermined criteria based on message content.\n\nBODY.METHODS.DATA ENTRY AND ANALYSIS.:\nAll patient and dispenser interview data were collected using personal digital assistants, and data extracted from study forms (census, registration, and follow-up forms) were double entered into Microsoft Access databases (Microsoft Corp., Redmond, WA). Data were analyzed in Stata 11.0 (Stata Corporation, College Station, TX). Primary outcomes were analyzed by intention-to-treat. Comparison of adherence was based on a t test of the proportion adherent in each cluster. A list of potentially important confounders was identified a priori consisting of ADDO accreditation certificate, number of customers purchasing ACTs in the previous week, dispenser medical qualification, dispenser training on ACTs, patient age, and patient education. Adjustment for variables on this list found to be unbalanced between arms was performed by fitting a logistic regression model to the individual data and performing analysis on the aggregated residuals, as described by Bennett and colleagues.37\n\nBODY.METHODS.ETHICS.:\nAll questionnaires, consent forms, and other study documents were translated into Swahili and piloted before use. Written informed consent was collected from dispensers before census, patient registration and interview, and from patients or their caregivers prior to interview. The study protocol was approved by the ethical review boards of Ifakara Health Institute and London School of Hygiene and Tropical Medicine. The Centers for Disease Control and Prevention (CDC) investigators provided technical assistance in design and analysis but were not engaged in data collection. The trial is registered with Current Controlled Trials, ISRCTN83765567.\n\nBODY.RESULTS:\nFigure 1 shows the trial profile. Of the 82 randomized ADDOs, 37 from the control arm and 40 from the intervention arm participated in the study. The number of registered patients eligible for follow-up was 537 in the control arm and 518 in the intervention arm, with ∼15% of patients in each arm lost to follow-up. Most outlets were in urban wards (70% in both arms), had a single dispenser, and had at least some ACTs in stock on the day of interview (Table 1). Of 51 dispensers in the control arm and 59 in the intervention arm, ∼80% in both arms were female and had a low-level medical qualification, mostly nurse assistants (Table 2). Though low in both arms, more ADDOs in the control arm compared with the intervention arm were able to show an accreditation certificate (43% versus 20%, respectively). 
However, the difference in the percentages of dispensers that had received training on ACTs was not as pronounced (69% in the control arm versus 60% in the intervention arm), though the median year of training was more recent in the control arm (2011 versus 2009). Characteristics of patients were well balanced between arms (Table 3). A high percentage of patients (36% in the control arm and 38% in the intervention arm) had sought care before attending the study ADDO, with most patients going to a kiosk or general shop and only 7% of patients in the control arm and 4% in the intervention arm going to a public health facility. Approximately 90% reported symptoms of fever or headache, and approximately half had upset stomachs or nausea. Based on an mRDT taken at the time of follow-up, 28% in the control arm and 25% in the intervention arm tested positive, with only 1.4% and 1.6%, respectively, testing positive by study blood smear. (Some degree of discrepancy is expected because of the persistence of HRP2 detected by the mRDT.) Seventy percent of dispensers received at least 75% of the text messages. The median percentage of messages received was 86%, with 20% of dispensers receiving no messages (Figure 3). Table 4 presents results of the dispenser interviews on knowledge of advice to provide patients when dispensing AL. Dispensers in the intervention arm reported slightly better knowledge of the correct AL regimen for adults in the intervention arm compared with the control arm (90% versus 78%; adjusted prevalence ratio [aPR] = 1.2 (95% confidence interval [CI]: 0.95, 1.5); p [adjusted] = 0.0748), though knowledge of the correct regimen for a child aged four weighing 20 kg was lower than for adults in both arms (75% versus 64%; aPR = 1.2 [0.85, 1.7]; p [adjusted] = 0.2). Dispenser knowledge was considerably higher in the intervention arm than the control arm on advice to take AL with fatty food (60% versus 20%; aPR = 3.4 [95% CI: 1.6, 7.1]; p [adjusted] < 0.0001) and to continue to take AL if minor side effects occur (68% versus 43%; aPR = 1.6 [95% CI: 1.0, 2.4]; p [adjusted] = 0.0188). However, surprisingly high dispenser knowledge (87–99%) was recorded in both arms on advice to complete treatment even if feeling better, advice to return to the ADDO or go to a health facility if the condition worsens, and advice to take the second dose after 8 hours (Table 4). Knowledge on advising patients to take a replacement dose in case of vomiting within half an hour of taking a dose was lower (55% versus 50%), with no difference observed between arms. Figure 3.Percentage of text messages received by dispensers in the intervention arm. Table 5 shows that ∼60% of patients in both arms reported being told how to take AL correctly, with similar percentages reporting being told to take the second dose after 8 hours and to complete treatment even if feeling better, indicating that dispensers were providing some advice even in the absence of the intervention. However, no differences were found between control and intervention arms for any piece of advice. Less than 5% of patients in both arms reported being told about vomiting, side effects, or taking AL with fatty food, and < 10% in both arms took the first dose of AL at the ADDO. There was no difference in patient adherence between arms (Table 6). 
Completed treatment was 70% in the control arm and 68% in the intervention arm (adjusted risk ratio [aRR] = 0.96 [95% CI: 0.82, 1.1]; p [adjusted] = 0.6), with a similar percentage of patients adherent to each of the four age-appropriate blister packs and no important differences between arms. The mean number of doses taken by non-adherent patients was four in both arms, and the most common reported reasons for non-adherence included planning to take the medication later, forgetting to take the tablets, feeling better, and other reasons (Figure 4). Timely completion was much lower, with 33% of patients in both arms taking all doses at appropriate times. A per protocol analysis, excluding patients attending ADDOs where at least one dispenser did not receive any messages, had no impact on results. Figure 4.Reasons given by patients/caretakers for not completing treatment.\n\nBODY.DISCUSSION:\nWe have reported results from a cluster randomized trial of a text message intervention directed at drug shop dispensers to improve patient adherence to ACTs in Tanzania. The intervention increased dispenser knowledge of some components of advice to provide patients when dispensing AL, but knowledge of other components was already very high in the absence of the intervention. The improvements in knowledge did not translate into an increase in information patients reported receiving, even though patients commonly reported receiving some advice. There was no difference in adherence of patients to the ACT regimen between the arms. Adherence by completed treatment was < 70% to ACTs obtained from ADDOs, comparable to the 66% adherent by the same definition in the study by Cohen and others (2012)11 in the private retail sector in Uganda. However, timely completion was only 33% in our study, indicating that even patients who complete treatment may do so with poor adherence to the recommended schedule. Both of these results are comparable to reported adherence to ACTs obtained from public health facilities, where studies under real life conditions have found adherence of 64–77% for completed treatment verified by pill count and 39–75% for timely completion verified by pill count,5 including one study from Tanzania.35 Two other studies from public health facilities in mainland Tanzania using different definitions and methods have reported higher adherence (88.3% and 90%).33,34 Although the text message intervention targeting dispensers was not effective at improving patient adherence, there was a marked increase in dispenser knowledge of advising patients to take AL with fatty foods or milk and to continue AL even if minor side effects occurred. However, knowledge in both arms was surprisingly high, particularly on advising patients to take the second dose after 8 hours, to complete treatment even if feeling better, and to seek further care if the condition worsens. This could reflect the recent ADDO trainings in Mtwara, raising the possibility that the intervention's impact could have been different in the absence of recent training. Knowledge did not necessarily result in the provision of advice, even though some advice was provided. For example, 98% of dispensers in both arms knew it was important to advise patients to complete treatment even if feeling better, but only 60% of patients reported receiving this advice. 
Other advice was much less commonly provided, with < 5% of patients in both arms reporting being advised on what to do in case of minor side-effects or vomiting, even though dispenser knowledge of this advice was much higher. This may be because the dispensers did not deem the advice helpful to their business or to the patients, as it could heighten a negative perception about the effects of their products. Dispensers may have also perceived that clients were in a hurry or not receptive to advice. Alternatively, patients or caretakers may not have recalled the advice given to them several days before. Exit interviews or mystery shopper surveys may have been useful in assessing whether advice was communicated, but these methods also have limitations, such as greater potential for a Hawthorne effect and ethical challenges. We intentionally avoided telling dispensers that the purpose of our study was to improve patients' adherence. Although dispensers receiving text messages were aware that the content focused on advising patients about the correct use of AL, we did not mention the objective to dispensers in either arm to avoid this information being relayed to patients, who might then change their behavior because they expected their adherence to be monitored.5 However, if this intervention were to be scaled up outside the study context, one might include greater emphasis on adherence and its value in communications with dispensers, which might in turn increase the likelihood that they would provide appropriate advice. The evaluation of patient adherence relied on self-reported data from patients or their caregivers, which may be subject to recall and social desirability bias. We inspected blister packs for the 80% of patients who could provide them and identified only 10 patients (1%) that had reported completing all doses but had pills remaining. On the other hand, patients may have removed pills from the packaging to consume at a later time. Even though Swahili times of day, based on sunrise and sunset, were used to assess timely completion, patients or their caregivers may not have remembered when each dose was taken or may have provided the expected responses to avoid being seen as negligent. A similar text message intervention targeting health workers in public health facilities in Kenya found significant improvements in health worker case management of pediatric malaria.21 The primary outcome measure by Zurovac and colleagues included completion of four treatment tasks (e.g., prescribing AL) and at least four of six dispensing and counseling tasks, of which the biggest improvements were seen in giving the first dose at the health facility and advising patients to take the second dose after 8 hours, take each dose after a meal, and what to do in case of vomiting. Although we found strong evidence in the intervention arm of improved dispenser knowledge of advice to take each dose with a fatty meal, we recorded high dispenser knowledge in both arms of advice to take the second dose after 8 hours and no difference between arms in knowledge of advice on what to do in case of vomiting. We also recorded < 10% of patients in either arm taking the first dose of AL at the ADDO, even though drinking water was available at many ADDOs. 
The contrasts between our findings and those of Zurovac and colleagues could reflect the private drug shop setting, as we found patients' relatives often seek care at ADDOs on behalf of patients, in contrast to public health facilities where patients themselves must be present for a clinical exam. Health workers in public health facilities may also be more accustomed to taking on advisory roles than dispensers in private drug shops and less concerned with making a profit.37 Interventions involving training of dispensers in the private retail sector, although limited in number, have improved dispenser knowledge across a range of diseases and settings, but the impact of improved knowledge on dispenser and patient behavior has been mixed.37,38 Even fewer studies have reported effects of an intervention targeted at retail dispensers on patient adherence to antimalarial drugs. One study from 1998–2001 in Kilifi, Kenya found that trained shopkeepers were willing to take on an advisory role, resulting in both increases in advice and the proportion of patients taking adequate doses of chloroquine and sulfadoxine pyrimethamine or sulfadoxine-pyrimethamine.14 Although receiving instructions has been associated with patient adherence to antimalarials in several studies in the public and private sectors,11,39,40 other factors might also influence patient adherence, including patient education, higher socioeconomic status, treatment-seeking behavior, understanding the instructions, knowledge and perceptions of the illness or of the drug, and satisfaction with information received or with the drug.5 The private retail sector is likely to continue to be an important source of treatment of malaria and there is a need to maximize patient adherence to ACTs. Given the effectiveness of text message reminders on health worker case management in Kenya and the low cost of this intervention, there is potential for further evaluations of text message interventions targeted at dispensers in the private retail sector to improve dispenser knowledge, advice provided, and patient adherence, particularly in settings where dispensers have not received recent training on malaria. Such interventions should ensure message content addresses gaps in dispenser knowledge and would benefit from additional research on dispenser readiness to provide advice, and client receptivity to their advice. There should also be further consideration of how best to design the evaluation so that dispensers are motivated to communicate the importance of adherence without biasing study results. The double gap between dispenser knowledge and providing advice and then patients receiving advice and being adherent may also call for other interventions to enhance adherence. Text message reminders to patients have been shown to be a low-cost approach to improve patient adherence to antiretroviral therapy for HIV41–43 and have been used in two recent studies to increase patient adherence to malaria test results and treatment (Goldberg J, unpublished data).44 However, scaling up a text message intervention targeted at malaria patients in Tanzania would require an increase in personal mobile phone use among patients most at risk of malaria,45 as only about half of the households in rural areas own a mobile phone,29 and the phone may be shared among household members. In contrast, nearly all dispensers censused in Mtwara regularly used at least one mobile phone. 
Other interventions that have been shown to improve patient adherence to antimalarial drugs include packaging and community education.46 ACTs are now mostly available in factory packaged unit dose packs blister packs with illustrated instructions, therefore additional room for improvement may be limited. One possible modification could be improved instructions in local languages. Community education through communication campaigns could be helpful in emphasizing the importance of taking all doses, but it may be challenging to communicate the details of when and how to take each dose to the general population. Finally, the introduction of mRDTs in the private sector might have positive implications for patient adherence, especially if also combined with dispenser advice.5\n\nBODY.CONCLUSION:\nText message reminders improved some aspects of dispenser knowledge of advice to provide to patients when dispensing AL in the private sector. However, patients in the intervention arm were not more likely to report receiving improved advice and did not have higher adherence than patients in the control arm. Adherence to AL among patients in both arms was suboptimal, highlighting the need for studies evaluating other interventions to improve adherence to ACTs obtained in the private retail sector.\n\n**Question:** Compared to no text message what was the result of receive text message reminders about what advice to provide when dispensing artemether-lumefantrine (AL). on Completed treatment?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
933
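Each row in this dump pairs an `id`, a `prompt` (a single user turn containing the article and question), a `completion` (a single assistant turn with the answer), and a gold `Label` drawn from the three allowed options. As a minimal sketch of how such rows could be consumed, the snippet below parses records and checks whether the answer stated in each completion agrees with its Label. The JSON Lines storage format and the file name `records.jsonl` are assumptions made for illustration; only the field names are taken from the rows shown here.

```python
import json

# The three allowed options, exactly as the prompts request them.
LABELS = ("significantly increase", "no significant difference", "significantly decrease")

def extract_answer(completion_text: str):
    """Return the first allowed label mentioned in an assistant completion, or None."""
    lowered = completion_text.lower()
    for label in LABELS:
        if label in lowered:
            return label
    return None

def completion_matches_label(record: dict) -> bool:
    """True if the completion's stated answer agrees with the record's gold Label."""
    completion_text = record["completion"][0]["content"]
    return extract_answer(completion_text) == record["Label"].strip().lower()

if __name__ == "__main__":
    # Assumed layout: one JSON object per line with the fields id, prompt, completion, Label.
    with open("records.jsonl", encoding="utf-8") as fh:
        records = [json.loads(line) for line in fh if line.strip()]
    agree = sum(completion_matches_label(r) for r in records)
    print(f"{agree}/{len(records)} completions agree with their Label")
```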
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Effects of transcranial direct current stimulation on temperature and pain perception\n\n ABSTRACT:\nTranscranial direct current stimulation modifies cortical excitability and in consequence some cerebral functions. In the present study we aimed to elucidate whether tDCS could affect temperature and pain perceptions in healthy subjects testing different stimulation parameters. A total of 20 healthy subjects were studied by means of quantitative sensory testing. Two different experiments were performed. First, we studied the effects of 15 minutes 2 mA anodal transcranial direct current stimulation applied over left M1 and parietal cortex in two separated sessions. Then, we tested the effects of 5 minutes tDCS over M1 by means of a sham controlled design to optimize the possibility to study minimal effects of tDCS using different polarities (cathodal and anodal) and intensities (1 and 2 mA). 2 mA anodal tDCS, when applied for both 15 and 5 minutes over the motor cortex, increased cold perception threshold. Conversely, motor cortex cathodal tDCS modulated cold perception threshold only when 1 mA intensity was used. M1-tDCS can modify the temperature perception; these effects are polarity and intensity dependent. As stimulation intensity seems critical to determine the effects, we suggest that for clinical application strong anodal tDCS (>1 mA) or weak cathodal tDCS (<2 mA) should be used for pain control.\n\nBODY.INTRODUCTION:\nTreatment of chronic pain (CP) is often difficult and satisfactory pain control is not always achieved. However, invasive motor cortex stimulation (MCS) and non-invasive brain stimulation (NIBS) techniques have emerged as a potential therapy and have been reported to be highly successful in a proportion of treated patients1. However, although MCS is generally effective in pain relief, a conspicuous number of patients are not-responders. A high number of not-responders is unwanted and favours the expansion of the less expensive NIBS techniques over the neurosurgical techniques. Non-invasive methods of brain stimulation, including repetitive transcranial magnetic stimulation (rTMS) and transcranial direct current stimulation (tDCS) are able to induce long lasting effects within the human motor cortex2–4 and have been used to treat pain. Anodal tDCS over primary motor cortex (M1) promotes an increase in cortical excitability, while cathodal tDCS induces a reduction of cortical excitability5, furthermore it was shown that the longer the duration of tDCS, the longer the induced after-effects6, 7. As far as it concerns cathodal tDCS, increasing intensity from 1 mA to 2 mA can switch motor cortex excitability inhibition into excitation8. 
Also, as intensity increases, the induced electric field goes deeper into the brain so that it is conceivable that the orientation and distance of the axonal or dendritic-somatic axis with respect to the electrical field could affect the resulting biological and clinical effects unexpectedly9. Finally, it is important to consider that small variations in electrode size, area, shape, or placement (montage) can strongly influence the \"diffusion\" of the current and the geometry of the induced DC fields into the brain10–13. Many prior studies have evaluated the effects of tDCS on temperature and pain perception in healthy volunteers. A list of the most relevant works on this issue is given in Table 1 (refs. 14–21). It should be noted that there is great divergence across studies regarding the effects of tDCS on the studied parameters; for this reason, further research is warranted.\nTable 1. Most relevant papers about the effects of tDCS on temperature and pain perception in healthy volunteers.\n| Article | Polarity | Stimulation intensity and duration | Evaluation protocol | Results | Target |\n| --- | --- | --- | --- | --- | --- |\n| Borckardt et al.14 | Anodal, Sham (HD-tDCS) | 2 mA, 20 minutes | QST | Anodal HD-tDCS decreased heat and cold perception thresholds with no effects on heat pain threshold and just a small effect on cold pain threshold | M1 |\n| Bachmann et al.15 | Anodal, Cathodal, Sham | 1 mA, 15 minutes | QST | Cathodal tDCS increased cold perception threshold | M1 |\n| Grundmann et al.16 | Anodal, Cathodal, Sham | 1 mA, 15 minutes | QST | Cathodal tDCS increased cold detection thresholds in both hands and warm detection thresholds only in the contralateral hand | S1 |\n| Zandieh et al.17 | Anodal, Cathodal, Sham | 2 mA, 15 minutes | Cold Pressor Test | Anodal tDCS led to increment in pain threshold | M1 |\n| Jurgens et al.18 | Anodal, Cathodal, Sham | 1 mA, 15 minutes | QST | Neither anodal nor cathodal tDCS was able to significantly change somatosensory and pain perception | M1 |\n| Mylius et al.20 | Anodal, Cathodal, Sham | 2 mA, 20 minutes | QST | Anodal tDCS led to an increase of tolerance to heat pain | DLPC |\n| Csifcsak et al.21 | Anodal, Cathodal, Sham | 1 mA, 10 minutes | Laser evoked potentials (LEP) | Cathodal tDCS significantly reduced the amplitude of N2 and P2 | M1 |\n| Boggio et al.22 | Anodal, Sham | 2 mA, 5 minutes | Peripheral electrical stimulation (PES) | Anodal tDCS of M1 increased both perception and pain thresholds, whilst stimulation of the DLPFC increased pain threshold only | M1, DLPC, V1 |\n Quantitative sensory testing (QST) is a standardized method that allows the evaluation of temperature and pain perceptions by means of different kinds of thermal stimuli (warm, hot inducing pain, cold and cold inducing pain), thus providing information about each of the physiological pathways involved22–24. In more detail, QST provides the opportunity for assessment of Aδ-fibres and C-fibres, which are involved respectively in cold and warm temperature perception and in nociception (cold pain and hot pain)25–27. To elucidate whether tDCS could affect temperature and pain perceptions in healthy subjects, we performed a double-blind study in which we evaluated the effects of 15 min 2 mA-tDCS on temperature and pain perception as measured by QST: we chose this tDCS protocol since it is commonly used to obtain pain relief in clinical applications28. In order to disclose a possible site-specificity of tDCS effects we applied the neuromodulatory protocol over M1 and parietal cortex (Pcor). 
Moreover, in a separate experimental session, we tested the effects of a very short tDCS protocol (5 min, cathodal and anodal, M1) by using a placebo-controlled double blind study to disclose the polarity and intensity dependency of the effects of tDCS on thermal and pain perceptions. At the same time, by means of these experiments, we aimed to test which QST parameters can be used as a biomarker of the effects of tDCS, possibly to be used in future studies about its application in pain treatment.\n\nBODY.METHODS.SUBJECTS:\nTen healthy volunteers (7 women, 3 men, mean age 31.9 ± 4.9 (SD) years, range 27–41 years) were studied by applying anodal tDCS at 2 mA for 15 minutes over two different cerebral targets, M1 and Pcor. Moreover, eleven healthy volunteers (9 women, 2 men, mean age 31.5 ± 7.5 (SD) years, range 24–39 years) were studied with a very short duration tDCS protocol using a sham controlled design. Just one subject participated in both experimental protocols. All subjects gave informed consent prior to participation; the study was approved by the Ethical Committee for Clinical Research (Toledo, Spain) and was conducted in accordance with the Declaration of Helsinki.\n\nBODY.METHODS.EXPERIMENTAL DESIGN:\nSubjects were seated in a comfortable chair, with their arms and the whole body at rest. Sensory testing was evaluated prior to, immediately after, and 25 minutes after tDCS (times pre, post 0, and post 25, respectively). See Figure 1 for a schematic representation of the experimental protocols. The experimental protocols were randomly administered across the subjects and were separated by at least one week. The entire testing algorithm (temperature perception testing and pain thresholds) lasted for an average time of 12 minutes.\nFigure 1. Schematic representation of the experimental setup.\n\n\nBODY.METHODS.MOTOR CORTEX LOCALIZATION: TRANSCRANIAL MAGNETIC STIMULATION:\nTranscranial magnetic stimulation (TMS) was performed with a Magstim 200 magnetic stimulator (Magstim Company, Whiteland, Dyfed, UK) and a figure-of-eight magnetic coil (diameter of one winding, 70 mm; peak magnetic field, 2.2 Tesla). The magnetic stimulus had a biphasic waveform with a pulse width of about 300 μs. During the first phase of the stimulus, the current in the center of the coil flowed towards the handle. The coil was placed tangentially on the scalp, inducing a posterior-anterior current in the brain. We determined the optimal position for activation of the right first dorsal interosseous (FDI) by moving the coil in 0.5 cm steps around the presumed motor hand area of the left motor cortex. The site where stimuli of slightly suprathreshold intensity consistently produced the largest motor evoked potentials (MEPs) with the steepest negative slope in the target muscle was marked as the \"hot spot\".\n\nBODY.METHODS.PARIETAL CORTEX LOCALIZATION:\nTo localize the optimal position of the electrodes to stimulate the parietal cortex, we used as reference the motor hand area of the right first dorsal interosseus (FDI) localized by TMS (hot spot) and moved 2 cm posteriorly in the parasagittal direction29.\n\nBODY.METHODS.EXPERIMENT 1: COMPARING THE EFFECTS OF THE ANODAL TDCS OVER THE MOTOR AND PARIETAL CORTICES:\nWe tested the effects of a protocol similar to those commonly used to obtain pain relief in clinical applications28 (anodal tDCS, 15 min, 2 mA, M1 or Pcor) on two different cortical targets. 
With this experiment, we tested whether there are effects on QST parameters that can be used as a biomarker of the effects of tDCS. Furthermore, we tested the effects over the parietal cortex as some studies suggest that this target can be used for pain control23. It should be noted that this experiment was developed to test if Pcor stimulation is effective in obtaining pain control and changes in temperature perception in healthy humans. We compared this site of stimulation to a commonly used protocol in pain treatment (anodal tDCS, 15 min, over M1) and for this reason we did not use, in this first experiment, sham stimulation as a control condition. All the subjects underwent two different experimental sessions: (1) anodal motor cortex tDCS, 2 mA, 15 minutes; (2) anodal parietal cortex tDCS, 2 mA, 15 minutes. Following the skin preparation to reduce impedance, saline-soaked sponge electrodes (5 × 7 cm) were positioned over the motor/parietal cortex and contralateral orbit, using the hot spot (for motor cortex stimulation) or 2 cm posteriorly in the parasagittal direction (for sensory cortex stimulation) as the center of the cortical electrode. Stimulation was applied for 15 min at a current of 2 mA (8 s fade in/fade out for a total stimulation time of 916 s). The current intensity and duration were within the established safety limits30. For anodal tDCS, the anode was positioned above the motor cortical representation of the right FDI while the cathode was placed above the contralateral orbit.\n\nBODY.METHODS.EXPERIMENT 2: PLACEBO-CONTROLLED STUDY ON POLARITY AND INTENSITY EFFECTS OF TDCS OVER THE MOTOR CORTEX:\nIn Exp 1, we used the M1 stimulation (a commonly used protocol in pain treatment) as a control for the Pcor stimulation. As we demonstrated that Pcor stimulation was inefficient in obtaining a significant change in any of the studied parameters (see results), in Exp 2 we tested the polarity and stimulation intensity dependence of the effects of M1 stimulation. Thus, in Exp 2 we added a sham tDCS as a control condition, cathodal tDCS to also test a different polarity, and two different stimulation intensities (1 mA and 2 mA). We tested the effects of a very short tDCS protocol (5 min) using a sham controlled design to optimize the possibility to study minimal effects of tDCS over the motor cortex of different polarities (cathodal and anodal) and intensities (1 and 2 mA). For this, we evaluated how tDCS applied to the motor cortex for 5 min was able to induce a change in sensory and pain perception, as measured by QST. This study was sham-controlled. All the subjects underwent five different experimental sessions: (1) anodal tDCS of 1 mA; (2) anodal tDCS of 2 mA; (3) cathodal tDCS of 1 mA; (4) cathodal tDCS of 2 mA; (5) sham tDCS. Following the skin preparation to reduce impedance, saline-soaked sponge electrodes (5 × 7 cm) were positioned over the motor cortex and contralateral orbit12, using the hot spot identified with TMS as the center of the cortical electrode. Stimulation was applied for 5 min at a current of 1 mA or 2 mA (8 s fade in/fade out for a total stimulation time of 316 s). The current intensity and duration were within the established safety limits30. Sham stimulation involved the same electrode placement and duration as the real conditions; however, the constant current was delivered for only 30 s. Most subjects experienced a mild tingling sensation at the site of electrode contact that was independent of polarity and usually subsided after a period of a few seconds. 
For anodal tDCS and sham stimulation, the anode was positioned above the motor cortical representation of the right FDI while the cathode was placed above the contralateral orbit. For cathodal tDCS the opposite montage was used.\n\nBODY.METHODS.QUANTITATIVE SENSORY TESTING:\nQST was conducted by using a Thermal Sensory Analyzer TSA 2001-II (MEDOC, Ramat Yishai, Israel)31, 32. This device uses computer controlled Peltier elements to heat or cool a contact plate in the thermode to the desired temperature. The thermode area was 3 cm × 3 cm (9 cm2). The entire thermode stimulating surface was placed in contact with the skin-testing site and secured by a Velcro band without stretch. Cold and warm thresholds were measured by stimulating right thenar eminence. The initial resting temperature of the thermode was 32 °C and the rate of the temperature change was 0.5 °C/s down or up for cold and warm trials, respectively. Stimulus magnitude was defined in each trial as the difference between the final temperature and the resting temperature of the thermode. Room temperature was controlled and kept between 23–24 °C.\n\nBODY.METHODS.WARM AND COLD THRESHOLD DETERMINATION:\nThe method of levels (MLE) was used, because its measurements have been shown not to be influenced by reaction time33. A single stimulus of predetermined magnitude was presented and the subject indicated, after the cessation of the stimulus, whether it was felt or not. The initial stimulus step size was predetermined at a step of 3 °C (above or below the starting temperature of 32 °C). Subsequently, stimuli were decreased by intermediate steps of 1 °C, until the subject gave a negative response. The subsequent stimuli were increased or decreased by fine search steps of 0.3 °C. The direction changed according to the response, increasing for negative response (not felt), decreasing for positive response (felt). Null stimuli were randomly included. Four negative responses were required, after fine search had begun, to terminate the test. Threshold was determined by taking the mean of all results during the fine search step. Thresholds were expressed as the difference of the obtained value with respect to the starting temperature of 32 °C.\n\nBODY.METHODS.HEAT AND COLD PAIN THRESHOLD DETERMINATION:\nCold pain thresholds were measured over five trials by decreasing the temperature at a rate of 1 °C/s until the subject indicated that the temperature became painful. The threshold for heat pain was measured over five trials in which the temperature was increased until the stimulus was perceived as painful. Pain thresholds were determined by taking the average of the five successive trials. Thresholds were expressed as the difference of the obtained value respect to the starting temperature of 32 °C. The order of heat and cold detection and pain threshold measurements was randomised across participants.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nCold and warm thresholds and cold and heat pain thresholds in baseline condition were compared using unpaired t test. For this comparison only the baseline cold and warm thresholds obtained for anodal (2 measures) and cathodal (2 measures) and sham stimulation protocols have been averaged. The cold and warm perception threshold values were computed across subjects for both real and sham conditions and for the three time points (baseline, post 0 and post 25). 
For Experiment 1, the effects of 15 minutes of motor and sensory cortex tDCS on thermal thresholds (warm and cold) and pain thresholds (hot and cold) were evaluated by means of a three-way mixed model ANOVA with TIME (baseline, post 0 and post 25) as within-subject factor and LOCATION (M1 and Pcor) and MODALITY (cold perception, warm perception, cold pain and hot pain) as between-subjects factors, followed – in case of a significant interaction – by four separate two-way repeated-measures follow-up ANOVAs (TIME and LOCATION) for each studied modality. Again, when a significant interaction was found, two separate one-way repeated measures follow-up ANOVAs (TIME) were performed for each LOCATION. When significant main effects or interactions were found, Tukey's honest significant difference test was used for post-hoc comparisons. In Experiment 2, we evaluated the effects of 5 minutes of anodal tDCS, cathodal tDCS and sham stimulation when two different intensities were used (1 and 2 mA). To achieve this, we performed two separate three-way mixed model ANOVAs for 1 mA and 2 mA stimulation with MODALITY (warm perception, cold perception, heat pain and cold pain) and STIMULATION (anodal, cathodal and sham) as between-subjects factors and TIME (baseline, post 0 and post 25) as a within-subjects factor, followed – in case of a significant interaction – by four separate two-way repeated-measures follow-up ANOVAs (TIME and STIMULATION) for each studied modality. Again, when a significant interaction was found, three further separate one-way repeated measures follow-up ANOVAs (TIME) were performed for each STIMULATION. When main effects or interactions were found, Tukey's honest significant difference test was used for post-hoc comparisons.\n\nBODY.RESULTS:\nAs previously reported by other authors using QST17, 24, baseline cold threshold was lower than warm threshold (0.68 ± 0.35 °C vs 0.92 ± 0.49 °C, p = 0.008, unpaired t test) (Fig. 2). Baseline cold pain threshold was higher than heat pain threshold (21.1 ± 5.2 °C vs 14.4 ± 2.1 °C, p < 0.001, unpaired t test) (Fig. 2).\nFigure 2. Temperature perception thresholds and pain thresholds at baseline. Thresholds were expressed as the difference of the obtained value with respect to the starting temperature of 32 °C.\n\n\nBODY.RESULTS.EXPERIMENT 1:\nAt baseline, cold perception threshold, warm perception threshold, cold pain threshold and heat pain threshold did not differ between the 2 experimental sessions (paired t-tests, p > 0.5). 2 mA anodal tDCS, when applied for 15 min over the motor cortex, induced a change only in cold perception threshold but not in the other studied parameters (one-way follow-up ANOVA for cold perception: F(2,18) = 3.68; p = 0.046). In detail, 15 minutes of 2 mA anodal tDCS induced a significant increase in cold perception thresholds immediately after the end of the stimulation (p = 0.039) (see Fig. 3).\nFigure 3. Temperature perception thresholds and pain thresholds for warm (hot) and cold perception (anodal tDCS, 15 min, 2 mA, over motor and sensory cortex). Error bars are standard deviations. 
*p < 0.05.\n Moreover, 2 mA anodal tDCS applied over the Pcor for 15 minutes induced no significant changes in any of the studied parameters (one-way follow-up ANOVA for cold perception: F(2,18) = 3.33; p = 0.058); we only observed a tendency towards an increase in cold perception (Baseline vs Post 25; p = 0.058).\n\nBODY.RESULTS.EXPERIMENT 2:\nAt baseline, cold perception threshold, warm perception threshold, cold pain threshold and heat pain threshold did not differ between the 5 experimental sessions (paired t-tests, p > 0.5). Cold perception threshold was significantly modulated by tDCS (one-way follow-up ANOVA for cold perception: F(2,20) = 4.60; p = 0.022). In particular, when 1 mA was used, cathodal tDCS was able to increase cold perception threshold at 25 minutes after the end of the stimulation (p = 0.018) (see Fig. 4).\nFigure 4. Temperature perception thresholds for warm and cold perception (anodal, cathodal and sham stimulation, tDCS 5 min, 1 and 2 mA, over motor cortex). Error bars are standard deviations. *p < 0.05.\n Anodal and sham 1 mA tDCS changed none of the studied parameters, and cathodal 1 mA tDCS was not able to change warm perception threshold, cold pain threshold or heat pain threshold. When the intensity of tDCS was set at 2 mA, the only modality of stimulation able to increase cold perception threshold was anodal tDCS (one-way follow-up ANOVA for cold perception: F(2,20) = 7.46; p = 0.004). In particular, cold perception threshold was significantly increased at 25 minutes after the end of the stimulation (p = 0.003) (Fig. 5, left upper panel).\nFigure 5. Temperature pain thresholds for hot and cold perception (anodal, cathodal and sham stimulation, tDCS 5 min, 1 and 2 mA, over motor cortex). Error bars are standard deviations. *p < 0.05.\n\n\nBODY.DISCUSSION:\nThe present study shows that motor cortex 2 mA anodal tDCS, applied for both 5 min and 15 min, increased cold perception thresholds. Moreover, the absence of effects when stimulating a non-motor cortex such as Pcor was demonstrated by applying 2 mA anodal tDCS for 15 min. However, a single session of 15 min of anodal tDCS at 2 mA delivered over the motor or parietal cortex was unable to modulate pain perceptions. Moreover, when anodal tDCS was delivered at a smaller intensity (1 mA) and for 5 minutes, no effects on temperature and pain perceptions were found. As far as cold perception is concerned, our findings are in line with previous studies. Indeed, Borckardt and co-workers, using a high definition tDCS montage, found that 20 min of tDCS delivered at an intensity of 2 mA modified heat and cold sensory perception when anodal stimulation was used, while no effects were found on heat pain threshold and just a small effect on cold pain threshold14. In more detail, they reported that after stimulation lower temperatures were required to detect a change induced by cold stimuli. Furthermore, higher temperatures were required to detect a change induced by warm stimuli. On the other hand, we did not find any effects on warm perception or on cold and hot pain perception. Moreover, another study described that 2 mA anodal stimulation of the primary motor area can be utilized to alleviate cold pain perception17. However, it should be considered that these groups found some effects on pain, and this discrepancy with respect to our data could be due to the use of HD-tDCS14, which is characterized by a different spatial profile of induced brain current flow. On the other hand, Zandieh et al. 
(2012) evaluated the effects of 2 mA tDCS on cold pain threshold by using the cold pressor test, so the different evaluation methodology could have accounted for the slightly different results17. In addition, it should be considered that the effects induced by tDCS are considerably affected by large inter-subject variability, as shown by similar studies using other forms of non-invasive brain stimulation34. In this light, the differences between our results and those from other groups could also be influenced by this large inter-subject variability and by the relatively small size of the studied cohorts. Our data also suggest that anodal tDCS increases cold perception threshold by a dose-dependent mechanism in which an intensity greater than 1 mA is required. Furthermore, we also observed that 5 minutes of 1 mA cathodal tDCS applied over M1 significantly increased cold perception threshold, while no effects were found when using sham stimulation. This result was similar to the effects of 2 mA anodal tDCS. It should be noted that we found positive results only when tDCS was applied over M1 and not over Pcor. Since our main results consist of the modulation of a non-painful perception (i.e. cold perception), it could have been expected that the Pcor location would be the most effective site of application. However, Craig et al. found that the application of innocuous cold stimuli activates the insula and not the primary somatosensory cortex. Nevertheless, painful heat and cold stimuli activated the contralateral anterior cingulate cortex, contralateral primary motor cortex, primary sensory cortex (S1), bilateral secondary sensory cortex, midinsular cortex, thalamus, and cerebellum35. Furthermore, functional neuroimaging also disclosed remote and widespread effects of tDCS and rTMS when focally applied to the primary motor or premotor cortex, through fibres that project to remote cortical or subcortical structures involved in cognitive-emotional or discriminative aspects of sensorial experience and pain3, 36: thus it is not surprising that the stimulation of M1 is crucial in the modulation of thermal perception thresholds. It is important to keep in mind that the tDCS we used cannot be considered \"strictly\" focal. Indeed, due to the large size of the tDCS electrodes (35 cm2) and the montage we used, it cannot be ruled out that when tDCS electrodes are placed over M1, the stimulation can also affect the parietal cortex (and vice versa). Indeed, a non-focal effect of tDCS over M1 may spread to Pcor. This seems not to be the case, at least using the polarity (anodal) and the parameters we used here, as tDCS delivered over Pcor is less effective in modulating temperature sensory perception than M1 stimulation. Nevertheless, we found a tendency towards an increase in cold perception threshold after 2 mA anodal tDCS over Pcor (similar to the effects of M1 stimulation) that can be explained (though perhaps not exclusively) by spreading of the currents towards M1. Another interesting point from our results is that 5 min of cathodal stimulation also induced an increase in cold perception threshold. Other groups found similar results; for example, Bachmann et al.15 reported that cathodal stimulation of the primary motor area reduced sensitivity of A-fibers to somatosensory input (cold detection thresholds)15. Also, effects of stimulation of the left S1 on thermal perception were found, with cathodal tDCS increasing cold detection thresholds in both hands and warm detection thresholds only in the contralateral hand16. 
In both cases tDCS was delivered at 1 mA and cathodal stimulation was the most effective when compared with anodal or sham stimulation. Furthermore, it should be considered that the net effect of motor cortex stimulation may be a mixed effect, as cathodal tDCS may also exert a \"facilitatory effect\" by deactivating inhibitory interneurons37, thus hypothetically explaining the similar effects induced by cathodal and anodal tDCS in the present study. We found no effect on pain thresholds. Ihle et al.38 conducted a study using functional imaging to explore the underpinnings of the previously suggested antinociceptive effects of tDCS over the motor cortex38. They found that neither cathodal nor anodal tDCS over the left M1 (1 mA, 15 minutes) significantly changed cortical nociceptive processing in response to a heat pain paradigm when compared with sham stimulation. Only when contrasting the interaction between responses to anodal and cathodal stimulation were substantial polarity-specific differences found in regional brain activation after painful stimulation: anodal stimulation induced a decrease of regional Cerebral Blood Flow (rCBF), whereas cathodal stimulation resulted in an increase of rCBF in the hypothalamus, inferior parietal cortex, inferior parietal lobule, anterior insula, and precentral gyrus contralateral to the stimulation site38. In sum, we suggest the importance of evaluating the most efficient combination of intensity/polarity and site of stimulation for better results, since, to the best of our knowledge, this is the only work evaluating different intensities, polarities and cortical targets in the same study. As far as parietal cortex stimulation is concerned, we found only mild effects. We cannot exclude that longer stimulation sessions or repeated sessions may have an effect on temperature perception. More studies should be done to clarify these considerations. We are aware that findings obtained from healthy subjects have to be transferred to patients with caution. On the other hand, the effects of neuromodulation techniques under physiological conditions need to be understood before considering the multiple mechanisms that can condition the clinical response of patients suffering from pain and sensory disturbances. We can conclude that tDCS delivered over the motor cortex can modify temperature perception and that these effects are polarity and intensity dependent. As stimulation intensity seems critical in determining the effects, we suggest that for clinical application strong anodal tDCS (>1 mA) or weak cathodal tDCS (<2 mA) should be used for pain control (in an attempt to reduce inconsistencies).\n\n**Question:** Compared to baseline what was the result of 2 mA anodal tDCS when applied over the Pcor for 15 minutes on cold perception threshold, warm perception threshold, cold pain threshold and heat pain threshold?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
950
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Continued stabilization of trabecular metal tibial monoblock total knee arthroplasty components at 5 years—measured with radiostereometric analysis\n\n ABSTRACT.BACKGROUND AND PURPOSE:\n The trabecular metal tibial monoblock component (TM) is a relatively new option available for total knee arthroplasty. We have previously reported a large degree of early migration of the trabecular metal component in a subset of patients. These implants all appeared to stabilize at 2 years. We now present 5-year RSA results of the TM and compare them with those of the NexGen Option Stemmed cemented tibial component (Zimmer, Warsaw IN).\n\nABSTRACT.PATIENTS AND METHODS:\n 70 patients with osteoarthritis were randomized to receive either the TM implant or the cemented component. RSA examination was done postoperatively and at 6 months, 1 year, 2 years, and 5 years. RSA outcomes were translations, rotations, and maximum total point motion (MTPM) of the components. MTPM values were used to classify implants as \"at risk\" or \"stable\".\n\nABSTRACT.RESULTS:\n At the 5-year follow-up, 45 patients were available for analysis. There were 27 in the TM group and 18 in the cemented group. MTPM values were similar in the 2 groups (p = 0.9). The TM components had significantly greater subsidence than the cemented components (p = 0.001). The proportion of \"at risk\" components at 5 years was 2 of 18 in the cemented group and 0 of 27 in the TM group (p = 0.2).\n\nABSTRACT.INTERPRETATION:\n In the previous 2-year report, we expressed our uncertainty concerning the long-term stability of the TM implant due to the high initial migration seen in some cases. Here, we report stability of this implant up to 5 years in all cases. The implant appears to achieve solid fixation despite high levels of migration initially.\n\nBODY:\nSwedish Knee Arthoplasty Register data suggest that uncemented tibial components have worse survivorship than cemented components (Knutson and Robertsson 2010). However, more recent data from Australia have suggested that modern uncemented designs have a similar cumulative revision rate to that of cemented components (Graves et al. 2004). With recent improvements in biomaterials, the potential for improved longevity using uncemented implants is once again being explored. Trabecular metal (Zimmer, Warsaw, IN) is a new material available for use in uncemented total knee arthroplasty. It is a porous biomaterial with morphology and mechanical properties resembling those of trabecular bone (Bobyn et al. 1999a,b, Zardiackas et al. 2001, Levine et al. 2006, Balla et al. 2010). In a previous paper, we presented the 2-year implant migration results of the trabecular metal tibial monoblock component using radiostereometric analysis (RSA) (Dunbar et al. 2009). 
The results of that study showed high initial migration in 9 out of 28 of the tibial components, with apparent stabilization occurring after 1 year in all cases. These results reflected the results of a similar RSA study on the same implant, performed at another center (Henricson et al. 2008). In addition, in the 2-year report we also documented apparent deformation of the tibial base plate, which occurred in 5 of the cases of high migration (Dunbar et al. 2009). The implications of the high degree of migration and plate deformation for long-term survival are not known. In a subset of the patients enrolled in the study, we also performed RSA for measurement of inducible displacement between 2 and 4 years. The results of this study showed that the trabecular metal component had the lowest inducible displacement ever reported in the literature, indicating excellent stability (Wilson et al. 2010). The results of these 3 studies supported the hypothesis that these components were achieving adequate bone in-growth for long-term survival (Henricson et al. 2008, Dunbar et al. 2009, Wilson et al. 2010). However, longer follow-up is necessary to determine whether the early stability of these implants is durable. In this paper, we present the 5-year longitudinal RSA results from the original cohort of patients randomized to receive either the Nexgen LPS monoblock (trabecular metal) tibial component (Zimmer) or the cemented NexGen Option Stemmed tibial component (Zimmer) (Dunbar et al. 2009).\n\nBODY.PATIENTS AND METHODS:\n70 subjects with severe osteoarthritis were randomized to receive either the Nexgen LPS monoblock (trabecular metal) tibial component (Zimmer, Warsaw, IN) or the cemented NexGen Option Stemmed tibial component (also Zimmer). Surgery was performed by 4 experienced consultant knee surgeons using a standardized protocol: posterior cruciate ligament resection, patellar resurfacing with a cemented inlay component, cementing of the femoral component, and RSA marker placement of 0.8mm beads. 4–6 tantalum markers were placed around the periphery of the polyethylene component; a median of 10 tantalum markers were inserted into the proximal tibia. The postoperative protocol was standardized with the use of continuous passive motion as tolerated, and patients were allowed full weight bearing immediately. No drains were used. At the 5-year postoperative time point, patients were contacted and asked if they would consent to continued participation in the study. Written informed consent was obtained from patients who had verbally agreed to participate, when they arrived for their 5-year follow-up RSA examination. This was in accordance with requirements of our Institutional Ethics Review Board. The knee was placed above a uniplanar calibration box (Medis Specials, Leiden, the Netherlands) and simultaneous digital stereo radiographs were taken with the X-ray tubes oriented obliquely. Western Ontario and McMaster University Osteoarthritis Index (WOMAC) scores were obtained at 5 years. Full details of the original study design can be found in the previously published article (Dunbar et al. 2009). The original 2-year study was registered at ClinicalTrials.gov (NCT00405379).\n\nBODY.PATIENTS AND METHODS.RADIOSTEREOMETRIC ANALYSIS:\nRSA data analysis was performed using commercial software (MB-RSA; MEDIS, Leiden, the Netherlands). All translations and rotations were calculated to comply with the standards presented by Valstar et al. (2005). 
Rigid-body translations and rotations of the prosthesis were calculated about a coordinate system centered at the volumetric center of the implant, with axes aligned with the anatomical directions. The maximum total point motion of the tibial component for each case was calculated using fictive markers to standardize the calculations in cases where the prosthesis bead placement was not uniform for all subjects (Nilsson and Karrholm 1993, Nilsson et al. 1999, Valstar et al. 2005). The limit of rigid body fitting was a maximum of 0.2 mm for the tibial segment and 0.2 mm for the prosthesis segment. The condition number did not exceed 40 at any follow-up, indicating adequate distribution of beads in the rigid body (Valstar et al. 2005). The accuracy of the RSA system was assessed with a standard phantom study protocol. Accuracy was represented as half of the average width of the 95% prediction interval in a regression analysis of true and measured translations of a phantom. Precision was evaluated with double examination analysis, and represented as the 95% confidence interval of the measurements from 11 double clinical examinations performed at the postoperative follow-up. The marker configuration model-based RSA technique (Kaptein et al. 2005) was used to solve the occluded marker problem in cases where less than 3 beads were matched in an examination at follow-up. Marker configuration models were also used to test for bead loosening and possible deformation of the trabecular metal base plate. At the 2-year follow-up, implants were classified as being \"stable\" (< 0.2 mm MTPM between 1 and 2 years' follow-up) or as being \"at risk\" of early aseptic loosening (> 0.2 mm MTPM between 1 and 2 years' follow-up) (Ryd et al. 1995). Implants categorized as \"at risk\" at 2 years were re-evaluated, and if found to have less then 0.3 mm of motion between 2 and 5 years, they were considered to have stabilized. Implants considered to be \"at risk\" at 2 years with greater than 0.3 mm of motion between 2 and 5 years were still considered to be \"at risk\". Implants classified as being \"stable\" at 2 years but which showed more than 0.3 mm of motion between 2 and 5 years were classified as being \"at risk\". 0.3 mm was chosen as the threshold for migration over 3 years by extrapolating the 0.2 mm between 2-year follow-up periods, originally described as the modified continuous migration (MCM) criteria by Ryd et al. (1995).\n\nBODY.PATIENTS AND METHODS.STATISTICS:\nAnalysis of variance was used to test for differences in age, body mass index (BMI), and subjective measures between implant groups. Maximum total point motion measurements were not normally distributed; therefore, the Kruskal-Wallis test was used to investigate differences in maximum total point motion between implant groups at 5 years. Analysis of variance was used to test for differences in translations and rotations between groups at 5 years. Fisher's exact test was used to investigate differences in proportions of implants found to be \"at risk\" between groups, and Wilson's procedure was used to calculate the 95% confidence intervals of this proportion. There were 8 primary outcome metrics (maximum total point motion, 3 translations, 3 rotations, and proportion of implants at risk). A Bonferroni correction for multiple comparisons was used, giving a significance level of p < 0.006.\n\nBODY.RESULTS.STANDARD OUTCOMES AND FOLLOW-UP:\nAll standard outcome measures for the 45 patients in this study are given in Table 1. 
Originally, 37 patients were randomized to receive the trabecular metal component and 33 received the cemented component. Age and BMI were similar in the 2 groups at the time of admission. There were no differences in subjective measures (WOMAC) between implant groups at any follow-up point. Table 1. Subject demographics and WOMAC scores. Only matched WOMAC data from earlier follow-ups are included. Values are mean (SD) Trabecular metal Cemented p-value (n = 27) (n = 18) (ANOVA) Sex, M/F 10/17 8/10 Age 60 (8) 61 (9) 0.7 Body mass index 32 (5) 34 (5) 0.2 WOMAC preoperatively 51 (20) 44 (18) 0.2 WOMAC at 6 months 22 (23) 18 (20) 0.5 WOMAC at 12 months 15 (16) 14 (22) 0.9 WOMAC at 24 months 13 (16) 15 (14) 0.6 WOMAC at 60 months 18 (18) 19 (18) 0.8 For the 5-year follow-up, 45 patients could be reached and were willing to participate. There were 27/28 of the trabecular metal subjects and 18/21 cemented subjects with 2-year data recruited. The reasons for loss to follow-up for the other 25 patients are documented in Figure 1 and it happened mainly in the first two years of follow-up. Of the patients with a high degree of prosthesis migration in the original study, all 9 were recruited to the 5-year follow-up. All but 3 of the patients originally recruited to the study are still being followed by the operative surgeon, and there have been no revisions to date. The three patients who were exceptions had left the province. Figure 1.Consolidated Standards of Reporting Trials (CONSORT) flow diagram.\n\nBODY.RESULTS.RSA RESULTS:\nThe accuracy of the RSA system was 0.025 mm, 0.025 mm, 0.063 mm, and 0.026 mm for the x-, y-, and z- directions and maximum total point motion, respectively. The precision of the RSA system was 0.07 mm, 0.07 mm, and 0.11 mm for x-, y-, and z- translations, 0.16°, 0.15°, and 0.12° for x-, y-, and z- rotations, and 0.10 mm for maximum total point motion. At the 2-year follow-up, there were no subjects \"at risk\" in the trabecular metal group (0/28). At the 5-year follow-up, none of the trabecular metal components were found to be \"at risk\". At the 2-year follow-up, there were 4 subjects \"at risk\" in the cemented group (4/21). At the 5-year follow-up, 1 of the cemented components \"at risk\" was lost to follow-up, 2 stabilized (0.09 mm and 0.07 mm of movement between 2 and 5 years) and 1 continued to migrate (0.35 mm of movement between 2 and 5 years). 1 of the cemented implants found to be stable at 2 years migrated an additional 0.42 mm and was classified as being \"at risk\". However, retrospectively, this implant had somewhat anomalous 2-year data (0.64 mm, 0.45 mm, and 0.87 mm at 1, 2, and 5 years), where the difference between 1 year and 5 years would make the implant appear stable (0.23 mm in 4 years) but the difference between 2 years and 5 years would make the implant appear to be continuously migrating (0.42 mm in 3 years). The condition number and mean errors for this subject were within normal limits. All other implants that were found to be stable at 2 years migrated less than 0.3 mm between the 2- and 5-year follow-up, and were still classified as stable. The proportion of \"at risk\" components at 5 years was 2 of 18 (0.11, 95% CI: 0.03–0.33) in the cemented group and was 0 of 27 (95% CI: 0.0–0.13) in the trabecular metal group (p = 0.2). At the 5-year follow-up, there was no difference in MTPM between the cemented group and the trabecular metal group (p = 0.9) (Table 2 and Figure 2). MTPM values for individual subjects are shown in Figure 3. 
Compared to the cemented components, the trabecular metal tibial components had more subsidence (p = 0.001) (Table 2). Table 2. Summary of radiostereometric analysis (RSA) data by group and follow-up 6-month follow-up 12-month follow-up 24-month follow-up 60-month follow-up p-value a TM Cemented TM Cemented TM Cemented TM Cemented (n=30) (n=24) (n=29) (n=23) (n=28) (n=21) (n=27) (n=18) Mean maximum total point motion (range) 0.85 (0.07–3.11) 0.54 (0.12–1.99) 0.87 (0.12–3.25) 0.57 (0.16–1.87) 0.92 (0.06–3.40) 0.65 (0.18–2.12) 0.98 (0.16–3.11) 0.79 (0.06–2.16) 0.9 Lateral/medial translation (SD) –0.02 (0.13) 0.01 (0.14) –0.02(0.13) –0.00(0.14) –0.01 (0.14) 0.02 (0.16) –0.02 (0.16) 0.01 (0.26) 0.7 Superior/inferior translation (SD) –0.34 (0.31) 0.00 (0.08) –0.34 (0.34) –0.01 (0.11) –0.35 (0.35) –0.03 (0.10) –0.38 (0.37) –0.04 (0.11) 0.001 Anterior/posterior translation (SD) 0.02 (0.12) 0.02 (0.29) 0.03 (0.13) 0.06 (0.18) 0.02 (0.12) 0.09 (0.18) 0.01 (0.12) 0.08 (0.29) 0.3 Anterior/posterior tilt (SD) –0.33 (0.73) 0.02 (0.43) –0.40 (0.80) –0.02 (0.30) –0.39 (0.80) –0.05 (0.40) –0.45 (0.90) –0.11 (0.26) 0.1 Internal/external rotation (SD) 0.19 (0.38) –0.15 (0.47) 0.15 (0.39) –0.14 (0.44) 0.19 (0.45) –0.08 (0.60) 0.17 (0.45) 0.04 (0.82) 0.5 Varus/valgus tilt (SD) –0.05(0.72) 0.07 (0.32) –0.11 (0.73) 0.06 (0.32) –0.10 (0.75) 0.06 (0.33) –0.12 (0.73) –0.02 (0.20) 0.6 \n a 5-year results only Figure 2.Maximum total point motion (MTPM) (mean and standard deviation) according to the duration of follow-up in the trabecular metal and cemented groups. Lines joining data points do not indicate continuous measurement but are included to show the pattern of migration. Figure 3.Individual maximum total point motion (MTPM) according to the duration of follow-up in the trabecular metal group (blue) and the cemented group (red). Lines joining data points do not indicate continuous measurement but are included to show the pattern of migration. At the 2-year follow-up, 9 subjects in the trabecular metal group had shown very high migration in the first 6 months postoperatively (> 1.0 mm). All of these implants migrated less than 0.2 mm between 1 and 2 years and were not considered to be \"at risk\" At 5 years, all of these implants continued to show stability with an average change in MTPM of 0.10 mm over the 3 years between follow-ups. Using the marker configuration models, independent bead motion was detected in 5 of the high-migration cases, between the postoperative and 6-month follow-up examinations. The error of fit of the model marker was reduced dramatically (0.103 mm to 0.0466 mm on average) when the bead in question was removed from the analysis. The position of the bead with respect to that of the bead in the model was inferior, indicating that the motion was not due to loose beads, which would have traveled radially. The magnitude of independent bead motion was from 0.3 mm to 0.5 mm in all 5 cases. At the 5-year follow-up, there was no change in the position of the independently moving bead—indicating that the deformation of the base plate was permanent.\n\nBODY.DISCUSSION:\nIn the original 2-year report on these patients, we expressed our uncertainty concerning the long-term stability of the trabecular metal tibial implant due to the high initial migration seen in some cases. Subsequent to this, we have found stability of this implant up to 5 years and migration below the level of detection of our system in all cases. 
This implant appears to achieve solid fixation despite high initial levels of migration. The one cemented implant that was originally classified as being stable at 2 years but which appeared to be continuously migrating between 2 and 5 years was an unusual finding. We believe that due to random error, the MTPM of this subject was underestimated at 2 years, leading to a spurious finding of continuous migration between 2 and 5 years. However, this patient will continue to be monitored closely in the clinic. The mechanism that results in high early migration and subsequent stabilization is unknown. Previous work (Bellemans 1999) suggested that uncemented components can migrate extensively early after implantation and still form a robust interface with bone. However, migrations of such high magnitude (> 2.5 mm) are more typically associated with continuous migration (Grewal et al. 1992). In our work, all of the initially high migration stabilized by 1 year, suggesting that long-term stability can be achieved even in cases of quite extensive migration. Furthermore, our previous inducible displacement study on this patient group included subjects with a high degree of early migration, and all showed a low degree of inducible displacement (Wilson et al. 2010). Low inducible displacement suggests that bone ingrowth has occurred, and is inconsistent with fixation characterized by fibrous encapsulation. One possible explanation for this novel behavior may be related to the mechanical properties of the trabecular metal material. Due to its low elastic modulus, the trabecular metal monoblock component may deform, allowing areas of the implant that were well fixed initially to remain stable and achieve ingrowth of bone instead of lifting off as the contralateral side sinks into the bone. In support of this, we did measure permanent deformation of the trabecular metal tibial base plate in some of the high-migration cases (Dunbar et al. 2009). The most important limitation of this study was the small sample size. With only 37 patients randomized to the trabecular metal arm of the study, of whom only 27 were willing to participate in the 5-year follow-up, our ability to draw firm conclusions about the risk of revision in patients with this implant is somewhat limited. Another limitation is the modular nature of the cemented components. As the polyethylene was the part of the implant marked with RSA beads, any motion of the polyethylene component with respect to the tibial base plate would be detected as migration of the implant. It is possible that this contributed to the migration detected in some of the cemented cases. However, the monoblock construction of the trabecular metal implant makes motion between the polyethylene and base plate unlikely and less of a concern in that group.\n\n**Question:** Compared to the cemented component, what was the result of the trabecular metal tibial monoblock component (TM) on subsidence?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ]
significantly increased
951
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Continued stabilization of trabecular metal tibial monoblock total knee arthroplasty components at 5 years—measured with radiostereometric analysis\n\n ABSTRACT.BACKGROUND AND PURPOSE:\n The trabecular metal tibial monoblock component (TM) is a relatively new option available for total knee arthroplasty. We have previously reported a large degree of early migration of the trabecular metal component in a subset of patients. These implants all appeared to stabilize at 2 years. We now present 5-year RSA results of the TM and compare them with those of the NexGen Option Stemmed cemented tibial component (Zimmer, Warsaw IN).\n\nABSTRACT.PATIENTS AND METHODS:\n 70 patients with osteoarthritis were randomized to receive either the TM implant or the cemented component. RSA examination was done postoperatively and at 6 months, 1 year, 2 years, and 5 years. RSA outcomes were translations, rotations, and maximum total point motion (MTPM) of the components. MTPM values were used to classify implants as \"at risk\" or \"stable\".\n\nABSTRACT.RESULTS:\n At the 5-year follow-up, 45 patients were available for analysis. There were 27 in the TM group and 18 in the cemented group. MTPM values were similar in the 2 groups (p = 0.9). The TM components had significantly greater subsidence than the cemented components (p = 0.001). The proportion of \"at risk\" components at 5 years was 2 of 18 in the cemented group and 0 of 27 in the TM group (p = 0.2).\n\nABSTRACT.INTERPRETATION:\n In the previous 2-year report, we expressed our uncertainty concerning the long-term stability of the TM implant due to the high initial migration seen in some cases. Here, we report stability of this implant up to 5 years in all cases. The implant appears to achieve solid fixation despite high levels of migration initially.\n\nBODY:\nSwedish Knee Arthoplasty Register data suggest that uncemented tibial components have worse survivorship than cemented components (Knutson and Robertsson 2010). However, more recent data from Australia have suggested that modern uncemented designs have a similar cumulative revision rate to that of cemented components (Graves et al. 2004). With recent improvements in biomaterials, the potential for improved longevity using uncemented implants is once again being explored. Trabecular metal (Zimmer, Warsaw, IN) is a new material available for use in uncemented total knee arthroplasty. It is a porous biomaterial with morphology and mechanical properties resembling those of trabecular bone (Bobyn et al. 1999a,b, Zardiackas et al. 2001, Levine et al. 2006, Balla et al. 2010). In a previous paper, we presented the 2-year implant migration results of the trabecular metal tibial monoblock component using radiostereometric analysis (RSA) (Dunbar et al. 2009). 
The results of that study showed high initial migration in 9 out of 28 of the tibial components, with apparent stabilization occurring after 1 year in all cases. These results reflected the results of a similar RSA study on the same implant, performed at another center (Henricson et al. 2008). In addition, in the 2-year report we also documented apparent deformation of the tibial base plate, which occurred in 5 of the cases of high migration (Dunbar et al. 2009). The implications of the high degree of migration and plate deformation for long-term survival are not known. In a subset of the patients enrolled in the study, we also performed RSA for measurement of inducible displacement between 2 and 4 years. The results of this study showed that the trabecular metal component had the lowest inducible displacement ever reported in the literature, indicating excellent stability (Wilson et al. 2010). The results of these 3 studies supported the hypothesis that these components were achieving adequate bone in-growth for long-term survival (Henricson et al. 2008, Dunbar et al. 2009, Wilson et al. 2010). However, longer follow-up is necessary to determine whether the early stability of these implants is durable. In this paper, we present the 5-year longitudinal RSA results from the original cohort of patients randomized to receive either the Nexgen LPS monoblock (trabecular metal) tibial component (Zimmer) or the cemented NexGen Option Stemmed tibial component (Zimmer) (Dunbar et al. 2009).\n\nBODY.PATIENTS AND METHODS:\n70 subjects with severe osteoarthritis were randomized to receive either the Nexgen LPS monoblock (trabecular metal) tibial component (Zimmer, Warsaw, IN) or the cemented NexGen Option Stemmed tibial component (also Zimmer). Surgery was performed by 4 experienced consultant knee surgeons using a standardized protocol: posterior cruciate ligament resection, patellar resurfacing with a cemented inlay component, cementing of the femoral component, and RSA marker placement of 0.8mm beads. 4–6 tantalum markers were placed around the periphery of the polyethylene component; a median of 10 tantalum markers were inserted into the proximal tibia. The postoperative protocol was standardized with the use of continuous passive motion as tolerated, and patients were allowed full weight bearing immediately. No drains were used. At the 5-year postoperative time point, patients were contacted and asked if they would consent to continued participation in the study. Written informed consent was obtained from patients who had verbally agreed to participate, when they arrived for their 5-year follow-up RSA examination. This was in accordance with requirements of our Institutional Ethics Review Board. The knee was placed above a uniplanar calibration box (Medis Specials, Leiden, the Netherlands) and simultaneous digital stereo radiographs were taken with the X-ray tubes oriented obliquely. Western Ontario and McMaster University Osteoarthritis Index (WOMAC) scores were obtained at 5 years. Full details of the original study design can be found in the previously published article (Dunbar et al. 2009). The original 2-year study was registered at ClinicalTrials.gov (NCT00405379).\n\nBODY.PATIENTS AND METHODS.RADIOSTEREOMETRIC ANALYSIS:\nRSA data analysis was performed using commercial software (MB-RSA; MEDIS, Leiden, the Netherlands). All translations and rotations were calculated to comply with the standards presented by Valstar et al. (2005). 
Rigid-body translations and rotations of the prosthesis were calculated about a coordinate system centered at the volumetric center of the implant, with axes aligned with the anatomical directions. The maximum total point motion of the tibial component for each case was calculated using fictive markers to standardize the calculations in cases where the prosthesis bead placement was not uniform for all subjects (Nilsson and Karrholm 1993, Nilsson et al. 1999, Valstar et al. 2005). The limit of rigid body fitting was a maximum of 0.2 mm for the tibial segment and 0.2 mm for the prosthesis segment. The condition number did not exceed 40 at any follow-up, indicating adequate distribution of beads in the rigid body (Valstar et al. 2005). The accuracy of the RSA system was assessed with a standard phantom study protocol. Accuracy was represented as half of the average width of the 95% prediction interval in a regression analysis of true and measured translations of a phantom. Precision was evaluated with double examination analysis, and represented as the 95% confidence interval of the measurements from 11 double clinical examinations performed at the postoperative follow-up. The marker configuration model-based RSA technique (Kaptein et al. 2005) was used to solve the occluded marker problem in cases where less than 3 beads were matched in an examination at follow-up. Marker configuration models were also used to test for bead loosening and possible deformation of the trabecular metal base plate. At the 2-year follow-up, implants were classified as being \"stable\" (< 0.2 mm MTPM between 1 and 2 years' follow-up) or as being \"at risk\" of early aseptic loosening (> 0.2 mm MTPM between 1 and 2 years' follow-up) (Ryd et al. 1995). Implants categorized as \"at risk\" at 2 years were re-evaluated, and if found to have less then 0.3 mm of motion between 2 and 5 years, they were considered to have stabilized. Implants considered to be \"at risk\" at 2 years with greater than 0.3 mm of motion between 2 and 5 years were still considered to be \"at risk\". Implants classified as being \"stable\" at 2 years but which showed more than 0.3 mm of motion between 2 and 5 years were classified as being \"at risk\". 0.3 mm was chosen as the threshold for migration over 3 years by extrapolating the 0.2 mm between 2-year follow-up periods, originally described as the modified continuous migration (MCM) criteria by Ryd et al. (1995).\n\nBODY.PATIENTS AND METHODS.STATISTICS:\nAnalysis of variance was used to test for differences in age, body mass index (BMI), and subjective measures between implant groups. Maximum total point motion measurements were not normally distributed; therefore, the Kruskal-Wallis test was used to investigate differences in maximum total point motion between implant groups at 5 years. Analysis of variance was used to test for differences in translations and rotations between groups at 5 years. Fisher's exact test was used to investigate differences in proportions of implants found to be \"at risk\" between groups, and Wilson's procedure was used to calculate the 95% confidence intervals of this proportion. There were 8 primary outcome metrics (maximum total point motion, 3 translations, 3 rotations, and proportion of implants at risk). A Bonferroni correction for multiple comparisons was used, giving a significance level of p < 0.006.\n\nBODY.RESULTS.STANDARD OUTCOMES AND FOLLOW-UP:\nAll standard outcome measures for the 45 patients in this study are given in Table 1. 
Originally, 37 patients were randomized to receive the trabecular metal component and 33 received the cemented component. Age and BMI were similar in the 2 groups at the time of admission. There were no differences in subjective measures (WOMAC) between implant groups at any follow-up point. Table 1. Subject demographics and WOMAC scores. Only matched WOMAC data from earlier follow-ups are included. Values are mean (SD) Trabecular metal Cemented p-value (n = 27) (n = 18) (ANOVA) Sex, M/F 10/17 8/10 Age 60 (8) 61 (9) 0.7 Body mass index 32 (5) 34 (5) 0.2 WOMAC preoperatively 51 (20) 44 (18) 0.2 WOMAC at 6 months 22 (23) 18 (20) 0.5 WOMAC at 12 months 15 (16) 14 (22) 0.9 WOMAC at 24 months 13 (16) 15 (14) 0.6 WOMAC at 60 months 18 (18) 19 (18) 0.8 For the 5-year follow-up, 45 patients could be reached and were willing to participate. There were 27/28 of the trabecular metal subjects and 18/21 cemented subjects with 2-year data recruited. The reasons for loss to follow-up for the other 25 patients are documented in Figure 1 and it happened mainly in the first two years of follow-up. Of the patients with a high degree of prosthesis migration in the original study, all 9 were recruited to the 5-year follow-up. All but 3 of the patients originally recruited to the study are still being followed by the operative surgeon, and there have been no revisions to date. The three patients who were exceptions had left the province. Figure 1.Consolidated Standards of Reporting Trials (CONSORT) flow diagram.\n\nBODY.RESULTS.RSA RESULTS:\nThe accuracy of the RSA system was 0.025 mm, 0.025 mm, 0.063 mm, and 0.026 mm for the x-, y-, and z- directions and maximum total point motion, respectively. The precision of the RSA system was 0.07 mm, 0.07 mm, and 0.11 mm for x-, y-, and z- translations, 0.16°, 0.15°, and 0.12° for x-, y-, and z- rotations, and 0.10 mm for maximum total point motion. At the 2-year follow-up, there were no subjects \"at risk\" in the trabecular metal group (0/28). At the 5-year follow-up, none of the trabecular metal components were found to be \"at risk\". At the 2-year follow-up, there were 4 subjects \"at risk\" in the cemented group (4/21). At the 5-year follow-up, 1 of the cemented components \"at risk\" was lost to follow-up, 2 stabilized (0.09 mm and 0.07 mm of movement between 2 and 5 years) and 1 continued to migrate (0.35 mm of movement between 2 and 5 years). 1 of the cemented implants found to be stable at 2 years migrated an additional 0.42 mm and was classified as being \"at risk\". However, retrospectively, this implant had somewhat anomalous 2-year data (0.64 mm, 0.45 mm, and 0.87 mm at 1, 2, and 5 years), where the difference between 1 year and 5 years would make the implant appear stable (0.23 mm in 4 years) but the difference between 2 years and 5 years would make the implant appear to be continuously migrating (0.42 mm in 3 years). The condition number and mean errors for this subject were within normal limits. All other implants that were found to be stable at 2 years migrated less than 0.3 mm between the 2- and 5-year follow-up, and were still classified as stable. The proportion of \"at risk\" components at 5 years was 2 of 18 (0.11, 95% CI: 0.03–0.33) in the cemented group and was 0 of 27 (95% CI: 0.0–0.13) in the trabecular metal group (p = 0.2). At the 5-year follow-up, there was no difference in MTPM between the cemented group and the trabecular metal group (p = 0.9) (Table 2 and Figure 2). MTPM values for individual subjects are shown in Figure 3. 
Compared to the cemented components, the trabecular metal tibial components had more subsidence (p = 0.001) (Table 2). Table 2. Summary of radiostereometric analysis (RSA) data by group and follow-up 6-month follow-up 12-month follow-up 24-month follow-up 60-month follow-up p-value a TM Cemented TM Cemented TM Cemented TM Cemented (n=30) (n=24) (n=29) (n=23) (n=28) (n=21) (n=27) (n=18) Mean maximum total point motion (range) 0.85 (0.07–3.11) 0.54 (0.12–1.99) 0.87 (0.12–3.25) 0.57 (0.16–1.87) 0.92 (0.06–3.40) 0.65 (0.18–2.12) 0.98 (0.16–3.11) 0.79 (0.06–2.16) 0.9 Lateral/medial translation (SD) –0.02 (0.13) 0.01 (0.14) –0.02(0.13) –0.00(0.14) –0.01 (0.14) 0.02 (0.16) –0.02 (0.16) 0.01 (0.26) 0.7 Superior/inferior translation (SD) –0.34 (0.31) 0.00 (0.08) –0.34 (0.34) –0.01 (0.11) –0.35 (0.35) –0.03 (0.10) –0.38 (0.37) –0.04 (0.11) 0.001 Anterior/posterior translation (SD) 0.02 (0.12) 0.02 (0.29) 0.03 (0.13) 0.06 (0.18) 0.02 (0.12) 0.09 (0.18) 0.01 (0.12) 0.08 (0.29) 0.3 Anterior/posterior tilt (SD) –0.33 (0.73) 0.02 (0.43) –0.40 (0.80) –0.02 (0.30) –0.39 (0.80) –0.05 (0.40) –0.45 (0.90) –0.11 (0.26) 0.1 Internal/external rotation (SD) 0.19 (0.38) –0.15 (0.47) 0.15 (0.39) –0.14 (0.44) 0.19 (0.45) –0.08 (0.60) 0.17 (0.45) 0.04 (0.82) 0.5 Varus/valgus tilt (SD) –0.05(0.72) 0.07 (0.32) –0.11 (0.73) 0.06 (0.32) –0.10 (0.75) 0.06 (0.33) –0.12 (0.73) –0.02 (0.20) 0.6 \n a 5-year results only Figure 2.Maximum total point motion (MTPM) (mean and standard deviation) according to the duration of follow-up in the trabecular metal and cemented groups. Lines joining data points do not indicate continuous measurement but are included to show the pattern of migration. Figure 3.Individual maximum total point motion (MTPM) according to the duration of follow-up in the trabecular metal group (blue) and the cemented group (red). Lines joining data points do not indicate continuous measurement but are included to show the pattern of migration. At the 2-year follow-up, 9 subjects in the trabecular metal group had shown very high migration in the first 6 months postoperatively (> 1.0 mm). All of these implants migrated less than 0.2 mm between 1 and 2 years and were not considered to be \"at risk\" At 5 years, all of these implants continued to show stability with an average change in MTPM of 0.10 mm over the 3 years between follow-ups. Using the marker configuration models, independent bead motion was detected in 5 of the high-migration cases, between the postoperative and 6-month follow-up examinations. The error of fit of the model marker was reduced dramatically (0.103 mm to 0.0466 mm on average) when the bead in question was removed from the analysis. The position of the bead with respect to that of the bead in the model was inferior, indicating that the motion was not due to loose beads, which would have traveled radially. The magnitude of independent bead motion was from 0.3 mm to 0.5 mm in all 5 cases. At the 5-year follow-up, there was no change in the position of the independently moving bead—indicating that the deformation of the base plate was permanent.\n\nBODY.DISCUSSION:\nIn the original 2-year report on these patients, we expressed our uncertainty concerning the long-term stability of the trabecular metal tibial implant due to the high initial migration seen in some cases. Subsequent to this, we have found stability of this implant up to 5 years and migration below the level of detection of our system in all cases. 
This implant appears to achieve solid fixation despite high initial levels of migration. The one cemented implant that was originally classified as being stable at 2 years but which appeared to be continuously migrating between 2 and 5 years was an unusual finding. We believe that due to random error, the MTPM of this subject was underestimated at 2 years, leading to a spurious finding of continuous migration between 2 and 5 years. However, this patient will continue to be monitored closely in the clinic. The mechanism that results in high early migration and subsequent stabilization is unknown. Previous work (Bellemans 1999) suggested that uncemented components can migrate extensively early after implantation and still form a robust interface with bone. However, migrations of such high magnitude (> 2.5 mm) are more typically associated with continuous migration (Grewal et al. 1992). In our work, all of the initially high migration stabilized by 1 year, suggesting that long-term stability can be achieved even in cases of quite extensive migration. Furthermore, our previous inducible displacement study on this patient group included subjects with a high degree of early migration, and all showed a low degree of inducible displacement (Wilson et al. 2010). Low inducible displacement suggests that bone ingrowth has occurred, and is inconsistent with fixation characterized by fibrous encapsulation. One possible explanation for this novel behavior may be related to the mechanical properties of the trabecular metal material. Due to its low elastic modulus, the trabecular metal monoblock component may deform, allowing areas of the implant that were well fixed initially to remain stable and achieve ingrowth of bone instead of lifting off as the contralateral side sinks into the bone. In support of this, we did measure permanent deformation of the trabecular metal tibial base plate in some of the high-migration cases (Dunbar et al. 2009). The most important limitation of this study was the small sample size. With only 37 patients randomized to the trabecular metal arm of the study, of whom only 27 were willing to participate in the 5-year follow-up, our ability to draw firm conclusions about the risk of revision in patients with this implant is somewhat limited. Another limitation is the modular nature of the cemented components. As the polyethylene was the part of the implant marked with RSA beads, any motion of the polyethylene component with respect to the tibial base plate would be detected as migration of the implant. It is possible that this contributed to the migration detected in some of the cemented cases. However, the monoblock construction of the trabecular metal implant makes motion between the polyethylene and base plate unlikely and less of a concern in that group.\n\n**Question:** Compared to the cemented component, what was the result of the trabecular metal tibial monoblock component (TM) on the proportion of \"at risk\" components at 5 years?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
952
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Assessing the Effect of an Integrated Control Strategy for Schistosomiasis Japonica Emphasizing Bovines in a Marshland Area of Hubei Province, China: A Cluster Randomized Trial\n\n ABSTRACT.INTRODUCTION:\nMore than 80% of schistosomiasis patients in China live in the lake and marshland regions. The purpose of our study is to assess the effect of a comprehensive strategy to control transmission of Schistosoma japonicum in marshland regions.\n\nABSTRACT.METHODOLOGY/PRINCIPAL FINDINGS:\nIn a cluster randomized controlled trial, we implemented an integrated control strategy in twelve villages from 2009 through 2011 in Gong'an County, Hubei Province. The routine interventions included praziquantel chemotherapy and controlling snails, and were implemented in all villages. New interventions, mainly consisting of building fences to limit the grazing area for bovines, building safe pastures for grazing, improving the residents' health conditions and facilities, were only implemented in six intervention villages. Results showed that the rate of S. japonicum infection in humans, bovines, snails, cow dung and mice in the intervention group decreased from 3.41% in 2008 to 0.81% in 2011, 3.3% to none, 11 of 6,219 to none, 3.9% to none and 31.7% to 1.7%, respectively (P<0.001 for all comparisons). In contrast, there were no statistically significant reductions of S. japonicum infection in humans, bovines and snails from 2008 to 2011 in the control group (P>0.05 for all comparisons). Moreover, a generalized linear model showed that there was a higher infection risk in humans in the control group than in the intervention group (OR = 1.250, P = 0.001) and an overall significant downward trend in infection risk during the study period.\n\nABSTRACT.CONCLUSIONS/SIGNIFICANCE:\nThe integrated control strategy, designed to reduce the role of bovines and humans as sources of S. japonicum infection, was highly effective in controlling the transmission of S. japonicum in marshland regions in China.\n\nABSTRACT.TRIAL REGISTRATION:\nChinese Clinical Trial Registry ChiCTR-PRC-12002405.\n\nBODY.INTRODUCTION:\nSchistosomiasis is an important public health issue in China where it continues to pose a serious threat to human well-being [1]–[3]. Since the People's Republic of China was established in 1949, the Chinese government has given high priority to the control of schistosomiasis, establishing a number of special bodies to manage control activities from the national to the township level [4]–[6]. These policies resulted in remarkable success in that the number of schistosomiasis patients were reduced by over 90%, from about 11.6 million cases in the 1950s to 726,000 cases in 2004 [7]. A further report in 2004 indicated that disease transmission had been interrupted or controlled in 42% of provinces, 40% of counties, and 53% of towns, previously endemic for schistosomiasis [8]. 
However, prospects for the control of schistosomiasis have been less optimistic in recent years, particularly since the termination of the World Bank Loan Project (WBLP) for schistosomiasis control at the end of 2001 [7]. Compared with the second national survey of S. japonicum in China in 1995, the third national survey conducted in 2004 showed that the prevalence of S. japonicum infection in humans had not substantially changed in the lake and marshlands and other areas of Southern China [7]. By the end of 2009, a total of 365,770 cases of S. japonicum were estimated in China, and 89 counties had not yet reached the criterion for transmission control which stipulated that the human prevalence should be less than 1% over a length of time [9], [10], i.e. the prevalence in these counties was >1%. Over the past 2–3 decades, the strategies for S. japonicum control in southern China, including the vast lake and marshland regions, has involved chemical mollusciciding, alteration of the oncomelanid intermediate snails habitats, and synchronous chemotherapy with praziquantel for all villagers and their bovines [6], [11]. Historically, these strategies achieved some effect, but recent studies demonstrated there were some problems as the control options resulted in environmental pollution leading to ecological damage [11], [12]. Moreover, owing to the high rates of reinfection in both humans and bovines, frequent flooding, and the complex environment, more persons have been infected and the habitat of the Oncomelania snails has increased in the lake and marshlands [11], [13]. Consequently, a more effective strategy was needed urgently in these areas. From 2005 through 2007, an important study of schistosomiasis japonica control was undertaken by Wang et al. in villages along the Poyang Lake in Jiangxi Province involving a comprehensive integrated approach aimed at reducing S. japonicum transmission to snails from cattle and humans, which play key roles as sources of S. japonicum\n[14]. The integrated strategy, which included removing bovines from snail-infested grasslands, providing farmers with mechanized farming equipment, building safe water systems, providing adequate sanitation, and implementing health education and synchronous chemotherapy with praziquantel for both villagers and bovines, was highly effective [14]. On the basis of identifying and controlling the main S. japonicum infection sources, we implemented a similar strategy (March 2009 through November 2011) to that employed by Wang et al. in the marshlands of Gong'an County [14], Hubei Province, another major endemic area for schistosomiasis japonica. The objective of the study was assessing the strategy's effect in marshland regions.\n\nBODY.METHODS:\nThe protocol for this trial and supporting CONSORT checklist are available as supporting information; see Protocol S1 and Checklist S1.\n\nBODY.METHODS.ETHICS STATEMENT:\nWritten ethical approval for this study was obtained from the Ethics Review Committee of Hubei Provincial Center for Disease Control and Prevention (no. 200803). Written informed consent was obtained from all adults and from parents or guardians of minors before participation in the study. The participants had the opportunity to withdraw from the study at any time. Before beginning work on the study, the bovine owners provided consent to have their animals involved in the study. Moreover, permits for the bovines were obtained from Gong'an County Animal Husbandry Bureau. 
All animal work was carried out under the guidance of the Institute for Laboratory Animal Research (ILAR), and approved by Ethics Review Committee of Animal Experiments, Hubei Provincial Center for Disease Control and Prevention (no. 2008a05). Both doses of praziquantel (i.e., single 40 mg/kg dose for humans identified as stool egg-positive or 25 mg/kg for infected bovines) were within Chinese Ministry of Health published guidelines [15].\n\nBODY.METHODS.STUDY AREA AND PARTICIPANTS:\nGong'an County is located in typical marshland with a water area of 364 hectares (km2) along the mid-to-lower reaches of the Yangtze River in southwest Hubei Province, China. In 2009, there were 294 schistosomiasis-endemic villages (out of a total of 326), 539 cases of advanced schistosomiasis, and 36,612 cases of chronic schistosomiasis in Gong'an County; the prevalence of schistosomiasis in humans and bovines was 2.75% and 2.41%, respectively [16]. We carried out the study in 12 villages from 12 towns in Gong'an County which were selected by a two-stage random sampling procedure. The 12 towns were first randomly selected from 16 towns; then 12 schistosomiasis-endemic villages were selected from the 12 towns randomly (each town selected a village). Finally, these villages were randomly divided into intervention and control groups (Figure 1). E Jinghu, Gu Shengsi, Jianhong, Lianmeng, Tongqiao, Tuanjie were assigned to the intervention group and Guoqing, Nanyang, Qingyun, Qingji, Zhu Jiahu, Tongsheng were assigned to the control group (Figure 2). The villages had similar agriculture resources (rice, cotton and brassica napus) and the residents mainly were farmers (about 75%). The residents were exposed to contaminated water when they performed their agricultural or daily activities (i.e. cultivating crops, catching fish and washing clothes). 10.1371/journal.pntd.0002122.g001Figure 1Trial flowchart. 10.1371/journal.pntd.0002122.g002Figure 2Location of the study villages in the mid-to-lower reaches of the Yangtze River, Hubei province, China. The participants met the following inclusion criteria: a) must have been a resident of the village for more than 12 months; b) should be aged 6–65 years; c) should continuously reside in the village for the study period; d) have no serious diseases, such as malignant cancer.\n\nBODY.METHODS.THE COMPREHENSIVE STRATEGY TO CONTROL TRANSMISSION OF :\nPrior to implementing the new control strategy, routine interventions had been used in Gong'an County to control S. japonicum infection, which comprised synchronous praziquantel chemotherapy of both villagers and bovines and mollusciciding of snails [17], [18]. These procedures were continued in all 12 study villages for the duration of the study but additional interventions were incorporated into the intervention group. These comprised of the building of fences to limit the grazing area for bovines, building safe pastures for grazing, improving the residents' health conditions and facilities, and strengthening specialized schistosomiasis clinics at the village level.\n\nBODY.METHODS.THE COMPREHENSIVE STRATEGY TO CONTROL TRANSMISSION OF .FENCING AND SAFE PASTURES:\nConsidering that bovines have been identified as the primary reservoir host of S. japonicum infection in China [6], [19]–[21], two measures were implemented (March to May, 2009) to reduce the influence of bovines on transmission. Firstly, 63 kilometres of fencing were installed to prevent animals from grazing in areas where Oncomelania snails were present. 
Warning signs were placed along the fenced areas, with the help of the local government authorities, and professional management staff were employed to ensure the intervention was effective. Secondly, 7 safe grazing pasture areas were established totaling 190 hectares (1,900,000 m2). Each area was 20–30 hectares and was designed to contain about 1,100 animals. The pastures were carefully selected with no or very few snails, which, if present, were quickly eliminated by mollusciciding. Specialized managers were employed to manage each of the grazing areas and ponds were established to enable the bovines to drink uncontaminated water. Fences comprised cement columns and barbed wire which ensured long-term usage. Each cement column was 160 centimeters (cm) in height and shaped as a triangular prism with 15 cm edges. Each column had three holes through which the barbed wire was fixed, and the distance between each was 2 meters.\n\nBODY.METHODS.THE COMPREHENSIVE STRATEGY TO CONTROL TRANSMISSION OF .IMPROVING HEALTH AND HEALTH CARE FACILITIES:\nTo reduce the potential of humans as a source of infection for snails, measures were undertaken (May to July, 2009) to improve the health and well-being of villagers and the existing health care facilities. Safe water was provided to 2,826 households by building 8 new public wells or improving household wells already in place. To deal with humans feces, 1,545 home lavatories were constructed or repaired with three-cell septic tanks and 618 households were provided with marsh-gas pool latrines. Furthermore, all fishermen were provided with fecal-matter containers to prevent them from excreting feces directly into river or lake freshwater.\n\nBODY.METHODS.THE COMPREHENSIVE STRATEGY TO CONTROL TRANSMISSION OF .STRENGTHENING SPECIALIZED SCHISTOSOMIASIS CLINICS:\nIn order to implement the interventions in the intervention villages, specialized schistosomiasis clinics were strengthened at the village level (March to May, 2009). As well as being responsible for managing the fences and the safety of the pastures, the clinic staff were responsible for examining the residents and livestock for schistosome infection, treating infected people or livestock with praziquantel, and providing health education on preventing S. japonicum infections and, in particular, in explaining the significance of bovines and humans as sources of S. japonicum infection for snails.\n\nBODY.METHODS.THE COMPREHENSIVE STRATEGY TO CONTROL TRANSMISSION OF .PRAZIQUANTEL CHEMOTHERAPY:\nDuring the study period, all participants were screened annually for S. japonicum infection by indirect hemagglutination assay (IHA), followed by fecal examination of IHA-positives using the Kato-Katz technique [15] (three thick smears). Kato-Katz-positives were treated with praziquantel at a single oral dose of 40 mg/kg. As well, all bovines were checked annually by the miracidium hatching test in dung [15]. All infected bovines were treated with a single dose of praziquantel at a dosage of 25 mg/kg body weight.\n\nBODY.METHODS.THE COMPREHENSIVE STRATEGY TO CONTROL TRANSMISSION OF .SNAIL CONTROL:\nBefore implementing the new control strategy, mollusciciding using niclosamide had been conducted in all schistosomiasis-endemic villages (included all 12 study villages) in Gong'an County as part of the routine procedure. 
During the study period, a comprehensive approach, comprising mollusciciding with niclosamide and environmental modification, which mainly included using cement to change ditches' original environment, was used for snail control in all 12 study villages. Snail surveys were carried out twice-yearly (spring and autumn) along the river banks, ditches and marshlands around the villages [15]. An average of 110 hectares of snail habitat in the study villages were treated annually with molluscicide or environmental modification (89 hectares in 2008, 141 hectares in 2009, 134 hectares in 2010 and 74 hectares in 2011).\n\nBODY.METHODS.OUTCOME MEASURES:\nFive outcome measures, including the primary one (the prevalence of S. japonicum in humans) and four secondary ones (the rate of S. japonicum infection in bovines, cow dung, snails and mice), were determined annually and used to assess the strategy's effect. Participants that were positive for both IHA and Kato-Katz were defined as infected and the prevalence of S. japonicum of participants in all 12 study villages was in October/November, after the second transmission season. During the same period, the infection of S. japonicum of all bovines was checked in all 12 study villages by the miracidium hatching test in dung [15]. In addition, we gathered fresh cow dung from the grasslands which located in the 6 intervention villages (the sampled locations were the safe grazing pastures from 2009 to 2011), and used the miracidium hatching test to detect the infection of S. japonicum in cow dung. In April and May, we systematically sampled Oncomelania hupensis snails along the river banks and in marshlands and ditches around all 12 study villages. In every village, we investigated at least 2000 sample units (0.11 m2 frame) and gathered all the live snails in the frame, crushed them, and examined them for S. japonicum infection using microscopy. During the peak of the transmission season, July and August, we used exposure tests with mice to assess the infectivity of water in the 6 intervention villages [14]. Sentinel mice were exposed for 2 hours every day for 3 consecutive days. After 30 to 35 days, we sacrificed the mice and checked for adult worms of S. japonicum in their mesenteric veins.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nAll statistical analyses were performed with the use of Statistics Analysis System, version 9.1 (SAS Institute Inc., Cary, NC, USA). Confidence intervals (CIs) were calculated using standard formulae based on the binomial distribution (annual infection rate of humans). Chi-square test or Fisher exact probability test were used to examine the differences of proportions. A generalized linear model (GLM) with a logit link and a binomial error distribution was used to analyze the risk of S. japonicum infection in humans. Generalized estimating equations of parameters with an unstructured variance-covariance matrix were used to account for repeated measures on individuals during the study period. The SAS proc GENMOD was used to estimate the parameters. Two-sided P-values were calculated for all tests and P-values <0.05 were considered statistically significant.\n\nBODY.RESULTS:\n\nTable 1 showed the characteristics of the participants in 2008. The differences of age between intervention and control groups were statistically significant (P<0.001). 10.1371/journal.pntd.0002122.t001 Table 1 Characteristics of the participants in intervention and control groups in 2008. 
Characteristics Control group (n = 5323) Intervention group (n = 5050) Age, years 38.4 (13.6) 39.4 (13.4) ** \n Sex ratio (F/M), n 2686/2637 2617/2433 Values are mean (SD) or number. ** P<0.001 for control group vs. intervention group. \nFigure 1 showed the flow diagram of the study design. In total, 10,373 participants were recruited for the study in 2008. The rates of loss to follow-up per year (2009–2011) for each village ranged from 1.1% to 18.5% in the intervention group and 2.1% to 25.2% in the control group (Table 2). The main reason for loss to follow-up was that the participants left the villages to seek work in urban areas. 10.1371/journal.pntd.0002122.t002 Table 2 \n Schistosoma japonicum infection in humans in intervention and control groups * . Village 2008 2009 2010 2011 N Infection No. Prevalence % (CI) N Infection No. Prevalence % (CI) N Infection No. Prevalence % (CI) N Infection No. Prevalence % (CI) \n Intervention group \n E Jinghu 535 19 3.55 (1.98, 5.12) 508 16 3.15 (1.63, 4.67) 452 10 2.21 (0.85, 3.57) 436 5 1.15 (0.14, 2.15) Gu Shengsi 1102 38 3.45 (2.37, 4.53) 1054 32 3.04 (2.00, 4.07) 997 19 1.91 (1.06, 2.76) 959 8 0.83 (0.26, 1.41) Jianhong 765 23 3.01 (1.79, 4.22) 718 17 2.37 (1.25, 3.48) 685 12 1.75 (0.77, 2.74) 637 4 0.63 (0.01, 1.24) Lianmeng 994 38 3.82 (2.63, 5.02) 964 31 3.22 (2.10, 4.33) 892 19 2.13 (1.18, 3.08) 885 8 0.90 (0.28, 1.53) Tongqiao 1021 33 3.23 (2.15, 4.32) 1001 24 2.40 (1.45, 3.35) 990 21 2.12 (1.22, 3.02) 963 8 0.83 (0.26, 1.41) Tuanjie 633 21 3.32 (1.92, 4.72) 626 18 2.88 (1.56, 4.19) 611 12 1.96 (0.86, 3.07) 587 3 0.51 (−0.07, 1.09) \n Total \n § \n 5050 172 3.41 (2.91 ,3.91) 4871 138 2.83 (2.37 ,3.30) 4627 93 2.01 (1.61 ,2.41) 4467 36 0.81 (0.54 ,1.07) \n Control group \n Guoqing 788 26 3.30 (2.05, 4.55) 755 21 2.78 (1.61, 3.96) 737 22 2.99 (1.75, 4.22) 716 16 2.23 (1.15, 3.32) Nanyang 1035 37 3.57 (2.44, 4.71) 967 30 3.10 (2.01, 4.20) 921 26 2.82 (1.75, 3.89) 914 23 2.52 (1.50, 3.53) Qingyun 894 26 2.91 (1.80, 4.01) 812 23 2.83 (1.69, 3.98) 750 21 2.80 (1.62, 3.98) 748 20 2.67 (1.52, 3.83) Qingji 963 32 3.32 (2.19, 4.46) 889 28 3.15 (2.00, 4.30) 822 25 3.04 (1.86, 4.22) 818 23 2.81 (1.68, 3.95) Zhu Jiahu 712 22 3.09 (1.82, 4.36) 697 17 2.44 (1.29, 3.59) 683 16 2.34 (1.21, 3.48) 595 16 2.69 (1.39, 3.99) Tongsheng 931 27 2.90 (1.82, 3.98) 807 22 2.73 (1.60, 3.85) 727 18 2.48 (1.34, 3.61) 696 16 2.30 (1.18, 3.41) \n Total # \n 5323 170 3.19 (2.72, 3.67) 4927 141 2.86 (2.40 ,3.33) 4640 128 2.76 (2.29 ,3.23) 4487 114 2.54 (2.08 ,3.00) * CI denote 95% confidence intervals. § As compared with 2008 in the intervention group, there was a statistically significant change in the rate of infection in 2010 and 2011 (P<0.001), and there was no statistically significant change in 2009 (P = 0.101). # As compared with 2008 in the control group, there was no statistically significant change in the rate of infection from 2009 to 2011 (P = 0.328, P = 0.204, P = 0.055, respectively). \n\nBODY.RESULTS.INFECTION IN HUMANS:\nThe prevalence of S. japonicum in humans was 3.41%, 2.83%, 2.01%, and 0.81% from 2008 to 2011 in the intervention villages (comparing with 2008, P = 0.101, P<0.001 and P<0.001, respectively, Table 2). In the control villages, there was no significant statistically decline in the rate of infection from 2008 to 2011 (P>0.05 for all comparisons). 
The generalized linear model (Table 3), yielding odds ratios (OR) adjusted for participants' age and gender, showed a higher infection risk in humans in the control villages than the intervention villages (OR = 1.250, P = 0.001); and an overall significant downward trend in infection risk during the study period. 10.1371/journal.pntd.0002122.t003 Table 3 Analyzing the risk of Schistosoma japonicum infection in humans by generalized linear model * . Variables OR (95% CI) P-value Intercept 0.012 (0.009, 0.016) <0.001 Age 1.004 (0.999, 1.009) 0.091 Gender Female 1.047 (0.919, 1.193) 0.492 Male 1.000 - Study year 2008 2.026 (1.670, 2.460) <0.001 2009 1.734 (1.417, 2.123) <0.001 2010 1.436 (1.166, 1.768) <0.001 2011 1.000 - Group Control group 1.250 (1.095, 1.427) 0.001 Intervention group 1.000 - * The model includes participants' age, gender, study year, group. \n\nBODY.RESULTS.INFECTION IN BOVINES:\nThe S. japonicum infection rates in bovines were 3.3%, 2.9%, 1.3%, and 0.0% from 2008 to 2011 in the intervention villages (comparing with 2008, P = 0.820, P = 0.111 and P<0.001, respectively, Table 4). In contrast, there was no significant decline in the rate of infection during the 4-year period in the control villages (P>0.05 for all comparisons). 10.1371/journal.pntd.0002122.t004 Table 4 \n Schistosoma japonicum infection in bovines in intervention and control groups * . Year Intervention group Control group N Infection No. (%) N Infection No. (%) P-value a \n P-value b \n P-value c \n 2008 305 10 (3.3) 345 11 (3.2) - - 1.000 2009 309 9 (2.9) 337 9 (2.7) 0.820 0.821 1.000 2010 311 4 (1.3) 325 8 (2.5) 0.111 0.646 0.385 2011 318 0 (0.0) 313 8 (2.6) <0.001 0.650 0.004 * Statistical method was the Fisher exactly probability test. a Compared with 2008 in the intervention group. b Compared with 2008 in the control group. c Compared between the intervention and control groups from 2008 to 2011. \n\nBODY.RESULTS.INFECTION IN SNAILS:\nFrom 2008 to 2011, 11 of 6,219, 9 of 5,975, none of 25,010, and none of 15,490 snails were infected with S. japonicum in the intervention villages (comparing with 2008, P = 0.824, P<0.001 and P<0.001, respectively, Table 5). In the control villages, there was no significant statistically decline in the rate of infection from 2008 to 2011 (P>0.05 for all comparisons). 10.1371/journal.pntd.0002122.t005 Table 5 \n Schistosoma japonicum infection in snails in intervention and control groups * . Year Intervention group Control group N Infection No. N Infection No. P-value a \n P-value b \n P-value c \n 2008 6,219 11 4,574 9 - - 0.824 2009 5,975 9 8,664 6 0.824 0.054 0.187 2010 25,010 0 11,889 13 <0.001 0.231 <0.001 2011 15,490 0 9,493 8 <0.001 0.116 <0.001 * Statistical method was the Fisher exactly probability test. a Compared with 2008 in the intervention group. b Compared with 2008 in the control group. c Compared between the intervention and control groups from 2008 to 2011. \n\nBODY.RESULTS.INFECTION IN COW DUNG:\nIn the intervention villages in 2008, 21 of 533 (3.9%) of the cow dung samples contained S. japonicum eggs. After the implementation of the new control strategy, 15 of 483 (3.1%), 5 of 476 (1.1%) and none of 356 cow dung were infected in 2009, 2010 and 2011, respectively (comparing with 2008, P = 0.501, P = 0.005 and P<0.001, respectively).\n\nBODY.RESULTS.INFECTIVITY OF WATER:\nIn 2008, before implementation of the new control strategy, 19 of 60 (31.7%) mice were infected with S. japonicum. 
In 2009, 2010 and 2011, 13 of 60 mice (21.7%), 7 of 120 (5.8%) and 2 of 120 (1.7%) mice were infected, respectively (comparing with 2008, P = 0.216, P<0.001 and P<0.001, respectively).\n\nBODY.RESULTS.ADVERSE EVENTS:\nNo serious adverse events were reported in the study.\n\nBODY.DISCUSSION:\nOver the past few years, investigations showed that more than 80% of S. japonicum patients in China were found in the lake and marshland areas of Hunan, Hubei, Jiangxi, Anhui, and Jiangsu provinces [22]. Therefore, schistosomiasis control in these regions was especially important. Because of high rates of S. japonicum reinfection in both humans and bovines in marshland and lake regions [21], [23], [24], it was very difficult to reduce the rate of S. japonicum infection in humans to a relatively low level (such as less than 1%). The study of Wang et al. showed that a comprehensive control strategy, which was based on interventions to reduce the transmission of S. japonicum infection from cattle and humans to snails, can solve this problem [14]. However, the authors mentioned that their study included only a small number of villages and the villages were not selected in a random manner. So they were not sure whether the strategy was still highly effective in other endemic areas. A similar study was implemented in Jiangsu Province of China, but this study did not have parallel control groups [25]. In our study, we applied a similar strategy but strived to avoid these limitations. We adopted a random manner to select 12 villages in Gong'an County, Hubei Province and randomly assigned them to intervention or control groups. Our study proved that the new comprehensive strategy to control transmission of S. japonicum was highly effective in the study areas. Firstly, the prevalence of schistosomiasis in humans declined to a relatively low level after implementing the new interventions. Compared with 2008, the rate of infection in humans in the intervention villages decreased from 3.41% in 2008 to 0.81% in 2011. The generalized linear model showed that a higher infection risk for humans in the control villages than in the intervention villages (OR = 1.250, P = 0.001). Secondly, comparing with 2008, the rate of infection in bovines, snails and cow dung in the intervention group decreased from 3.3% to none, 11 of 6,219 to none, and 3.9% to none in 2011, respectively. Thirdly, during the peak transmission period, after the implementation of the new control strategy, the rate of infection in sentinel mice decreased from 31.7% in 2008 to 1.7% in 2011. It should be noted that the randomized, controlled design pertains only to the human, bovine, and snail data while the environmental cow dung and infectious water data were only collected in the intervention villages. Bovines are the main infection source of S. japonicum in the marshlands and lakes in China, and are responsible for an estimated 75%–90% of the egg contamination [20], [21]. Therefore, it was very important to reduce the role of bovines as an infection source. Recently, the main measures included providing farmers machines instead of bovines and prohibiting bovines in these regions. These measures were effective [14]. However, if these measures were used in some rural areas of Hubei Province, there would be some challenges. Firstly, the machines are not suitable due to the limitation of landform or farmers' knowledge. Secondly, the bovines can bring additional economic benefits for villagers, so the villagers were reluctant to give up cattle rearing. 
For these reasons, in our study we adopted new interventions for bovines which included building safe pastures for grazing and preventing bovines from grazing on snail habitat beach areas. Our study proved that these measures were effective. Moreover, these measures were sustainable because the cost of fences and pastures was a one-time expense because the material is durable. We used a triangular cement fence post, which requires less cement than a square shape. Our study showed that there was no statistically significant decline in the S. japonicum infection prevalence in humans, bovines and snails from 2008 to 2011 in the control villages. Past studies showed that the effect of chemotherapy with praziquantel for humans and cattle was limited in controlling schistosomiasis in lake and marshland regions [17]–[19], because it does not prevent reinfection [23]–[25]. In contrast, the integrated control strategy can reduce the transmission of S. japonicum infection from humans and cattle to snails, reducing contaminated water as a source of infection in humans [14]. However, this difference is of little practical consequence unless the integrated interventions lead to more sustainable control and a reduction of the basic reproductive number (R0). If R0 can be kept less than 1 over a length of time, the disease could be gradually eliminated [26]. This is probably the most important potential outcome of the intervention. However, a longer term follow-up study is needed to test this potential outcome. Taken together, we considered that the new interventions, especially the interventions for bovines and humans, were the most important components of the integrated control strategy. A potential limitation of the study was that we did not investigate the status of S. japonicum infection in people who quit the study. Thus, we could not compare the difference of these people's infection between intervention and control groups.\n\nBODY.DISCUSSION.CONCLUSIONS:\nAn integrated control strategy, aiming to weaken the roles of bovines and humans as sources of S. japonicum infection, was effective in controlling the transmission of S. japonicum in marshland regions of China.\n\nBODY.SUPPORTING INFORMATION:\nChecklist S1CONSORT Checklist.(DOC)Click here for additional data file. Protocol S1Trial protocol (Chinese language file).(DOC)Click here for additional data file.\n\n**Question:** Compared to The routine interventions included praziquantel chemotherapy and controlling snails, what was the result of The routine interventions included praziquantel chemotherapy and controlling snails plus new interventions, mainly consisting of building fences to limit the grazing area for bovines, building safe pastures for grazing, improving the residents' health conditions and facilities, on the rate of S. japonicum infection in humans, bovines, snails, cow dung and mice?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly decreased\n" } ]
significantly decreased
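The record above rests on simple between-group tests of infection proportions (chi-square or Fisher exact tests, per the quoted article's statistical-analysis section). Below is a minimal sketch, assuming SciPy is available, of how two such comparisons could be reproduced from the published counts. All variable names are illustrative; the bovine comparison mirrors the between-group P = 0.004 reported in Table 4, while the 2011 human comparison is not reported as a single test in the article and is shown only as an illustration consistent with Table 2.

```python
# Sketch: reproduce two of the between-group comparisons behind the answer above.
# Counts are copied from Tables 2 and 4 of the quoted article; requires SciPy.
from scipy.stats import chi2_contingency, fisher_exact

# Humans, 2011 (Table 2): 36/4467 infected (intervention) vs 114/4487 (control)
humans_2011 = [[36, 4467 - 36], [114, 4487 - 114]]
chi2_res = chi2_contingency(humans_2011)
print("Humans 2011: chi2 = %.1f, P = %.2g" % (chi2_res[0], chi2_res[1]))

# Bovines, 2011 (Table 4): 0 of 318 infected (intervention) vs 8 of 313 (control)
bovine_res = fisher_exact([[0, 318], [8, 313 - 8]])
print("Bovines 2011: Fisher exact two-sided P = %.3f" % bovine_res[1])
```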
953
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Assessing the Effect of an Integrated Control Strategy for Schistosomiasis Japonica Emphasizing Bovines in a Marshland Area of Hubei Province, China: A Cluster Randomized Trial\n\n ABSTRACT.INTRODUCTION:\nMore than 80% of schistosomiasis patients in China live in the lake and marshland regions. The purpose of our study is to assess the effect of a comprehensive strategy to control transmission of Schistosoma japonicum in marshland regions.\n\nABSTRACT.METHODOLOGY/PRINCIPAL FINDINGS:\nIn a cluster randomized controlled trial, we implemented an integrated control strategy in twelve villages from 2009 through 2011 in Gong'an County, Hubei Province. The routine interventions included praziquantel chemotherapy and controlling snails, and were implemented in all villages. New interventions, mainly consisting of building fences to limit the grazing area for bovines, building safe pastures for grazing, improving the residents' health conditions and facilities, were only implemented in six intervention villages. Results showed that the rate of S. japonicum infection in humans, bovines, snails, cow dung and mice in the intervention group decreased from 3.41% in 2008 to 0.81% in 2011, 3.3% to none, 11 of 6,219 to none, 3.9% to none and 31.7% to 1.7%, respectively (P<0.001 for all comparisons). In contrast, there were no statistically significant reductions of S. japonicum infection in humans, bovines and snails from 2008 to 2011 in the control group (P>0.05 for all comparisons). Moreover, a generalized linear model showed that there was a higher infection risk in humans in the control group than in the intervention group (OR = 1.250, P = 0.001) and an overall significant downward trend in infection risk during the study period.\n\nABSTRACT.CONCLUSIONS/SIGNIFICANCE:\nThe integrated control strategy, designed to reduce the role of bovines and humans as sources of S. japonicum infection, was highly effective in controlling the transmission of S. japonicum in marshland regions in China.\n\nABSTRACT.TRIAL REGISTRATION:\nChinese Clinical Trial Registry ChiCTR-PRC-12002405.\n\nBODY.INTRODUCTION:\nSchistosomiasis is an important public health issue in China where it continues to pose a serious threat to human well-being [1]–[3]. Since the People's Republic of China was established in 1949, the Chinese government has given high priority to the control of schistosomiasis, establishing a number of special bodies to manage control activities from the national to the township level [4]–[6]. These policies resulted in remarkable success in that the number of schistosomiasis patients were reduced by over 90%, from about 11.6 million cases in the 1950s to 726,000 cases in 2004 [7]. A further report in 2004 indicated that disease transmission had been interrupted or controlled in 42% of provinces, 40% of counties, and 53% of towns, previously endemic for schistosomiasis [8]. 
However, prospects for the control of schistosomiasis have been less optimistic in recent years, particularly since the termination of the World Bank Loan Project (WBLP) for schistosomiasis control at the end of 2001 [7]. Compared with the second national survey of S. japonicum in China in 1995, the third national survey conducted in 2004 showed that the prevalence of S. japonicum infection in humans had not substantially changed in the lake and marshlands and other areas of Southern China [7]. By the end of 2009, a total of 365,770 cases of S. japonicum were estimated in China, and 89 counties had not yet reached the criterion for transmission control which stipulated that the human prevalence should be less than 1% over a length of time [9], [10], i.e. the prevalence in these counties was >1%. Over the past 2–3 decades, the strategies for S. japonicum control in southern China, including the vast lake and marshland regions, has involved chemical mollusciciding, alteration of the oncomelanid intermediate snails habitats, and synchronous chemotherapy with praziquantel for all villagers and their bovines [6], [11]. Historically, these strategies achieved some effect, but recent studies demonstrated there were some problems as the control options resulted in environmental pollution leading to ecological damage [11], [12]. Moreover, owing to the high rates of reinfection in both humans and bovines, frequent flooding, and the complex environment, more persons have been infected and the habitat of the Oncomelania snails has increased in the lake and marshlands [11], [13]. Consequently, a more effective strategy was needed urgently in these areas. From 2005 through 2007, an important study of schistosomiasis japonica control was undertaken by Wang et al. in villages along the Poyang Lake in Jiangxi Province involving a comprehensive integrated approach aimed at reducing S. japonicum transmission to snails from cattle and humans, which play key roles as sources of S. japonicum\n[14]. The integrated strategy, which included removing bovines from snail-infested grasslands, providing farmers with mechanized farming equipment, building safe water systems, providing adequate sanitation, and implementing health education and synchronous chemotherapy with praziquantel for both villagers and bovines, was highly effective [14]. On the basis of identifying and controlling the main S. japonicum infection sources, we implemented a similar strategy (March 2009 through November 2011) to that employed by Wang et al. in the marshlands of Gong'an County [14], Hubei Province, another major endemic area for schistosomiasis japonica. The objective of the study was assessing the strategy's effect in marshland regions.\n\nBODY.METHODS:\nThe protocol for this trial and supporting CONSORT checklist are available as supporting information; see Protocol S1 and Checklist S1.\n\nBODY.METHODS.ETHICS STATEMENT:\nWritten ethical approval for this study was obtained from the Ethics Review Committee of Hubei Provincial Center for Disease Control and Prevention (no. 200803). Written informed consent was obtained from all adults and from parents or guardians of minors before participation in the study. The participants had the opportunity to withdraw from the study at any time. Before beginning work on the study, the bovine owners provided consent to have their animals involved in the study. Moreover, permits for the bovines were obtained from Gong'an County Animal Husbandry Bureau. 
All animal work was carried out under the guidance of the Institute for Laboratory Animal Research (ILAR), and approved by Ethics Review Committee of Animal Experiments, Hubei Provincial Center for Disease Control and Prevention (no. 2008a05). Both doses of praziquantel (i.e., single 40 mg/kg dose for humans identified as stool egg-positive or 25 mg/kg for infected bovines) were within Chinese Ministry of Health published guidelines [15].\n\nBODY.METHODS.STUDY AREA AND PARTICIPANTS:\nGong'an County is located in typical marshland with a water area of 364 hectares (km2) along the mid-to-lower reaches of the Yangtze River in southwest Hubei Province, China. In 2009, there were 294 schistosomiasis-endemic villages (out of a total of 326), 539 cases of advanced schistosomiasis, and 36,612 cases of chronic schistosomiasis in Gong'an County; the prevalence of schistosomiasis in humans and bovines was 2.75% and 2.41%, respectively [16]. We carried out the study in 12 villages from 12 towns in Gong'an County which were selected by a two-stage random sampling procedure. The 12 towns were first randomly selected from 16 towns; then 12 schistosomiasis-endemic villages were selected from the 12 towns randomly (each town selected a village). Finally, these villages were randomly divided into intervention and control groups (Figure 1). E Jinghu, Gu Shengsi, Jianhong, Lianmeng, Tongqiao, Tuanjie were assigned to the intervention group and Guoqing, Nanyang, Qingyun, Qingji, Zhu Jiahu, Tongsheng were assigned to the control group (Figure 2). The villages had similar agriculture resources (rice, cotton and brassica napus) and the residents mainly were farmers (about 75%). The residents were exposed to contaminated water when they performed their agricultural or daily activities (i.e. cultivating crops, catching fish and washing clothes). 10.1371/journal.pntd.0002122.g001Figure 1Trial flowchart. 10.1371/journal.pntd.0002122.g002Figure 2Location of the study villages in the mid-to-lower reaches of the Yangtze River, Hubei province, China. The participants met the following inclusion criteria: a) must have been a resident of the village for more than 12 months; b) should be aged 6–65 years; c) should continuously reside in the village for the study period; d) have no serious diseases, such as malignant cancer.\n\nBODY.METHODS.THE COMPREHENSIVE STRATEGY TO CONTROL TRANSMISSION OF :\nPrior to implementing the new control strategy, routine interventions had been used in Gong'an County to control S. japonicum infection, which comprised synchronous praziquantel chemotherapy of both villagers and bovines and mollusciciding of snails [17], [18]. These procedures were continued in all 12 study villages for the duration of the study but additional interventions were incorporated into the intervention group. These comprised of the building of fences to limit the grazing area for bovines, building safe pastures for grazing, improving the residents' health conditions and facilities, and strengthening specialized schistosomiasis clinics at the village level.\n\nBODY.METHODS.THE COMPREHENSIVE STRATEGY TO CONTROL TRANSMISSION OF .FENCING AND SAFE PASTURES:\nConsidering that bovines have been identified as the primary reservoir host of S. japonicum infection in China [6], [19]–[21], two measures were implemented (March to May, 2009) to reduce the influence of bovines on transmission. Firstly, 63 kilometres of fencing were installed to prevent animals from grazing in areas where Oncomelania snails were present. 
Warning signs were placed along the fenced areas, with the help of the local government authorities, and professional management staff were employed to ensure the intervention was effective. Secondly, 7 safe grazing pasture areas were established totaling 190 hectares (1,900,000 m2). Each area was 20–30 hectares and was designed to contain about 1,100 animals. The pastures were carefully selected with no or very few snails, which, if present, were quickly eliminated by mollusciciding. Specialized managers were employed to manage each of the grazing areas and ponds were established to enable the bovines to drink uncontaminated water. Fences comprised cement columns and barbed wire which ensured long-term usage. Each cement column was 160 centimeters (cm) in height and shaped as a triangular prism with 15 cm edges. Each column had three holes through which the barbed wire was fixed, and the distance between each was 2 meters.\n\nBODY.METHODS.THE COMPREHENSIVE STRATEGY TO CONTROL TRANSMISSION OF .IMPROVING HEALTH AND HEALTH CARE FACILITIES:\nTo reduce the potential of humans as a source of infection for snails, measures were undertaken (May to July, 2009) to improve the health and well-being of villagers and the existing health care facilities. Safe water was provided to 2,826 households by building 8 new public wells or improving household wells already in place. To deal with humans feces, 1,545 home lavatories were constructed or repaired with three-cell septic tanks and 618 households were provided with marsh-gas pool latrines. Furthermore, all fishermen were provided with fecal-matter containers to prevent them from excreting feces directly into river or lake freshwater.\n\nBODY.METHODS.THE COMPREHENSIVE STRATEGY TO CONTROL TRANSMISSION OF .STRENGTHENING SPECIALIZED SCHISTOSOMIASIS CLINICS:\nIn order to implement the interventions in the intervention villages, specialized schistosomiasis clinics were strengthened at the village level (March to May, 2009). As well as being responsible for managing the fences and the safety of the pastures, the clinic staff were responsible for examining the residents and livestock for schistosome infection, treating infected people or livestock with praziquantel, and providing health education on preventing S. japonicum infections and, in particular, in explaining the significance of bovines and humans as sources of S. japonicum infection for snails.\n\nBODY.METHODS.THE COMPREHENSIVE STRATEGY TO CONTROL TRANSMISSION OF .PRAZIQUANTEL CHEMOTHERAPY:\nDuring the study period, all participants were screened annually for S. japonicum infection by indirect hemagglutination assay (IHA), followed by fecal examination of IHA-positives using the Kato-Katz technique [15] (three thick smears). Kato-Katz-positives were treated with praziquantel at a single oral dose of 40 mg/kg. As well, all bovines were checked annually by the miracidium hatching test in dung [15]. All infected bovines were treated with a single dose of praziquantel at a dosage of 25 mg/kg body weight.\n\nBODY.METHODS.THE COMPREHENSIVE STRATEGY TO CONTROL TRANSMISSION OF .SNAIL CONTROL:\nBefore implementing the new control strategy, mollusciciding using niclosamide had been conducted in all schistosomiasis-endemic villages (included all 12 study villages) in Gong'an County as part of the routine procedure. 
During the study period, a comprehensive approach, comprising mollusciciding with niclosamide and environmental modification, which mainly included using cement to change ditches' original environment, was used for snail control in all 12 study villages. Snail surveys were carried out twice-yearly (spring and autumn) along the river banks, ditches and marshlands around the villages [15]. An average of 110 hectares of snail habitat in the study villages were treated annually with molluscicide or environmental modification (89 hectares in 2008, 141 hectares in 2009, 134 hectares in 2010 and 74 hectares in 2011).\n\nBODY.METHODS.OUTCOME MEASURES:\nFive outcome measures, including the primary one (the prevalence of S. japonicum in humans) and four secondary ones (the rate of S. japonicum infection in bovines, cow dung, snails and mice), were determined annually and used to assess the strategy's effect. Participants that were positive for both IHA and Kato-Katz were defined as infected and the prevalence of S. japonicum of participants in all 12 study villages was in October/November, after the second transmission season. During the same period, the infection of S. japonicum of all bovines was checked in all 12 study villages by the miracidium hatching test in dung [15]. In addition, we gathered fresh cow dung from the grasslands which located in the 6 intervention villages (the sampled locations were the safe grazing pastures from 2009 to 2011), and used the miracidium hatching test to detect the infection of S. japonicum in cow dung. In April and May, we systematically sampled Oncomelania hupensis snails along the river banks and in marshlands and ditches around all 12 study villages. In every village, we investigated at least 2000 sample units (0.11 m2 frame) and gathered all the live snails in the frame, crushed them, and examined them for S. japonicum infection using microscopy. During the peak of the transmission season, July and August, we used exposure tests with mice to assess the infectivity of water in the 6 intervention villages [14]. Sentinel mice were exposed for 2 hours every day for 3 consecutive days. After 30 to 35 days, we sacrificed the mice and checked for adult worms of S. japonicum in their mesenteric veins.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nAll statistical analyses were performed with the use of Statistics Analysis System, version 9.1 (SAS Institute Inc., Cary, NC, USA). Confidence intervals (CIs) were calculated using standard formulae based on the binomial distribution (annual infection rate of humans). Chi-square test or Fisher exact probability test were used to examine the differences of proportions. A generalized linear model (GLM) with a logit link and a binomial error distribution was used to analyze the risk of S. japonicum infection in humans. Generalized estimating equations of parameters with an unstructured variance-covariance matrix were used to account for repeated measures on individuals during the study period. The SAS proc GENMOD was used to estimate the parameters. Two-sided P-values were calculated for all tests and P-values <0.05 were considered statistically significant.\n\nBODY.RESULTS:\n\nTable 1 showed the characteristics of the participants in 2008. The differences of age between intervention and control groups were statistically significant (P<0.001). 10.1371/journal.pntd.0002122.t001 Table 1 Characteristics of the participants in intervention and control groups in 2008. 
Characteristics Control group (n = 5323) Intervention group (n = 5050) Age, years 38.4 (13.6) 39.4 (13.4) ** \n Sex ratio (F/M), n 2686/2637 2617/2433 Values are mean (SD) or number. ** P<0.001 for control group vs. intervention group. \nFigure 1 showed the flow diagram of the study design. In total, 10,373 participants were recruited for the study in 2008. The rates of loss to follow-up per year (2009–2011) for each village ranged from 1.1% to 18.5% in the intervention group and 2.1% to 25.2% in the control group (Table 2). The main reason for loss to follow-up was that the participants left the villages to seek work in urban areas. 10.1371/journal.pntd.0002122.t002 Table 2 \n Schistosoma japonicum infection in humans in intervention and control groups * . Village 2008 2009 2010 2011 N Infection No. Prevalence % (CI) N Infection No. Prevalence % (CI) N Infection No. Prevalence % (CI) N Infection No. Prevalence % (CI) \n Intervention group \n E Jinghu 535 19 3.55 (1.98, 5.12) 508 16 3.15 (1.63, 4.67) 452 10 2.21 (0.85, 3.57) 436 5 1.15 (0.14, 2.15) Gu Shengsi 1102 38 3.45 (2.37, 4.53) 1054 32 3.04 (2.00, 4.07) 997 19 1.91 (1.06, 2.76) 959 8 0.83 (0.26, 1.41) Jianhong 765 23 3.01 (1.79, 4.22) 718 17 2.37 (1.25, 3.48) 685 12 1.75 (0.77, 2.74) 637 4 0.63 (0.01, 1.24) Lianmeng 994 38 3.82 (2.63, 5.02) 964 31 3.22 (2.10, 4.33) 892 19 2.13 (1.18, 3.08) 885 8 0.90 (0.28, 1.53) Tongqiao 1021 33 3.23 (2.15, 4.32) 1001 24 2.40 (1.45, 3.35) 990 21 2.12 (1.22, 3.02) 963 8 0.83 (0.26, 1.41) Tuanjie 633 21 3.32 (1.92, 4.72) 626 18 2.88 (1.56, 4.19) 611 12 1.96 (0.86, 3.07) 587 3 0.51 (−0.07, 1.09) \n Total \n § \n 5050 172 3.41 (2.91 ,3.91) 4871 138 2.83 (2.37 ,3.30) 4627 93 2.01 (1.61 ,2.41) 4467 36 0.81 (0.54 ,1.07) \n Control group \n Guoqing 788 26 3.30 (2.05, 4.55) 755 21 2.78 (1.61, 3.96) 737 22 2.99 (1.75, 4.22) 716 16 2.23 (1.15, 3.32) Nanyang 1035 37 3.57 (2.44, 4.71) 967 30 3.10 (2.01, 4.20) 921 26 2.82 (1.75, 3.89) 914 23 2.52 (1.50, 3.53) Qingyun 894 26 2.91 (1.80, 4.01) 812 23 2.83 (1.69, 3.98) 750 21 2.80 (1.62, 3.98) 748 20 2.67 (1.52, 3.83) Qingji 963 32 3.32 (2.19, 4.46) 889 28 3.15 (2.00, 4.30) 822 25 3.04 (1.86, 4.22) 818 23 2.81 (1.68, 3.95) Zhu Jiahu 712 22 3.09 (1.82, 4.36) 697 17 2.44 (1.29, 3.59) 683 16 2.34 (1.21, 3.48) 595 16 2.69 (1.39, 3.99) Tongsheng 931 27 2.90 (1.82, 3.98) 807 22 2.73 (1.60, 3.85) 727 18 2.48 (1.34, 3.61) 696 16 2.30 (1.18, 3.41) \n Total # \n 5323 170 3.19 (2.72, 3.67) 4927 141 2.86 (2.40 ,3.33) 4640 128 2.76 (2.29 ,3.23) 4487 114 2.54 (2.08 ,3.00) * CI denote 95% confidence intervals. § As compared with 2008 in the intervention group, there was a statistically significant change in the rate of infection in 2010 and 2011 (P<0.001), and there was no statistically significant change in 2009 (P = 0.101). # As compared with 2008 in the control group, there was no statistically significant change in the rate of infection from 2009 to 2011 (P = 0.328, P = 0.204, P = 0.055, respectively). \n\nBODY.RESULTS.INFECTION IN HUMANS:\nThe prevalence of S. japonicum in humans was 3.41%, 2.83%, 2.01%, and 0.81% from 2008 to 2011 in the intervention villages (comparing with 2008, P = 0.101, P<0.001 and P<0.001, respectively, Table 2). In the control villages, there was no significant statistically decline in the rate of infection from 2008 to 2011 (P>0.05 for all comparisons). 
The generalized linear model (Table 3), yielding odds ratios (OR) adjusted for participants' age and gender, showed a higher infection risk in humans in the control villages than the intervention villages (OR = 1.250, P = 0.001); and an overall significant downward trend in infection risk during the study period. 10.1371/journal.pntd.0002122.t003 Table 3 Analyzing the risk of Schistosoma japonicum infection in humans by generalized linear model * . Variables OR (95% CI) P-value Intercept 0.012 (0.009, 0.016) <0.001 Age 1.004 (0.999, 1.009) 0.091 Gender Female 1.047 (0.919, 1.193) 0.492 Male 1.000 - Study year 2008 2.026 (1.670, 2.460) <0.001 2009 1.734 (1.417, 2.123) <0.001 2010 1.436 (1.166, 1.768) <0.001 2011 1.000 - Group Control group 1.250 (1.095, 1.427) 0.001 Intervention group 1.000 - * The model includes participants' age, gender, study year, group. \n\nBODY.RESULTS.INFECTION IN BOVINES:\nThe S. japonicum infection rates in bovines were 3.3%, 2.9%, 1.3%, and 0.0% from 2008 to 2011 in the intervention villages (comparing with 2008, P = 0.820, P = 0.111 and P<0.001, respectively, Table 4). In contrast, there was no significant decline in the rate of infection during the 4-year period in the control villages (P>0.05 for all comparisons). 10.1371/journal.pntd.0002122.t004 Table 4 \n Schistosoma japonicum infection in bovines in intervention and control groups * . Year Intervention group Control group N Infection No. (%) N Infection No. (%) P-value a \n P-value b \n P-value c \n 2008 305 10 (3.3) 345 11 (3.2) - - 1.000 2009 309 9 (2.9) 337 9 (2.7) 0.820 0.821 1.000 2010 311 4 (1.3) 325 8 (2.5) 0.111 0.646 0.385 2011 318 0 (0.0) 313 8 (2.6) <0.001 0.650 0.004 * Statistical method was the Fisher exactly probability test. a Compared with 2008 in the intervention group. b Compared with 2008 in the control group. c Compared between the intervention and control groups from 2008 to 2011. \n\nBODY.RESULTS.INFECTION IN SNAILS:\nFrom 2008 to 2011, 11 of 6,219, 9 of 5,975, none of 25,010, and none of 15,490 snails were infected with S. japonicum in the intervention villages (comparing with 2008, P = 0.824, P<0.001 and P<0.001, respectively, Table 5). In the control villages, there was no significant statistically decline in the rate of infection from 2008 to 2011 (P>0.05 for all comparisons). 10.1371/journal.pntd.0002122.t005 Table 5 \n Schistosoma japonicum infection in snails in intervention and control groups * . Year Intervention group Control group N Infection No. N Infection No. P-value a \n P-value b \n P-value c \n 2008 6,219 11 4,574 9 - - 0.824 2009 5,975 9 8,664 6 0.824 0.054 0.187 2010 25,010 0 11,889 13 <0.001 0.231 <0.001 2011 15,490 0 9,493 8 <0.001 0.116 <0.001 * Statistical method was the Fisher exactly probability test. a Compared with 2008 in the intervention group. b Compared with 2008 in the control group. c Compared between the intervention and control groups from 2008 to 2011. \n\nBODY.RESULTS.INFECTION IN COW DUNG:\nIn the intervention villages in 2008, 21 of 533 (3.9%) of the cow dung samples contained S. japonicum eggs. After the implementation of the new control strategy, 15 of 483 (3.1%), 5 of 476 (1.1%) and none of 356 cow dung were infected in 2009, 2010 and 2011, respectively (comparing with 2008, P = 0.501, P = 0.005 and P<0.001, respectively).\n\nBODY.RESULTS.INFECTIVITY OF WATER:\nIn 2008, before implementation of the new control strategy, 19 of 60 (31.7%) mice were infected with S. japonicum. 
In 2009, 2010 and 2011, 13 of 60 mice (21.7%), 7 of 120 (5.8%) and 2 of 120 (1.7%) mice were infected, respectively (comparing with 2008, P = 0.216, P<0.001 and P<0.001, respectively).\n\nBODY.RESULTS.ADVERSE EVENTS:\nNo serious adverse events were reported in the study.\n\nBODY.DISCUSSION:\nOver the past few years, investigations showed that more than 80% of S. japonicum patients in China were found in the lake and marshland areas of Hunan, Hubei, Jiangxi, Anhui, and Jiangsu provinces [22]. Therefore, schistosomiasis control in these regions was especially important. Because of high rates of S. japonicum reinfection in both humans and bovines in marshland and lake regions [21], [23], [24], it was very difficult to reduce the rate of S. japonicum infection in humans to a relatively low level (such as less than 1%). The study of Wang et al. showed that a comprehensive control strategy, which was based on interventions to reduce the transmission of S. japonicum infection from cattle and humans to snails, can solve this problem [14]. However, the authors mentioned that their study included only a small number of villages and the villages were not selected in a random manner. So they were not sure whether the strategy was still highly effective in other endemic areas. A similar study was implemented in Jiangsu Province of China, but this study did not have parallel control groups [25]. In our study, we applied a similar strategy but strived to avoid these limitations. We adopted a random manner to select 12 villages in Gong'an County, Hubei Province and randomly assigned them to intervention or control groups. Our study proved that the new comprehensive strategy to control transmission of S. japonicum was highly effective in the study areas. Firstly, the prevalence of schistosomiasis in humans declined to a relatively low level after implementing the new interventions. Compared with 2008, the rate of infection in humans in the intervention villages decreased from 3.41% in 2008 to 0.81% in 2011. The generalized linear model showed that a higher infection risk for humans in the control villages than in the intervention villages (OR = 1.250, P = 0.001). Secondly, comparing with 2008, the rate of infection in bovines, snails and cow dung in the intervention group decreased from 3.3% to none, 11 of 6,219 to none, and 3.9% to none in 2011, respectively. Thirdly, during the peak transmission period, after the implementation of the new control strategy, the rate of infection in sentinel mice decreased from 31.7% in 2008 to 1.7% in 2011. It should be noted that the randomized, controlled design pertains only to the human, bovine, and snail data while the environmental cow dung and infectious water data were only collected in the intervention villages. Bovines are the main infection source of S. japonicum in the marshlands and lakes in China, and are responsible for an estimated 75%–90% of the egg contamination [20], [21]. Therefore, it was very important to reduce the role of bovines as an infection source. Recently, the main measures included providing farmers machines instead of bovines and prohibiting bovines in these regions. These measures were effective [14]. However, if these measures were used in some rural areas of Hubei Province, there would be some challenges. Firstly, the machines are not suitable due to the limitation of landform or farmers' knowledge. Secondly, the bovines can bring additional economic benefits for villagers, so the villagers were reluctant to give up cattle rearing. 
For these reasons, in our study we adopted new interventions for bovines which included building safe pastures for grazing and preventing bovines from grazing on snail habitat beach areas. Our study proved that these measures were effective. Moreover, these measures were sustainable because the cost of fences and pastures was a one-time expense because the material is durable. We used a triangular cement fence post, which requires less cement than a square shape. Our study showed that there was no statistically significant decline in the S. japonicum infection prevalence in humans, bovines and snails from 2008 to 2011 in the control villages. Past studies showed that the effect of chemotherapy with praziquantel for humans and cattle was limited in controlling schistosomiasis in lake and marshland regions [17]–[19], because it does not prevent reinfection [23]–[25]. In contrast, the integrated control strategy can reduce the transmission of S. japonicum infection from humans and cattle to snails, reducing contaminated water as a source of infection in humans [14]. However, this difference is of little practical consequence unless the integrated interventions lead to more sustainable control and a reduction of the basic reproductive number (R0). If R0 can be kept less than 1 over a length of time, the disease could be gradually eliminated [26]. This is probably the most important potential outcome of the intervention. However, a longer term follow-up study is needed to test this potential outcome. Taken together, we considered that the new interventions, especially the interventions for bovines and humans, were the most important components of the integrated control strategy. A potential limitation of the study was that we did not investigate the status of S. japonicum infection in people who quit the study. Thus, we could not compare the difference of these people's infection between intervention and control groups.\n\nBODY.DISCUSSION.CONCLUSIONS:\nAn integrated control strategy, aiming to weaken the roles of bovines and humans as sources of S. japonicum infection, was effective in controlling the transmission of S. japonicum in marshland regions of China.\n\nBODY.SUPPORTING INFORMATION:\nChecklist S1CONSORT Checklist.(DOC)Click here for additional data file. Protocol S1Trial protocol (Chinese language file).(DOC)Click here for additional data file.\n\n**Question:** Compared to The routine interventions included praziquantel chemotherapy and controlling snails, what was the result of The routine interventions included praziquantel chemotherapy and controlling snails plus new interventions, mainly consisting of building fences to limit the grazing area for bovines, building safe pastures for grazing, improving the residents' health conditions and facilities, on infection risk in humans?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly decreased\n" } ]
significantly decreased
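The answer above hinges on the article's generalized linear model of human infection risk (logit link, binomial errors, generalized estimating equations for repeated measures, fitted with SAS proc GENMOD; OR = 1.250 for control vs. intervention). Below is a minimal sketch of a comparable model in Python with statsmodels, fitted to simulated stand-in data rather than the trial data; the column names (pid, group, age, gender, year, infected) are hypothetical, and the exchangeable working covariance only approximates the unstructured matrix specified in the paper, so the estimates are not expected to match Table 3.

```python
# Sketch of a GEE logistic model comparable to Table 3 of the quoted article.
# The data frame below holds simulated stand-in data, NOT the trial data; all
# column names are hypothetical. Requires numpy, pandas and statsmodels.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
records = []
for pid in range(600):
    group = "control" if pid % 2 else "intervention"
    age = int(rng.integers(18, 66))
    gender = rng.choice(["male", "female"])
    for year in (2008, 2009, 2010, 2011):
        # purely illustrative infection probabilities, falling over time in
        # the intervention arm and roughly flat in the control arm
        p = 0.12 - 0.03 * (year - 2008) if group == "intervention" else 0.11
        records.append({"pid": pid, "group": group, "age": age,
                        "gender": gender, "year": str(year),
                        "infected": int(rng.random() < p)})
df = pd.DataFrame(records)

model = smf.gee(
    "infected ~ age + C(gender) + C(year) + C(group)",
    groups="pid",                             # repeated measures per person
    data=df,
    family=sm.families.Binomial(),            # logit link, binomial errors
    cov_struct=sm.cov_struct.Exchangeable(),  # paper specified an unstructured matrix
)
result = model.fit()
print(np.exp(result.params))      # odds ratios
print(np.exp(result.conf_int()))  # 95% confidence intervals
```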
958
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Comparison of the Effects of Maternal Supportive Care and Acupressure (at BL32 Acupoint) on Labor Length and Infant's Apgar Score\n\n ABSTRACT.BACKGROUND AND OBJECTIVES:\nProlonged labor leads to increase of cesarean deliveries, reduction of fetal heart rate, and maternal as well as infantile complications. Therefore, many women tend to use pharmacological or non-pharmacological methods for reduction of labor length. The present study aimed to compare the effects of maternal supportive care and acupressure (at BL32 acupoint) on labor length and infant's Apgar score.\n\nABSTRACT.METHODS:\nIn this clinical trial, 150 women with low-risk pregnancy were randomly divided into supportive care, acupressure, and control groups each containing 50 subjects. The data were collected using a questionnaire including demographic and pregnancy characteristics. Then, the data were analyzed using Chi-square test and one-way ANOVA.\n\nABSTRACT.RESULTS:\nThe mean length of the first and second stages of labor was respectively 157.0±29.5 and 58.9±25.8 minutes in the supportive care group, 161.7±37.3 and 56.1±31.4 minutes in the acupressure group, ad 281.0±79.8 and 128.4±44.9 minutes in the control group. The difference between the length of labor stages was significant in the three study groups (P<0.001). Moreover, the frequency of Apgar score≥8 in the first and 5th minutes was higher in the supportive care and acupressure groups compared to the control group, and the difference was statistically significant (P<0.001).\n\nABSTRACT.CONCLUSION:\nContinuous support and acupressure could reduce the length of labor stages and increase the infants' Apgar scores. Therefore, these methods, as effective non-pharmacological strategies, can be introduced to the medical staff to improve the delivery outcomes.\n\nBODY.1. INTRODUCTION:\nDelivery pain is an acute pain which rapidly increases and is affected by physiological, psychological, social, cultural, and environmental factors (Leeman et al., 2003). Excessive pain intensifies mother's fear and anxiety during delivery and stimulates sympathetic nervous system. These, in turn, enhance secretion of catecholamines, such as epinephrine and norepinephrine, eventually leading to more pain, prolonged labor stages, and dissatisfaction with the delivery experience (Sercekus & Okumus, 2007). Prolonged labor results in anxiety, fear, and fatigue which play a major role in reduction of mother's self-confidence and self-esteem. Thus, the women experiencing prolonged labor tend to make use of analgesic methods. Prolonged labor also increases the probability of damage, prenatal mortality, utilization of oxytocin, and rate of cesarean and instrumental delivery (Kaptchuk, 2002; May& Elton, 1998; Rabl, Ahner, Bitschnau, Zeisler, & Husslein, 2001). 
Furthermore, the relationship between chronic stress and delivery outcomes indicates the necessity of interventions for reducing this factor (Fink et al., 2011). Nowadays, various pharmacological and non-pharmacological methods are used for decreasing labor pain. Yet, since pharmacological methods might be accompanied by some complications for both mother and fetus, non-pharmacological ones are more welcomed. Up to now, a large number of non-pharmacological methods have been proposed for reduction of labor pain with acupressure and supportive care being two important ones (Kimber et al., 2008). Acupressure is a comprehensive treatment method which dates back to 5000 years ago. In this method, similar to acupuncture, specific reflex points on the body are used for treatment. By pressing these points, muscle tension is removed and blood circulation and vital energy are improved (Yang, 2001). Some researchers believe that reduction of pain following stimulation of acupoints is due to the fact that it prevents transfer of pain stimulants and increases the blood endorphin levels (Chung, Hung, Kuo, & Huang, 2003). Park et al. (2003) stated that acupressure increased the intensity of uterine contractions (Park, Cho, Kwon, Ahn, Lim, & Chang, 2003). In the same line, Skilnand et al. (2002) showed that the first stage of labor was shorter among the participants who underwent acupressure (Skilnand, Fossen, & Heiberg, 2002). Overall, various acupoints are employed for induction and control of delivery and BL32 (Ciliao) is one of these points (Cook & Wilcox, 1997). The other non-pharmacological method used in the current study was supportive care. This method involved continuous presence of doula and provision of psychological support (reassuring, encouraging, and guiding the mother), physical support (palpation, massage, coldness, hotness, hydrotherapy, position change, and movement), informing and guiding the mother, and facilitation of creation of relationship (helping the woman to express her needs) (Simkin & Bolding, 2004). According to most doulas, mothers cannot predict how labor affects them because they do not know about the delivery process and judge themselves negatively (Gilliland, 2011). On the other hand, self-confidence and the ability to adapt with labor are the predictors of labor pain experience. By supportive care, women can successfully cope with labor pain and stress and feel strong and mentally calm (Simkin & Bolding, 2004). In the study by Kennel et al. (1991), doula's continuous support reduced the labor length by 1-2 hours and increased the mother's capability to control laborthereby resulting in a positive delivery experience (Kennel, Klaus, McGrath, Robertson, & Hinkley, 1991). Considering the effects of prolonged labor on the delivery outcomes, the present study aims to compare the effects of maternal supportive care and acupressure (at BL32 acupoint) on labor length and infant's Apgar score.\n\nBODY.2. METHODS.2.1 STUDY DESIGN:\nThis randomized clinical trial was conducted in the delivery ward of the selected educational center of Shiraz University of Medical Sciences (Shoushtari hospital in Iran) in 2012.\n\nBODY.2. METHODS.2.2 SETTING AND SAMPLE:\nConsidering d=5, α=0.05, 1-β=0.90, SD=7, and the following formula, a 126-subject sample size was determined for the study (42 in each group). 
However, due to the possibility of loss, the sample size was increased to 150 subjects (50 in each group): Then, the subjects were selected through simple random sampling and were divided into supportive care, acupressure, and control groups using stratified block randomization. In doing so, a number was randomly selected from the table of random numbers and the researcher moved toward the right or left column or row and wrote the 5 digit numbers down. Since the participants were divided into 3 groups in this study, 3-therapy method was used and classification was performed as follows: A: supportive care group, B: acupressure group, and C: control group. Accordingly, ABC: 1, ACB: 2, BAC: 3, BCA: 4, CAB: 5, and CBA: 6. It should be noted that numbers 0, 7, and 9 were ignored.\n\nBODY.2. METHODS.2.3 ETHICAL CONSIDERATIONS:\nEthical Committee Approval Code in Medical Research, Shiraz University of Medical Sciences in Iran is CT-P-4985.\n\nBODY.2. METHODS.2.4 MEASUREMENTS:\nThe inclusion criteria of the study were being primiparous or multiparous, being physically and mentally healthy, having at least diploma, being 18-35 years old, singleton pregnancy, cephalic presentation, gestational age of 37-42 weeks, 4cm dilation, and having at least 2-3 uterine contractions in 10 minutes. After signing written informed consents, the selected women were divided into acupressure, supportive care, and control groups through permuted block randomization. The women with preeclampsia, induced labor, non-cephalic presentation, cephalopelvic disproportion, multiple birth, and those who smoked, suffered from underlying diseases, and were unwilling to take part in the study were excluded from the research.\n\nBODY.2. METHODS.2.5 PROCEDURE:\nIn the supportive care group, the researcher as the doula accompanied the mother since hospitalization up to delivery. Emotional supports during labor included palpation, kindly massaging the mother, and reassuring her. The doula also provided the mother with information about the origin of pain and process of delivery. Besides, physical support included helping the mother to change her position and move during labor. In the acupressure group, in 3-4 and 7-8cm dilation, the participants were located in the appropriate position and BL32 acupoint was pressed. This acupoint is located in the second hole of sacral bone (16). The pressure was continuously and gently applied by both thumbs for 30 minutes (Figure 1). Before sampling, the researcher was trained regarding performance of acupressure by a physical medicine and rehabilitation specialist, so that equal pressure was applied in each performance. After the training, the pressure applied by the right and the left thumb was measured as 1405 and 1277 mmHg, respectively. The pressure was applied by the beginning and stopped at the end of the contractions. Since the interventions were not performed continuously and the researcher took a rest during the contractions, no problems were faced for application of pressure. Figure 1Location of the BL 32 point; Reference: http://acupunctureschoolonline.com/bl-31%E2%80%93bl-34-eight-liao-baliao-acupuncture-points.html The control group only received the hospital's routine care services. After the interventions, the three study groups were compared regarding the length of the first and second stages of labor as well as the infants' Apgar scores.\n\nBODY.2. METHODS.2.6 DATA ANALYSIS:\nThe data were analyzed using Chi-square test and one-way ANOVA. 
Post-hoc test was also used to identify the groups with significant differences.\n\nBODY.3. RESULTS:\nThe results of Chi-square test showed no significant difference among the three groups regarding age distribution and the participants' mean age (P=0.496). Also, no significant difference was found among the three groups concerning the mothers' level of education (P=0.584) and occupation (P=0.781). Additionally, the participants' mean gestational age was 38.9±1.1 weeks and the results of one-way ANOVA indicated no significant difference among the three groups regarding the mean gestational age (P=0.158). According to the results of one-way ANOVA, the mean length of the first stage of labor was higher in the control group (281.0±79.8) compared to the supportive care (157.0±29.5) and acupressure groups (161.7±37.3). Thus, the control group's mean length of the first stage of labor was 124.0 minutes higher compared to the supportive care group and 119.3 minutes higher compared to the acupressure group, and the difference was statistically significant (P<0.001). Also, the mean length of the second stage of labor in the control group was 69.5 minutes higher compared to the supportive care group and 72.3 minutes higher compared to the acupressure group, and the difference was statistically significant (P<0.001). According to the results of post-hoc and Least Significant Difference (LSD) tests in the first and second stages of labor, a significant difference was found between the supportive care and the control group as well as between the acupressure and the control group. Overall, the mean length of the first and second stages of labor was lower in the two intervention groups in comparison to the control group. The lowest mean lengths of the first and second stages of labor (157.0±29.5 and 58.9±25.8, respectively) were related to the supportive care group, while the highest mean lengths (281.0±79.8 and 128.4±44.9, respectively) were observed in the control group (Table 1). Table 1 Comparison of the mean duration of the first and second stages of labor in the intervention and control groups stage of labor Group stage of labor Group (n=50) P value Supportive care Acupressure Control First stage Duration (min) M±SD 157.0±29.5 161.7±37.3 281.0±79.8 0.000 Maximum 210 240 560 First stage Minimum 100 60 170 CT Upper limit 165.3 172.3 304.2 Lower limit 148.6 151.0 258.8 Duration (min) M±SD 58.9±25.8 56.1±31.4 128.4±44.9 Second stage Maximum 130 180 200 0.000 Minimum 20 15 50 CT Upper limit 66.2 65.0 141.1 Lower limit 51.5 47.1 115.6 Significance level: P<0.05. SD: standard deviation. The results of Chi-square test revealed a significant difference among the three groups regarding the first- and fifth-minute Apgar scores (P<0.001). As Table 2 depicts, the frequency of first-minute Apgar scores<8 in the control group was higher compared to the supportive care and acupressure groups by 46% and 34%, respectively. Also, the frequency of fifth-minute Apgar scores<8 in the control group was higher compared to the supportive care and acupressure groups by 20% and 18%, respectively. The frequency of first-minute Apgar scores≥8 was higher in the acupressure and supportive care groups (86% and 74%) compared to the control group (40%). The frequency of fifth-minute Apgar scores≥8 was also higher in the acupressure and supportive care groups in comparison to the control group (98%, 96%, and 78%, respectively) (Table 2). 
Table 2 Comparison of the first and fifth minute Apgar scores of the infants in the intervention and control groups Stage of labor Group (n=50) P value Supportive care Acupressure Control Apgar score after birth <8 7(14) 13(26) 30(60) 0.000 First minute apgar score ≥8 43(86) 37(74) 20(40) Apgar score after birth <8 1(2) 2(4) 11(22) 0.000 Five minute apgar score ≥8 49(98) 48(96) 39(78) Significance level: P<0.05. SD: standard deviation. \n\nBODY.4. DISCUSSION:\nThe findings of the present study revealed a significant difference between the supportive care as well as the acupressure group and the control group regarding the mean length of the first and second stages of labor (P<0.001). In fact, the mean length of the first and second stages of labor in the control group was respectively 124 and 69.5 minutes higher compared to the supportive care group. Providing the mother with psychological and emotional support is one of the dimensions of supporting the mother by the doula. In this respect, the findings of the current study were in line with those of the study by Kennel et al. (1991). In that study, women who had experienced delivery acted as doulas and the results indicated a 25% decrease in the labor length in the supported group compared to the control group. A large number of studies have also shown a reduction in the labor length in the supported women compared to those receiving hospital's routine care. This reduction was reported as 2.8 hours in the study by Zhang et al. (1996), 44 minutes in the study by Scott (Scott, Berkowitz, & Klaus, 1999), and 0.58 hours in the study by Hodnett (Hodnett, Gates, Hofmeyr, Sakala, & Weston, 2011). In the study conducted by Campell, the mean length of the first stage of labor was 10.4±4.3 hours in the supported group and 11.7±4.8 hours in the control group. In addition, the mean length of the second stage of labor was 58±51 and 64±57 minutes in the supported and the control group, respectively (Campell, Lake, & Falk, 2006). Similarly, Longer showed that the mean length of labor was 4.56 hours in the supported group and 5.58 hours in the control group (Langer, Campero, Garcia, & Reynoso, 1998). Also, Hofmeyr et al. (1991), Chalmers (1993), Klaus and Kennell (1997), and Pascali-Bonaro (Pascali-Bonaro & Kroeger, 2004) reported that the mean length of labor was lower in the supported group in comparison to the control group. The findings of all the aforementioned studies were in agreement with those of the present study. In all these studies, doulas accompanied the mothers since hospitalization up to the delivery and supported her psychologically. According to these researchers, supporting the mother during delivery can lead to considerable changes in the delivery process, including modification of uterine function, improvement of uterine contractions, creation of effective contractions, and reduction of labor length. On the contrary to the results of the current study, Bruggemann showed that the mean length of the first stage of labor was 3.4 hours in the supported group and 3.8 hours in the control group (Bruggemann, Parpinelli, Osis, Cecatti, & Neto, 2007). Similarly, Bruggemann and McGrath (McGrath & Kennell, 2008) revealed no significant difference between the intervention (presence of doula) and the control group regarding the mean length of labor. 
The difference between the results of these studies and the present one might be due to the fact that they were conducted on individuals from higher social classes and the study participants could take their family members to the delivery room either with or without the doula. Therefore, both groups were highly supported and the effect of presence of doula could not be truly investigated. Overall, presence of doula before the delivery, encouragement, consolation, and palpation of the women, and suggestion of positions which are effective in fetal descent, increase production of oxytocin, enhance women's threshold of pain, and modify delivery pain patterns, thereby decreasing the labor length (Taylor, Klein, Lewis, & Gruenewald, 2000). According to the western medicine perspective, acupressure can create balance during delivery, reduce delivery pain, and improve the delivery process by increasing the uterine contractions (Beal, 1992). Acupressure is a simple, inexpensive non-pharmacological method for controlling delivery. In general, various acupoints are used for induction and control of delivery (Cook & Wilcox, 1997). In the present study, acupressure was performed at BL32 acupoint and the results revealed a 119.3 minute reduction in the first stage and a 72.3 minute reduction in the second stage compared to the control group, which was statistically significant (P<0.001). In comparison to the supportive care group, the mean length of the first and second stages of labor was higher in the acupressure group; however, the difference was not statistically significant (P>0.05). Lee et al. compared labor length in acupressure and palpation groups (Lee, Chang, & Kang, 2004). In that study, the active phase of delivery was considered from 3cm dilation to complete dilation. The study results demonstrated that the length of the active phase of delivery was significantly lower in the experimental group compared to the control group, which is consistent with the findings of the current study. Zeisler stated that acupuncture played a critical role in puberty, acceleration of opening of the cervix, and reduction of labor length. Therefore, due to the positive effects of this method on reduction of labor length, he introduced it as an effective method in controlling labor (Zeisler, Tempfer, Mayerhofer, Barrada, & Husslein, 1998). Reduction of labor length might have resulted from the reduction of pain and its resultant anxiety. In fact, acupressure might lead to release of endogenous opioids and decrease pain and anxiety. Reduction of delivery pain in the acupressure group can be justified by gate control theory of pain and Melzack's neuromatrix theory. Based on gate control theory of pain, acupressure activates thick nerve fibers and closes the pain gate and in this way, prevents pain transfer. According to this theory, stimulation of skin creates nervous impulses which are transferred to the spinal cord system. These impulses are either inhibited or increased in the spine. The impulses which are transferred by the thick fibers close the pain gate and, consequently, reduce pain. Moreover, stimulation of thick fiber impulses by pressure leads to more closure of the gate (Setax & Pomeranz, 2006). According to Melzack's theory, pain matrix is composed of three main nervous components, namely sensory way which passes the thalamus and sensory cortex, emotional way which passes the limbic system, and body self-recognition way which includes parietal lobe of cortex. 
Thus, pain can be sensory, emotional, and cognitive (Chalmers & Wolman, 1993). Hence, in the present study, acupressure affected the sensory way, prevented message transfer to the brain, and reduced the perception of pain. In contrast, Lawrence mentioned that acupuncture had no effects on reduction of labor length (Lawrence, Lewis, Hofmeyr, Dowswell, & Styles, 2009). The difference between that study and the present one might be due to different definitions of the first stage of labor. In the present study and other similar studies, the first stage of labor began from 3cm dilation to complete dilation. Lawrence, however, considered this stage from the time the number, length, and intensity of contractions were sufficient for opening of the cervix. It is noteworthy that in none of the above-mentioned studies did acupuncture or acupressure increase the length of the first stage of labor. Moreover, not only did these non-pharmacological methods not reduce or inhibit the uterine contractions, but they also eased the delivery pain and improved the delivery progress. The findings of the current study showed that the length of the second stage of labor was shorter among the women who received acupressure at BL32 acupoint compared to the control group. In contrast, Lee et al. conducted a clinical trial and indicated no significant difference between the acupressure and the palpation group regarding the mean length of the second stage of labor; i.e., from complete dilation up to delivery (30.3±22.6 minutes in the acupressure group and 44.8±40.0 minutes in the palpation group). They reported that acupressure was only effective in reduction of the first stage of labor (138.6±62.0 in the acupressure group and 191.2±83.7 in the control group). The difference between Lee's study and the present one might be due to the fact that Lee applied pressure during the contractions. Thus, length of pressure application was different relative to the length of contractions. In the present study, on the other hand, application of pressure was started at the beginning of contractions with similar lengths and intervals for all the participants. The findings of the current study revealed a significant difference among the three groups regarding the first- and fifth-minute Apgar scores (P<0.001). The frequency of Apgar scores<8 in the first and fifth minutes was higher in the control group compared to the supportive care and acupressure groups. Additionally, the frequency of first- and fifth-minute Apgar scores≥8 was higher in the supportive care and acupressure groups in comparison to the control group. After 50 years, the Apgar scoring system is still the best method for evaluation of newborn infants' prognosis (Casey, McIntire, & Leveno, 2001). Apgar score has been proved to be the best standard method for evaluation of infants' health immediately after birth (Jepson, Talashek, & Tichy, 1991). First-minute Apgar score indicates the newborn infants' need for resuscitation. Besides, fifth-minute Apgar score determines the probability of death or nervous complications more precisely (Roland, 1987). Apgar score, in fact, predicts infants' chance of survival (Casey et al., 2001). Cunningham et al. demonstrated that prolonged labor was accompanied by Apgar scores<7 due to long labor stages and disruption of delivery phases (Cunningham, Leveno, Bloom, Hauth, Rouse, & Spong, 2010). 
Furthermore, several researchers have shown that increase in the length of the second stage of labor endangered both maternal and fetal health (Saunders, Paterson, & Wadsworth, 1992) and increased the risk of complications as well as prenatal mortality (Piper, Bolling, & Newton, 1991). In the study performed by Flynn, the mean of first-minute Apgar scores was 8.8 (7-10) in the active group and 7.5 (1-10) in the resting group. Besides, the two groups' means of fifth-minute Apgar scores were 9.9 (9-10) and 9.4 (7-10), respectively (Flynn & Kelly, 1987). In the same line, Hemminki (Hemminki & Lenck, 1985), Andrews (Andrews & Chrzanowski, 1990), and Ben et al. (2010) revealed that mother's activity during delivery pain reduced the length of labor stages, improved maternal and fetal outcomes, improved infants' first- and fifth-minute Apgar scores, and reduced the rate of transfer to the neonatal ward. These results were all in agreement with those of the present study. However, Liu (1989) and Stewart (Stewart & Calder, 1984) reported that mother's activity had no effects on improvement of infants' Apgar scores. The difference between these two studies and the current one might result from the fact that they only investigated physically supporting the mother. In the present study, however, the doula provided the mother with physical support (suggestion of appropriate positions and activities) as well as emotional and mental support which reduced mothers' anxiety, improved their self-confidence, and decreased labor disorders.\n\nBODY.5. CONCLUSION:\nThe findings of the present study showed that supportive care and acupressure reduced the length of labor and increased the infants' Apgar scores compared to the control group. Therefore, these two non-pharmacological methods which are easy to perform and are not accompanied by any side effects can be employed during labor to achieve better delivery outcomes.\n\n**Question:** Compared to control groups what was the result of supportive care or acupuncture group on The mean length of the first and second stages of labor?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly decreased\n" } ]
significantly decreased
962
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Clinical evaluation of the AutoPulse automated chest compression device for out-of-hospital cardiac arrest in the northern district of Shanghai, China\n\n ABSTRACT.INTRODUCTION:\nWhether the AutoPulse automated chest compression device is worthy of clinical use for out-of-hospital cardiac arrest (OHCA) remains controversial. A prospective controlled study was conducted to evaluate the effect of AutoPulse versus manual chest compression for cardiopulmonary resuscitation (CPR) of OHCA patients in the northern district of Shanghai, China.\n\nABSTRACT.MATERIAL AND METHODS:\nA total of 133 patients with OHCA who were treated at the Emergency Medical Center of the Tenth People's Hospital Affiliated with Tongji University between March 2011 and March 2012 were included. The patients were randomly assigned to the Manual CPR (n = 64) and AutoPulse CPR groups (n = 69) in accordance with the approach of chest compression received. The primary outcome measure was return of spontaneous circulation (ROSC), and the secondary outcome measures included 24-h survival rate, hospital discharge rate, and neurological prognosis at hospital discharge.\n\nABSTRACT.RESULTS:\nThe ROSC rate of patients with OHCA was significantly higher in the AutoPulse CPR group than in the Manual CPR group (44.9% vs. 23.4%; p = 0.009). The 24-h survival rate of OHCA patients was significantly higher in the AutoPulse CPR group than in the Manual CPR group (39.1% vs. 21.9%; p = 0.03). The hospital discharge rate of the patients with OHCA was significantly higher in the AutoPulse CPR group than in the Manual CPR group (18.8% vs. 6.3%; p = 0.03). The proportion of patients with OHCA and a cerebral performance category score of 1 or 2 points at hospital discharge was higher in the AutoPulse CPR group than in the Manual CPR group, but the difference was not statistically significant (16.2% vs. 13.4%, p = 1.00).\n\nABSTRACT.CONCLUSIONS:\nUse of the AutoPulse increases CPR success and survival rates in patients with OHCA, but its ability to improve cerebral performance requires further evaluation.\n\nBODY.INTRODUCTION:\nHigh-quality cardiopulmonary resuscitation (CPR) is of great importance to cardiac and cerebral resuscitation [1–3]. However, chest compression often fails to meet relevant guidelines in terms of depth, frequency, and compression/relaxation time [4, 5]. Manual chest compression is not ideal for blood perfusion in most patients. Even when chest compression is performed by a highly trained individual, the generated cerebral blood flow is only 30–40% of the normal level and the cardiac blood supply is only 10–20% of the normal level [6]. Thus, finding a proper automated chest compression device to replace manual chest compression is necessary. The AutoPulse (ZOLL Circulation, Sunnyvale, CA, USA) is a chest compression device that was approved by the U.S. 
Food and Drug Administration for CPR in patients who suffer from cardiac arrest (CA) in 2001, and was approved in China by the China Food and Drug Administration in 2007. The AutoPulse is an electrical chest compression device that functions through the compression and relaxation of a load-distributing band connected to a hard board using an electric motor. The operation procedure of the AutoPulse was as follows: first, all clothing was removed from the patient, who was then placed on the hard board following the indicating line; next, both ends of the load-distributing band were fixed with a buckle to the front chest of the patient through the armpit; and finally, the device was started and the load-distributing band was automatically tightened to fit patients of different sizes. The default setting was as follows: compression strength, to reduce the patient's chest volume by 20%; and compression frequency, 80 times/min. The device was designed with continuous and 15:2 compression modes; in the latter case, compressions were performed 15 times and then paused for 3 s in each run, allowing for ventilation of the patient twice. In this research, the 15:2 compression mode was used. However, the compression frequency and two compression modes were fixed and could not be adjusted. Animal studies have shown that compared to ordinary CPR, AutoPulse-assisted CPR improved cardiac and cerebral hemodynamics as well as neurological prognosis [7, 8]. A human study showed that compared to manual chest compression, the AutoPulse significantly increased the coronary blood flow [9]. In a retrospective study, the return of the spontaneous circulation (ROSC) rate was significantly improved in patients in the AutoPulse CPR group compared to those in the Manual CPR group [10]. A large-scale multi-center prospective study found that compared with patients in the Manual CPR group, those of the AutoPulse CPR group had significantly higher rates of ROSC, hospital admission, and hospital discharge [11]. Earlier research indicated that the AutoPulse was a good choice for clinical use in patients after CA. However, a large-scale multi-center prospective controlled study found that compared with patients of the Manual CPR group, those of the AutoPulse CPR group had a decreased 4-h survival rate (28.5% vs. 29.5%; p = 0.74) and hospital discharge rate (5.8% vs. 9.9%; p = 0.6) with significantly poor neurological prognosis [12]. Because of the contradictory results from similar studies, whether the AutoPulse is worthy of clinical use in CPR for patients after CA requires more comprehensive evaluation. On the basis of the above-mentioned studies and Lerner's experience [13], we enhanced the control over the emergency treatment procedure and the professional level of emergency care personnel to minimize various interferences during the research. A prospective controlled study was designed to comprehensively evaluate the effect of the AutoPulse versus manual CPR for OHCA in the northern district of Shanghai, China. The results will provide valuable clinical evidence for the use of the AutoPulse to provide CPR for patients after OHCA.\n\nBODY.MATERIAL AND METHODS.STUDY DESIGN:\nThe study was designed as a single-center, prospective, randomized, controlled clinical trial. This clinical trial was reviewed and approved by the ethics committee of the Tenth People's Hospital in Shanghai. 
The inclusion criterion was: patients with OHCA who were admitted to the Emergency Medical Center of our hospital between March 2011 and March 2012. The exclusion criteria were: pregnant females, trauma patients, patients with advanced cancer, or patients aged < 14 years or > 90 years old (Figure 1). Figure 1Flow of participants in trial There are 10 ambulances in our Emergency Medical Center, which were encoded from number one to number ten optionally, then five numbers were randomly produced by a computer. The ambulances with the five numbers were equipped with an AutoPulse. The ambulances with or without the device were used in turn to visit the patients. After pre-hospital emergency treatment, all patients were sent to the Emergency Department of the Tenth People's Hospital Affiliated with Tongji University, which has an intensive care unit (ICU) and the corresponding advanced life support technologies. The patients or their families were informed of this research and signed informed consent prior to emergency treatment or after admission to the emergency room. The follow-up was terminated for patients who refused to participate in this research. All emergency personnel were familiar with this device as well as the 2010 American Heart Association (AHA) CPR guidelines. The personnel were familiar with our study design and research procedure.\n\nBODY.MATERIAL AND METHODS.RESEARCH PROCEDURE:\nThe unified regimen was as follows: the emergency personnel arrived at the scene and connected the monitor to determine the cardiac rhythm. In the case of ventricular fibrillation or pulseless ventricular tachycardia, defibrillation was performed immediately. If stable ROSC was achieved after defibrillation, then the patient was excluded from the research. If the patient failed to establish stable ROSC or the initial cardiac rhythm was not ventricular fibrillation or pulseless ventricular tachycardia, then the use of the AutoPulse was determined by the presence of this device in the ambulance. In the absence of an AutoPulse, manual chest compression was performed immediately; in the presence of an AutoPulse, 1 person performed manual chest compression while another two persons prepared the device. Endotracheal intubation was conducted simultaneously with balloon-assisted ventilation consisting of 100% oxygen. When the AutoPulse was in place, the patient was placed on the hard board in a supine position. The load-distributing band was connected and fixed as described above. The device was then started to deliver 15 compressions followed by two balloon-assisted ventilations in each run. Patients who met the inclusion criterion were included in the study. For these patients, an end-tidal carbon-dioxide (etCO2) module was connected to the intubation. Changes in the etCO2 level were monitored and recorded. Chest compression was performed on the patient continuously until ROSC was achieved or death was announced by the emergency personnel. Surviving patients were sent to the emergency room for further treatment, where chest compression was performed in the same way. Patients who demonstrated ROSC before or after admission to the emergency room were changed to a ventilator for ventilation assistance. The ventilator was operated in a mode of volume control ventilation (tidal volume, 8 ml/kg; inspired oxygen concentration, 100%). 
Vasoactive drugs, antiarrhythmic drugs, a defibrillator [14], and the chest compression rate of the Manual CPR group were used or performed strictly following the 2010 AHA CPR guidelines. The ROSC status of patients admitted to the emergency room was recorded. Patients who achieved ROSC were admitted to the ICU, where physicians and nurses were not allowed to know whether the patient had been treated with an AutoPulse. The following changes in patient condition were continuously recorded: 24-h survival rate, hospital discharge rate, and neurological prognosis at hospital discharge. Neurological prognosis was assessed in all patients achieving ROSC using Cerebral Performance Category (CPC) scores, which was repeated in those who survived at 24 h and once again in patients at the time of hospital discharge. Patients with multiple CPC scores were analyzed with data from the last scoring.\n\nBODY.MATERIAL AND METHODS.STATISTICAL ANALYSIS:\nStatistical analysis was performed using SPSS 13.0 statistical software (SPSS Inc, Chicago, IL, USA). Measurement data are expressed as mean ± standard deviation, while enumeration data are described using the number of cases (percentage). Basic and CPR data of the patients were compared as follows: measurement data using the t test and enumeration data using the χ2 or Fisher's exact probability test. Associations between treatment groups and all end points were analyzed using the χ2 test. Values of p < 0.05 were considered statistically significant.\n\nBODY.RESULTS:\nA total of 133 patients were included in this clinical trial. During the treatment, no patients were changed from the AutoPulse to manual CPR and vice versa. Patients' gender, age, and weight in the Manual CPR and AutoPulse CPR groups are listed in Table I. Weight was provided by the family of the patient or estimated by the researcher. There were no significant differences in the above indicators between the two groups (p > 0.05), nor were there significant differences detected in the witness (59.4% vs. 66.7%, p = 0.38) or composition of the cardiac rhythm types (p = 0.32).\n Table I Comparison of basic patient characteristics between the Manual CPR and AutoPulse CPR groups Parameter Manual CPR ( n = 64) AutoPulse CPR ( n = 69) P -value Male 44 (68.8) 50 (72.5) 0.64 Age, mean ± SD [years] 64.2 ±12.6 62.6 ±14.9 0.53 Weight, mean ± SD [kg] 68.4 ±8.9 67.3 ±8.1 0.47 Witness 38 (59.4) 46 (66.7) 0.38 Cardiac rhythm: 0.32  Ventricular fibrillation/pulseless ventricular tachycardia 8 (12.5) 9 (13)  Pulseless electrical activity 20 (31.3) 31 (44.9)  Asystole 32 (50.0) 24 (34.8)  Others 4 (6.3) 5 (7.2) Data are presented as number (percentage) unless otherwise specified; mean values were compared using the t test, while percentages were compared using the χ 2 or Fisher test. \nTable II compares the CPR data of patients between the Manual CPR and AutoPulse CPR groups. There were no statistically significant differences in the number of patients receiving CPR before the arrival of professional emergency personnel between groups (25.0% vs. 27.5%, p = 0.70), the duration from the emergency call to the arrival of emergency care personnel (11.5 vs. 11.7 min, p = 0.70), or the total duration of CPR (33.4 vs. 19.6 min, p = 0.21) between groups. In the AutoPulse CPR group, the average device preparation time was 3.4 ±1.0 min. The rates of epinephrine and defibrillation utilization during CPR did not differ statistically significantly between the two groups (87.5% vs. 76.8%, p = 0.10 and 20.3% vs. 
15.9%, p = 0.51, respectively). However, patients in the AutoPulse CPR group had significantly higher etCO2 than those of the Manual CPR group (22.3 vs. 16.1 mm Hg, p < 0.05). Also, the 1-h return of spontaneous breathing (ROSB) rate was significantly higher in patients in the AutoPulse CPR group than in the Manual CPR group (24.6% vs. 10.9%, p = 0.04).\n Table II Comparison of patient resuscitation characteristics between the Manual CPR and AutoPulse CPR groups Variable Manual CPR ( n = 64) AutoPulse CPR ( n = 69) P -value CPR before arrival of emergency care personnel 16 (25.0) 19 (27.5) 0.74 Duration from call to arrival of emergency care personnel, mean ± SD [min] 11.5 ±3.1 11.7 ±2.8 0.70 Total CPR time [min] 33.4 (15.1) 19.6 (19.3) 0.21 AutoPulse preparation time, mean ± SD [s] * NA 52 ±23 Epinephrine 56 (87.5) 53 (76.8) 0.1 Defibrillation 13 (20.3) 11 (15.9) 0.51 EtCO 2 [mm Hg] 16.1 (5.4) 22.3 (6.1) < 0.001 1-h return of spontaneous ventilation 7 (10.9) 17 (24.6) 0.04 Data are presented as number (percentage) unless otherwise specified; mean values were compared using the t test, while percentages were compared using the χ 2 or Fisher test. * Time to apply the AutoPulse and thus terminate manual chest compressions, which was exactly the time to place the patient on the hard board, connect and fix the load-distributing belt, and start the AutoPulse device. The patients' clinical prognostic features are shown in Table III. In terms of the primary endpoint, the ROSC rate was significantly higher in the AutoPulse CPR group than in the Manual CPR group (44.9% vs. 23.4%; p = 0.009). In terms of the secondary endpoint events, the 24-h patient survival rate was significantly higher in the AutoPulse CPR group than in the Manual CPR group (39.1% vs. 21.9%; p = 0.03). Similarly, the hospital discharge rate was significantly higher in the AutoPulse CPR group than in the Manual CPR group (18.8% vs. 6.3%; p = 0.03). The neurological prognosis results are shown in Table IV. In the Manual CPR group, 13.4% of the ROSC patients were scored as CPC 1 or CPC 2 vs. 16.2% in the AutoPulse CPR group. Despite a relative increase in the latter group, there was no statistically significant difference (p = 1.00).\n Table III Comparison of patient clinical prognosis between the Manual CPR and AutoPulse CPR groups Parameter Manual CPR AutoPulse CPR P -value ROSC 15 (23.4) 31 (44.9) 0.009 24-h survival 14 (21.9) 27 (39.1) 0.03 Hospital discharge 4 (6.3) 13 (18.8) 0.03 Table IV Comparison of CPC scores of patients between Manual CPR and AutoPulse CPR groups CPC score * Manual CPR ( n = 15) AutoPulse CPR ( n = 31) P -value 1 1 (6.7) 2 (6.5) 1.00 # 2 1 (6.7) 3 (9.7) 3 4 (26.7) 5 (16.1) 4 0 (0) 1 (3.2) 5 9 (60.0) 20 (64.5) Data are presented as number (percentage) unless otherwise specified; mean values were compared using the t test, while percentages were compared using the χ 2 or Fisher test. * CPC scoring of all ROSC patients # comparison of the ratio of patients scored as CPC 1 or CPC 2 in ROSC patients at hospital discharge between AutoPulse CPR and Manual CPR groups. With regard to complications, there was a lack of detailed data for comparison of injuries between groups. This is because an autopsy was refused by the families of all patients due to Chinese tradition. As required by patient condition, bedside X-ray examinations were performed upon admission to the emergency room; additionally, all deceased patients were subjected to bedside X-ray examination. 
The results showed that the incidence of rib fracture was 6.7% in the AutoPulse group (4/60), slightly higher than that in the Manual CPR group (4.8%, 3/63), but the difference was not statistically significant (p = 0.71). Due to the following condition of the illness, 6 patients (two in the Manual CPR group and four in the AutoPulse CPR group) underwent abdominal computed tomography (CT) examinations, in which no abdominal organ injuries were found.\n\nBODY.DISCUSSION:\nStudies have shown that the increased circulating blood flow in CPR is closely related to the success rate of resuscitation [2]. Animal studies in a porcine model of CA confirmed that the AutoPulse generated better blood circulation than manual chest compression [7, 8]. Similar conclusions were obtained in a number of human studies [9, 15, 16]. A retrospective study found that the ROSC rate of patients with CA was significantly higher in those treated with the AutoPulse than in those treated with manual CPR [10]. A recent multi-center prospective study compared the effects of the AutoPulse CPR versus manual CPR, which found that at one of the medical centers, the survival rate of patients in the AutoPulse group declined from 19.6% to 4% after the implementation of an updated experimental program with delayed use of the AutoPulse (p = 0.024); at the other medical centers, the 4-h survival rate at the primary endpoint was significantly higher in the AutoPulse CPR group than in the Manual CPR group (p = 0.008) [17]. The earlier studies confirm that the AutoPulse plays a positive role in CPR and that stable effective blood circulation is a key factor of CPR success. Our study found that the 1-h ROSB rate was significantly higher in the AutoPulse CPR group than in the Manual CPR group. Basic research has shown that the respiratory center is located in the medulla oblongata and that even a few minutes of ischemia and hypoxia injury can cause irreversible brain damage. Thus, the higher ROSB rate indicates that patients treated with the AutoPulse achieved better cerebral blood supply and that the AutoPulse is superior to manual CPR for improving cerebral blood flow. Additionally, we found that the etCO2 level was significantly higher in the AutoPulse CPR group compared to the Manual CPR group. Under normal physiological conditions, the etCO2 level is determined by CO2 production, pulmonary alveoli ventilation capability, and pulmonary blood flow. When CA occurs, CO2 is continuously generated in vivo. The major influencing factors of CO2 discharge into the lung are CO2 transmission speed and volume from the peripheral tissues (CO2-generating site) to the lung, which depend on the pulmonary blood flow. Thus, if the ventilation remains constant, then the etCO2 level changes reflect cardiac output changes as well as the effect of CPR. In recent years, several studies have reported that etCO2 can be used as a noninvasive monitoring indicator for CPR that is related to prognosis [18, 19]. The 2010 AHA CPR guidelines clarify that during CPR of patients with CA, etCO2 can be used as a physiological parameter to indicate the cardiac output and myocardial perfusion, which helps to optimize the quality of chest compression. Together our results indicate that the AutoPulse is superior to manual CPR for improving cardiac blood flow. 
Thus, we consider that better cardiac and cerebral blood circulation achieved by mechanical chest compression compared to manual chest compression is the main reason for the high success rate of CPR with the use of AutoPulse. Additionally, uninterrupted chest compression in a patient by an AutoPulse even during delivery may be another important factor contributing to the high ROSC rate [1–3]. The importance of continuous chest compression was emphasized by the 2010 AHA CPR guidelines. A recent study showed that in the helicopter emergency medical service, the ROSC rate was significantly higher in the AutoPulse CPR group than in the Manual CPR group, mainly because the former strategy achieved sustained chest compression during transportation [20]. A simulation study on a human body suggested that compared to manual chest compression, the AutoPulse achieves more sustained and stable chest compression during transportation [21]. From the occurrence of CA to emergency room admission, patients need to be carried for transportation at least twice. In this process, manual chest compression will inevitably be interrupted, whereas the AutoPulse can avoid this problem. Although AutoPulse preparation requires time (average 52 s in our study), it may bring more benefit by continuous chest compression during transportation than the potential risks that arise from CPR interruption during device preparation. Cerebral resuscitation remains a major problem in medical science [22, 23]. A large-scale clinical trial showed that only 27% of adults achieved good recovery of neurological function after CPR [24]. Therefore, the ability of the AutoPulse in improving cerebral resuscitation and neurological function recovery should be of concern. Despite the above statement, patients seem to have better cerebral blood supply in the AutoPulse CPR group than in the Manual CPR group. Animal experiments also demonstrated that compared to manual CPR, AutoPulse-assisted CPR more significantly improved the neurological prognosis in a porcine model of CA. However, it is disappointing that we observed no statistically significant difference in neurological prognosis between the AutoPulse CPR and Manual CPR groups. In terms of complications, there is evidence that compared to manual CPR [25], AutoPulse-assisted CPR increases the risk of organ damage [26–28]. In particular, Truhlar [29] proposed that caution should be taken in the use of AutoPulse in thrombolysis to avoid hemorrhage in the abdominal cavity. Because none of the patients' families consented to an autopsy, we have no detailed data for comparative analysis of relevant organ damage between the two groups. However, bedside X-ray films were taken in all patients upon admission to the emergency room, showing that the incidence of rib fractures was slightly lower in the AutoPulse CPR group than in the Manual CPR group. Thereafter, 6 patients were subjected to abdominal CT examination as required by their condition, and none had abdominal organ damage. Because not all patients underwent the above checks, we could not conclude that the AutoPulse is safer than manual CPR in terms of rib fracture and abdominal organ injury or that there is no difference between the risks of organ damage between the AutoPulse and manual CPR. It is worth noting that a recent study using a porcine model of ventricular fibrillation indicated that manual chest compression is advantageous compared to the AutoPulse for improving hemodynamic parameters [30]. 
A retrospective comparative study by Jennings [31] found that compared to conventional CPR, AutoPulse-assisted CPR improves the hospital admission rate but reduces the hospital discharge rate of patients with OHCA. Whether the AutoPulse can improve the prognosis of patients with OHCA remains controversial. Considering previous studies [12, 13] and the characteristics of the emergency medicine in the Shanghai area, we used a prospective, randomized, controlled method in this study. On the basis of the previous studies [11, 12] and the characteristics of emergency medicine, ROSC was chosen as the primary outcome measure, and 24-hour survival, discharge rate and discharge neurologic prognosis were chosen for secondary outcome measures. The parameters of etCO2, defibrillation, use of epinephrine, etc, were also analyzed in the results. Due to the uncertain time and places of cardiac arrest onset, it was difficult to achieve complete randomization for the research. Hallstrom et al. [12] assigned the patients to two groups randomly according to the emergency medical services (EMS) station with or without the AutoPulse. However, it was impossible for us to follow because our study was performed in a single emergency medicine center. So we finally equipped half of the ambulances with an AutoPulse randomly by using a computer. It may be the most relevant method to achieve randomization as far as possible. This prospective study has brought new findings at both the primary and secondary endpoints of clinical events. Data comparison showed that the ROSC, 24-h survival, and hospital discharge rates of patients with OHCA were significantly higher in the AutoPulse CPR group than in the Manual CPR group. However, in the final neurological prognosis, there was no statistically significant advantage in patients with OHCA within the AutoPulse CPR group. Our work represents the first prospective controlled study regarding the clinical use of the new chest compression device AutoPulse for CPR of patients with OHCA in China. The results are of guiding value for application of this device worldwide, especially in China's Emergency System. The major limitations of our study are as follows: 1) the small sample size may lead to a biased conclusion; and 2) it is difficult to achieve complete randomization based on the grouping method for the ambulance visiting mode because of the particularity of pre-hospital emergency treatment. Due to our present finding that the AutoPulse can improve the ROSC and 24-h survival rates of patients with OHCA, a multi-center prospective study has been designed for in-depth research in China. In conclusion, in Shanghai's Emergency Medical System in the northern district, AutoPulse CPR is superior to manual CPR for increasing resuscitation success and survival rates in patients with OHCA. This device also improves cardiac and cerebral blood circulation in patients but has no significant positive effect on their neurological prognosis. Thus, we conclude that the AutoPulse is worthy of clinical use but that its application prospects require further comprehensive evaluation.\n\n**Question:** Compared to manual chest compression for cardiopulmonary resuscitation (CPR) of out-of-hospital cardiac arrest (OHCA) what was the result of AutoPulse automated chest compression device on return of spontaneous circulation (ROSC)?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. 
Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ]
significantly increased
963
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Clinical evaluation of the AutoPulse automated chest compression device for out-of-hospital cardiac arrest in the northern district of Shanghai, China\n\n ABSTRACT.INTRODUCTION:\nWhether the AutoPulse automated chest compression device is worthy of clinical use for out-of-hospital cardiac arrest (OHCA) remains controversial. A prospective controlled study was conducted to evaluate the effect of AutoPulse versus manual chest compression for cardiopulmonary resuscitation (CPR) of OHCA patients in the northern district of Shanghai, China.\n\nABSTRACT.MATERIAL AND METHODS:\nA total of 133 patients with OHCA who were treated at the Emergency Medical Center of the Tenth People's Hospital Affiliated with Tongji University between March 2011 and March 2012 were included. The patients were randomly assigned to the Manual CPR (n = 64) and AutoPulse CPR groups (n = 69) in accordance with the approach of chest compression received. The primary outcome measure was return of spontaneous circulation (ROSC), and the secondary outcome measures included 24-h survival rate, hospital discharge rate, and neurological prognosis at hospital discharge.\n\nABSTRACT.RESULTS:\nThe ROSC rate of patients with OHCA was significantly higher in the AutoPulse CPR group than in the Manual CPR group (44.9% vs. 23.4%; p = 0.009). The 24-h survival rate of OHCA patients was significantly higher in the AutoPulse CPR group than in the Manual CPR group (39.1% vs. 21.9%; p = 0.03). The hospital discharge rate of the patients with OHCA was significantly higher in the AutoPulse CPR group than in the Manual CPR group (18.8% vs. 6.3%; p = 0.03). The proportion of patients with OHCA and a cerebral performance category score of 1 or 2 points at hospital discharge was higher in the AutoPulse CPR group than in the Manual CPR group, but the difference was not statistically significant (16.2% vs. 13.4%, p = 1.00).\n\nABSTRACT.CONCLUSIONS:\nUse of the AutoPulse increases CPR success and survival rates in patients with OHCA, but its ability to improve cerebral performance requires further evaluation.\n\nBODY.INTRODUCTION:\nHigh-quality cardiopulmonary resuscitation (CPR) is of great importance to cardiac and cerebral resuscitation [1–3]. However, chest compression often fails to meet relevant guidelines in terms of depth, frequency, and compression/relaxation time [4, 5]. Manual chest compression is not ideal for blood perfusion in most patients. Even when chest compression is performed by a highly trained individual, the generated cerebral blood flow is only 30–40% of the normal level and the cardiac blood supply is only 10–20% of the normal level [6]. Thus, finding a proper automated chest compression device to replace manual chest compression is necessary. The AutoPulse (ZOLL Circulation, Sunnyvale, CA, USA) is a chest compression device that was approved by the U.S. 
Food and Drug Administration for CPR in patients who suffer from cardiac arrest (CA) in 2001, and was approved in China by the China Food and Drug Administration in 2007. The AutoPulse is an electrical chest compression device that functions through the compression and relaxation of a load-distributing band connected to a hard board using an electric motor. The operation procedure of the AutoPulse was as follows: first, all clothing was removed from the patient, who was then placed on the hard board following the indicating line; next, both ends of the load-distributing band were fixed with a buckle to the front chest of the patient through the armpit; and finally, the device was started and the load-distributing band was automatically tightened to fit patients of different sizes. The default setting was as follows: compression strength, to reduce the patient's chest volume by 20%; and compression frequency, 80 times/min. The device was designed with continuous and 15:2 compression modes; in the latter case, compressions were performed 15 times and then paused for 3 s in each run, allowing for ventilation of the patient twice. In this research, the 15:2 compression mode was used. However, the compression frequency and two compression modes were fixed and could not be adjusted. Animal studies have shown that compared to ordinary CPR, AutoPulse-assisted CPR improved cardiac and cerebral hemodynamics as well as neurological prognosis [7, 8]. A human study showed that compared to manual chest compression, the AutoPulse significantly increased the coronary blood flow [9]. In a retrospective study, the return of the spontaneous circulation (ROSC) rate was significantly improved in patients in the AutoPulse CPR group compared to those in the Manual CPR group [10]. A large-scale multi-center prospective study found that compared with patients in the Manual CPR group, those of the AutoPulse CPR group had significantly higher rates of ROSC, hospital admission, and hospital discharge [11]. Earlier research indicated that the AutoPulse was a good choice for clinical use in patients after CA. However, a large-scale multi-center prospective controlled study found that compared with patients of the Manual CPR group, those of the AutoPulse CPR group had a decreased 4-h survival rate (28.5% vs. 29.5%; p = 0.74) and hospital discharge rate (5.8% vs. 9.9%; p = 0.6) with significantly poor neurological prognosis [12]. Because of the contradictory results from similar studies, whether the AutoPulse is worthy of clinical use in CPR for patients after CA requires more comprehensive evaluation. On the basis of the above-mentioned studies and Lerner's experience [13], we enhanced the control over the emergency treatment procedure and the professional level of emergency care personnel to minimize various interferences during the research. A prospective controlled study was designed to comprehensively evaluate the effect of the AutoPulse versus manual CPR for OHCA in the northern district of Shanghai, China. The results will provide valuable clinical evidence for the use of the AutoPulse to provide CPR for patients after OHCA.\n\nBODY.MATERIAL AND METHODS.STUDY DESIGN:\nThe study was designed as a single-center, prospective, randomized, controlled clinical trial. This clinical trial was reviewed and approved by the ethics committee of the Tenth People's Hospital in Shanghai. 
The inclusion criterion was: patients with OHCA who were admitted to the Emergency Medical Center of our hospital between March 2011 and March 2012. The exclusion criteria were: pregnant females, trauma patients, patients with advanced cancer, or patients aged < 14 years or > 90 years old (Figure 1). Figure 1Flow of participants in trial There are 10 ambulances in our Emergency Medical Center, which were encoded from number one to number ten optionally, then five numbers were randomly produced by a computer. The ambulances with the five numbers were equipped with an AutoPulse. The ambulances with or without the device were used in turn to visit the patients. After pre-hospital emergency treatment, all patients were sent to the Emergency Department of the Tenth People's Hospital Affiliated with Tongji University, which has an intensive care unit (ICU) and the corresponding advanced life support technologies. The patients or their families were informed of this research and signed informed consent prior to emergency treatment or after admission to the emergency room. The follow-up was terminated for patients who refused to participate in this research. All emergency personnel were familiar with this device as well as the 2010 American Heart Association (AHA) CPR guidelines. The personnel were familiar with our study design and research procedure.\n\nBODY.MATERIAL AND METHODS.RESEARCH PROCEDURE:\nThe unified regimen was as follows: the emergency personnel arrived at the scene and connected the monitor to determine the cardiac rhythm. In the case of ventricular fibrillation or pulseless ventricular tachycardia, defibrillation was performed immediately. If stable ROSC was achieved after defibrillation, then the patient was excluded from the research. If the patient failed to establish stable ROSC or the initial cardiac rhythm was not ventricular fibrillation or pulseless ventricular tachycardia, then the use of the AutoPulse was determined by the presence of this device in the ambulance. In the absence of an AutoPulse, manual chest compression was performed immediately; in the presence of an AutoPulse, 1 person performed manual chest compression while another two persons prepared the device. Endotracheal intubation was conducted simultaneously with balloon-assisted ventilation consisting of 100% oxygen. When the AutoPulse was in place, the patient was placed on the hard board in a supine position. The load-distributing band was connected and fixed as described above. The device was then started to deliver 15 compressions followed by two balloon-assisted ventilations in each run. Patients who met the inclusion criterion were included in the study. For these patients, an end-tidal carbon-dioxide (etCO2) module was connected to the intubation. Changes in the etCO2 level were monitored and recorded. Chest compression was performed on the patient continuously until ROSC was achieved or death was announced by the emergency personnel. Surviving patients were sent to the emergency room for further treatment, where chest compression was performed in the same way. Patients who demonstrated ROSC before or after admission to the emergency room were changed to a ventilator for ventilation assistance. The ventilator was operated in a mode of volume control ventilation (tidal volume, 8 ml/kg; inspired oxygen concentration, 100%). 
Vasoactive drugs, antiarrhythmic drugs, a defibrillator [14], and the chest compression rate of the Manual CPR group were used or performed strictly following the 2010 AHA CPR guidelines. The ROSC status of patients admitted to the emergency room was recorded. Patients who achieved ROSC were admitted to the ICU, where physicians and nurses were not allowed to know whether the patient had been treated with an AutoPulse. The following changes in patient condition were continuously recorded: 24-h survival rate, hospital discharge rate, and neurological prognosis at hospital discharge. Neurological prognosis was assessed in all patients achieving ROSC using Cerebral Performance Category (CPC) scores, which was repeated in those who survived at 24 h and once again in patients at the time of hospital discharge. Patients with multiple CPC scores were analyzed with data from the last scoring.\n\nBODY.MATERIAL AND METHODS.STATISTICAL ANALYSIS:\nStatistical analysis was performed using SPSS 13.0 statistical software (SPSS Inc, Chicago, IL, USA). Measurement data are expressed as mean ± standard deviation, while enumeration data are described using the number of cases (percentage). Basic and CPR data of the patients were compared as follows: measurement data using the t test and enumeration data using the χ2 or Fisher's exact probability test. Associations between treatment groups and all end points were analyzed using the χ2 test. Values of p < 0.05 were considered statistically significant.\n\nBODY.RESULTS:\nA total of 133 patients were included in this clinical trial. During the treatment, no patients were changed from the AutoPulse to manual CPR and vice versa. Patients' gender, age, and weight in the Manual CPR and AutoPulse CPR groups are listed in Table I. Weight was provided by the family of the patient or estimated by the researcher. There were no significant differences in the above indicators between the two groups (p > 0.05), nor were there significant differences detected in the witness (59.4% vs. 66.7%, p = 0.38) or composition of the cardiac rhythm types (p = 0.32).\n Table I Comparison of basic patient characteristics between the Manual CPR and AutoPulse CPR groups Parameter Manual CPR ( n = 64) AutoPulse CPR ( n = 69) P -value Male 44 (68.8) 50 (72.5) 0.64 Age, mean ± SD [years] 64.2 ±12.6 62.6 ±14.9 0.53 Weight, mean ± SD [kg] 68.4 ±8.9 67.3 ±8.1 0.47 Witness 38 (59.4) 46 (66.7) 0.38 Cardiac rhythm: 0.32  Ventricular fibrillation/pulseless ventricular tachycardia 8 (12.5) 9 (13)  Pulseless electrical activity 20 (31.3) 31 (44.9)  Asystole 32 (50.0) 24 (34.8)  Others 4 (6.3) 5 (7.2) Data are presented as number (percentage) unless otherwise specified; mean values were compared using the t test, while percentages were compared using the χ 2 or Fisher test. \nTable II compares the CPR data of patients between the Manual CPR and AutoPulse CPR groups. There were no statistically significant differences in the number of patients receiving CPR before the arrival of professional emergency personnel between groups (25.0% vs. 27.5%, p = 0.70), the duration from the emergency call to the arrival of emergency care personnel (11.5 vs. 11.7 min, p = 0.70), or the total duration of CPR (33.4 vs. 19.6 min, p = 0.21) between groups. In the AutoPulse CPR group, the average device preparation time was 3.4 ±1.0 min. The rates of epinephrine and defibrillation utilization during CPR did not differ statistically significantly between the two groups (87.5% vs. 76.8%, p = 0.10 and 20.3% vs. 
15.9%, p = 0.51, respectively). However, patients in the AutoPulse CPR group had significantly higher etCO2 than those of the Manual CPR group (22.3 vs. 16.1 mm Hg, p < 0.05). Also, the 1-h return of spontaneous breathing (ROSB) rate was significantly higher in patients in the AutoPulse CPR group than in the Manual CPR group (24.6% vs. 10.9%, p = 0.04).\n Table II Comparison of patient resuscitation characteristics between the Manual CPR and AutoPulse CPR groups Variable Manual CPR ( n = 64) AutoPulse CPR ( n = 69) P -value CPR before arrival of emergency care personnel 16 (25.0) 19 (27.5) 0.74 Duration from call to arrival of emergency care personnel, mean ± SD [min] 11.5 ±3.1 11.7 ±2.8 0.70 Total CPR time [min] 33.4 (15.1) 19.6 (19.3) 0.21 AutoPulse preparation time, mean ± SD [s] * NA 52 ±23 Epinephrine 56 (87.5) 53 (76.8) 0.1 Defibrillation 13 (20.3) 11 (15.9) 0.51 EtCO 2 [mm Hg] 16.1 (5.4) 22.3 (6.1) < 0.001 1-h return of spontaneous ventilation 7 (10.9) 17 (24.6) 0.04 Data are presented as number (percentage) unless otherwise specified; mean values were compared using the t test, while percentages were compared using the χ 2 or Fisher test. * Time to apply the AutoPulse and thus terminate manual chest compressions, which was exactly the time to place the patient on the hard board, connect and fix the load-distributing belt, and start the AutoPulse device. The patients' clinical prognostic features are shown in Table III. In terms of the primary endpoint, the ROSC rate was significantly higher in the AutoPulse CPR group than in the Manual CPR group (44.9% vs. 23.4%; p = 0.009). In terms of the secondary endpoint events, the 24-h patient survival rate was significantly higher in the AutoPulse CPR group than in the Manual CPR group (39.1% vs. 21.9%; p = 0.03). Similarly, the hospital discharge rate was significantly higher in the AutoPulse CPR group than in the Manual CPR group (18.8% vs. 6.3%; p = 0.03). The neurological prognosis results are shown in Table IV. In the Manual CPR group, 13.4% of the ROSC patients were scored as CPC 1 or CPC 2 vs. 16.2% in the AutoPulse CPR group. Despite a relative increase in the latter group, there was no statistically significant difference (p = 1.00).\n Table III Comparison of patient clinical prognosis between the Manual CPR and AutoPulse CPR groups Parameter Manual CPR AutoPulse CPR P -value ROSC 15 (23.4) 31 (44.9) 0.009 24-h survival 14 (21.9) 27 (39.1) 0.03 Hospital discharge 4 (6.3) 13 (18.8) 0.03 Table IV Comparison of CPC scores of patients between Manual CPR and AutoPulse CPR groups CPC score * Manual CPR ( n = 15) AutoPulse CPR ( n = 31) P -value 1 1 (6.7) 2 (6.5) 1.00 # 2 1 (6.7) 3 (9.7) 3 4 (26.7) 5 (16.1) 4 0 (0) 1 (3.2) 5 9 (60.0) 20 (64.5) Data are presented as number (percentage) unless otherwise specified; mean values were compared using the t test, while percentages were compared using the χ 2 or Fisher test. * CPC scoring of all ROSC patients # comparison of the ratio of patients scored as CPC 1 or CPC 2 in ROSC patients at hospital discharge between AutoPulse CPR and Manual CPR groups. With regard to complications, there was a lack of detailed data for comparison of injuries between groups. This is because an autopsy was refused by the families of all patients due to Chinese tradition. As required by patient condition, bedside X-ray examinations were performed upon admission to the emergency room; additionally, all deceased patients were subjected to bedside X-ray examination. 
The results showed that the incidence of rib fracture was 6.7% in the AutoPulse group (4/60), slightly higher than that in the Manual CPR group (4.8%, 3/63), but the difference was not statistically significant (p = 0.71). Due to the following condition of the illness, 6 patients (two in the Manual CPR group and four in the AutoPulse CPR group) underwent abdominal computed tomography (CT) examinations, in which no abdominal organ injuries were found.\n\nBODY.DISCUSSION:\nStudies have shown that the increased circulating blood flow in CPR is closely related to the success rate of resuscitation [2]. Animal studies in a porcine model of CA confirmed that the AutoPulse generated better blood circulation than manual chest compression [7, 8]. Similar conclusions were obtained in a number of human studies [9, 15, 16]. A retrospective study found that the ROSC rate of patients with CA was significantly higher in those treated with the AutoPulse than in those treated with manual CPR [10]. A recent multi-center prospective study compared the effects of the AutoPulse CPR versus manual CPR, which found that at one of the medical centers, the survival rate of patients in the AutoPulse group declined from 19.6% to 4% after the implementation of an updated experimental program with delayed use of the AutoPulse (p = 0.024); at the other medical centers, the 4-h survival rate at the primary endpoint was significantly higher in the AutoPulse CPR group than in the Manual CPR group (p = 0.008) [17]. The earlier studies confirm that the AutoPulse plays a positive role in CPR and that stable effective blood circulation is a key factor of CPR success. Our study found that the 1-h ROSB rate was significantly higher in the AutoPulse CPR group than in the Manual CPR group. Basic research has shown that the respiratory center is located in the medulla oblongata and that even a few minutes of ischemia and hypoxia injury can cause irreversible brain damage. Thus, the higher ROSB rate indicates that patients treated with the AutoPulse achieved better cerebral blood supply and that the AutoPulse is superior to manual CPR for improving cerebral blood flow. Additionally, we found that the etCO2 level was significantly higher in the AutoPulse CPR group compared to the Manual CPR group. Under normal physiological conditions, the etCO2 level is determined by CO2 production, pulmonary alveoli ventilation capability, and pulmonary blood flow. When CA occurs, CO2 is continuously generated in vivo. The major influencing factors of CO2 discharge into the lung are CO2 transmission speed and volume from the peripheral tissues (CO2-generating site) to the lung, which depend on the pulmonary blood flow. Thus, if the ventilation remains constant, then the etCO2 level changes reflect cardiac output changes as well as the effect of CPR. In recent years, several studies have reported that etCO2 can be used as a noninvasive monitoring indicator for CPR that is related to prognosis [18, 19]. The 2010 AHA CPR guidelines clarify that during CPR of patients with CA, etCO2 can be used as a physiological parameter to indicate the cardiac output and myocardial perfusion, which helps to optimize the quality of chest compression. Together our results indicate that the AutoPulse is superior to manual CPR for improving cardiac blood flow. 
Thus, we consider that better cardiac and cerebral blood circulation achieved by mechanical chest compression compared to manual chest compression is the main reason for the high success rate of CPR with the use of AutoPulse. Additionally, uninterrupted chest compression in a patient by an AutoPulse even during delivery may be another important factor contributing to the high ROSC rate [1–3]. The importance of continuous chest compression was emphasized by the 2010 AHA CPR guidelines. A recent study showed that in the helicopter emergency medical service, the ROSC rate was significantly higher in the AutoPulse CPR group than in the Manual CPR group, mainly because the former strategy achieved sustained chest compression during transportation [20]. A simulation study on a human body suggested that compared to manual chest compression, the AutoPulse achieves more sustained and stable chest compression during transportation [21]. From the occurrence of CA to emergency room admission, patients need to be carried for transportation at least twice. In this process, manual chest compression will inevitably be interrupted, whereas the AutoPulse can avoid this problem. Although AutoPulse preparation requires time (average 52 s in our study), it may bring more benefit by continuous chest compression during transportation than the potential risks that arise from CPR interruption during device preparation. Cerebral resuscitation remains a major problem in medical science [22, 23]. A large-scale clinical trial showed that only 27% of adults achieved good recovery of neurological function after CPR [24]. Therefore, the ability of the AutoPulse in improving cerebral resuscitation and neurological function recovery should be of concern. Despite the above statement, patients seem to have better cerebral blood supply in the AutoPulse CPR group than in the Manual CPR group. Animal experiments also demonstrated that compared to manual CPR, AutoPulse-assisted CPR more significantly improved the neurological prognosis in a porcine model of CA. However, it is disappointing that we observed no statistically significant difference in neurological prognosis between the AutoPulse CPR and Manual CPR groups. In terms of complications, there is evidence that compared to manual CPR [25], AutoPulse-assisted CPR increases the risk of organ damage [26–28]. In particular, Truhlar [29] proposed that caution should be taken in the use of AutoPulse in thrombolysis to avoid hemorrhage in the abdominal cavity. Because none of the patients' families consented to an autopsy, we have no detailed data for comparative analysis of relevant organ damage between the two groups. However, bedside X-ray films were taken in all patients upon admission to the emergency room, showing that the incidence of rib fractures was slightly lower in the AutoPulse CPR group than in the Manual CPR group. Thereafter, 6 patients were subjected to abdominal CT examination as required by their condition, and none had abdominal organ damage. Because not all patients underwent the above checks, we could not conclude that the AutoPulse is safer than manual CPR in terms of rib fracture and abdominal organ injury or that there is no difference between the risks of organ damage between the AutoPulse and manual CPR. It is worth noting that a recent study using a porcine model of ventricular fibrillation indicated that manual chest compression is advantageous compared to the AutoPulse for improving hemodynamic parameters [30]. 
A retrospective comparative study by Jennings [31] found that compared to conventional CPR, AutoPulse-assisted CPR improves the hospital admission rate but reduces the hospital discharge rate of patients with OHCA. Whether the AutoPulse can improve the prognosis of patients with OHCA remains controversial. Considering previous studies [12, 13] and the characteristics of the emergency medicine in the Shanghai area, we used a prospective, randomized, controlled method in this study. On the basis of the previous studies [11, 12] and the characteristics of emergency medicine, ROSC was chosen as the primary outcome measure, and 24-hour survival, discharge rate and discharge neurologic prognosis were chosen for secondary outcome measures. The parameters of etCO2, defibrillation, use of epinephrine, etc, were also analyzed in the results. Due to the uncertain time and places of cardiac arrest onset, it was difficult to achieve complete randomization for the research. Hallstrom et al. [12] assigned the patients to two groups randomly according to the emergency medical services (EMS) station with or without the AutoPulse. However, it was impossible for us to follow because our study was performed in a single emergency medicine center. So we finally equipped half of the ambulances with an AutoPulse randomly by using a computer. It may be the most relevant method to achieve randomization as far as possible. This prospective study has brought new findings at both the primary and secondary endpoints of clinical events. Data comparison showed that the ROSC, 24-h survival, and hospital discharge rates of patients with OHCA were significantly higher in the AutoPulse CPR group than in the Manual CPR group. However, in the final neurological prognosis, there was no statistically significant advantage in patients with OHCA within the AutoPulse CPR group. Our work represents the first prospective controlled study regarding the clinical use of the new chest compression device AutoPulse for CPR of patients with OHCA in China. The results are of guiding value for application of this device worldwide, especially in China's Emergency System. The major limitations of our study are as follows: 1) the small sample size may lead to a biased conclusion; and 2) it is difficult to achieve complete randomization based on the grouping method for the ambulance visiting mode because of the particularity of pre-hospital emergency treatment. Due to our present finding that the AutoPulse can improve the ROSC and 24-h survival rates of patients with OHCA, a multi-center prospective study has been designed for in-depth research in China. In conclusion, in Shanghai's Emergency Medical System in the northern district, AutoPulse CPR is superior to manual CPR for increasing resuscitation success and survival rates in patients with OHCA. This device also improves cardiac and cerebral blood circulation in patients but has no significant positive effect on their neurological prognosis. Thus, we conclude that the AutoPulse is worthy of clinical use but that its application prospects require further comprehensive evaluation.\n\n**Question:** Compared to manual chest compression for cardiopulmonary resuscitation (CPR) of out-of-hospital cardiac arrest (OHCA) what was the result of AutoPulse automated chest compression device on The 24-h survival rate of OHCA?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ]
significantly increased
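Note on the completion/Label pair above: the prompt instructs the model to output one of three exact strings ("significantly increase", "no significant difference", "significantly decrease"), yet both the stored completion and the Label use the past-tense form "significantly increased". Any script that scores rows of this dataset should therefore normalize tense before comparing strings. The sketch below is illustrative only: it assumes a row has already been parsed into a Python dict with `completion` and `Label` fields matching the columns shown here, and the helper names (`extract_answer`, `normalize_answer`) are not part of the dataset.

```python
import re

# Map each option requested in the prompt onto the past-tense form used by the Label column.
CANONICAL = {
    "significantly increase": "significantly increased",
    "significantly decrease": "significantly decreased",
    "no significant difference": "no significant difference",
}

def extract_answer(completion_messages):
    """Pull the answer string out of a chat-format completion such as
    [{"role": "assistant", "content": "**Answer:** significantly increased\n"}]."""
    content = completion_messages[0]["content"]
    return re.sub(r"\*\*Answer:\*\*", "", content).strip().lower()

def normalize_answer(text):
    """Collapse tense variants so "significantly increase(d)" compare as equal."""
    text = text.strip().rstrip(".")
    for stem, canonical in CANONICAL.items():
        if text.startswith(stem):
            return canonical
    return text  # leave unrecognized strings untouched for manual review

# Worked example using the row shown above.
row = {
    "completion": [{"role": "assistant", "content": "**Answer:** significantly increased\n"}],
    "Label": "significantly increased",
}
assert normalize_answer(extract_answer(row["completion"])) == normalize_answer(row["Label"])
```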
965
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Clinical evaluation of the AutoPulse automated chest compression device for out-of-hospital cardiac arrest in the northern district of Shanghai, China\n\n ABSTRACT.INTRODUCTION:\nWhether the AutoPulse automated chest compression device is worthy of clinical use for out-of-hospital cardiac arrest (OHCA) remains controversial. A prospective controlled study was conducted to evaluate the effect of AutoPulse versus manual chest compression for cardiopulmonary resuscitation (CPR) of OHCA patients in the northern district of Shanghai, China.\n\nABSTRACT.MATERIAL AND METHODS:\nA total of 133 patients with OHCA who were treated at the Emergency Medical Center of the Tenth People's Hospital Affiliated with Tongji University between March 2011 and March 2012 were included. The patients were randomly assigned to the Manual CPR (n = 64) and AutoPulse CPR groups (n = 69) in accordance with the approach of chest compression received. The primary outcome measure was return of spontaneous circulation (ROSC), and the secondary outcome measures included 24-h survival rate, hospital discharge rate, and neurological prognosis at hospital discharge.\n\nABSTRACT.RESULTS:\nThe ROSC rate of patients with OHCA was significantly higher in the AutoPulse CPR group than in the Manual CPR group (44.9% vs. 23.4%; p = 0.009). The 24-h survival rate of OHCA patients was significantly higher in the AutoPulse CPR group than in the Manual CPR group (39.1% vs. 21.9%; p = 0.03). The hospital discharge rate of the patients with OHCA was significantly higher in the AutoPulse CPR group than in the Manual CPR group (18.8% vs. 6.3%; p = 0.03). The proportion of patients with OHCA and a cerebral performance category score of 1 or 2 points at hospital discharge was higher in the AutoPulse CPR group than in the Manual CPR group, but the difference was not statistically significant (16.2% vs. 13.4%, p = 1.00).\n\nABSTRACT.CONCLUSIONS:\nUse of the AutoPulse increases CPR success and survival rates in patients with OHCA, but its ability to improve cerebral performance requires further evaluation.\n\nBODY.INTRODUCTION:\nHigh-quality cardiopulmonary resuscitation (CPR) is of great importance to cardiac and cerebral resuscitation [1–3]. However, chest compression often fails to meet relevant guidelines in terms of depth, frequency, and compression/relaxation time [4, 5]. Manual chest compression is not ideal for blood perfusion in most patients. Even when chest compression is performed by a highly trained individual, the generated cerebral blood flow is only 30–40% of the normal level and the cardiac blood supply is only 10–20% of the normal level [6]. Thus, finding a proper automated chest compression device to replace manual chest compression is necessary. The AutoPulse (ZOLL Circulation, Sunnyvale, CA, USA) is a chest compression device that was approved by the U.S. 
Food and Drug Administration for CPR in patients who suffer from cardiac arrest (CA) in 2001, and was approved in China by the China Food and Drug Administration in 2007. The AutoPulse is an electrical chest compression device that functions through the compression and relaxation of a load-distributing band connected to a hard board using an electric motor. The operation procedure of the AutoPulse was as follows: first, all clothing was removed from the patient, who was then placed on the hard board following the indicating line; next, both ends of the load-distributing band were fixed with a buckle to the front chest of the patient through the armpit; and finally, the device was started and the load-distributing band was automatically tightened to fit patients of different sizes. The default setting was as follows: compression strength, to reduce the patient's chest volume by 20%; and compression frequency, 80 times/min. The device was designed with continuous and 15:2 compression modes; in the latter case, compressions were performed 15 times and then paused for 3 s in each run, allowing for ventilation of the patient twice. In this research, the 15:2 compression mode was used. However, the compression frequency and two compression modes were fixed and could not be adjusted. Animal studies have shown that compared to ordinary CPR, AutoPulse-assisted CPR improved cardiac and cerebral hemodynamics as well as neurological prognosis [7, 8]. A human study showed that compared to manual chest compression, the AutoPulse significantly increased the coronary blood flow [9]. In a retrospective study, the return of the spontaneous circulation (ROSC) rate was significantly improved in patients in the AutoPulse CPR group compared to those in the Manual CPR group [10]. A large-scale multi-center prospective study found that compared with patients in the Manual CPR group, those of the AutoPulse CPR group had significantly higher rates of ROSC, hospital admission, and hospital discharge [11]. Earlier research indicated that the AutoPulse was a good choice for clinical use in patients after CA. However, a large-scale multi-center prospective controlled study found that compared with patients of the Manual CPR group, those of the AutoPulse CPR group had a decreased 4-h survival rate (28.5% vs. 29.5%; p = 0.74) and hospital discharge rate (5.8% vs. 9.9%; p = 0.6) with significantly poor neurological prognosis [12]. Because of the contradictory results from similar studies, whether the AutoPulse is worthy of clinical use in CPR for patients after CA requires more comprehensive evaluation. On the basis of the above-mentioned studies and Lerner's experience [13], we enhanced the control over the emergency treatment procedure and the professional level of emergency care personnel to minimize various interferences during the research. A prospective controlled study was designed to comprehensively evaluate the effect of the AutoPulse versus manual CPR for OHCA in the northern district of Shanghai, China. The results will provide valuable clinical evidence for the use of the AutoPulse to provide CPR for patients after OHCA.\n\nBODY.MATERIAL AND METHODS.STUDY DESIGN:\nThe study was designed as a single-center, prospective, randomized, controlled clinical trial. This clinical trial was reviewed and approved by the ethics committee of the Tenth People's Hospital in Shanghai. 
The inclusion criterion was: patients with OHCA who were admitted to the Emergency Medical Center of our hospital between March 2011 and March 2012. The exclusion criteria were: pregnant females, trauma patients, patients with advanced cancer, or patients aged < 14 years or > 90 years old (Figure 1). Figure 1Flow of participants in trial There are 10 ambulances in our Emergency Medical Center, which were encoded from number one to number ten optionally, then five numbers were randomly produced by a computer. The ambulances with the five numbers were equipped with an AutoPulse. The ambulances with or without the device were used in turn to visit the patients. After pre-hospital emergency treatment, all patients were sent to the Emergency Department of the Tenth People's Hospital Affiliated with Tongji University, which has an intensive care unit (ICU) and the corresponding advanced life support technologies. The patients or their families were informed of this research and signed informed consent prior to emergency treatment or after admission to the emergency room. The follow-up was terminated for patients who refused to participate in this research. All emergency personnel were familiar with this device as well as the 2010 American Heart Association (AHA) CPR guidelines. The personnel were familiar with our study design and research procedure.\n\nBODY.MATERIAL AND METHODS.RESEARCH PROCEDURE:\nThe unified regimen was as follows: the emergency personnel arrived at the scene and connected the monitor to determine the cardiac rhythm. In the case of ventricular fibrillation or pulseless ventricular tachycardia, defibrillation was performed immediately. If stable ROSC was achieved after defibrillation, then the patient was excluded from the research. If the patient failed to establish stable ROSC or the initial cardiac rhythm was not ventricular fibrillation or pulseless ventricular tachycardia, then the use of the AutoPulse was determined by the presence of this device in the ambulance. In the absence of an AutoPulse, manual chest compression was performed immediately; in the presence of an AutoPulse, 1 person performed manual chest compression while another two persons prepared the device. Endotracheal intubation was conducted simultaneously with balloon-assisted ventilation consisting of 100% oxygen. When the AutoPulse was in place, the patient was placed on the hard board in a supine position. The load-distributing band was connected and fixed as described above. The device was then started to deliver 15 compressions followed by two balloon-assisted ventilations in each run. Patients who met the inclusion criterion were included in the study. For these patients, an end-tidal carbon-dioxide (etCO2) module was connected to the intubation. Changes in the etCO2 level were monitored and recorded. Chest compression was performed on the patient continuously until ROSC was achieved or death was announced by the emergency personnel. Surviving patients were sent to the emergency room for further treatment, where chest compression was performed in the same way. Patients who demonstrated ROSC before or after admission to the emergency room were changed to a ventilator for ventilation assistance. The ventilator was operated in a mode of volume control ventilation (tidal volume, 8 ml/kg; inspired oxygen concentration, 100%). 
Vasoactive drugs, antiarrhythmic drugs, a defibrillator [14], and the chest compression rate of the Manual CPR group were used or performed strictly following the 2010 AHA CPR guidelines. The ROSC status of patients admitted to the emergency room was recorded. Patients who achieved ROSC were admitted to the ICU, where physicians and nurses were not allowed to know whether the patient had been treated with an AutoPulse. The following changes in patient condition were continuously recorded: 24-h survival rate, hospital discharge rate, and neurological prognosis at hospital discharge. Neurological prognosis was assessed in all patients achieving ROSC using Cerebral Performance Category (CPC) scores, which was repeated in those who survived at 24 h and once again in patients at the time of hospital discharge. Patients with multiple CPC scores were analyzed with data from the last scoring.\n\nBODY.MATERIAL AND METHODS.STATISTICAL ANALYSIS:\nStatistical analysis was performed using SPSS 13.0 statistical software (SPSS Inc, Chicago, IL, USA). Measurement data are expressed as mean ± standard deviation, while enumeration data are described using the number of cases (percentage). Basic and CPR data of the patients were compared as follows: measurement data using the t test and enumeration data using the χ2 or Fisher's exact probability test. Associations between treatment groups and all end points were analyzed using the χ2 test. Values of p < 0.05 were considered statistically significant.\n\nBODY.RESULTS:\nA total of 133 patients were included in this clinical trial. During the treatment, no patients were changed from the AutoPulse to manual CPR and vice versa. Patients' gender, age, and weight in the Manual CPR and AutoPulse CPR groups are listed in Table I. Weight was provided by the family of the patient or estimated by the researcher. There were no significant differences in the above indicators between the two groups (p > 0.05), nor were there significant differences detected in the witness (59.4% vs. 66.7%, p = 0.38) or composition of the cardiac rhythm types (p = 0.32).\n Table I Comparison of basic patient characteristics between the Manual CPR and AutoPulse CPR groups Parameter Manual CPR ( n = 64) AutoPulse CPR ( n = 69) P -value Male 44 (68.8) 50 (72.5) 0.64 Age, mean ± SD [years] 64.2 ±12.6 62.6 ±14.9 0.53 Weight, mean ± SD [kg] 68.4 ±8.9 67.3 ±8.1 0.47 Witness 38 (59.4) 46 (66.7) 0.38 Cardiac rhythm: 0.32  Ventricular fibrillation/pulseless ventricular tachycardia 8 (12.5) 9 (13)  Pulseless electrical activity 20 (31.3) 31 (44.9)  Asystole 32 (50.0) 24 (34.8)  Others 4 (6.3) 5 (7.2) Data are presented as number (percentage) unless otherwise specified; mean values were compared using the t test, while percentages were compared using the χ 2 or Fisher test. \nTable II compares the CPR data of patients between the Manual CPR and AutoPulse CPR groups. There were no statistically significant differences in the number of patients receiving CPR before the arrival of professional emergency personnel between groups (25.0% vs. 27.5%, p = 0.70), the duration from the emergency call to the arrival of emergency care personnel (11.5 vs. 11.7 min, p = 0.70), or the total duration of CPR (33.4 vs. 19.6 min, p = 0.21) between groups. In the AutoPulse CPR group, the average device preparation time was 3.4 ±1.0 min. The rates of epinephrine and defibrillation utilization during CPR did not differ statistically significantly between the two groups (87.5% vs. 76.8%, p = 0.10 and 20.3% vs. 
15.9%, p = 0.51, respectively). However, patients in the AutoPulse CPR group had significantly higher etCO2 than those of the Manual CPR group (22.3 vs. 16.1 mm Hg, p < 0.05). Also, the 1-h return of spontaneous breathing (ROSB) rate was significantly higher in patients in the AutoPulse CPR group than in the Manual CPR group (24.6% vs. 10.9%, p = 0.04).\n Table II Comparison of patient resuscitation characteristics between the Manual CPR and AutoPulse CPR groups Variable Manual CPR ( n = 64) AutoPulse CPR ( n = 69) P -value CPR before arrival of emergency care personnel 16 (25.0) 19 (27.5) 0.74 Duration from call to arrival of emergency care personnel, mean ± SD [min] 11.5 ±3.1 11.7 ±2.8 0.70 Total CPR time [min] 33.4 (15.1) 19.6 (19.3) 0.21 AutoPulse preparation time, mean ± SD [s] * NA 52 ±23 Epinephrine 56 (87.5) 53 (76.8) 0.1 Defibrillation 13 (20.3) 11 (15.9) 0.51 EtCO 2 [mm Hg] 16.1 (5.4) 22.3 (6.1) < 0.001 1-h return of spontaneous ventilation 7 (10.9) 17 (24.6) 0.04 Data are presented as number (percentage) unless otherwise specified; mean values were compared using the t test, while percentages were compared using the χ 2 or Fisher test. * Time to apply the AutoPulse and thus terminate manual chest compressions, which was exactly the time to place the patient on the hard board, connect and fix the load-distributing belt, and start the AutoPulse device. The patients' clinical prognostic features are shown in Table III. In terms of the primary endpoint, the ROSC rate was significantly higher in the AutoPulse CPR group than in the Manual CPR group (44.9% vs. 23.4%; p = 0.009). In terms of the secondary endpoint events, the 24-h patient survival rate was significantly higher in the AutoPulse CPR group than in the Manual CPR group (39.1% vs. 21.9%; p = 0.03). Similarly, the hospital discharge rate was significantly higher in the AutoPulse CPR group than in the Manual CPR group (18.8% vs. 6.3%; p = 0.03). The neurological prognosis results are shown in Table IV. In the Manual CPR group, 13.4% of the ROSC patients were scored as CPC 1 or CPC 2 vs. 16.2% in the AutoPulse CPR group. Despite a relative increase in the latter group, there was no statistically significant difference (p = 1.00).\n Table III Comparison of patient clinical prognosis between the Manual CPR and AutoPulse CPR groups Parameter Manual CPR AutoPulse CPR P -value ROSC 15 (23.4) 31 (44.9) 0.009 24-h survival 14 (21.9) 27 (39.1) 0.03 Hospital discharge 4 (6.3) 13 (18.8) 0.03 Table IV Comparison of CPC scores of patients between Manual CPR and AutoPulse CPR groups CPC score * Manual CPR ( n = 15) AutoPulse CPR ( n = 31) P -value 1 1 (6.7) 2 (6.5) 1.00 # 2 1 (6.7) 3 (9.7) 3 4 (26.7) 5 (16.1) 4 0 (0) 1 (3.2) 5 9 (60.0) 20 (64.5) Data are presented as number (percentage) unless otherwise specified; mean values were compared using the t test, while percentages were compared using the χ 2 or Fisher test. * CPC scoring of all ROSC patients # comparison of the ratio of patients scored as CPC 1 or CPC 2 in ROSC patients at hospital discharge between AutoPulse CPR and Manual CPR groups. With regard to complications, there was a lack of detailed data for comparison of injuries between groups. This is because an autopsy was refused by the families of all patients due to Chinese tradition. As required by patient condition, bedside X-ray examinations were performed upon admission to the emergency room; additionally, all deceased patients were subjected to bedside X-ray examination. 
The results showed that the incidence of rib fracture was 6.7% in the AutoPulse group (4/60), slightly higher than that in the Manual CPR group (4.8%, 3/63), but the difference was not statistically significant (p = 0.71). Due to the following condition of the illness, 6 patients (two in the Manual CPR group and four in the AutoPulse CPR group) underwent abdominal computed tomography (CT) examinations, in which no abdominal organ injuries were found.\n\nBODY.DISCUSSION:\nStudies have shown that the increased circulating blood flow in CPR is closely related to the success rate of resuscitation [2]. Animal studies in a porcine model of CA confirmed that the AutoPulse generated better blood circulation than manual chest compression [7, 8]. Similar conclusions were obtained in a number of human studies [9, 15, 16]. A retrospective study found that the ROSC rate of patients with CA was significantly higher in those treated with the AutoPulse than in those treated with manual CPR [10]. A recent multi-center prospective study compared the effects of the AutoPulse CPR versus manual CPR, which found that at one of the medical centers, the survival rate of patients in the AutoPulse group declined from 19.6% to 4% after the implementation of an updated experimental program with delayed use of the AutoPulse (p = 0.024); at the other medical centers, the 4-h survival rate at the primary endpoint was significantly higher in the AutoPulse CPR group than in the Manual CPR group (p = 0.008) [17]. The earlier studies confirm that the AutoPulse plays a positive role in CPR and that stable effective blood circulation is a key factor of CPR success. Our study found that the 1-h ROSB rate was significantly higher in the AutoPulse CPR group than in the Manual CPR group. Basic research has shown that the respiratory center is located in the medulla oblongata and that even a few minutes of ischemia and hypoxia injury can cause irreversible brain damage. Thus, the higher ROSB rate indicates that patients treated with the AutoPulse achieved better cerebral blood supply and that the AutoPulse is superior to manual CPR for improving cerebral blood flow. Additionally, we found that the etCO2 level was significantly higher in the AutoPulse CPR group compared to the Manual CPR group. Under normal physiological conditions, the etCO2 level is determined by CO2 production, pulmonary alveoli ventilation capability, and pulmonary blood flow. When CA occurs, CO2 is continuously generated in vivo. The major influencing factors of CO2 discharge into the lung are CO2 transmission speed and volume from the peripheral tissues (CO2-generating site) to the lung, which depend on the pulmonary blood flow. Thus, if the ventilation remains constant, then the etCO2 level changes reflect cardiac output changes as well as the effect of CPR. In recent years, several studies have reported that etCO2 can be used as a noninvasive monitoring indicator for CPR that is related to prognosis [18, 19]. The 2010 AHA CPR guidelines clarify that during CPR of patients with CA, etCO2 can be used as a physiological parameter to indicate the cardiac output and myocardial perfusion, which helps to optimize the quality of chest compression. Together our results indicate that the AutoPulse is superior to manual CPR for improving cardiac blood flow. 
Thus, we consider that better cardiac and cerebral blood circulation achieved by mechanical chest compression compared to manual chest compression is the main reason for the high success rate of CPR with the use of AutoPulse. Additionally, uninterrupted chest compression in a patient by an AutoPulse even during delivery may be another important factor contributing to the high ROSC rate [1–3]. The importance of continuous chest compression was emphasized by the 2010 AHA CPR guidelines. A recent study showed that in the helicopter emergency medical service, the ROSC rate was significantly higher in the AutoPulse CPR group than in the Manual CPR group, mainly because the former strategy achieved sustained chest compression during transportation [20]. A simulation study on a human body suggested that compared to manual chest compression, the AutoPulse achieves more sustained and stable chest compression during transportation [21]. From the occurrence of CA to emergency room admission, patients need to be carried for transportation at least twice. In this process, manual chest compression will inevitably be interrupted, whereas the AutoPulse can avoid this problem. Although AutoPulse preparation requires time (average 52 s in our study), it may bring more benefit by continuous chest compression during transportation than the potential risks that arise from CPR interruption during device preparation. Cerebral resuscitation remains a major problem in medical science [22, 23]. A large-scale clinical trial showed that only 27% of adults achieved good recovery of neurological function after CPR [24]. Therefore, the ability of the AutoPulse in improving cerebral resuscitation and neurological function recovery should be of concern. Despite the above statement, patients seem to have better cerebral blood supply in the AutoPulse CPR group than in the Manual CPR group. Animal experiments also demonstrated that compared to manual CPR, AutoPulse-assisted CPR more significantly improved the neurological prognosis in a porcine model of CA. However, it is disappointing that we observed no statistically significant difference in neurological prognosis between the AutoPulse CPR and Manual CPR groups. In terms of complications, there is evidence that compared to manual CPR [25], AutoPulse-assisted CPR increases the risk of organ damage [26–28]. In particular, Truhlar [29] proposed that caution should be taken in the use of AutoPulse in thrombolysis to avoid hemorrhage in the abdominal cavity. Because none of the patients' families consented to an autopsy, we have no detailed data for comparative analysis of relevant organ damage between the two groups. However, bedside X-ray films were taken in all patients upon admission to the emergency room, showing that the incidence of rib fractures was slightly lower in the AutoPulse CPR group than in the Manual CPR group. Thereafter, 6 patients were subjected to abdominal CT examination as required by their condition, and none had abdominal organ damage. Because not all patients underwent the above checks, we could not conclude that the AutoPulse is safer than manual CPR in terms of rib fracture and abdominal organ injury or that there is no difference between the risks of organ damage between the AutoPulse and manual CPR. It is worth noting that a recent study using a porcine model of ventricular fibrillation indicated that manual chest compression is advantageous compared to the AutoPulse for improving hemodynamic parameters [30]. 
A retrospective comparative study by Jennings [31] found that compared to conventional CPR, AutoPulse-assisted CPR improves the hospital admission rate but reduces the hospital discharge rate of patients with OHCA. Whether the AutoPulse can improve the prognosis of patients with OHCA remains controversial. Considering previous studies [12, 13] and the characteristics of the emergency medicine in the Shanghai area, we used a prospective, randomized, controlled method in this study. On the basis of the previous studies [11, 12] and the characteristics of emergency medicine, ROSC was chosen as the primary outcome measure, and 24-hour survival, discharge rate and discharge neurologic prognosis were chosen for secondary outcome measures. The parameters of etCO2, defibrillation, use of epinephrine, etc, were also analyzed in the results. Due to the uncertain time and places of cardiac arrest onset, it was difficult to achieve complete randomization for the research. Hallstrom et al. [12] assigned the patients to two groups randomly according to the emergency medical services (EMS) station with or without the AutoPulse. However, it was impossible for us to follow because our study was performed in a single emergency medicine center. So we finally equipped half of the ambulances with an AutoPulse randomly by using a computer. It may be the most relevant method to achieve randomization as far as possible. This prospective study has brought new findings at both the primary and secondary endpoints of clinical events. Data comparison showed that the ROSC, 24-h survival, and hospital discharge rates of patients with OHCA were significantly higher in the AutoPulse CPR group than in the Manual CPR group. However, in the final neurological prognosis, there was no statistically significant advantage in patients with OHCA within the AutoPulse CPR group. Our work represents the first prospective controlled study regarding the clinical use of the new chest compression device AutoPulse for CPR of patients with OHCA in China. The results are of guiding value for application of this device worldwide, especially in China's Emergency System. The major limitations of our study are as follows: 1) the small sample size may lead to a biased conclusion; and 2) it is difficult to achieve complete randomization based on the grouping method for the ambulance visiting mode because of the particularity of pre-hospital emergency treatment. Due to our present finding that the AutoPulse can improve the ROSC and 24-h survival rates of patients with OHCA, a multi-center prospective study has been designed for in-depth research in China. In conclusion, in Shanghai's Emergency Medical System in the northern district, AutoPulse CPR is superior to manual CPR for increasing resuscitation success and survival rates in patients with OHCA. This device also improves cardiac and cerebral blood circulation in patients but has no significant positive effect on their neurological prognosis. 
Thus, we conclude that the AutoPulse is worthy of clinical use but that its application prospects require further comprehensive evaluation.\n\n**Question:** Compared to manual chest compression for cardiopulmonary resuscitation (CPR) of out-of-hospital cardiac arrest (OHCA) what was the result of AutoPulse automated chest compression device on The proportion of patients with OHCA and a cerebral performance category score of 1 or 2 points at hospital discharge ?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
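The second row (Label: "no significant difference") follows the same pattern, so agreement with the Label column can be checked in bulk once answers are normalized. The snippet below is a rough sketch only: it assumes the dataset has been exported to a hypothetical JSON Lines file whose rows carry `completion` and `Label` fields, and it reuses `extract_answer` and `normalize_answer` from the sketch after the first row above; the file name is a placeholder.

```python
import json

def label_agreement(path):
    """Fraction of rows whose stored completion matches the Label column
    after normalization (see extract_answer/normalize_answer above)."""
    total = correct = 0
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            record = json.loads(line)
            predicted = normalize_answer(extract_answer(record["completion"]))
            if predicted == normalize_answer(record["Label"]):
                correct += 1
            total += 1
    return correct / total if total else 0.0

# Example call (file name is illustrative, not part of the dataset):
# print(label_agreement("ico_conclusions.jsonl"))
```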
970
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: The implementation of a community-based aerobic walking program for mild to moderate knee osteoarthritis (OA): a knowledge translation (KT) randomized controlled trial (RCT): Part I: The Uptake of the Ottawa Panel clinical practice guidelines (CPGs)\n\n ABSTRACT.BACKGROUND:\nThe implementation of evidence based clinical practice guidelines on self-management interventions to patients with chronic diseases is a complex process. A multifaceted strategy may offer an effective knowledge translation (KT) intervention to promote knowledge uptake and improve adherence in an effective walking program based on the Ottawa Panel Evidence Based Clinical Practice Guidelines among individuals with moderate osteoarthritis (OA).\n\nABSTRACT.METHODS:\nA single-blind, randomized control trial was conducted. Patients with mild to moderate (OA) of the knee (n=222) were randomized to one of three KT groups: 1) Walking and Behavioural intervention (WB) (18 males, 57 females) which included the supervised community-based aerobic walking program combined with a behavioural intervention and an educational pamphlet on the benefits of walking for OA; 2) Walking intervention (W) (24 males, 57 females) wherein participants only received the supervised community-based aerobic walking program intervention and the educational pamphlet; 3) Self-directed control (C) (32 males, 52 females) wherein participants only received the educational pamphlet. One-way analyses of variance were used to test for differences in quality of life, adherence, confidence, and clinical outcomes among the study groups at each 3 month assessment during the 12-month intervention period and 6-month follow-up period.\n\nABSTRACT.RESULTS:\nShort-term program adherence was greater in WB compared to C (p<0.012) after 3 months. No statistical significance (p> 0.05) was observed for long-term adherence (6 to 12 months), and total adherence between the three groups. The three knowledge translation strategies demonstrated equivalent long-term results for the implementation of a walking program for older individuals with moderate OA. Lower dropout rates as well as higher retention rates were observed for WB at 12 and 18 months.\n\nABSTRACT.CONCLUSION:\nThe additional knowledge translation behavioural component facilitated the implementation of clinical practice guidelines on walking over a short-term period. More studies are needed to improve the long-term walking adherence or longer guidelines uptake on walking among participants with OA. Particular attention should be taken into account related to patient's characteristic and preference. 
OA can be managed through the implementation of a walking program based on clinical practice guidelines in existing community-based walking clubs as well as at home with the minimal support of an exercise therapist or a trained volunteer.\n\nABSTRACT.TRIAL REGISTRATION:\nCurrent Controlled Trials IRSCTNO9193542\n\nBODY.BACKGROUND:\nRising healthcare costs, limited resources, and the aging population have created new and growing challenges in the management of osteoarthritis (OA). The challenges associated with the management of OA are numerous as the prevalence of the population continues to increase while healthcare resources remain limited \n[1-7]. It is therefore necessary to determine the most effective methods of integrating research evidence in order to optimize health outcomes. The purpose of this randomized controlled trial (RCT) was to compare 1) improvements in quality of life (QoL) and clinical outcomes (such as pain, mobility and endurance); 2) adherence rates; and 3) confidence and self-efficacy after the implementation of a 12-month supervised community-based aerobic walking program (SCAWP) based on the Ottawa Panel clinical practice guidelines (CPG) among three knowledge translation (KT) intervention arms. QoL, confidence, and self-efficacy were compared at 12 months (end of treatment) and at 18 months (6 months post-intervention). Adherence was compared during the intervention period at 3, 6, 9 and 12 months. The first part of this manuscript introduces each KT intervention and demonstrates the impact of knowledge (CPG) uptake of each by comparing outcomes influenced by the KT intervention, such as adherence and behaviour change (confidence and self-efficacy). Knowledge application of each KT intervention is explained through the use of a theoretical framework: the Knowledge-To-Action Cycle (KTAC) \n[8,9]. The second part of this manuscript focuses on outcome evaluation, a specific phase in the KTAC framework used to guide this study. Part II demonstrates the effect of each intervention involving a walking program on outcomes which were influenced by the SCAWP, including QoL (primary outcome) and other clinical outcomes such as pain, mobility, and endurance. These outcomes are exhibited in part II. Therefore, both parts of this manuscript were split according to the \"Evaluate outcomes\" phase of the KTAC framework. Part I focused on KT outcomes, which measured the success of CPG uptake through participants' adherence and behavioural change, while part II focused on clinical outcomes, which measured the positive effect of SCAWP due to the indirect CPG uptake/implementation/adoption by individuals with OA of the knee.\n\nBODY.BACKGROUND.KNOWLEDGE TRANSLATION:\nA major issue in health research today is finding effective and efficient ways to exchange knowledge between researchers, clinicians and the general public. Potential benefits to health which could be accrued through the application of research findings are not realized due to challenges in research uptake and knowledge translation \n[10]. KT is defined by Estabrooks et al. 
\n[11] as \"the process by which specific research-based knowledge (science) is implemented in practice.\" The available evidence suggests that with an effective KT strategy, uptake of evidence-based clinical practice guidelines (EBCPGs) can be effective in improving patient health outcomes \n[12].\n\nBODY.BACKGROUND.PHYSICAL ACTIVITY AND OSTEOARTHRITIS:\nThe promotion of physical activity (PA) is a priority for health organizations serving the general population \n[13,14] and is highly recommended for subgroups affected by chronic diseases \n[15-19]. Community-based PA combined with a behavioural modification and self-management interventions can reduce the risk of disability and negative consequences of inactivity related to OA \n[20,21]. The challenge is to develop PA programs that will encourage OA patients to initiate and maintain improvements in exercise behaviour over a long-term period \n[22]. The available evidence suggests that with an effective KT strategy, uptake of evidence-based clinical practice guidelines (CPGs) can be effective in improving patient health outcomes \n[12].\n\nBODY.BACKGROUND.SELECTING AN EFFECTIVE KT STRATEGY:\nMultifaceted interventions combining more than one KT strategy have tended to be more effective than using a single KT strategy alone \n[23,24] and have shown to have the greatest impact on CPG adherence and PA performance \n[25]. Behavioural interventions have been used for other chronic conditions to improve long-term adherence and maintenance of PA regimens with varying success. The efficacy of different behavioural interventions including patient education, health counselling, goal setting, and social/peer support, delivered separately and or in combination, have been examined in the management of arthritis \n[26-31].\n\nBODY.METHODS:\nThe following methodology is in full agreement with the Consolidated Standards of Reporting Trials (CONSORT) 2010 Statement criteria for reporting RCTs \n[32].\n\nBODY.METHODS.DESIGN:\nThis single blind RCT, funded by the Canadian institutes of Health Research, used a parallel group design (1:1:1). This community-based study was approved by the University of Ottawa Research Ethics Board and the City of Ottawa Public Health Research Ethics Board. This study implemented one of the Ottawa Panel CPG recommendations related to a supervised community-based aerobic walking programs (SCAWPs) among individuals with mild to moderate OA of the knee \n[33-35]. In order to facilitate an understanding of the KT process, Graham et al. developed a conceptual framework entitled \"the Knowledge-To-Action Cycle\" (Figure \n1) which provides an approach which combines commonalities from various planned-action theories \n[8,9]. This framework was used to guide the development of the three KT strategies used in this RCT (Additional file \n1). The framework demonstrates the dynamic process of knowledge creation and application \n[9]. Knowledge creation consists of three phases: knowledge inquiry, knowledge synthesis, and knowledge tools/products. The knowledge creation cycle of the framework demonstrates how knowledge is refined through each phase to provide useful information for end users. The action cycle of the framework consists of seven phases which may occur sequentially or simultaneously: 1) Identify the Knowledge-To-Action Cycle gaps; 2) Adapt Knowledge to local context; 3) Assess barriers to Knowledge Use; 4) Select, Tailor, Implement Interventions; 5) Monitor Knowledge Use; 6) Evaluate outcomes; and 7) Sustained Knowledge Use. 
For part I, the emphasis was placed on the \"Evaluate outcomes (KT outcomes)\" phase as participants' knowledge uptake was measured as adherence to the effective walking program for OA and attitude/behaviour change (self-efficacy and confidence) after participating in the SCAWP. Figure 1Knowledge-To-Action Cycle. This figure illustrates the Knowledge-To-Action Cycle Framework \n[8]. This figure is used with permission: Lost in knowledge translation: Time for a map. Graham ID, et al. Journal of Continuing Education in the Health Professions, 26(1). Copyright © 2006. The Alliance for Continuing Medical Education, The Society for Academic Continuing Medical Education, and The Association for Hospital Medical Education. The hypothesis was that the additional behavioural approach of the multifaceted KT intervention would: 1) increase short and long term adherence to the SCAWP; 2) reduce the drop-out rate; 3) influence behaviour change by improving self-efficacy, confidence, QoL and other clinical outcomes.\n\nBODY.METHODS.SAMPLE AND RECRUITMENT:\nA total of 222 adults with knee OA were recruited. Participants were eligible to participate in the study if he/she 1) had a confirmed diagnosis with mild to moderate unilateral or bilateral OA according to the American College of Rheumatology clinical and radiographic/magnetic resonance imagery criteria; 2) reported pain for at least 3 months; 3) expected his/her medication to change during the study period; 4) demonstrated an ability to ambulate for a minimum of 20 minutes, at their own pace with minimal reports of pain (≤ 3 out of 10 on a visual analogue pain rating scale) \n[36]; 4) were able to be treated as an out-patient; and 5) were available three times a week over a period of 12 months. Potential study subjects were excluded if they had: 1) Participated in regular physical or aerobic sports ≥ 2 times per week for more than 20 minutes per session during the previous 6 months; 2) severe OA of the knee or other weight bearing joints of the lower extremity; 3) no written consent from their physician to participate in the study; 4) pain at rest or at night; 5) received rehabilitation treatment, corticosteroids injection, or any other pain-related treatment besides medication for arthritis within the last 12 months; 6) uncontrolled hypertension (Systolic blood pressure >160 mm Hg confirmed by the screening initial VO2 max test at the Ottawa Heart Institute) 7) other illnesses, such as rheumatoid arthritis (judged by the patient or study physician to make participation in this study inadvisable); 8) significant cognitive deficit resulting in an inability to understand or comply with instructions; 9) surgery planned in the next year; 10) Intention to move away from Ottawa region in the next year; 11) an inability to communicate in English or French; or 12) an unwillingness to sign informed consent. The study coordinator assessed the inclusion/exclusion criteria of all potential participants by telephone. 
At the first visit, written informed consent was obtained if the patient met all inclusion criteria.\n\nBODY.METHODS.INTERVENTION:\nParticipants were randomly assigned to one of the three KT intervention groups using central randomization \n[37] and computer generated numbers \n[38]: 1) Walking and Behavioural intervention (WB) (18 males, 57 females) which included the SCAWP with a behavioural intervention and an educational pamphlet on the benefits of walking for OA; 2) Walking intervention (W) (24 males, 57 females) wherein participants only received the SCAWP intervention and the educational pamphlet; 3) Self-directed control (C) (32 males, 52 females) wherein participants only received the educational pamphlet. All 3 groups were provided with pedometers and log books to be completed to measure physical performance (walking in minutes) and additional PA aside from the walking sessions (Table \n1). The KT strategies were implemented over a 12-month duration and participants were assessed for additional 6 months (15 and 18-month follow-up assessments). Table 1 Summary of the KT strategies used KT strategies in each group WB or WalkingClub+ (supervised walking program + behavioural approach at the community-based Walking Club) W or WalkingClub (supervised walking program only at the community-based Walking Club) C or Self-directed Home-based or community based different than W and WB (unsupervised/self-directed Walking program only) EBP walking program (Ottawa Panel, 2005/2012) refs: \n[ 33 - 35 ] EBP walking program (Ottawa Panel, 2005/2012) refs: \n[ 33 - 35 ] Walking program (general info from pamphlet on walking and OA) Walking Club + effect (supervised program, team, daily monitored vital measures: BP, HR, #steps,) Walking Club effect (supervised program, team, daily monitored vital measures: BP, HR, #steps,) N/A Pedometer as measurement tool, but becomes a KT strategy Pedometer as measurement tool, but becomes a KT strategy Pedometer as measurement tool, but becomes a KT strategy Log book including 7-day PAR as measurement tool, but becomes a KT strategy Log book including 7-day PAR as measurement tool, but becomes a KT strategy Log book including 7-day PAR as measurement tool, but becomes a KT strategy $ compensation each walking session and at evaluation session $ compensation each walking session and at evaluation session $ compensation each logbook fulfilled and at evaluation session Study affiliation/participation (feel committed) Study affiliation/participation (feel committed) Study affiliation/participation (feel committed) Behavioural intervention (Baycrest PACE-ex: patient education + goal settings; PA counselling, telephone support) N/A N/A Following the successful randomization process, wherein no statistical difference was found between the three groups (with the exception of the \"physical role\" category of the SF-36: part II) (Tables \n2, \n3, \n4, \n5), research staff and evaluators were blinded to the treatment allocation. An independent evaluator was blinded to assess outcome measures. Due to the nature of the physical intervention, it was not practical to blind the study participants and PA specialists supervising the aerobic walking program \n[39]. 
Table 2 Subject demographics and baseline characteristics Characteristic Walking (n=79) Walking and Behavioural (n=69) Self-directed Control (n=74) Total Mean age (SD), yrs 63.9 (10.3) 63.9 (8.2) 62.3(6.8) 63.4 (8.6) Missing Data 0 0 0 0 Men/women, (%) 24 (30.4)/55 (69.9) 18 (26.1)/51 (73.9) 27 (36.5)/47 (63.5) 69 (31.1)/153 (68.9) Missing Data 0 0 0 0 Affected knee, n (%)         Right 33 (41.8) 31 (44.9) 25 (33.8) 89 (40.1) Left 31(39.2) 23 (33.3) 28 (37.8) 82 (36.9) Both side 15 (19.0) 15 (28.4) 21 (28.4) 51 (23.0) Missing Data 0 0 0 0 Mean duration of OA (SD), yrs 9.54 (8.09) 11.3 (9.7) 10.0 (9.9) 10.3 (9.26) Missing Data 0 0 0 0 Mean weight (SD), kg 80.7 (18.5) 83.1 (15.4) 83.0(15.8) 82.2 (16.6) Missing Data 0 1 0 0 Mean BMI (SD), kg/m² 29.4 (5.4) 30.3 (5.6) 29.9(5.3) 29.8 (5.4) Missing Data 0 2 4 6 Walking aid, n (%)         Yes 10 (12.7) 10 (14.5) 9 (12.2) 29 (13.1) No 69 (87.3) 58 (84.1) 64 (86.5) 191 (86.0) Missing Data 0 1 1 2 Racial background, n (%)         White 69 (87.3) 60 (87.0) 68 (91.9) 197 (88.7) Black 1 (1.3) 3 (4.3) 1 (1.4) 5 (2.3) Hispanic 2 (2.5) 2 (2.9) 4 (5.4) 8 (3.6) Asian or Pacific Islander 5 (6.3) 4 (5.8) 1 (1.4) 10 (4.5) American Indian or Alaskan native 1 (1.3) 0 (0) 0 (0) 1 (0.5) Other 1 (1.3) 0 (0) 0 (0) 1 (0.5) Missing Data 0 0 0 0 Marital status, n (%)         Married 46 (58.2) 36 (52.2) 44 (59.5) 126 (56.8) Separated 2 (2.5) 1 (1.4) 1 (1.4) 4 (1.8) Divorced 9 (11.4) 17 (24.6) 8 (10.8) 34 (15.3) Widowed 17 (21.5) 11 (15.9) 9 (12.2) 37 (16.7) Never Married 5 (6.3) 4 (5.8) 12 (16.2) 21 (9.5) Missing Data 0 0 0 0 Level of education, n (%)         Less than 7 yrs of school 2 (2.5) 1 (1.4) 1 (1.4) 4 (1.8) Grades 7 through 9 5 (6.3) 0 (0) 0 (0) 5 (2.3) Grades 10 through 11 7 (8.9) 4 (5.8) 5 (6.8) 16 (7.2) High school graduate 13 (16.5) 16 (23.2) 8 (10.8) 37 (16.7) 1 to 4 yrs of college 13 (16.5) 9 (13.0) 22 (29.7) 44 (19.8) College graduate 25 (31.6) 21 (30.4) 19 (25.7) 65 (29.3) Professional or Graduate school 14 (17.7) 18 (26.1) 19 (25.7) 51 (23.0) Missing Data 0 0 0 0 Yrs: years, OA: osteoarthritis, kg: kilograms, m: meters, %: validity percent. 
Table 3 Subject’s Medication at Baseline Medication Walking (W) only (n=79) Walking and Behavioural (WB) (n=69) Self-directed (C) (n=74) Oral hypoglycaemic agents         Total 2 (3) 5 (7) 9 (12)   Biguanide Class 2 (3) 4 (6) 9 (12)   Sulfonylureas Class 0 (0) 0 (0) 2 (3)   Thiazolidinedione 0 (0) 2 (3) 2 (3) Antihypertensive Agents         Total 22 (28) 15 (22) 15 (22)   Calcium channel blockers 6(8) 0(0) 7(9)   Antiotensin-converting enzyme 3 (4) 7 (10) 7 (9)   Angiotensin II receptor blockers 6 (8) 6 (9) 3 (4)   Beta-blocker 4 (5) 3 (4) 4 (5)   Alpha-blocker 3 (4) 0 (0) 0 (0)   Diuretic 5 (6) 8 (12) 5 (7) Antiarrhythmic 1 (1) 0 (0) 0 (0) Anti-anginal 1 (1) 0 (0) 0 (0) Anti-platelet agent 1 (1) 1 (1) 1 (1) Anticoagulant 0 (0) 2 (3) 0 (0) Phosphodiesterase type 5 inhibitors 0 (0) 1 (1) 0 (0) Statins 8 (10) 12 (17) 10 (14) Hormone         Total 11 (14) 10 (14) 11 (15)   Thyroid 5 (6) 5 (7) 8 (11)   Insulin 0 (0) 3 (4) 1 (1)   Corticosteroid 4 (5) 2 (3) 3 (4)   Progesterone 1 (1) 0 (0) 1 (1)   Androgen 1 (1) 0 (0) 0 (0)   Oestrogen 1 (1) 0 (0) 0 (0) Antithyroid 0 (0) 1 (1) 0(0) SERMs (Selective estrogen receptor modifiers) 1 (1) 0 (0) 0 (0) Beta 2-adrenergic receptor agonist (bronchodilator) 2(3) 1 (1) 3 (4) NSAIDs (Non-steroid anti-inflammatory drugs) 26 (33) 21 (30) 18 (24) COX-2 selective inhibitor 3 (4) 4 (6) 1 (1) NSAIDS       Betahistine 0 (0) 1 (1) 0 (0) DMARDs (Disease modifying antirheumatic drugs) 0 (0) 1 (1) 0 (0) Histamine antagonist 0 (0) 2 (3) 0 (0) Bisphosphonate 6 (8) 4 (6) 4 (5) Antibiotics 2 (3) 1 (1) 4 (5) Analgesics 1 (1) 1 (1) 3 (4) Opiates 0 (0) 0 (0) 2 (3) Hypnotic 4 (5) 3 (4) 4 (5) Antidepressant 7 (9) 5 (7) 7 (9) Psychostimulant 0 (0) 1 (1) 0 (0) Antipsychotic 0 (0) 1 (1) 0 (0) Anti-manic 0 (0) 1 (1) 0 (0) Anti-convulsant 1 (1) 1 (1) 0 (0) Protein pump inhibitor (PPI) 9 (11) 5 (6) 6 (8) Enzyme inhibitor 0 (0) 1 (1) 2 (3) Muscle relaxant 0 (0) 1 (1) 1 (1) Antispasmodic 0 (0) 1 (1) 0 (0) Anti-cholinergic 1 (1) 1 (1) 0 (0) Antineoplastic 2 (3) 1 (1) 0 (0) Antimetabolite 1 (1) 1 (1) 1 (1) Antifolate 0 (0) 1 (1) 1 (1) Latanoprost 0 (0) 1 (1) 0 (0) Viscosupplementation 0 (0) 1 (1) 0 (0) Xanthine 1 (1) 0 (0) 0 (0) Antifungals 0 (0) 0 (0) 1 (1) Antimalarial 0 (0) 1 (1) 1 (1) Nitrate 0 (0) 0 (0) 2 (3) Laxative 0 (0) 0 (0) 2 (3) Supplements 22 (28) 22 (28) 18 (24) Homeopathic Medication 1 (1) 1 (1) 1 (1) None 11 (14) 10 (13) 9 (12) Not Specific 21 (27) 21 (27) 20 (27) Missing Data         Total 4 (5) 8 (10) 17 (23)   No Evaluation* 2 (3) 4 (5) 7 (9)   No File** 2 (3) 4 (6) 10 (14) This table presents a list of medications which participants declared to be taking at baseline. Data is presented as n (%). Table 4 Summary of adherence based on attendance marked in trainers’ manuals and individual walkers’ logbooks   Walking N Mean±SD Walking & Behavioural N Mean±SD Self-directed (C) N Mean±SD Walking vs. Self-directed t -test P value Walking & Behavioural Vs. Self-directed t -test P value Walking & Behavioural Vs. 
Walking t -test P value 0-3 months 79 69 73 *(0.043) *(0.012) (0.514)   0.770±0.299 0.802±0.290 0.652±0.403       3-6 months 79 69 73 (0.242) (0.159) (0.774)   0.617±0.410 0.636±0.390 0.535±0.459       6-9 months 79 69 73 (0.421) (0.937) (0.363)   0.471±0.418 0.534±0.425 0.528±0.463       9-12 months 79 69 73 (0.549) (0.551) (0.989)   0.446±0.441 0.445±0.433 0.490±0.462       Total Adherence 79 69 73 (0.690) (0.413) (0.619)   0.576±0.346 0.604±0.342 0.551±0.420       Table 5 Summary of self-efficacy, measured with Stanford questionnaire on chronic disease for three study arms (continued)   Baseline 12 months 18 months   W WB C W WB C W WB C   N N N N N N N N N Variables Mean±SD Mean±SD Mean±SD Mean±SD Mean±SD Mean±SD Mean±SD Mean±SD Mean±SD         W.Vs. C(P) W.Vs. C(P) W.Vs. W(P) W.Vs. C(P) W.Vs. C(P) W.Vs. W(P) Coping 79 69 73 44 44 40 44 41 36 With 1.069±794 1.126±0.775 1.013±0.946 0.864±0.771 1.239±0.730 1.192±0.835 1.064±0.952 1.388±0.856 1.342±1.127 Symptoms       (0.091) (0.858) (0.057) (0.190) (0.793) (0.286) Confidence 79 69 73 44 44 39 43 41 36 About 7.406±1.719 7.827±1.488 7.858±1.512 7.826±1.551 7.682±1.785 8.064±1.575 7.690±1.920 7.546±1.848 8.015±1.476 Doing Things       (0.433) (0.297) (0.060) (0.235) (0.422) *(0.041) (Higher is better) W: Walking only group; WB: Walking and Behavioural Group; C: Self-directed group (unsupervised/self-directed); N: number of subjects in each comparative group; SD: standard deviation; vs: versus; data is presented as mean (standard deviation); p: p-value (statistical significance); * Statistically significant. \n\nBODY.METHODS.INTERVENTION.SUPERVISED WALKING PROGRAMS (WB & W GROUPS):\nThe SCAWPs took place at two walking club sites in Ottawa, Ontario, Canada and one in Gatineau, Québec, Canada. Participants took part in three weekly walking sessions over a 12-month period. Every walking session began with a 10-minute warm-up, consisting of light aerobic exercises, before engaging in the 45-minute aerobic walking phase. The walking sessions ended with a 10-minute cool-down consisting of light aerobic exercises and stretching. The target intensity of the walking phase was approximately 50% to 70% of the subjects' pre-determined maximum heart rate as recommended in The Ottawa Panel guidelines \n[33-35]. The SCAWP was divided into two stages: 1) A \"progressive aerobic phase\" wherein the duration and heart rate intensity progressively increased over time and 2) a \"maintenance aerobic phase\" wherein the duration and heart rate intensity remained constant for the remainder of the walking program (Additional file \n2). The selected dosage, frequency, intensity, and progression of the walking interventions were based on existing protocols proven effective for OA, described in several RCTs involving individuals with OA \n[33-35] (Additional file \n2). To avoid contamination between the two walking interventions, participants in the W group were instructed to walk in the mornings, while the WB group walked in the afternoons at the main walking site. In order to meet the prescribed target heart rate, heart rate monitors were provided to each participant prior to his/her walking session. As a safety precaution, participants who experienced prolonged pain lasting more than two hours within 24 hours after a walking session were asked to temporarily reduce their level of exercise until he/she felt comfortable enough to resume the appropriate duration, intensity and frequency of the walking program. 
Following randomization, the PA specialist met each subject in the walking groups to explain the SCAWP and its progression (Additional file \n2) and was present for a minimum of three weekly scheduled sessions over one year to supervise the subjects. The PA specialist was responsible for monitoring and recording attendance, blood pressure, heart rate (during and after the walking session), time duration, and number of steps (pedometer) at each walking session. Monetary compensation was given to each participant for attending each walking session.\n\nBODY.METHODS.INTERVENTION.BEHAVIOURAL INTERVENTION (WB GROUP):\nThe behavioural intervention was part of the multifaceted KT intervention (Table \n1) and was implemented using the adapted Program for Arthritis Control through Education and Exercise (PACEex) program \n[40]. The behavioural intervention was integrated into the PACEex program and consisted of the following components: (1) short- and long-term goal setting during PACEex classes; (2) an educational component, delivered by a trained instructor, involving consisting of instructional sessions about the benefits of PA, specifically walking; (3) monthly face-to-face counselling wherein participants received moral support/encouragement to adherence with PA. Potential barriers to program adherence were identified and self-management strategies were reviewed to overcome those barriers; (4) goal setting and telephone counselling were provided as an additional source of social support until the end of the supervised phase. As with the face-to-face counselling, barriers were identified and strategies were shared in an effort to promote the long-term maintenance of PA. In summary, the behavioural intervention consisted of twenty 2-hour group sessions discussing short-term goal setting and education of arthritis-related topics over a duration of 20 consecutive weeks. Individual long-term goal setting was discussed at the beginning of the PACEex program and was followed by monthly face-to-face meetings throughout the first 6 months of the program. The last six months of the 12-month supervised phase consisted of participants receiving counselling via telephone discussing long-term goals and barriers/facilitators to adhere to the walking program.\n\nBODY.METHODS.INTERVENTION.THE SELF-DIRECTED CONTROL (C GROUP):\nParticipants were invited to consult an educational pamphlet on walking and OA. One introductory session was provided to the participant wherein they were provided an educational pamphlet describing the benefits of walking for OA, a pedometer to monitor PA, and a logbook to record activity level and adherence while partaking in the self-directed walking program. Participants in this group received monetary compensation for the completion his/her log books. The self-directed (C) group did not have any contact with participants in the two other groups, avoiding potential contamination.\n\nBODY.METHODS.DATA COLLECTION:\nParticipants were assessed by an independent evaluator at baseline and at each 3 month interval (months 3, 6, 9, 12) during the intervention period. Participants were then assessed at 3 and 6 months post-intervention during the follow-up period (15 and 18-months). 
Participants were asked to complete a collection of validated questionnaires as well as perform physical tests at each assessment.\n\nBODY.RESULTS.KNOWLEDGE (CPG) UPTAKE OUTCOME MEASURES (KT OUTCOMES).ADHERENCE AND BEHAVIOUR CHANGE:\nSimilar to other studies \n[29,41,42], participant adherence was assessed as the number of attended walking sessions divided by the number of prescribed sessions (3 sessions per week). Among the W and WB groups, the number of attended walking sessions throughout the 12-month duration was recorded by the exercise therapist on site as well as through the use of participants' completed logbooks which provided information on the amount of performed PA during the 12-month intervention period. The C group's adherence was only assessed by the completed log books. Behaviour change was based on the concept of self-efficacy and was measured using the Stanford scale. In regards to the research question, behaviour change results were only reported at 12 and 18-months.\n\nBODY.RESULTS.KNOWLEDGE (CPG) UPTAKE OUTCOME MEASURES (KT OUTCOMES).ANALYSIS:\nThe analysis was conducted on an intention-to-treat basis. Descriptive statistics including mean, standard deviation and frequencies were used to summarize the study groups at baseline. A repeated measure mixed model was used to assess the change in adherence from baseline to end of treatment (12 months) among three study groups. Behavioural change outcomes were assessed from baseline to end of treatment (12 months) and at 6-month follow-up post-intervention (18 months). The model included variables such as intervention group, study month, and an interaction term between intervention group and study month. Missing data was assumed to be missing at random (MAR) in order to include incomplete data. Pairwise differences comparing each group to one another (W vs. C, W vs. WB, WB vs. C) were assessed. The repeated measure mixed model was used to compare the change of adherence from baseline to the 18-month follow-up. Statistical analyses were performed using SAS (version 9.2, SAS Institute Inc., Cary, North Carolina), and statistical significance was defined as p<0.05.\n\nBODY.RESULTS:\nBetween January 2007 and December 2008, 629 study participants were assessed for eligibility. Figure \n2 exhibits the flow of participants from recruitment to follow-up. The most common reasons for exclusion were: 1) time commitment (26.5%); 2) first diagnosed with OA by physician or X-Ray but later contradicted by MRI test result (18.1%); 3) loss of interest (14.2%); 4) severe OA of the weight bearing joint of LE (11.8%); 5) no medical clearance after VO2 max test (5.1%); 6) unable to contact (4.9%); 7) too active (2.9%); 8) knee replacement (2.7%); 9) age related (2.5%); 10) travelling too much (1.9%); 11) living too far (1.2%). Follow-up for the last recruited participant was completed in May 2010. Figure 2: Study flow diagram. This figure provides a flow diagram of data collection from baseline to 18-months follow-up. The groups were similar in age, sex, ethnicity, OA duration and medication use (Tables \n2 & 3). Two hundred and twenty-two subjects were included in the study. Women made up 153 of the 222 subjects. The mean age (±S.D.) was 63.4 (±8.6) years, the mean disease duration (±S.D.) was 10.3 (±9.3) years, the highest level of completed education was \"College graduate\" with 65 participants (29.3%) and \"Professional or graduate school\" with 51 participants (23.0%). 
There was no statistical difference between the three comparative groups at baseline for all the demographic variables.\n\nBODY.RESULTS.BLINDING:\nThe effectiveness of blinding the evaluator to treatment allocation was assessed through a questionnaire where she was asked to identify which group each participant belonged to. The blinding rate of the evaluator was high (98%).\n\nBODY.RESULTS.ADHERENCE AND ATTRITION:\nFrom baseline to 12 months, the adherence rates were expressed as percentages of adherence among walkers who did not drop out and were reported as means (+/−standard deviation) for each time period between groups (Table \n4). Statistically significant results for short-term adherence rates favoured participants in the WB group (80.2%) compared to the self-directed (C) group (65.2%) (p<0.012) after 3 months. For long term adherence (6 to 12 months), the WB group demonstrated superior adherence rates throughout each period compared to the other two groups, but results did not reach statistical significance. No statistical significance (p> 0.05) was observed for the total adherence rates between WB vs. W, W vs. C, and WB vs. C. During the 12-month intervention period, the dropout rates were 43.1% for the W group, 40.6% for the WB group and 49.3% for the self-directed (C) group. From baseline to 18 months, the dropout rates were 44.3% for the W group; 40.6% for the WB group and 52.1% for the self-directed (C) group (Figure \n2 and Additional files \n3 &4). The dropout rates were lower for the WB group at 12-months (36.2%) and 18-months (39.1%) compared to the W group (40.5% and 44.3%) and the self-directed (C) group (40.5% and 51.3%). The W and self-directed (C) groups had the highest dropout rates (40.5%) at 12 months while the self-directed (C) group had the highest at 18 months (52.1%).\n\nBODY.RESULTS.BEHAVIOUR CHANGE:\nAfter the 12 month intervention phase, no differences were found between the 3 comparative groups for any of the items on the Stanford scale (Table \n5). Moreover, variables related to coping with symptoms and confidence were higher among the WB group compared to the other two groups. At 18-months, statistically significant differences in scores between the WB and W groups revealed that the W group demonstrated an improved mean for the \"confidence about doing things\" variable (p=0.041). The self-directed group (C) demonstrated the highest mean \"confidence about doing things\" score 8.015± 1.476 (p=0.040) as it increased by 0.157 (p=0.048) when compared to baseline. The self-directed group (C) demonstrated the highest mean score 8.015± 1.476 (p=0.040).\n\nBODY.DISCUSSION.THE IMPACT OF THE MULTIFACETED KT INTERVENTIONS ON KNOWLEDGE (CPG) UPTAKE.ADHERENCE AND ATTRITION:\nThe additional behavioural approach of the multifaceted KT intervention (WB group) improved adherence to the SCAWP when compared to the other two groups over a short-term period. In addition, the WB group demonstrated reduced dropout rates compared to the W and self-directed (C) groups. Over a long-term period, the three groups were equivalent in improving behaviour change while adopting the Ottawa Panel recommendations on walking \n[33-35]. Adherence rates were higher among the WB group for each 3-month period compared to the other two groups. Improved adherence rates over the short-term period can be explained by the fact the PACEex program used in the behavioural intervention lasted for 20 consecutive weeks at the beginning of the study. 
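As a rough consistency check on the short-term adherence comparison reported above (WB 0.802±0.290, n=69 versus self-directed 0.652±0.403, n=73 over months 0-3), the summary statistics in Table 4 can be plugged into a two-sample t-test, which is how that table labels its P values. This is only a sanity check from summary data, not a re-run of the authors' repeated-measures mixed model, but it lands near the reported value of 0.012.

```python
from scipy.stats import ttest_ind_from_stats

# Summary statistics taken from Table 4 (0-3 month adherence proportions).
wb_mean, wb_sd, wb_n = 0.802, 0.290, 69   # Walking and Behavioural (WB)
c_mean, c_sd, c_n = 0.652, 0.403, 73      # Self-directed (C)

# Welch's two-sample t-test computed from summary statistics only.
t_stat, p_value = ttest_ind_from_stats(wb_mean, wb_sd, wb_n,
                                        c_mean, c_sd, c_n,
                                        equal_var=False)
print(f"t = {t_stat:.2f}, two-sided p = {p_value:.3f}")  # expect a p value close to 0.012
```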
The PACEex program \n[40] used in this study, allowed our participants to set general goals, which may have motivated participants to adhere to the walking program. Similar to a previous study \n[43], another reason may be that participants in the W and WB group lost motivation after the supervised phase, no longer having regular professional support from the PA specialist. The drop-out rate results are comparable to other long-term studies which have used an aerobic PA intervention with a behavioural intervention component \n[38,44-54]. Higher adherence rates were demonstrated at 2 to 4 months (85% - 90%) and at 10 to 18 months (50% - 90%). An initial 6-month center-based exercise program enhanced retention over short and the long-term periods while promoting short-term adherence and energy expenditure \n[55]. The moderate exercise intervention retained more subjects, but had little influence on adherence over a long-term period \n[56]. Home-based PA programs, such as self-directed walking, can achieve improved adherence rates compared to facility-based programs over a long period \n[55].\n\nBODY.DISCUSSION.BEHAVIOUR CHANGE:\nThere were no statistically significant results among the three groups at 12 months. These results concur with a similar study \n[53] and are normal in the context of a KT study which implemented an already proven effective walking program \n[33-35]. However, at 18-months, there were statistically significant results which favoured the self-directed group (C). One reason explaining why participants in the self-directed group (C) had higher confidence and self-efficacy scores may be due to the fact that since these individuals were not supervised, they may have been more independent, self-controlled, self-confident, and self-motivated. Participants in this group may have also had the opportunity, over the 12-month period, to develop confidence and self-efficacy skills compared to the two supervised groups during the unsupervised follow-up phase. The variable \"confidence about doing things\" decreased after 12 months for the WB group. Given the high amount of PA demonstrated by the WB group at 12 months, we were surprised with this result. On the other hand, at 18 months, statistically significant results favoured the W group compared to WB group for the variables \"confidence about doing things\". Again, these results about confidence were unexpected given that the WB group performed higher amounts of PA. Exercise therapists, pedometers, logbooks and social support from walking club members (Table \n1) assisted participants who had difficulties with activity adherence and maintenance \n[57]. Regular professional contact was recognised as the significant motivational factor for adherence to the supported walking program during the RCT \n[43]. Social support has been reported to be more successful in engaging participants in PA programs compared to programs which only provided written educational material \n[58,59]. The combination of using pedometers and goal-setting has been recognised as an effective KT tool for increasing PA \n[31,60-62]. However, despite the short-term effectiveness of pedometers and goal-setting, the success of these tools diminished over time. Other KT strategies are recommended to facilitate sustainability \n[62].\n\nBODY.DISCUSSION.LIMITATIONS:\nBarriers related to the implementation of SCAWP were identified and addressed to improve adherence for older individuals \n[57]. 
Acceptability (preferences, tolerance and accessibility) should be considered to promote PA adherence and to overcome associated barriers \n[63]. The most common barriers identified by participants with OA are perceptions about illness and recovery, transportation difficulties, family commitments and inconvenient timing \n[64]. Time commitment is also recognised as a significant barrier \n[55]. Additional barriers such as weather conditions including snow storms, freezing rain and humidity, a 3-month bus strike, and health problems of participants' partners may have contributed to study adherence. Adherence is influenced by participant's preference of intervention \n[65-70]. Unfortunately, with this study being a RCT, taking participants' preference into consideration was not possible due to the randomization procedure. This being said, randomisation may have contributed to higher dropout rates. The purpose of this KT study was to implement a scientifically proven effective walking program for OA \n[33-35]. Short term research suggests that self-management interventions \n[16,18], telephone counselling \n[22,46,71-73], peer support \n[13,74-76] and PA education \n[18,22,27,41,77] are effective behavioural strategies for improving self-efficacy. The long-term multifaceted behavioural strategy used in this study did not concur with the previous studies, as participants in the self-directed group (C group) were provided information and education on the benefits of PA. Methods of measurement for adherence for the W and WB groups were different than the self-directed (C) group. Observed attendance was recorded on site for each walking session for the W and WB groups by exercise therapists while adherence could only be measured in the self-directed group (C) using completed logbooks. The self-reporting of PA, which is commonly overstated, may have been subject to potential information bias. Selection bias may have occurred as our study population included individuals with mild to moderate OA, with a confirmed X-ray report. In addition, some participants were retired while some were still employed. Since participants were provided monetary compensation for their participation in the study, one may argue the generalizability of the results.\n\nBODY.DISCUSSION.IMPLICATIONS:\nThis KT RCT aimed to identify the best multifaceted strategy to implement a proven effective walking program for OA of the knee among older individuals. The epidemic public health problem of OA can be managed through the implementation of a proven effective walking program in existing community-based walking clubs as well as at home with minimal support. More studies are needed to improve the long-term walking adherence or knowledge uptake on SCAWP among participants with OA. Particular attention should focus on patients' characteristics and preferences.\n\nBODY.CONCLUSIONS:\nThis study partially supported our initial hypothesis regarding the impact of a multifaceted KT intervention as the combined walking and behavioural approach had greater benefits over the comparative groups resulting in improved long-term adherence to PA only over a short period (3-months). The WB group demonstrated to be the best KT strategy for reducing dropouts compared to the W group and self-directed (C) group. All three KT strategies were equivalent over the long-term period (up to 18 months) for improving behavioural change. However, the self-directed walking program (C group) was the least expensive to implement over a long-period (18 months). 
This RCT was a long-term adherence study as well as a KT study which addressed questions of clinical and scientific importance aimed at improving the understanding of effective KT strategies to promote the adoption and maintenance of a community-based walking program for OA.\n\nBODY.ABBREVIATIONS:\nOA: Osteoarthritis; QoL: Quality of Life; SCAWP: Supervised Community-based Aerobic Walking Program; CPG: Clinical Practice Guideline; KT: Knowledge Translation; KTAC: Knowledge-To-Action Cycle; EBCPG: Evidence-Based Clinical Practice Guidelines; CONSORT: Consolidated Standards of Reporting Trials; WB (group): Walking and Behavioural intervention; W (group): Walking intervention; C (group): Self-directed control intervention; RCT: Randomized Controlled Trial; PA: Physical Activity; PACE-ex: Program for Arthritis Control through Education and Exercise; AIMS2: Arthritis Impact Measurement Scale 2; SF-36: Short-Form 36 Health Survey; WOMAC: Western Ontario and McMaster Universities Osteoarthritis Index; MET: Multiples of basal metabolic rate; MAR: Missing At Random; CIHR: Canadian Institutes of Health Research.\n\nBODY.COMPETING INTERESTS:\nThe authors declare that they have no competing interests.\n\nBODY.AUTHORS’ CONTRIBUTIONS:\nLB is a Full Professor of Rehabilitation, an epidemiologist and the principal investigator of this study and primary author of this manuscript. GAW the co-principal investigator of this study, is senior biostatistician and director, Cardiovascular Research Methods Centre at the University of Ottawa Heart Institute, and is a leading expert in the design and analysis of clinical trials. He provided assistance with the methodology and statistical analysis of the study. GPK is a Full Professor of Physiology at the University of Ottawa, and director of the university's Human Performance and Environmental Medicine Research Laboratory and Professional Fitness and Lifestyle Consultant Certification Training program. He assisted with the methodology of the study. RR is a senior researcher at the University of Ottawa Heart Institute and provided experience in applying innovative behavioural approaches aimed at increasing PA in healthy and chronic diseased populations. AM is a health economist and assisted with the economic evaluation concept in the original proposal. PT is a rheumatologist, an epidemiologist and chief of the Cochrane Musculoskeletal Group. He has experience in conducting RCTs and meta-analyses. He provided assistance with OA outcome measures in the study. MH & CM developed the PACEex program and was in charge of training the PA specialist. GDA was the research coordinator of this study and assisted with the writing of this manuscript. LC is a biostatistician and performed the analyses of this study. All authors read and approved the final manuscript.\n\nBODY.PRE-PUBLICATION HISTORY:\nThe pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2458/12/871/prepub\n\nBODY.SUPPLEMENTARY MATERIAL:\nAdditional file 1The knowledge –to-action framework and KT interventions. This table demonstrates how each step of the KTAC framework is applied to the RCT.Click here for file Additional file 2Post-Randomization Walking Interventions. The supervised phase of the walking intervention lasted 52 weeks following randomization. The unsupervised (follow-up) phase of the walking intervention lasted 26 additional weeks. 
For Intervention (one year): 52 weeks × 3 sessions =156 sessions attended; Follow-up (six month): 26 weeks x 3 sessions = 78 sessions attended. N.B. This structured and supervised walking program is based on Ottawa Panel Grade A recommendations \n[33-35] and was implemented to groups W and WB (Implementation groups). Individuals with OA in the self-directed group (C) received a pamphlet on OA and walking (which recommends regular walking in a unsupervised/self-directed way) (Dissemination group) \n[25].Click here for file Additional file 3Reasons of dropouts. This table demonstrates the reasons as to why participants decided to withdraw from the study.Click here for file Additional file 4Dropout rates and corresponding retention rates at 12-and 18-month time periods. This table demonstrates the drop-out rates and retention rates at end of intervention 12-months and follow-up at 18-months.Click here for file\n\n**Question:** Compared to Self-directed control (C) what was the result of Walking and Behavioural intervention (WB) on Long-term adherence to the program?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
974
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Treatment costs associated with interventional cancer clinical trials conducted at a single UK institution over 2 years (2009–2010)\n\n ABSTRACT.BACKGROUND:\nThe conduct of clinical trials should be an integral part of routine patient care. Treating patients in trials incurs additional costs over and above standard of care (SOC), but the extent of the cost burden is not known. We undertook a retrospective cost attribution analysis to quantitate the treatment costs associated with cancer clinical trial protocols conducted over a 2 year period.\n\nABSTRACT.METHODS:\nAll patients entered into oncology (non-haematology) clinical trials involving investigational medicinal products in 2009 and 2010 in a single UK institution were identified. The trial protocols on which they were treated were analysed to identify the treatment costs for the experimental arm(s) of the trial and the equivalent SOC had the patient not been entered in the trial. The treatment cost difference was calculated by subtracting the experimental treatment cost from SOC cost. For randomised trials, an average treatment cost was estimated by taking into account the number of arms and randomisation ratio. An estimate of the annual treatment costs was calculated.\n\nABSTRACT.RESULTS:\nA total of 357 adult oncology patients were treated on 53 different trial protocols: 40 phase III, 2 randomised II/III and 11 phase II design. A total of 27 trials were academic, non-commercial sponsored trials and 26 were commercial sponsored trials. When compared with SOC, the average treatment cost per patient was an excess of £431 for a non-commercial trial (range £6393 excess to £6005 saving) and a saving of £9294 for a commercial trial (range £0 to £71 480). There was an overall treatment cost saving of £388 719 in 2009 and £496 556 in 2010, largely attributable to pharmaceutical company provision of free drug supplies.\n\nABSTRACT.CONCLUSION:\nOn an average, non-commercial trial protocols were associated with a small per patient excess treatment cost, whereas commercial trials were associated with a substantially higher cost saving. Taking into account the total number of patients recruited annually, treatment of patients on clinical trial protocols was associated with considerable cost savings across both the non-commercial and commercial portfolio.\n\nBODY:\nThe drug development process is well established and randomised clinical trials remain the gold standard for testing new treatments against standard of care in order to advance clinical practice (Vogelzang et al, 2012). Even so, the proportion of patients entering clinical trials worldwide is remarkably low. 
Since 2001, when under 4% of cancer patients took part in a clinical trial, recruitment increased six-fold over the following 10 years in the UK due to significant investment in a unique national research infrastructure (Department of Health, 2006; NCRN, 2011), whereas recruitment in the USA has remained static (Califf et al, 2012). It is undisputed that conducting research incurs additional costs over and above standard of care (Emanuel et al, 2003), yet there is an international commitment to research being a core health service function (Zerhouni, 2003; Califf et al, 2012; The NHS Constitution, 2013). Attribution of research costs is complex and recovery of those costs, particularly for non-commercial clinical trials is very challenging (Snowdon et al, 2006; Seow et al, 2012). Costs incurred by conducting a clinical trial can broadly be attributed to one of the three elements (Attributing the costs of health and social care Research & Development (AcoRD), 2012): (1) research costs, that is, the costs of the trial itself, which end when the research ends and relate to activities being undertaken to answer the research question; (2) treatment costs, that is, the patient care costs, which would continue to be incurred if the patient care intervention continued to be provided after the trial had stopped; and (3) service support costs, that is, the additional patient care costs associated with the research, which would end once the trial had stopped, even if the patient care involved continued to be provided. For commercially sponsored trials, all three elements would normally be met by the sponsor. In the UK, pharmaceutical companies have, for many years, been able to access service support from the national health service by formal adoption into a national clinical trials portfolio. For non-commercial studies, research costs are generally met by grant funders. Service support costs are met from local R&D budgets (via comprehensive local research networks), whereas treatment costs are met through the normal commissioning process. The latter has been the cause of significant disharmony between commissioners and providers of health care. In the USA, private insurance companies are not obliged to cover non-routine patient care costs (Anonymous, 2007). Cancer drugs are generally expensive and it is widely perceived that cancer clinical trials generate an excess treatment cost, despite a lack of evidence to support this view. A single retrospective review of 932 cancer patients recruited to 35 trials in the USA between 1998–1999 suggested that the incremental costs of direct care in clinical trials was 6.5% higher than what they would be outside of a trial (Goldman et al, 2003). The investigators concluded that the additional treatment costs of an open reimbursement policy for government-sponsored cancer clinical trials seem minimal. Even so, risk of excess treatment costs remains a barrier to academic clinical trials being performed in many countries today. We undertook a review of cancer clinical trials conducted in a single, research-active institution in the UK over a 2-year period (2009–2010), to identify the extent of treatment costs involved.\n\nBODY.MATERIALS AND METHODS:\nCambridge University Hospitals NHS Foundation Trust (Addenbrooke's Hospital, Cambridge) is a major university teaching hospital and cancer centre in the UK. 
It also hosts the West Anglia Cancer Research Network (WACRN), one of the highest recruiting regional research networks in the UK, where 45.1% of patients newly diagnosed with cancer entered a research study in 2011/12. Although WACRN supports research across seven hospitals in the region, 64% of all patients were recruited at Addenbrooke's hospital. For this retrospective cost attribution analysis, all patients recruited to interventional, research ethics committee approved, oncology drug trials between 1 January 2009 and 31 December 2010 at Addenbrooke's Hospital were identified from the Cambridge Cancer Trials Centre patient database. Haematology trials were excluded. The clinical trial protocols for each identified patient were reviewed to determine the treatment costs associated with each arm of the trial. The following activities were attributed to being treatment costs (as described in the AcoRD guidance): (1) supplying an investigation medicinal product and pharmacy costs associated with administering oral agents under investigation; (2) supplying and administering any active comparators, but not placebo; (3) nursing time to administer the drug, time taken to deliver the treatment either as an outpatient ('day unit chair time') or as an inpatient ('bed days'); (4) investigations, including imaging and laboratory tests, which would continue to be incurred if the patient care service in question continued to be provided after the trial stopped; (5) additional clinic visits. Costs were calculated for the active treatment phase of the trial only and did not include follow-up once treatment had finished. For all protocols, a standard of care (SOC) was identified as the standard treatment that would have been offered locally to the patient had the trial not been available. For randomised trials, if the control arm differed to the local standard treatment, the local standard treatment was used to cost SOC. The treatment costs of the SOC were determined as for the experimental arm. Costs were determined for the duration of treatment defined in each trial protocol and did not include costs of follow-up once treatment stopped. For adjuvant, neo-adjuvant and some palliative protocols which specified the treatment programme duration, costs were determined for the entire prespecified treatment programme. For palliative protocols where treatment duration was until disease progression, the actual number of cycles and/or months of treatment delivered to the patients recruited to each protocol was obtained from the patient records and an average treatment duration was determined for each protocol. The same treatment duration was used for estimating SOC costs, unless the SOC treatment maximum duration was specified, in which case the shorter duration was used. Cancer drug costs were obtained from the British National Formulary (2010) and excluded additional taxes or local negotiated discounts. Supportive drugs such as antiemetics, antidiarrhoeals, steroids, intravenous hydration fluids were excluded. For dose calculations, an average body surface area of 1.75 m2 was used. Staff and facility direct costs without overheads were obtained from local reference costs. Imaging and laboratory tests were costed locally or using the NIHR costing template. Where costs differed, local costs were used. The data was cross-checked with the lead clinical trials nurse responsible for each protocol. 
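To make the drug-costing convention above concrete (BNF list prices, an assumed average body surface area of 1.75 m2, no discounts or taxes), the sketch below prices one cycle of an invented intravenous agent. The dose, vial size and price are hypothetical, as is the whole-vial rounding rule; none of them are taken from the study's portfolio.

```python
import math

def cycle_drug_cost(dose_mg_per_m2, bsa_m2, vial_size_mg, price_per_vial_gbp):
    """Drug cost of one treatment cycle, assuming whole vials are dispensed."""
    total_dose_mg = dose_mg_per_m2 * bsa_m2
    vials_needed = math.ceil(total_dose_mg / vial_size_mg)
    return vials_needed * price_per_vial_gbp


# Hypothetical example: 75 mg/m2 for a 1.75 m2 patient = 131.25 mg,
# supplied in 20 mg vials at 150 GBP each, i.e. 7 vials per cycle.
print(cycle_drug_cost(75, 1.75, 20, 150))  # 1050 GBP per cycle under these assumptions
```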
For each protocol, the treatment cost (TC) difference for the experimental arm(s) was the difference between the experimental arm treatment cost and the SOC cost. For randomised trials, an average treatment cost was attributed to the protocol, by taking into account the number of arms of the trial and randomisation ratio (see Table 1 as an example of how the TC difference was calculated). Finally, using the total number of patients recruited to each trial protocol, the annual overall TC excess or saving in 2009 and 2010 was estimated.\n\nBODY.RESULTS:\nDuring the 2-year review period, 357 cancer patients were recruited to 53 different interventional clinical trials involving IMPs (Tables 2 and 3): 159 patients in 2009 and 198 patients in 2010. A total of 27 trials were academic, non-commercial sponsored trials with variable levels of industry support. A total of 26 trials were commercial sponsored trials, of which 26 of the non-commercial trials and 9 of the commercial trials were adopted in the NIHR portfolio. Thirty-nine of the 53 trials were randomised phase III design. The remainder were 1 non-randomised phase III, 2 randomised phase II/III, 9 randomised phase II and 2 non-randomised phase II trials. Patients received IMPs that were either unlicensed (22), licensed for the indicated disease (10) or licensed drugs being used in an unlicensed condition (21). Compared with 2009, in 2010, 24 new trials had opened to recruitment and 12 had closed. The average TC per protocol across the portfolio of 53 trials ranged from an excess of £6393 to a saving of £71 480 (Figure 1). For 27 non-commercial trials (Table 2), the average TC per protocol ranged from an excess of £6393 to a saving of £6005 (mean excess TC was £431, median £126 per protocol). The main source of excess TC was due to extra hospital day unit delivery time associated with 9 protocols evaluating IMPs given in addition to, or in place of, SOC (7 protocols) or where the SOC was observation (2 protocols). For example, in the AVAST-M clinical trial comparing 12 months of 3 weekly adjuvant bevacizumab as treatment for patients with resected melanoma versus standard observation, the additional cost of treatment delivery accounted for £4250 (83%) of the £5144 excess TC. In this case, the investigation medicinal product cost did not contribute to the excess TC, as all bevacizumab was supplied free of charge by the manufacturer. A total of 7 non-commercial trials were associated with a TC saving. Two of these trials, SCOT (comparing 3 months versus 6 months standard adjuvant chemotherapy in colorectal cancer) and PERSEPHONE (comparing 6 months versus 12 months standard adjuvant chemotherapy in breast cancer), were evaluating whether shorter durations of adjuvant treatment were as effective as SOC, in which the TCs were, in effect, halved. In 2 trials, the source of saving was free drug supplies in the experimental arms from manufacturers (SoFEA and BR13). As each trial randomised one or more treatment against a standard treatment, taking into account the number of arms and randomisation ratio, the average cost of a patient recruited to the protocol was less than SOC. In one trial (REAL-3), an industry grant provided a per payment fee of £1000 and in another trial (Telovac), one of the experimental arms was associated with fewer cycles of SOC chemotherapy given in association with the investigation medicinal product. 
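The randomisation-ratio weighting described at the start of this section can also be sketched briefly. The protocol below is invented (a hypothetical 2:1 trial whose experimental arm receives free drug); it simply mirrors the style of the worked example the authors reference in their Table 1.

```python
def average_tc_difference(arms):
    """Randomisation-weighted mean of per-arm treatment-cost differences versus SOC.

    `arms` is a list of (allocation_weight, tc_difference_gbp) pairs; a positive
    difference is an excess cost over standard of care, a negative one a saving.
    """
    total_weight = sum(weight for weight, _ in arms)
    return sum(weight * diff for weight, diff in arms) / total_weight


# Hypothetical 2:1 randomised trial: the experimental arm saves 3000 GBP per
# patient (free drug from the manufacturer), the control arm matches SOC exactly.
print(average_tc_difference([(2, -3000.0), (1, 0.0)]))  # -2000.0 GBP per recruited patient
```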
In one trial (QUASAR2) comparing adjuvant capecitabine with capecitabine plus bevacizumab, the manufacturer supplied both the investigation medicinal product (bevacizumab) and the SOC (capecitabine) free of charge for patients recruited to either arm of the trial. With the exception of REAL-3, treatment of patients on the other six non-commercial trial protocols associated with an overall TC saving offered financial benefits that would have been passed on to commissioners, primarily by the provision of free drug that would have had to be paid for if the patient was not on trial. Considering the actual numbers of patients recruited to all of the non-commercial trials annually, the net TC saving was estimated at £67 962 in 2009 and £29 002 in 2010. For the 26 commercial trials (Table 3), the average TC per protocol ranged from £0 to a saving of £71 480 (mean saving was £9294, median £4370 per protocol). Commercial trials were not associated with excess TCs, as income from the trial conduct was gained to cover all costs associated with treatment. The main source of cost saving was through provision of free drug supplies from the sponsor. In two renal cancer trials (Bevlin and Rad001), the sponsors provided the experimental drug free and in a third renal trial (Torisel404) both experimental and SOC drug were provided. With the exception of four randomised trials where SOC was observation or best supportive care, treating patients on the remaining 22 protocols would have passed on significant drug cost savings to commissioners, by avoiding the SOC costs the commissioners would have incurred if the patients were not on trial. Using the actual numbers of patients recruited to all of the commercial trials, there was an estimated net treatment cost saving of £320 758 in 2009 and £467 554 in 2010. Figure 2 summarises the distribution of average TCs for all 26 non-commercial and 27 commercial trial protocols. In general, it can be seen that both commercial and non-commercial trials saved drug costs, whereas the extent of drug and dispensing cost savings was markedly higher for commercial compared with non-commercial trials. The main source of excess TC was the cost of delivering treatment, most often generated by trials evaluating IMPs given in addition to SOC. The contribution of extra clinic visits and investigations, in particular CT scans used for response assessment, was significantly higher for commercial compared with non-commercial trials. By taking into account the number of patients entered on each protocol, the overall annual TCs incurred were estimated. On an average, treating a patient on a clinical trial protocol was associated with a TC saving of £4340. The overall TCs incurred annually was a saving of £388 719 in 2009 and £496 556 in 2010 (Figure 3), around 90% of the savings coming from commercial sponsored trials.\n\nBODY.DISCUSSION:\nResearch is considered an integral part of healthcare by many nations, as the benefits of new treatments tested in clinical trials are recognised to improve patient outcomes. It is important that the cost of that research is identified and properly funded. In the UK, the AcoRD guidance (2012) provides a framework to identify, attribute and recover the various costs associated with research, in a transparent, robust and consistent manner. It clarifies the distinction between research costs, service support costs and TCs. 
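Before turning to the discussion of methodology, it may help to note how the per-protocol averages scale up to the annual figures quoted in the Results above: the net annual impact is simply the sum over protocols of patients recruited multiplied by the average per-patient treatment-cost difference. The protocol names and figures below are invented for illustration and are not the study's data.

```python
# (patients_recruited, average_tc_difference_gbp) per protocol; negative values are savings.
portfolio = {
    "trial_A": (12, -9500.0),  # e.g. a commercial trial with free drug supplied
    "trial_B": (30, -1200.0),  # e.g. a non-commercial trial testing a shorter treatment duration
    "trial_C": (8, 450.0),     # e.g. a non-commercial trial with extra delivery time
}

annual_total = sum(n * diff for n, diff in portfolio.values())
label = "saving" if annual_total < 0 else "excess"
print(f"Estimated net annual treatment-cost {label}: £{abs(annual_total):,.0f}")
```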
We have focused on quantitating the TCs associated with treating patients in cancer clinical trials, as uncertainty surrounding the size of cost difference between treatment in or outside of trials, particularly non-commercial trials, is a barrier to their efficient conduct. We have previously sought to estimate the cost of the treatment administered to patients recruited to cancer trials using individual patient data from clinical research forms and pharmacy prescriptions (Corrie et al, 2011). However, TCs extend beyond the drug costs alone. So, to better inform service delivery, we developed a methodology designed to include all elements of TCs as described in the AcoRD guidelines. The method provides a unit TC difference associated with an individual trial protocol, taking into account the number of arms in a trial and, where appropriate, the randomisation ratio. The unit TC difference can only be an estimate, as the duration of treatment may vary for an individual patient, whereas for randomised trials, one might expect the estimate to be more accurate the more patients recruited to any protocol. In addition, we tailored SOC to local treatment guidelines, which may differ between regions. There are several limitations associated with this methodology. First, we did not take into account changes to the treatment plan associated with adverse drug reactions and did not include any associated non-elective hospital admissions, as individual patient data was not analysed in depth other than to estimate duration of palliative treatment in some instances. In addition, we did not include follow-up of patients after completion of anticancer therapy, assuming that this would account for a proportionally small element of treatment costs: commercial trials would be expected to fully reimburse all costs above SOC, whereas in general, frequency of follow-up in non-commercial trials tends to reflect routine follow-up arrangements. Finally, costing of commercial trials was based on direct costs only and did not include overheads charged to industry. Despite these limitations, we believe this methodology has the potential to be applied on a wider scale to inform providers and commissioners regarding the impact of conducting cancer clinical trials on health service resources. Review of patients recruited to interventional cancer trials in a single research active institution over a 2-year period identified significant variation in associated TCs between protocols. The average TC for 27 non-commercial trials was an excess of £431 (range £6393 excess to £6005 saving) and for 26 commercial trials was an average saving of £9294 (range £0 to £71 480). Breaking down the TCs by the main categories, by far the largest contribution to cost savings was provision or reimbursement of IMPs by pharmaceutical companies, primarily as part of the conduct of commercial sponsored studies, but also, and to a lesser extent, in partnership with academic groups conducting non-commercial NIHR portfolio studies. The distribution of TCs was rather different for commercial and non-commercial trials. For non-commercial trials, the main excess TC was due to administering IMPs. For commercial trials, considerable extra costs were incurred by additional clinic visits and imaging investigations. 
This might suggest that pharmaceutical companies drive a more intensive patient management plan, and/or that, as many of the commercial protocols were for international multi-centre trials, they were designed to reflect practice in countries outside of the UK. Even so, because commercial trials were fully funded, the potential excess TCs associated with these protocols should not be passed on to commissioners. Our findings confirmed that the pathway for patients entered into academic trials did, in general, reflect routine clinical practice. By assigning the 357 patients to the protocols in which they were enroled, the overall treatment cost saving to the NHS was estimated to be approaching £0.5 million per annum. In our view, this data provides overwhelming evidence to refute any concern that clinical research generates a cost pressure for the health service. On the contrary, we have demonstrated significant financial gains. The largest gains were from commercial sponsored trials. There is currently a growing emphasis on expanding the commercial trial portfolio across all disease areas. Although this may bring financial benefits, the main reason for our undertaking this review was to address concerns that academic clinical trials were a financial liability. Taken individually, this may be the case in some instances, but viewed across a whole portfolio, there are clear gains even for non-commercial clinical trials. Academic trials represent an important part of the NIHR portfolio, affording the opportunity to address questions that would not be a priority for industry sponsors. For example, two non-commercial trials testing whether shorter durations of adjuvant therapy might be as effective as longer durations were associated with significant TC savings during their conduct, whereas positive outcomes would have much wider implications for health service savings. In summary, this unique analysis of TCs associated with patients recruited to cancer clinical trials indicates a wide range of per patient TC differences associated with individual trials compared with SOC, whereas the risk of there being an excess TC (as opposed to a saving) primarily lies with non-commercial studies. However, over the course of a year, clinical trials—both commercial and non-commercial—offer an opportunity to realise considerable TC savings. These findings support the Department of Health's directive for commissioning of clinical trials as an integral part of service delivery. A balanced portfolio of both commercial and non-commercial research should offer the greatest benefits to patients and the overall health economy.\n\n**Question:** Compared to Commercial sponsored trials what was the result of Academic, non-commercial sponsored trials on Treatment cost?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ]
significantly increased
977
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: The effect of aerobic exercise in the maintenance treatment of depression\n\n ABSTRACT:\nWe investigated the efficacy of aerobic exercise alongside antidepressant medication as an adjuvant maintenance treatment for depression. Fifty patients in remission were randomly assigned to either medication only or medication plus exercise. Assessment of psychopathology was made at 6-weekly intervals (for 24 weeks) using the Hamilton Rating Scale for Depression. The medication-plus-exercise group showed significantly more improvement at 12 and 24 weeks than the medication-only group. This study adds to a growing evidence base that suggests aerobic exercise is worthy of further development in the treatment of depressive disorders.\n\nBODY:\nThe role of exercise in treating depression has been studied extensively in recent years. For example, Rethorst et al (2009) carried out a meta-analysis of 58 randomised controlled trials, and found that groups whose treatment incorporated exercise had significantly lower depression scores than controls. De Zeeuw et al (2010) showed that exercise could have preventive potential, as it was found to reduce the likelihood of depression in high-risk employees with sedentary jobs and an inactive lifestyle. Exercise has also been reported to reduce relapse rates. For example, Babyak et al (2000) found that participants with major depressive disorder who used exercise therapy alone had significantly lower relapse rates than those who received medication. Despite these promising findings and its low cost, the effectiveness of exercise is yet to be fully addressed in the psychosocial treatment literature or incorporated into mainstream treatment practice. For example, the National Institute for Health and Clinical Excellence (NICE, 2009) recommended 'structured group physical activity' only as a low-intensity psychosocial intervention for subthreshold depressive symptoms, or depression with chronic physical health problems. This may be because many studies have been criticised for their methodological limitations and incomplete literature searches (Rethorst et al, 2009); indeed, the classification and duration of exercise are not specified in the majority of reports. Of further note is that the efficacy of exercise in a maintenance capacity remains largely unexplored. Thus, our rationale was to investigate the role of exercise alongside antidepressant medication as a post-remission maintenance therapy, in terms of reducing both the severity of psychopathology and relapse rates.\n\nBODY.METHOD:\nThe 50 patients recruited were attending the psychiatric out-patient department of Sir Sunderland Hospital, Banaras Hindu University, India. Selection was made (via referrals from two senior psychiatrists at the hospital) by the researcher who carried out the post-treatment evaluations. 
The following selection criteria were used:
- age 15–45 years, in order to cope with the physical exercise regime
- a diagnosis of major depressive or dysthymic disorder, according to DSM-IV-TR criteria (American Psychiatric Association, 2000)
- absence of any comorbid psychiatric illness
- no major cardiac or orthopaedic illness that would contraindicate exercise.

Ethical approval was granted by the University's research ethics committee. After inclusion in the study, patients were prescribed sertraline (200 mg/day – the maximum permitted daily dose) and alprazolam (1 mg/day) for 3–4 weeks in order to achieve remission of clinical depression. Once significant remission – defined as scoring less than 16 on the 21-item Hamilton Rating Scale for Depression (HRSD; Hamilton, 1960) – was obtained, patients were randomly allocated to groups; blinding was not possible, however, as participants were necessarily aware of the treatment group they were in and the researcher who administered exercise therapy also did the follow-up assessment of psychopathology.

BODY.METHOD.PRE-TREATMENT EVALUATION:
The researcher assessed the following using a semi-structured pro forma:
- sociodemographic and clinical data
- severity of depressive psychopathology (using the HRSD, plus a structured interview guide)
- stressful life events (lifetime and past 3 months), using the 51-item Presumptive Stressful Life Events Scale (PSLES; Singh et al, 1984) (a culturally appropriate instrument)
- melancholic and atypical features of depression (as per DSM-IV-TR)
- baseline heart rate (mean taken from three 1-minute time slots)
- fitness levels using the Harvard Step Test, to monitor compliance with the exercise programme (Brouha, 1943).

BODY.METHOD.TREATMENT:
Both groups continued to receive sertraline (200 mg/day) and alprazolam (1 mg/day) and did not take part in any form of counselling or psychotherapy. For the medication-plus-exercise group, participants also received training in daily exercise therapy. That therapy consisted of a 5-minute warm-up, 15 minutes of stationary running, and a 10-minute cool-down, this being convenient and economical for all participants regardless of gender or location and within guidelines on exercise (Whaley et al, 2006). Participants were also advised not to perform any other exercise beyond usual daily activities. Heart rate recordings were made in order to adjust the intensity of exercise to keep within the target heart rate zone, this being 60–84% of the maximum achievable heart rate (Whaley et al, 2006); exercise sessions at home were logged and monitored by carers, and the logs were reviewed by the researcher at every follow-up. None of the participants or their carers reported missing any exercise sessions or cutting sessions short. Follow-up was at 6-weekly intervals for 24 weeks for all participants, and HRSD and PSLES scores were assessed on each occasion. The medication-plus-exercise group also received refresher exercise training and the Harvard Step Test was performed at every follow-up.

BODY.METHOD.ANALYSIS:
Statistical tests used were the chi-square and Student t-test. Epi Info 6 software, version 6.04a, was used to analyse the data.

BODY.RESULTS:
Although all 50 patients initially agreed to participate, seven did not complete the study (three from the medication-plus-exercise group and four from the medication-only group). Table 1 shows the sample characteristics and features of depression, which were evenly distributed between the groups. 
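Before the between-group comparisons, the two numeric rules in the Method above can be restated compactly: remission (and hence eligibility for randomisation) required an HRSD score below 16, and exercise intensity was adjusted to stay within 60-84% of the maximum achievable heart rate. The values used in the example calls below are illustrative assumptions, not study data.

```python
HRSD_REMISSION_CUTOFF = 16       # "scoring less than 16 on the 21-item HRSD"
INTENSITY_BAND = (0.60, 0.84)    # fraction of maximum achievable heart rate


def eligible_for_randomisation(hrsd_score):
    """Remission gate applied after the 3-4 week medication run-in."""
    return hrsd_score < HRSD_REMISSION_CUTOFF


def within_target_zone(recorded_hr_bpm, max_achievable_hr_bpm):
    """Check a session heart-rate recording against the 60-84% band."""
    lo, hi = (f * max_achievable_hr_bpm for f in INTENSITY_BAND)
    return lo <= recorded_hr_bpm <= hi


# Illustrative values only (the study adjusted intensity from each participant's own recordings):
print(eligible_for_randomisation(12))    # True
print(within_target_zone(130.0, 190.0))  # True: the band is roughly 114-160 bpm for a 190 bpm maximum
```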
Table 2 shows the inter-group comparisons at all 6-weekly follow-ups. The medication-plus-exercise group showed a greater improvement in HRSD scores (change in score from baseline) than the medication-only group from the first follow-up, at 6 weeks, and this difference became statistically significant at weeks 12 (P = 0.04, t = 2.10) and 24 (P = 0.02, t = 2.42); there was no significant difference between the groups' PSLES scores at any follow-up point.

Table 1 Baseline characteristics of the sample

| Variable | Medication-only group (n = 25) | Medication-and-exercise group (n = 25) |
| --- | --- | --- |
| Number (%) female | 9 (60) | 6 (40) |
| Number (%) male | 16 (46) | 19 (54) |
| Usual degree of physical activity, number (%): sedentary | 15 (65) | 8 (35) |
| Usual degree of physical activity, number (%): light | 9 (41) | 13 (59) |
| Usual degree of physical activity, number (%): moderate | 1 (20) | 4 (80) |
| Mean (s.d.) age (years) | 29.6 (8.2) | 30.5 (8.9) |
| Mean (s.d.) PSLES score for lifetime | 216.2 (124.9) | 188.0 (86.3) |
| Mean (s.d.) PSLES score for last 3 months | 105.0 (72.6) | 107.2 (65.3) |
| Mean (s.d.) HRSD score* | 23.2 (4.2) | 22.8 (3.7) |
| Mean (s.d.) number of melancholic features* | 1.2 (1.6) | 0.7 (1.1) |
| Mean (s.d.) number of atypical features* | 0.1 (0.33) | 0.0 (0.3) |

HRSD, Hamilton Rating Scale for Depression; PSLES, Presumptive Stressful Life Events Scale. Percentages for sex and physical activity are row percentages. * Between-group comparisons were non-significant.

Table 2 Mean (s.d.) improvement (difference from baseline) in score on the Hamilton Rating Scale for Depression

| Follow-up | Medication-only group | Medication-and-exercise group | t | d.f. | P |
| --- | --- | --- | --- | --- | --- |
| 6 weeks | 2.6 (5.3) | 4.1 (3.4) | 1.17 | 46 | >0.05 |
| 12 weeks | 2.3 (5.2) | 5.0 (3.4) | 2.10 | 45 | 0.04 |
| 18 weeks | 3.0 (6.5) | 5.6 (5.1) | 1.45 | 42 | >0.05 |
| 24 weeks | 1.0 (7.0) | 5.6 (5.4) | 2.42 | 41 | 0.02 |

Relapse rates were also recorded via subjective reports and as a measured increase of >50% from baseline HRSD scores (Neumeister et al, 1997). In total, nine patients relapsed, two from the medication-plus-exercise group (both in week 18) and seven from the medication-only group (one relapse in week 6, two in week 12 and four in week 24). This difference in relapse rates did not reach significance (Fisher's exact test: one-tailed, P = 0.07; two-tailed, P = 0.14).

BODY.DISCUSSION:
We found that the maintenance treatment of medication-plus-exercise resulted in significantly lower HRSD scores than medication alone at 12-week and 24-week assessments, as well as a promising reduction of relapse rates. These differences were not considered to be due to extraneous life events, as there was no significant difference between the two groups' PSLES scores. These findings are in line with Rethorst et al's (2009) meta-analysis in terms of the potential for exercise to lower depression scores, and with the finding from Babyak et al (2000) that exercise is effective in reducing relapse. We found no studies, however, on the combined role of exercise and antidepressant medication as a maintenance therapy for depressive disorder. The present findings suggest a potential role for exercise as a protective agent during remission, and one which might reduce long-term cost. The study was limited in several ways. There is potential for bias via use of a clinician-rated tool such as the HRSD, particularly in an unblinded study. However, to reduce this, the same experienced assessor was used for the pre-intervention and follow-up assessments and a second assessor made random cross-checks at least once for every participant. Assessors completed the rating scales with the participants through discussion. The sample size was limited to 50, which precluded analysis of potential confounds. 
For example, the medication-only group was randomly allocated a greater proportion of sedentary members from the outset. There was also an unequal proportion of male/female participants; however, this reflected the population of patients within the hospital. A further limitation was the lack of comparison with an exercise-only group. It is possible that the exercise group may have derived social gain from the increased contact with carers and researchers; interestingly, Babyak et al (2000) raise the point that psychological factors such as proactive agency may be engaged via choosing to exercise. A further issue for the current study was the drop in the rate of improvement for the medication-only group at week 24. The reasons for this are unclear; however, such a change dictates that studies with longer periods of assessment are needed to make stronger interpretations. Thus, future studies could benefit from: larger and more representative samples; extended follow-up periods; additional outcome measures (including patient-rated scales); and the inclusion of further control groups, such as exercise only, social contact only, and group versus solitary exercise. In conclusion, this study adds to the growing evi dence base that suggests exercise is valuable in the treatment of depressive disorders. In particular, we highlight its potential role as a supplementary maintenance treatment during remission. It is worth highlighting that, despite the relatively short time for which the participants exercised, there were measurable benefits. We believe this area of research warrants further attention, particularly as exercise provides a low-cost, flexible and easily implemented treatment with minimal adverse effects, and especially where costly delays in receiving mainstream psychological interventions are commonplace.\n\n**Question:** Compared to Medication only what was the result of Medication plus exercise on Improvement in the Hamilton Rating Scale for Depression at 12 weeks?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ]
significantly increased
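Two of the statistics reported for this trial can be re-derived from the published summary data alone. The sketch below uses scipy; the week-12 group sizes (24 and 23, consistent with the reported d.f. = 45) and the relapse denominators (25 per arm) are assumptions, since the article does not state them explicitly.

```python
# Sketch: re-deriving two reported statistics from summary data with scipy.
# Assumptions (not stated in the article): 24 vs 23 patients at the week-12
# follow-up (the paper gives only d.f. = 45) and 25 patients per arm as the
# denominator for relapse counts.
from scipy import stats

# Week-12 improvement in HRSD score: 5.0 (s.d. 3.4) with medication plus exercise
# vs 2.3 (s.d. 5.2) with medication only; the paper reports t = 2.10, P = 0.04.
t, p = stats.ttest_ind_from_stats(mean1=5.0, std1=3.4, nobs1=24,
                                  mean2=2.3, std2=5.2, nobs2=23)
print(f"week 12: t = {t:.2f}, two-tailed p = {p:.3f}")

# Relapses: 2 in the medication-plus-exercise arm vs 7 in the medication-only arm;
# the paper reports Fisher's exact one-tailed P = 0.07, two-tailed P = 0.14.
_, p_two = stats.fisher_exact([[2, 23], [7, 18]])
_, p_one = stats.fisher_exact([[2, 23], [7, 18]], alternative="less")
print(f"relapse: one-tailed p = {p_one:.2f}, two-tailed p = {p_two:.2f}")
```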
990
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Singing teaching as a therapy for chronic respiratory disease - a randomised controlled trial and qualitative evaluation\n\n ABSTRACT.BACKGROUND:\nDespite optimal pharmacological therapy and pulmonary rehabilitation, patients with COPD continue to be breathless. There is a need to develop additional strategies to alleviate symptoms. Learning to sing requires control of breathing and posture and might have benefits that translate into daily life.\n\nABSTRACT.METHODS:\nTo test this hypothesis we performed a randomised controlled trial, comparing a six week course of twice weekly singing classes to usual care, in 28 COPD patients. The experience of singing was assessed in a qualitative fashion, through interviews with a psychologist. In addition, we surveyed patients with chronic respiratory conditions who participated in a series of open singing workshops.\n\nABSTRACT.RESULTS:\nIn the RCT, the physical component score of the SF36 improved in the singers (n = 15) compared to the controls (n = 13); +7.5(14.6) vs. -3.8(8.4) p = 0.02. Singers also had a significant fall in HAD anxiety score; -1.1(2.7) vs. +0.8(1.7) p = 0.03. Singing did not improve single breath counting, breath hold time or shuttle walk distance. In the qualitative element, 8 patients from the singing group were interviewed. Positive effects on physical sensation, general well-being, community/social support and achievement/efficacy emerged as common themes. 150 participants in open workshops completed a questionnaire. 96% rated the workshops as \"very enjoyable\" and 98% thought the workshop had taught them something about breathing in a different way. 81% of attendees felt a \"marked physical difference\" after the workshop.\n\nABSTRACT.CONCLUSION:\nSinging classes can improve quality of life measures and anxiety and are viewed as a very positive experience by patients with respiratory disease; no adverse consequences of participation were observed.\n\nABSTRACT.TRIAL REGISTRATION:\nCurrent Controlled Trials - ISRCTN17544114.\n\nBODY.BACKGROUND:\nChronic obstructive pulmonary disease (COPD) causes breathlessness, which can occur on exertion or, in some individuals, at rest. Although therapy including bronchodilators, pulmonary rehabilitation and oxygen can improve symptoms, the underlying pathology is largely irreversible, meaning that many patients remain limited despite optimal therapy [1,2]. The pattern of breathing typically adopted in COPD is caused by the presence of expiratory flow limitation which leads patients to adopt higher operating lung volumes - dynamic hyperinflation [3]. Although this pattern allows increased expiratory flow rates, this comes at the expense of a greater inspiratory load and mechanical disadvantage for the respiratory muscles. A rapid breathing pattern reduces expiratory time and will exacerbate this. 
The disparity between respiratory work and ventilatory output that develops is an important determinant of symptoms [4]. A range of non-pharmacological measures for breathlessness have been trialled in respiratory patients [5], including pursed lip breathing [6,7], yoga [8,9], singing [10] and laughter [11], all based on the premise that changing patients' pattern of breathing will improve symptoms A limitation of some physiotherapy approaches is that by focusing attention on breathing pattern they may accentuate awareness of respiratory limitation [12]. Singing requires the development of skills in controlling posture and breath that might be transferrable to everyday life. Breathlessness is a complex sensation [13]; singing lessons might offer techniques that address both the sensory component - largely control of respiratory pattern to reduce hyperinflation, and the affective component - a conscious experience of using the breath for 'something positive'. In addition, there is some evidence that singing may have beneficial effects on wellbeing in healthy [14] and chronic disease populations [15,16]. To evaluate this further we used three approaches; firstly a randomized controlled trial of a six week course of singing training in COPD, secondly an evaluation of a program of open 'drop-in' singing lessons for patients with respiratory disease and thirdly a qualitative evaluation of patients' experience of participating in singing.\n\nBODY.METHODS.RANDOMIZED CONTROLLED TRIAL:\nPatients with COPD, diagnosed according to the GOLD guidelines, who were attending respiratory clinics at the Royal Brompton Hospital, were invited to participate. The study was approved by the Royal Brompton, Harefield and National Heart and Lung Institute Research Ethics Committee and all subjects gave written informed consent. The trial was registered at Current Controlled Trials - ISRCTN17544114. Demographics, height, weight, clinical history and spirometry (Microlab, CareFusion, Kent, UK,) were recorded. Participants completed the Hospital Anxiety and Depression (HAD) Questionnaire [17], St George's Respiratory Questionnaire [18], and the Short Form 36 Questionnaire [19]. Functional exercise capacity was assessed using the incremental shuttle walk test (ISWT). Time to recovery of oxygen saturation, Borg dyspnoea score and heart rate following the walk was documented [20]. Control of breathing was assessed using two measures. 1) Breath hold test - subjects held their breath from maximum inspiration [21,22]. 2) Single breath counting - subjects were instructed to breathe in and then count out loud in time with a metronome running at 60 beats/min[23]. Three attempts at each manoeuvre were made and the mean values recorded. Both techniques were in routine use in the physiotherapy department for the assessment of hyperventilation. All subjects received a thirty minute standard session on breathing control and techniques to manage breathlessness, delivered by one of two senior respiratory physiotherapists involved in the study. Pursed lip breathing and nose breathing were also discussed in relation to managing episodes of shortness of breath. Each subject received a standard Royal Brompton Hospital \"Help Yourself - physiotherapy for people with respiratory symptoms\" booklet (Additional file 1) and was advised to practice the techniques at home. At the end of the baseline session, patients were randomised to either the singing or control groups, using block randomization through consecutive sealed envelopes. 
The subjects in the singing group attended a hospital-based workshop, led by a singing teacher (PC), twice weekly for six weeks. Each session lasted approximately one hour and included teaching of posture, relaxation and vocal exercises. Tasks of increasing difficulty were given as the sessions progressed - described in more detail in additional file 2. Each subject was given homework and an accompanying CD of songs to practice at home. The teacher was unaware of the tests being used to assess breathing control, so that she would not 'teach to the test'. The control group had no further intervention. At seven weeks follow-up, study participants were again assessed by the same respiratory physiotherapists, who were blinded to treatment allocation. All subjects were instructed not to tell the physiotherapist which group they had been allocated to. Control of breathing, quality of life and functional exercise capacity were measured as at baseline. Statistical analysis was performed using StatView 4.0, using t-tests to compare change in outcome measures between patients and controls. A p value of < 0.5 was taken as significant.\n\nBODY.METHODS.EVALUATION OF OPEN SINGING WORKSHOPS FOR RESPIRATORY PATIENTS:\nAlongside the randomized control study, we ran open, twice-weekly, 'Singing for BreathingTM' workshops for patients with any respiratory condition who were at Royal Brompton Hospital either as inpatients, day cases or outpatients. Conditions included interstitial lung disease, cystic fibrosis, asthma, bronchiectasis and COPD. The sessions were advertised through posters in the hospital and by word of mouth to patients from the arts team, respiratory nurses, physiotherapists and other healthcare professionals. Patients involved in the randomized control study were excluded from these sessions as were those where infection control issues existed. The open sessions were led by the same singing teacher as the randomised study and encompassed relaxation and posture education, as well as phonation exercises. Patients attending these workshops for the first time were asked to complete an anonymous satisfaction questionnaire at the end (Additional file 3).\n\nBODY.METHODS.QUALITATIVE SURVEY OF PATIENT EXPERIENCE OF SINGING:\nFollowing completion of the randomized control study, a chartered counselling psychologist (AE), who had not otherwise participated in the study, interviewed a random sample of eight individuals who had been in the singing arm of the trial. The thirty minute interviews were structured and each subject was asked about 1) physical changes they had noticed. 2) Emotional changes they had noticed. 3) Behavioural changes they had noticed. 4) Any detrimental effects or negative experiences. Participants were assured that they could make negative comments if they wished and that their responses would be anonymous. The interview sheet is shown in additional file 4.\n\nBODY.RESULTS.RANDOMIZED CONTROLLED TRIAL:\nParticipation in the study is described in the CONSORT diagram (Figure 1). Ninety two patients were approached to participate in the study. Fifty four did not wish to take part, one person who was already enrolled in another trial was excluded and one lived too far away to participate. Of the remaining thirty six patients, twenty were allocated to the singing group and sixteen to the control group. Of the singing group, two subjects withdrew from the study once the workshops had begun and three did not attend the final assessment once completing the workshops. 
In the control group, one withdrew and two did not attend the final assessment. Data are therefore presented for the 15 singers and 13 controls who completed the study. Of these, 6 attended 12 sessions, 4 attended 11, 2 came to 10 and 3 to 9. Figure 1CONSORT Flow Diagram. The baseline characteristics of the singing and control groups were well matched (Table 1). Four of each group were on long term oxygen therapy. In singers, there was a significant improvement in HAD anxiety score -1.1(2.7) vs. +0.8(1.7) p = 0.03 and SF-36 physical component score +7.5(14.6) vs. -3.8(8.4) p = 0.02. Breath hold time actually increased in the control group relative to the singing group -0.3(6.9) sec vs. +5.3(5.7) sec (p = 0.03). There was no significant difference in single breath counting, functional exercise capacity or recovery times following ISWT (Table 2). On direct questioning by the physiotherapist at the end of the study all participants in the singing group reported that they had found the workshops helpful, had practiced their singing at home and that they had found the singing helped them to control their breathing. Table 1 Baseline characteristics Total n = 28 Singing n = 15 Control n = 13 P value Age (yrs) 67.3 (8.1) 66.6 (9.3) 68.1 (6.8) 0.6 FEV 1 % predicted 37.2 (18.6) 36.8 (15.4) 37.7 (22.4) 0.9 Breath hold time (s) 26.3 (11.1) 27.4 (11.7) 24.9 (10.5) 0.9 Single breath counting (n) 23.4 (6.9) 24.8 (7.7) 21.8 (5.7) 0.3 HAD - anxiety score 5.8 (2.9) 6.3 (3.1) 5.3 (2.6) 0.4 HAD -depression score 5.8 (3.1) 5.7 (2.8) 5.8 (3.6) 1.0 SGRQ Total 48.4 (14.5) 51.1 (14.8) 45.3 (14.0) 0.3 SF36 PCS 36.8 (18.4) 32.0 (14.0) 42.2 (21.8) 0.1 SF36 MCS 52.0 (17.9) 47.4 (14.8) 57.2 (20.1) 0.2 ISWT (m) 198.6 (131.3) 174.0 (137.2) 226.9 (123.3) 0.3 Subjective recovery (s) 124.2 (77.6) 115.9 (50.4) 133.8 (102.0) 0.6 O 2 sat n recovery (s) 77.7 (71.6) 73.5 (44.6) 82.6 (95.7) 0.7 HR recovery (s) 131.0 (89.5) 97.6 (40.8) 169.6 (114.3) 0.03* FEV 1 - forced expiratory volume in one second, HAD - hospital anxiety and depression score, SGRQ - St George's Respiratory Questionnaire, SF36 short form 36, PCS - physical component score, MCS - mental component score, ISWT - incremental shuttle walk test distance, HR heart rate. *p < 0.05 unpaired t test singing group vs control group. Table 2 Changes from baseline Singing group n = 15 Control group n = 13 Δ breath hold time (s) - 0.3 (6.9) + 5.3 (5.7) 0.029* Δ single breath counting + 0.3 (7.7) + 2.0 (2.7) 0.46 Δ HAD anxiety - 1.1 (2.7) + 0.8 (1.7) 0.033* Δ HAD depression - 1.1 (2.5) - 0.1 (1.7) 0.21 Δ SGRQ Total - 1.1 (10.6) - 0.4 (5.6) 0.81 Δ ISWT (m) + 26 (52.6) + 11.3 (83.0) 0.58 Δ ISWT subjective recovery (s) + 9.9 (60.7) - 7.4 (81.7) 0.53 Δ O 2 Sat n recovery (s) + 47.3 (67.6) + 32.2 (124.7) 0.69 Δ HR recovery time (s) + 29 (63.8) + 19.4 (110.0) 0.78 Δ SF36 PCS + 7.5 (14.6) - 3.8 (8.4) 0.02* Δ SF36 MCS + 2.5 (20.9) - 3.2 (10.5) 0.38 HAD - hospital anxiety and depression score, SGRQ - St George's Respiratory Questionnaire, ISWT - incremental shuttle walk test distance, HR heart rate, SF36 short form 36, PCS - physical component score, MCS - mental component score. *p < 0.05 unpaired t test singing group vs control group. \n\nBODY.RESULTS.EVALUATION OF OPEN SINGING WORKSHOPS FOR RESPIRATORY PATIENTS:\n150 patients attended the open workshops. 61% had never attended singing lessons or workshops previously. 96% rated the workshops as \"very enjoyable\" and 98% thought the workshop had taught them something about breathing in a different way. 
81% of attendees felt a marked physical difference after the workshop. 92% of the participants said that they would like to attend a further singing workshop at the hospital. The remaining patients stated that the distance to the hospital would be the main reason for not attending.\n\nBODY.RESULTS.QUALITATIVE SURVEY OF PATIENT EXPERIENCE OF THE SINGING CLASSES:\nEight patients who had participated in the singing groups were interviewed. The effect of the group was described by all in mainly positive terms and no participants reported any negative effects. Positive effects occurred in two main areas: physical and general well-being. Positive physical effects reported by the participants often related to the breathing training which was an integral part of the program, particularly \"diaphragmatic breathing\". This training brought a feeling of awareness and control which many of the participants found helped their breathing and eased symptoms of breathlessness. Singing training itself brought breathing control too; \"It has made my life easier; I would have liked this when I was first diagnosed\", \"I increased my out breath from 4 to 14 counts\", \"I started breathing much better, from the stomach\", \"The exercises, thinking about breathing and relaxing when I have (breathing) problems....this has been very useful\", \"I always felt better afterwards physically\". Other positive physical effects described were an impact on lifestyle and functional ability; \"I have better posture now\"; \"Walking better, I go out more when it's not cold\", \"Now things are less of a chore, housework is no longer a struggle\". Beneficial effects relating to general wellbeing also emerged. Relating to mood/pleasure participants reported that; \"It was very enjoyable\", \"It opened up a new lease of life\", \"Emotionally......during singing it lifts you. I feel on top on the world. I also feel like that the day after. It makes COPD a lot easier to live with\", \"Its uplifting to sing...this diagnosis is gloomy so the psychological effect of the group is good\". A feeling of community/social support was also reported; \"Nice to have human contact, we achieved something together in the group\", \"I enjoyed the social contact it was great fun\", \"In six weeks we gelled as a group\", \"It felt good to be part of a team\". Achievement/efficacy - many participants had carried aspects of the courses into daily life. Many of them continued to sing songs and perform the exercises from the group using printed sheets or a CD. It seems that learning to sing in a group brought a sense of achievement. \"I do the exercises from the CD 2-3 times a week. I am looking for a local choir\"; \"I will keep it up; it's more enjoyable than other kinds of exercise\"; \"I learnt a lot, you are never too old to learn\"; \"The group taught me how and when to breathe; I got to relive my past (of singing) and get praise\". Others reported that participating in the group had motivated them to reengage with pulmonary rehabilitation classes, or to take up exercises such as swimming and walking. One was having private singing lessons and several expressed a desire for such groups to be ongoing and made more available to patients with respiratory problems.\n\nBODY.DISCUSSION:\nThe main findings of this study are that (1) the program of singing classes improved quality of life and anxiety but did not improve the control of breathing measures, or functional exercise capacity. 
(2) In interviews, patients who had participated in the trial reported benefits in their physical performance and general well-being as well as a sense of achievement and self-efficacy. (3) Individuals who chose to participate in the open singing sessions were overwhelmingly positive about the experience.\n\nBODY.DISCUSSION.SIGNIFICANCE OF FINDINGS:\nThe idea that singing was beneficial for health goes back at least to the 19th century [24]. Two previous studies have looked specifically at singing in COPD patients. Bonilha et al randomised patients to either singing lessons, or a handicraft class once a week for 6 months [10]. Although singing practice produced an acute increase in inspiratory capacity, implying a reduction in gas trapping, it did not alter basal dyspnoea index (BDI). SGRQ improved equally in both groups and the singing group had an improvement in maximum expiratory pressure. Exercise capacity was not assessed. Engen studied 7 patients with emphysema who attended twice weekly vocal instruction classes for 6 weeks [23]. There was no change in spirometry, inspiratory muscle strength or exercise capacity, but patients had an improvement in single breath counting, which was associated with a change from a 'clavicular' to a 'diaphragmatic' pattern of breathing. In that study there was no control group and the teacher was not blind to the outcome measures. In the present study, anxiety score and physical component score of the SF36 improved and participants who were interviewed reported that they felt that the singing had been helpful in their everyday lives. HAD scores were not particularly high at baseline and patients were selected on the basis that they had symptomatic COPD and were able to attend, rather than because they were felt specifically to have a dysfunctional breathing pattern or psychological difficulties. It is possible that greater improvements might be found in particularly anxious sub-populations. The benefit in the physical rather than the mental component is consistent with the open workshop participants' reports that they felt physically different after the sessions. The immediate benefits reported by patients participating in the open workshops and from the interviews with singing patients are consistent with the previous observation that singing has an acute effect of reducing gas trapping [10]. Most reported sustained physical benefits, specifically identifying the value of learning breathing control. Several participants also mentioned that they felt that it was important to keep up the vocal exercises and singing in order to maintain any gains which may suggest that a longer period is necessary. The present study is also consistent with previous work that has found that singing training does not change, at least over the time courses explored to date, parameters such as exercise capacity [10,23]. We had expected that even if the singing class did not improve exercise tolerance it might hasten recovery, with patients adopting a more efficient breathing strategy. However, recovery time for heart rate, oxygen saturation or symptoms did not improve. Given the subjective benefits in physical sensation described by patients, the lack of change in exercise capacity is interesting. One possibility is that the improvements are entirely mediated in a psychological fashion. Since anxiety and depression are common and important co-morbidities in COPD [25,26] a novel approach to dealing with them could in any case be useful, particularly a non-drug treatment. 
An analogy would be the use of exercise prescriptions to treat depression [27]. Given the clear effects on wellbeing mentioned by the participants, one could also speculate that there might be benefits to efficacy and control in the absence of great physiological improvement. A patient's experience of their illness will be affected by a range of psychosocial factors in addition to their physical condition. All patients interviewed reported doing more singing and other pleasurable activities in their lives, suggesting that there may be long term benefits to participation in such a group. Consistent with this, singing has been used with some success in small studies to treat chronic pain where it seemed to improve coping [15] and after knee surgery [28]. A second possibility is that the 'dose' of singing was insufficient to alter conventional measures of breathing control or exercise capacity and that a longer or more intense period would be required to produce changes apparent in daily life. The singing teacher reported that she felt that after 6 weeks she was \"only just starting to make progress\" with many of the participants. It is interesting that in the current study, patients reported that what was identified to them as a 'diaphragmatic' breathing pattern was helpful. This involves facilitating outward movement of the abdomen while reducing upper ribcage movement during inspiration. It has been thought to be helpful in some patients, but although a reduction in oxygen cost of breathing has been noted [29], the approach has been associated with both an increased sensation of breathlessness and reduced mechanical efficiency in severe COPD [30,31] and is not recommended for routine use in the recent British Thoracic Society/Association for Chartered Physiotherapist in Respiratory Care(BTS/ACPRC) guidelines on \"Physiotherapy management of the adult, medical, spontaneously breathing patient\" [12]. The improvement in breath hold time observed in the control group was unexpected. A strength of the study was that the singing teacher was blind to the outcome measures used so could not \"teach to the test\". The teacher speculated that the singing group may have learnt to take a more controlled or comfortable breath and therefore smaller breath in, leading paradoxically to a reduction in breath hold time. Hyperventilation has been shown to increase breath hold time in patients with respiratory disease which might also be relevant [22]. If the benefits of participating in a singing group are largely psychological, it would suggest that attention would need to be focused on the aspects of the singing that addressed this, or even on the choice of material, rather than focussing on particular nuances of technique. Singing is, of course, likely to be a therapy that suits some people and not others, and the benefits that accrue are likely to be greatest in those who enjoy the experience. This is to some extent different from pulmonary rehabilitation where the health benefits of exercise are generally accepted. As with pulmonary rehabilitation the physiological and psychological effects could be complementary. A limitation of our study was the absence of an active control group. The previous paper of Bonilha et al. showed similar improvements of quality of life in patients submitted to singing training or handcraft artwork [10], so the observed improvements in anxiety and quality of life in the trial may have been due to regular contact with a social group rather than the singing specifically. 
However, in our study patients reported immediate positive effects on well being (probably because of a reduction in dynamic hyperinflation [10]) and this may therefore have reinforced participation in singing groups (as opposed to other 'social activities' which, although social have no physical effect), making the provision of singing classes a good strategy for reducing social isolation, a significant problem in chronic respiratory disease [32].\n\nBODY.CONCLUSIONS:\nSinging lessons improved anxiety and the physical component score of the SF36 but did not improve measures of breathing control, functional exercise capacity or recovery time. Participants reported that they found the singing beneficial and reported positive changes in their physical ability and wellbeing. It is likely that the effects of singing training will vary between individuals, but that it will be a positive experience for those who choose to take part. Further work is needed to quantify the magnitude and duration of improvement benefits, to set them in context against the resource implications of making singing groups for patients more widely available as a palliative therapy.\n\nBODY.COMPETING INTERESTS:\nThe authors declare that they have no competing interests.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nNSH, VML, VJH, PC and MIP conceived the study; VML, PC, EJF, AE and JLK performed the study measurements. VML, VJH, AE and NSH performed the data analysis. VML wrote the first draft of the paper to which all authors subsequently made contributions. All authors read and approved the final manuscript.\n\nBODY.PRE-PUBLICATION HISTORY:\nThe pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2466/10/41/prepub\n\nBODY.SUPPLEMENTARY MATERIAL:\nAdditional file 1Help Yourself - physiotherapy for people with respiratory symptoms\". Standard booklet given to all patients participating in the trial.Click here for file Additional file 2Singing for breathing - description of program. Detailed description of singing program, selection of materials and conduct of classes.Click here for file Additional file 3Open sessions satisfaction survey. Survey used for participants to assess the open singing sessions.Click here for file Additional file 4Structured interview template. Template used to guide assessments by psychologists following the singing trial.Click here for file\n\n**Question:** Compared to Usual care what was the result of Singing classes on Single breath counting and functional exercise capacity?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
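The between-group p-values in Table 2 of the singing trial can be checked directly from the published means and standard deviations. The sketch below assumes the reported "unpaired t test" is a pooled-variance Student's t-test on the change scores, with n = 15 singers and n = 13 controls.

```python
# Sketch: recomputing selected Table 2 comparisons from summary statistics.
# Assumes pooled-variance Student's t-tests on n = 15 singers vs n = 13 controls.
from scipy import stats

table2 = {
    # outcome: (singing mean, singing sd, control mean, control sd, reported p)
    "single breath counting": (+0.3, 7.7, +2.0, 2.7, 0.46),
    "ISWT distance (m)":      (+26.0, 52.6, +11.3, 83.0, 0.58),
    "SF36 PCS":               (+7.5, 14.6, -3.8, 8.4, 0.02),
}

for outcome, (m1, s1, m2, s2, reported_p) in table2.items():
    t, p = stats.ttest_ind_from_stats(mean1=m1, std1=s1, nobs1=15,
                                      mean2=m2, std2=s2, nobs2=13)
    print(f"{outcome}: t = {t:.2f}, p = {p:.2f} (reported {reported_p})")
```

For the two outcomes named in the question above (single breath counting and shuttle walk distance) the recomputed p-values stay well above 0.05, consistent with the "no significant difference" label.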
991
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Singing teaching as a therapy for chronic respiratory disease - a randomised controlled trial and qualitative evaluation\n\n ABSTRACT.BACKGROUND:\nDespite optimal pharmacological therapy and pulmonary rehabilitation, patients with COPD continue to be breathless. There is a need to develop additional strategies to alleviate symptoms. Learning to sing requires control of breathing and posture and might have benefits that translate into daily life.\n\nABSTRACT.METHODS:\nTo test this hypothesis we performed a randomised controlled trial, comparing a six week course of twice weekly singing classes to usual care, in 28 COPD patients. The experience of singing was assessed in a qualitative fashion, through interviews with a psychologist. In addition, we surveyed patients with chronic respiratory conditions who participated in a series of open singing workshops.\n\nABSTRACT.RESULTS:\nIn the RCT, the physical component score of the SF36 improved in the singers (n = 15) compared to the controls (n = 13); +7.5(14.6) vs. -3.8(8.4) p = 0.02. Singers also had a significant fall in HAD anxiety score; -1.1(2.7) vs. +0.8(1.7) p = 0.03. Singing did not improve single breath counting, breath hold time or shuttle walk distance. In the qualitative element, 8 patients from the singing group were interviewed. Positive effects on physical sensation, general well-being, community/social support and achievement/efficacy emerged as common themes. 150 participants in open workshops completed a questionnaire. 96% rated the workshops as \"very enjoyable\" and 98% thought the workshop had taught them something about breathing in a different way. 81% of attendees felt a \"marked physical difference\" after the workshop.\n\nABSTRACT.CONCLUSION:\nSinging classes can improve quality of life measures and anxiety and are viewed as a very positive experience by patients with respiratory disease; no adverse consequences of participation were observed.\n\nABSTRACT.TRIAL REGISTRATION:\nCurrent Controlled Trials - ISRCTN17544114.\n\nBODY.BACKGROUND:\nChronic obstructive pulmonary disease (COPD) causes breathlessness, which can occur on exertion or, in some individuals, at rest. Although therapy including bronchodilators, pulmonary rehabilitation and oxygen can improve symptoms, the underlying pathology is largely irreversible, meaning that many patients remain limited despite optimal therapy [1,2]. The pattern of breathing typically adopted in COPD is caused by the presence of expiratory flow limitation which leads patients to adopt higher operating lung volumes - dynamic hyperinflation [3]. Although this pattern allows increased expiratory flow rates, this comes at the expense of a greater inspiratory load and mechanical disadvantage for the respiratory muscles. A rapid breathing pattern reduces expiratory time and will exacerbate this. 
The disparity between respiratory work and ventilatory output that develops is an important determinant of symptoms [4]. A range of non-pharmacological measures for breathlessness have been trialled in respiratory patients [5], including pursed lip breathing [6,7], yoga [8,9], singing [10] and laughter [11], all based on the premise that changing patients' pattern of breathing will improve symptoms A limitation of some physiotherapy approaches is that by focusing attention on breathing pattern they may accentuate awareness of respiratory limitation [12]. Singing requires the development of skills in controlling posture and breath that might be transferrable to everyday life. Breathlessness is a complex sensation [13]; singing lessons might offer techniques that address both the sensory component - largely control of respiratory pattern to reduce hyperinflation, and the affective component - a conscious experience of using the breath for 'something positive'. In addition, there is some evidence that singing may have beneficial effects on wellbeing in healthy [14] and chronic disease populations [15,16]. To evaluate this further we used three approaches; firstly a randomized controlled trial of a six week course of singing training in COPD, secondly an evaluation of a program of open 'drop-in' singing lessons for patients with respiratory disease and thirdly a qualitative evaluation of patients' experience of participating in singing.\n\nBODY.METHODS.RANDOMIZED CONTROLLED TRIAL:\nPatients with COPD, diagnosed according to the GOLD guidelines, who were attending respiratory clinics at the Royal Brompton Hospital, were invited to participate. The study was approved by the Royal Brompton, Harefield and National Heart and Lung Institute Research Ethics Committee and all subjects gave written informed consent. The trial was registered at Current Controlled Trials - ISRCTN17544114. Demographics, height, weight, clinical history and spirometry (Microlab, CareFusion, Kent, UK,) were recorded. Participants completed the Hospital Anxiety and Depression (HAD) Questionnaire [17], St George's Respiratory Questionnaire [18], and the Short Form 36 Questionnaire [19]. Functional exercise capacity was assessed using the incremental shuttle walk test (ISWT). Time to recovery of oxygen saturation, Borg dyspnoea score and heart rate following the walk was documented [20]. Control of breathing was assessed using two measures. 1) Breath hold test - subjects held their breath from maximum inspiration [21,22]. 2) Single breath counting - subjects were instructed to breathe in and then count out loud in time with a metronome running at 60 beats/min[23]. Three attempts at each manoeuvre were made and the mean values recorded. Both techniques were in routine use in the physiotherapy department for the assessment of hyperventilation. All subjects received a thirty minute standard session on breathing control and techniques to manage breathlessness, delivered by one of two senior respiratory physiotherapists involved in the study. Pursed lip breathing and nose breathing were also discussed in relation to managing episodes of shortness of breath. Each subject received a standard Royal Brompton Hospital \"Help Yourself - physiotherapy for people with respiratory symptoms\" booklet (Additional file 1) and was advised to practice the techniques at home. At the end of the baseline session, patients were randomised to either the singing or control groups, using block randomization through consecutive sealed envelopes. 
The subjects in the singing group attended a hospital-based workshop, led by a singing teacher (PC), twice weekly for six weeks. Each session lasted approximately one hour and included teaching of posture, relaxation and vocal exercises. Tasks of increasing difficulty were given as the sessions progressed - described in more detail in additional file 2. Each subject was given homework and an accompanying CD of songs to practice at home. The teacher was unaware of the tests being used to assess breathing control, so that she would not 'teach to the test'. The control group had no further intervention. At seven weeks follow-up, study participants were again assessed by the same respiratory physiotherapists, who were blinded to treatment allocation. All subjects were instructed not to tell the physiotherapist which group they had been allocated to. Control of breathing, quality of life and functional exercise capacity were measured as at baseline. Statistical analysis was performed using StatView 4.0, using t-tests to compare change in outcome measures between patients and controls. A p value of < 0.5 was taken as significant.\n\nBODY.METHODS.EVALUATION OF OPEN SINGING WORKSHOPS FOR RESPIRATORY PATIENTS:\nAlongside the randomized control study, we ran open, twice-weekly, 'Singing for BreathingTM' workshops for patients with any respiratory condition who were at Royal Brompton Hospital either as inpatients, day cases or outpatients. Conditions included interstitial lung disease, cystic fibrosis, asthma, bronchiectasis and COPD. The sessions were advertised through posters in the hospital and by word of mouth to patients from the arts team, respiratory nurses, physiotherapists and other healthcare professionals. Patients involved in the randomized control study were excluded from these sessions as were those where infection control issues existed. The open sessions were led by the same singing teacher as the randomised study and encompassed relaxation and posture education, as well as phonation exercises. Patients attending these workshops for the first time were asked to complete an anonymous satisfaction questionnaire at the end (Additional file 3).\n\nBODY.METHODS.QUALITATIVE SURVEY OF PATIENT EXPERIENCE OF SINGING:\nFollowing completion of the randomized control study, a chartered counselling psychologist (AE), who had not otherwise participated in the study, interviewed a random sample of eight individuals who had been in the singing arm of the trial. The thirty minute interviews were structured and each subject was asked about 1) physical changes they had noticed. 2) Emotional changes they had noticed. 3) Behavioural changes they had noticed. 4) Any detrimental effects or negative experiences. Participants were assured that they could make negative comments if they wished and that their responses would be anonymous. The interview sheet is shown in additional file 4.\n\nBODY.RESULTS.RANDOMIZED CONTROLLED TRIAL:\nParticipation in the study is described in the CONSORT diagram (Figure 1). Ninety two patients were approached to participate in the study. Fifty four did not wish to take part, one person who was already enrolled in another trial was excluded and one lived too far away to participate. Of the remaining thirty six patients, twenty were allocated to the singing group and sixteen to the control group. Of the singing group, two subjects withdrew from the study once the workshops had begun and three did not attend the final assessment once completing the workshops. 
In the control group, one withdrew and two did not attend the final assessment. Data are therefore presented for the 15 singers and 13 controls who completed the study. Of these, 6 attended 12 sessions, 4 attended 11, 2 came to 10 and 3 to 9. Figure 1CONSORT Flow Diagram. The baseline characteristics of the singing and control groups were well matched (Table 1). Four of each group were on long term oxygen therapy. In singers, there was a significant improvement in HAD anxiety score -1.1(2.7) vs. +0.8(1.7) p = 0.03 and SF-36 physical component score +7.5(14.6) vs. -3.8(8.4) p = 0.02. Breath hold time actually increased in the control group relative to the singing group -0.3(6.9) sec vs. +5.3(5.7) sec (p = 0.03). There was no significant difference in single breath counting, functional exercise capacity or recovery times following ISWT (Table 2). On direct questioning by the physiotherapist at the end of the study all participants in the singing group reported that they had found the workshops helpful, had practiced their singing at home and that they had found the singing helped them to control their breathing. Table 1 Baseline characteristics Total n = 28 Singing n = 15 Control n = 13 P value Age (yrs) 67.3 (8.1) 66.6 (9.3) 68.1 (6.8) 0.6 FEV 1 % predicted 37.2 (18.6) 36.8 (15.4) 37.7 (22.4) 0.9 Breath hold time (s) 26.3 (11.1) 27.4 (11.7) 24.9 (10.5) 0.9 Single breath counting (n) 23.4 (6.9) 24.8 (7.7) 21.8 (5.7) 0.3 HAD - anxiety score 5.8 (2.9) 6.3 (3.1) 5.3 (2.6) 0.4 HAD -depression score 5.8 (3.1) 5.7 (2.8) 5.8 (3.6) 1.0 SGRQ Total 48.4 (14.5) 51.1 (14.8) 45.3 (14.0) 0.3 SF36 PCS 36.8 (18.4) 32.0 (14.0) 42.2 (21.8) 0.1 SF36 MCS 52.0 (17.9) 47.4 (14.8) 57.2 (20.1) 0.2 ISWT (m) 198.6 (131.3) 174.0 (137.2) 226.9 (123.3) 0.3 Subjective recovery (s) 124.2 (77.6) 115.9 (50.4) 133.8 (102.0) 0.6 O 2 sat n recovery (s) 77.7 (71.6) 73.5 (44.6) 82.6 (95.7) 0.7 HR recovery (s) 131.0 (89.5) 97.6 (40.8) 169.6 (114.3) 0.03* FEV 1 - forced expiratory volume in one second, HAD - hospital anxiety and depression score, SGRQ - St George's Respiratory Questionnaire, SF36 short form 36, PCS - physical component score, MCS - mental component score, ISWT - incremental shuttle walk test distance, HR heart rate. *p < 0.05 unpaired t test singing group vs control group. Table 2 Changes from baseline Singing group n = 15 Control group n = 13 Δ breath hold time (s) - 0.3 (6.9) + 5.3 (5.7) 0.029* Δ single breath counting + 0.3 (7.7) + 2.0 (2.7) 0.46 Δ HAD anxiety - 1.1 (2.7) + 0.8 (1.7) 0.033* Δ HAD depression - 1.1 (2.5) - 0.1 (1.7) 0.21 Δ SGRQ Total - 1.1 (10.6) - 0.4 (5.6) 0.81 Δ ISWT (m) + 26 (52.6) + 11.3 (83.0) 0.58 Δ ISWT subjective recovery (s) + 9.9 (60.7) - 7.4 (81.7) 0.53 Δ O 2 Sat n recovery (s) + 47.3 (67.6) + 32.2 (124.7) 0.69 Δ HR recovery time (s) + 29 (63.8) + 19.4 (110.0) 0.78 Δ SF36 PCS + 7.5 (14.6) - 3.8 (8.4) 0.02* Δ SF36 MCS + 2.5 (20.9) - 3.2 (10.5) 0.38 HAD - hospital anxiety and depression score, SGRQ - St George's Respiratory Questionnaire, ISWT - incremental shuttle walk test distance, HR heart rate, SF36 short form 36, PCS - physical component score, MCS - mental component score. *p < 0.05 unpaired t test singing group vs control group. \n\nBODY.RESULTS.EVALUATION OF OPEN SINGING WORKSHOPS FOR RESPIRATORY PATIENTS:\n150 patients attended the open workshops. 61% had never attended singing lessons or workshops previously. 96% rated the workshops as \"very enjoyable\" and 98% thought the workshop had taught them something about breathing in a different way. 
81% of attendees felt a marked physical difference after the workshop. 92% of the participants said that they would like to attend a further singing workshop at the hospital. The remaining patients stated that the distance to the hospital would be the main reason for not attending.\n\nBODY.RESULTS.QUALITATIVE SURVEY OF PATIENT EXPERIENCE OF THE SINGING CLASSES:\nEight patients who had participated in the singing groups were interviewed. The effect of the group was described by all in mainly positive terms and no participants reported any negative effects. Positive effects occurred in two main areas: physical and general well-being. Positive physical effects reported by the participants often related to the breathing training which was an integral part of the program, particularly \"diaphragmatic breathing\". This training brought a feeling of awareness and control which many of the participants found helped their breathing and eased symptoms of breathlessness. Singing training itself brought breathing control too; \"It has made my life easier; I would have liked this when I was first diagnosed\", \"I increased my out breath from 4 to 14 counts\", \"I started breathing much better, from the stomach\", \"The exercises, thinking about breathing and relaxing when I have (breathing) problems....this has been very useful\", \"I always felt better afterwards physically\". Other positive physical effects described were an impact on lifestyle and functional ability; \"I have better posture now\"; \"Walking better, I go out more when it's not cold\", \"Now things are less of a chore, housework is no longer a struggle\". Beneficial effects relating to general wellbeing also emerged. Relating to mood/pleasure participants reported that; \"It was very enjoyable\", \"It opened up a new lease of life\", \"Emotionally......during singing it lifts you. I feel on top on the world. I also feel like that the day after. It makes COPD a lot easier to live with\", \"Its uplifting to sing...this diagnosis is gloomy so the psychological effect of the group is good\". A feeling of community/social support was also reported; \"Nice to have human contact, we achieved something together in the group\", \"I enjoyed the social contact it was great fun\", \"In six weeks we gelled as a group\", \"It felt good to be part of a team\". Achievement/efficacy - many participants had carried aspects of the courses into daily life. Many of them continued to sing songs and perform the exercises from the group using printed sheets or a CD. It seems that learning to sing in a group brought a sense of achievement. \"I do the exercises from the CD 2-3 times a week. I am looking for a local choir\"; \"I will keep it up; it's more enjoyable than other kinds of exercise\"; \"I learnt a lot, you are never too old to learn\"; \"The group taught me how and when to breathe; I got to relive my past (of singing) and get praise\". Others reported that participating in the group had motivated them to reengage with pulmonary rehabilitation classes, or to take up exercises such as swimming and walking. One was having private singing lessons and several expressed a desire for such groups to be ongoing and made more available to patients with respiratory problems.\n\nBODY.DISCUSSION:\nThe main findings of this study are that (1) the program of singing classes improved quality of life and anxiety but did not improve the control of breathing measures, or functional exercise capacity. 
(2) In interviews, patients who had participated in the trial reported benefits in their physical performance and general well-being as well as a sense of achievement and self-efficacy. (3) Individuals who chose to participate in the open singing sessions were overwhelmingly positive about the experience.\n\nBODY.DISCUSSION.SIGNIFICANCE OF FINDINGS:\nThe idea that singing was beneficial for health goes back at least to the 19th century [24]. Two previous studies have looked specifically at singing in COPD patients. Bonilha et al randomised patients to either singing lessons, or a handicraft class once a week for 6 months [10]. Although singing practice produced an acute increase in inspiratory capacity, implying a reduction in gas trapping, it did not alter basal dyspnoea index (BDI). SGRQ improved equally in both groups and the singing group had an improvement in maximum expiratory pressure. Exercise capacity was not assessed. Engen studied 7 patients with emphysema who attended twice weekly vocal instruction classes for 6 weeks [23]. There was no change in spirometry, inspiratory muscle strength or exercise capacity, but patients had an improvement in single breath counting, which was associated with a change from a 'clavicular' to a 'diaphragmatic' pattern of breathing. In that study there was no control group and the teacher was not blind to the outcome measures. In the present study, anxiety score and physical component score of the SF36 improved and participants who were interviewed reported that they felt that the singing had been helpful in their everyday lives. HAD scores were not particularly high at baseline and patients were selected on the basis that they had symptomatic COPD and were able to attend, rather than because they were felt specifically to have a dysfunctional breathing pattern or psychological difficulties. It is possible that greater improvements might be found in particularly anxious sub-populations. The benefit in the physical rather than the mental component is consistent with the open workshop participants' reports that they felt physically different after the sessions. The immediate benefits reported by patients participating in the open workshops and from the interviews with singing patients are consistent with the previous observation that singing has an acute effect of reducing gas trapping [10]. Most reported sustained physical benefits, specifically identifying the value of learning breathing control. Several participants also mentioned that they felt that it was important to keep up the vocal exercises and singing in order to maintain any gains which may suggest that a longer period is necessary. The present study is also consistent with previous work that has found that singing training does not change, at least over the time courses explored to date, parameters such as exercise capacity [10,23]. We had expected that even if the singing class did not improve exercise tolerance it might hasten recovery, with patients adopting a more efficient breathing strategy. However, recovery time for heart rate, oxygen saturation or symptoms did not improve. Given the subjective benefits in physical sensation described by patients, the lack of change in exercise capacity is interesting. One possibility is that the improvements are entirely mediated in a psychological fashion. Since anxiety and depression are common and important co-morbidities in COPD [25,26] a novel approach to dealing with them could in any case be useful, particularly a non-drug treatment. 
An analogy would be the use of exercise prescriptions to treat depression [27]. Given the clear effects on wellbeing mentioned by the participants, one could also speculate that there might be benefits to efficacy and control in the absence of great physiological improvement. A patient's experience of their illness will be affected by a range of psychosocial factors in addition to their physical condition. All patients interviewed reported doing more singing and other pleasurable activities in their lives, suggesting that there may be long term benefits to participation in such a group. Consistent with this, singing has been used with some success in small studies to treat chronic pain where it seemed to improve coping [15] and after knee surgery [28]. A second possibility is that the 'dose' of singing was insufficient to alter conventional measures of breathing control or exercise capacity and that a longer or more intense period would be required to produce changes apparent in daily life. The singing teacher reported that she felt that after 6 weeks she was \"only just starting to make progress\" with many of the participants. It is interesting that in the current study, patients reported that what was identified to them as a 'diaphragmatic' breathing pattern was helpful. This involves facilitating outward movement of the abdomen while reducing upper ribcage movement during inspiration. It has been thought to be helpful in some patients, but although a reduction in oxygen cost of breathing has been noted [29], the approach has been associated with both an increased sensation of breathlessness and reduced mechanical efficiency in severe COPD [30,31] and is not recommended for routine use in the recent British Thoracic Society/Association for Chartered Physiotherapist in Respiratory Care(BTS/ACPRC) guidelines on \"Physiotherapy management of the adult, medical, spontaneously breathing patient\" [12]. The improvement in breath hold time observed in the control group was unexpected. A strength of the study was that the singing teacher was blind to the outcome measures used so could not \"teach to the test\". The teacher speculated that the singing group may have learnt to take a more controlled or comfortable breath and therefore smaller breath in, leading paradoxically to a reduction in breath hold time. Hyperventilation has been shown to increase breath hold time in patients with respiratory disease which might also be relevant [22]. If the benefits of participating in a singing group are largely psychological, it would suggest that attention would need to be focused on the aspects of the singing that addressed this, or even on the choice of material, rather than focussing on particular nuances of technique. Singing is, of course, likely to be a therapy that suits some people and not others, and the benefits that accrue are likely to be greatest in those who enjoy the experience. This is to some extent different from pulmonary rehabilitation where the health benefits of exercise are generally accepted. As with pulmonary rehabilitation the physiological and psychological effects could be complementary. A limitation of our study was the absence of an active control group. The previous paper of Bonilha et al. showed similar improvements of quality of life in patients submitted to singing training or handcraft artwork [10], so the observed improvements in anxiety and quality of life in the trial may have been due to regular contact with a social group rather than the singing specifically. 
However, in our study patients reported immediate positive effects on well being (probably because of a reduction in dynamic hyperinflation [10]) and this may therefore have reinforced participation in singing groups (as opposed to other 'social activities' which, although social have no physical effect), making the provision of singing classes a good strategy for reducing social isolation, a significant problem in chronic respiratory disease [32].\n\nBODY.CONCLUSIONS:\nSinging lessons improved anxiety and the physical component score of the SF36 but did not improve measures of breathing control, functional exercise capacity or recovery time. Participants reported that they found the singing beneficial and reported positive changes in their physical ability and wellbeing. It is likely that the effects of singing training will vary between individuals, but that it will be a positive experience for those who choose to take part. Further work is needed to quantify the magnitude and duration of improvement benefits, to set them in context against the resource implications of making singing groups for patients more widely available as a palliative therapy.\n\nBODY.COMPETING INTERESTS:\nThe authors declare that they have no competing interests.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nNSH, VML, VJH, PC and MIP conceived the study; VML, PC, EJF, AE and JLK performed the study measurements. VML, VJH, AE and NSH performed the data analysis. VML wrote the first draft of the paper to which all authors subsequently made contributions. All authors read and approved the final manuscript.\n\nBODY.PRE-PUBLICATION HISTORY:\nThe pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2466/10/41/prepub\n\nBODY.SUPPLEMENTARY MATERIAL:\nAdditional file 1Help Yourself - physiotherapy for people with respiratory symptoms\". Standard booklet given to all patients participating in the trial.Click here for file Additional file 2Singing for breathing - description of program. Detailed description of singing program, selection of materials and conduct of classes.Click here for file Additional file 3Open sessions satisfaction survey. Survey used for participants to assess the open singing sessions.Click here for file Additional file 4Structured interview template. Template used to guide assessments by psychologists following the singing trial.Click here for file\n\n**Question:** Compared to Usual care what was the result of Singing classes on Recovery time after incremental shuttle walk test?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
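Each row above pairs a prompt and a gold completion with a Label drawn from three conclusion classes. A minimal scoring sketch follows; the exact class strings and the suffix handling are assumptions, introduced to reconcile the option wording in the prompts ("significantly increase") with labels stored in the past tense ("significantly increased").

```python
# Sketch: exact-match scoring of answers against the Label column.
# The class strings and the trailing-"d" handling below are assumptions.
CLASSES = ("significantly increased", "no significant difference", "significantly decreased")

def normalise(answer: str) -> str:
    """Map a free-text answer such as '**Answer:** ...' onto a label class."""
    text = answer.lower().replace("**answer:**", "").strip()
    for cls in CLASSES:
        if cls.rstrip("d") in text:      # accepts both "increase" and "increased"
            return cls
    return text                          # leave unrecognised answers unchanged

rows = [
    {"id": 990, "completion": "**Answer:** no significant difference\n", "label": "no significant difference"},
    {"id": 991, "completion": "**Answer:** no significant difference\n", "label": "no significant difference"},
]
accuracy = sum(normalise(r["completion"]) == r["label"] for r in rows) / len(rows)
print(f"exact-match accuracy on these rows: {accuracy:.2f}")
```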
1011
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Comparing the Hemodynamic Effects of Spinal Anesthesia in Preeclamptic and Healthy Parturients During Cesarean Section\n\n ABSTRACT.BACKGROUND:\nDespite controversies about the safest anesthetic technique for cesarean delivery in severely preeclamptic women, there is evidence that supports the use of spinal anesthesia in this group of patients.\n\nABSTRACT.OBJECTIVES:\nThis prospective randomized clinical trial was designed to determine the hemodynamic effects of low-dose spinal bupivacaine and the incidence of spinal anesthesia-associated hypotension in severely preeclamptic and healthy parturients undergoing cesarean sections.\n\nABSTRACT.PATIENTS AND METHODS:\nSpinal anesthesia was performed with 10 mg (= 2 mL) hyperbaric 0.5% bupivacaine plus 2.5 μg sufentanil in two groups of patients after they received 500 mL of IV lactated Ringer's solution. Heart rate and blood pressure were recorded before spinal anesthesia and at two minutes intervals for 15 minutes after the block, and then every five minutes until the end of the surgery. Hypotension was defined as more than 25% of decline in the mean arterial blood pressure compared to the baseline in both groups (or systolic blood pressure < 100 mmHg in healthy parturients) and was treated with 5 mg IV ephedrine. The total amounts of intravenous administered fluid and the total doses of ephedrine were recorded for each patient as well.\n\nABSTRACT.RESULTS:\nThe incidence rate of hypotension among the preeclamptic patients was lower than that of the healthy parturients, despite the former group receiving smaller volumes of intravenous fluids (P < 0.05). The total doses of IV ephedrine for treating hypotension were significantly lower among the preeclamptic patients (3.2 mg in preeclamptic patients versus 7.6 mg in normotensive patients) (P = 0.02). The one-minute Apgar score was significantly lower for the preeclamptic parturients (8.4 ± 0.7 versus 7.2 ± 1.5) (P = 0.001), but there was no significant difference in the five-minute Apgar scores between the two groups.\n\nABSTRACT.CONCLUSIONS:\nOur results confirm that low-dose bupivacaine spinal anesthesia is associated with a lower risk of hypotension than previously believed, and it can therefore be safely used in severe preeclamptic women undergoing cesarean delivery.\n\nBODY.1. BACKGROUND:\nPregnancy-induced hypertension is a major cause of morbidity and mortality in obstetrics, complicating 3% - 8% of pregnancies. Severe preeclampsia poses a dilemma for anesthesiologists, and there is some controversy about the best anesthetic technique for cesarean delivery in such cases (1, 2). Because of the risks related to airway edema, difficulty with the airway or failed intubation, hypertensive response to direct laryngoscopy, and aspiration pneumonitis, general anesthesia is associated with more untoward outcomes in this particular group of patients (3, 4). 
When there is no contraindication for performing regional anesthesia, risk-benefit considerations strongly favor neuraxial techniques over general anesthesia for cesarean delivery in cases of severe preeclampsia. Regional anesthesia techniques have been widely used recently, however, spinal anesthesia, once considered contraindicated due to the common belief that the sudden and extensive sympathetic blockade following the subarachnoid block would result in severe hypotension and compromise uteroplacental blood flow in this group of patients (5-8). Although controversial, some studies have shown the effectiveness of colloid loading on reducing the incidence of hypotension in spinal anesthesia (9, 10), but vasopressor agents and volume loading, which are commonly used to manage spinal anesthesia-induced hypotension, could put the preeclamptic patients at increased risk of hypertension and pulmonary edema (6). Recent evidence has challenged this view, suggesting that spinal anesthesia may in fact be an appropriate choice for preeclamptic women when cesarean delivery is planned, as long as neuraxial anesthesia is not contraindicated (e.g., coagulopathy, eclampsia with persistent neurologic deficits) (2, 5, 8). Although the relative safety of the subarachnoid block in these patients has been demonstrated, there are few studies that compare the differences in the hemodynamic changes and newborn well-being after single-shot spinal anesthesia between preeclamptic and healthy parturients\n\nBODY.2. OBJECTIVES:\nThis prospective randomized clinical trial was designed to compare the hemodynamic effects and the incidence of spinal anesthesia-associated hypotension after spinal anesthesia with bupivacaine plus sufentanil in severely preeclamptic versus healthy parturients undergoing cesarean sections.\n\nBODY.3. PATIENTS AND METHODS:\nAfter obtaining institutional ethics committee approval, 80 parturients (37 healthy and 43 severely preeclamptic parturients) that were being cared for in our unit from April 2011 to July 2012 were enrolled in the study after providing informed consent. Severe preeclampsia was defined as a systolic arterial blood pressure of 160 mmHg or higher, or a diastolic blood pressure of 110 mmHg or higher, associated with proteinuria > 5 g in 24 hours. Patients who were excluded were those with coagulopathy (including those with platelet counts < 50,000), placental abruption, severe fetal distress, a history of allergy to local anesthetics, oliguria of less than 500 mL in 24 hours or persistently < 30 mL/hour, cerebral or visual disturbances, pulmonary edema, hemodynamic instability, local infection of the spinal injection site, or refusal of a spinal block. All patients in the preeclampsia group received a 4 g loading dose of intravenous magnesium sulfate (Mg SO4) followed by a 1 g/hour infusion for 24 hours for seizure prophylaxis. Intravenous hydralazine of 5 mg was given at 20-minute intervals to decrease diastolic blood pressure to approximately 90 mmHg. Before performing spinal anesthesia on each patient, preoperative fluid administration equal to 10 mL/kg of Ringer's lactate solution was administered over the course of 15 - 20 minutes. All patients received 1500 - 2000 mL lactated Ringer's solution after spinal anesthesia and during the operation. 
The volume of administered fluid was not restricted in the preeclamptic patients because of the contracted intravascular volume in this group of patients and the high incidence of hypotension caused by spinal anesthesia-induced sympathetic blockade. Patients were monitored with standard monitoring devices including automated blood pressure cuffs, electrocardiogram, and pulse oximetry. Spinal anesthesia was performed with 10 mg hyperbaric 0.5% bupivacaine plus 2.5 - 3 μg sufentanil (2.5 mL volumes) in two groups in the sitting position in the L3-L4 or L4-L5 vertebral interspaces. Each patient was then placed in the supine position with a left lateral tilt of 15-20 degrees. The height of the sensory block was assessed using a pinprick test, and a 10°-15° head down-tilt (Trendelenburg position) was initiated if a T4 sensory level was not achieved at 10 minutes after the spinal injection. After achieving an adequate sensory block (T4 level), the patient was prepared for surgery. Heart rate and blood pressure were recorded before performing spinal anesthesia at two-minute intervals for 15 minutes after the block, and then every five minutes until the end of the surgery. Hypotension was defined as more than 25% decline in mean arterial blood pressure (MAP) compared to the baseline in both groups (or systolic blood pressure (SBP) < 100 mmHg in healthy parturients) and was treated with 5 mg IV ephedrine. The total amounts of intravenous administered fluid and the total doses of ephedrine were recorded as well. Based on the findings of previous studies, we calculated that at least 38 patients per group were required to show a 25% difference in the incidence of hypotension, with 80% power at the 5% level. Data are presented as number, median and range, mean ± SD, or percentage as appropriate. Fisher's exact test was used for intergroup comparisons of the incidence of hypotension. Mean values of most of the quantitative study variables were compared by using the unpaired Student's t-test. A P value of less than 0.05 was considered to indicate statistical significance. Data entry and analysis were performed using SPSS software version 16.\n\nBODY.4. RESULTS:\nEighty patients (severe preeclampsia = 43 and healthy = 37) were studied. Demographic data, the times from spinal injection to delivery, the median sensory blocked levels at the time of incision, the volumes of estimated blood loss, and the surgical durations were similar between the two groups (Table 1). Table 1. Maternal, Neonatal, and Anesthetic Considerations a Variable Healthy Preeclampsia P Value \n n \n 37 43 \n Age, y \n 28.1 ± 5 29.3 ± 6.6 0.12 \n Gestational age, week \n 38.8 ± 1.2 34.3 ± 3.8 0.00 \n Base line MAP \n 101.1±10.1 119.5±15.9 0.02 \n Upper Sensory Level (Median, Range) \n T4 (T2-T5) T4 (T2-T5) 0.32 \n IV fluid, mL \n 2500 ± 0.1 2400 ± 0.2 0.000 \n Spinal puncture to delivery interval, minute \n 17.5 ± 2.5 17.1 ± 2.5 0.83 \n Ephedrine dose, mg \n 7.5 ± 9.2 3.2 ± 7.8 0.04 \n APGAR score 1-min (Range) \n 8.4 ± 0.7, (5 - 10) 7.2 ± 1.5, (2 - 10) 0.001 \n APGAR score 5-min (Range) \n 9.4 ± 0.7, (7 - 10) 8.6 ± 1.1, (5 - 10) 0.35 a Values are expressed as mean ± SD unless otherwise indicated. The mean gestational age and mean one-minute Apgar scores in the patients with severe preeclampsia were significantly lower than those of the healthy parturients (Table 1). However, there was no significant difference in the five-minute APGAR score between the two groups. 
The incidence rate of hypotension in the preeclamptic patients (55.8%) was less than that of the healthy parturients (89.2%) (Table 2), despite the former receiving smaller volumes of intravenous fluids (2.4 versus 2.5 lit) (Table 1) (P = 0.001). Table 2. Incidence of Hypotension and Changes in Blood Pressure After Spinal Anesthesia in the Two Groups Variable Healthy Preeclampsia P Value \n Incidence of MAP hypotension (%) \n 33 (89.2%) 24 (55.8%) 0.001 \n Lowest SBP mmHg \n 98 ± 12.9 116 ± 18.6 0.02 \n Lowest DBP mmHg \n 47.5 ± 9.5 66.3 ± 14.1 0.03 \n Lowest MAP mmHg \n 64.9 ± 10.1 83.7 ± 14.9 0.008 The SBP, DBP, and MAP measured at the baseline were higher for the patients with preeclampsia, and the mean lowest SBP and MAP measured among the preeclamptic patients were consistently higher than the corresponding values among the healthy parturients (Table 2). Furthermore, the total doses of IV ephedrine for treating hypotension were significantly lower for the preeclamptic patients than for the normotensive patients (3.2 ± 7.8 mg versus 7.5 ± 9.2 mg) (P = 0.04).\n\nBODY.5. DISCUSSION:\nSpinal anesthesia-associated hypotension may occur in up to 64% - 100% of pregnant women undergoing cesarean delivery (2). Severely preeclamptic patients have been considered to be at higher risk of severe hypotension (1, 2, 5-8), and the concern of severe hypotension caused by subarachnoid block may often deter the anesthesiologist from choosing this technique for this group of patients. Epidural anesthesia has traditionally been regarded to be safer for preeclamptic parturients as it does not produce sudden hypotension. However, some studies have shown that the two techniques produce a similar incidence and severity of hypotension in preeclamptic parturients (6, 7, 11). There is growing interest in using spinal anesthesia on preeclamptic patients because of its simplicity, faster onset, lower dose of injected local anesthetic (which decreases the probability of systemic toxicity), and less tissue trauma caused by the use of a smaller gauge spinal needle (12-14). As a result of this interest, a number of studies have been conducted to show the hemodynamic consequences of spinal anesthesia in patients with preeclampsia. A prospective study by Aya et al. found that the risk of hypotension following spinal anesthesia in preeclamptic patients was significantly lower than the risk among healthy term parturients (17% in preeclamptic parturients and 53% in healthy parturients) (2). In another study, the same author suggested that the risk of hypotension following a subarachnoid block in preeclampsia was related to preeclampsia-associated factors rather than a small uterine size (15). Similar to the studies by Aya et al., the incidence of hypotension in severely preeclamptic patients undergoing spinal anesthesia for cesarean delivery was found to be significantly lower in comparison to the rate among healthy parturients (55% versus 89%) in our study. Factors such as difference in gestational age, the carrying of a smaller fetus, less aortocaval compression, sympathetic hyperactivity, and high vascular tone might have led to this finding (1, 2, 8, 16). The injection of different doses of bupivacaine (10 mg versus 8 - 12 mg) for the induction of the subarachnoid block and the different criteria for defining hypotension (a 25% versus 30% decline to baseline MAP) might explain why the incidence of hypotension was higher in both groups in our study compared to the corresponding rates in Aya et al.'s study. 
According to two other studies conducted by Mendes et al. (17) and Saha et al. (8), the hemodynamic changes and newborn well-being appeared to be comparable in severely preeclamptic and healthy parturients submitted to spinal anesthesia for cesarean section, and spinal anesthesia seemed to be a safe option for patients with severe preeclampsia. The ephedrine requirement for treatment of spinal anesthesia-induced hypotension in preeclampsia has been reported to be lower than that required by healthy parturients (2, 18). Preeclamptics have been reported as requiring significantly less phenylephrine to treat hypotension as well (8). These results were comparable to our findings, in that the total doses of IV ephedrine for treating hypotension were significantly lower for the preeclamptic patients (3.2 ± 7.8 mg) than for the normotensive patients (7.5 ± 9.2 mg) (P = 0.04). There is little evidence in the current literature supporting the use of phenylephrine as the vasopressor of choice in high-risk pregnancies such as those involving preeclampsia (19), so we chose to use ephedrine for treating hypotension in our patients. More studies are needed to investigate the effects of vasopressors while considering the influence on feto-maternal physiology in patients with preeclampsia. The results of a review by Dyer et al. showed that after spinal anesthesia for cesarean section, patients with preeclampsia had a lower susceptibility to hypotension and less impairment of cardiac output than healthy parturients (20). In a prospective observational study on 15 parturients with severe preeclampsia, no clinically significant change in cardiac output was shown after the subarachnoid block (21). The focus of our study was on the blood pressure changes during spinal anesthesia in the preeclamptic patients, and therefore we did not measure the cardiac output fluctuations in our patients. Further studies with larger sample sizes evaluating cardiac output are needed for better understanding of hemodynamic changes during spinal anesthesia in this group of patients. It is believed that the incidence of spinal anesthesia-induced hypotension is related to the local anesthetic dose, so one particular strategy to minimize the hemodynamic disruption after spinal anesthesia involves using small intrathecal local anesthetic doses. In a pilot study which compared the hemodynamic consequences of two doses of spinal bupivacaine (7.5 mg versus 10 mg) for cesarean delivery in those with severe preeclampsia, pre-delivery MAP was lower and the ephedrine requirements were greater in the 10 mg group (22). In another study, Roofthoof and Van de Velde have shown that when low-dose spinal anesthesia (6.5 mg bupivacaine) was administered with sufentanil as part of a combined spinal-epidural (CSE) technique in shorter surgeries (less than 60 minutes), the need for epidural supplementation was rare (23). The ED95 of intrathecal bupivacaine coadministered with intrathecal 2.5 μg sufentanil using CSE anesthesia for cesarean section in severely preeclamptic patients was reported to be 8.82 mg in another study, and using smaller doses of intrathecal bupivacaine in the patients resulted in a decrease of incidences of maternal hypotension and vasopressor requirements (24). However, no studies have compared CSE with single-shot spinal anesthesia in severe preeclampsia, and further research is needed to elucidate the best strategy to optimize the hemodynamics and uteroplacental perfusion in this particular group of patients. 
Considering the neonatal outcomes after various anesthesia techniques in cesarean delivery among preeclamptic patients, no statistically significant difference was found in the one- and five-minute Apgar scores and the umbilical artery blood gas markers between the two groups of patients receiving spinal or general anesthesia (25). Other studies in support of subarachnoid block have also shown that transient neonatal depression and birth asphyxia are more common among preeclamptic women who have received general anesthesia (26). Comparing umbilical arterial fetal base deficit and other markers of maternal and neonatal well-being in 70 preeclamptic patients undergoing cesarean delivery who were randomized into groups receiving either spinal or general anesthesia, the spinal group had a higher mean umbilical arterial base deficit and a lower median umbilical arterial pH, but other markers of a compromised neonatal condition, including the requirement for neonatal resuscitation, an Apgar score < 7, an umbilical arterial pH < 7.2, and the need for neonatal intermittent positive pressure ventilation were the same among the two groups (27). In comparison with healthy subjects, patients with severe preeclampsia had a younger gestational age (34 weeks versus 39 weeks) in our study, which is one of the likely causes of the lower one-minute Apgar scores of the neonates among the first group. Although there was evidence as early as 1950 that preeclampsia attenuates spinal anesthesia-induced hypotension, it has taken a long time for clinical trials to demonstrate the safety of spinal and CSE anesthesia in preeclamptic parturients. Recently, after five decades of research, the relationship between spinal anesthesia, pre-eclampsia, and hypotension can be properly acknowledged and put into clinical practice (28). Because of an altered balance of vascular tone, reduced responses to endogenous pressors, and increased synthesis of vasodilator prostaglandins and nitric oxide, the normal pregnant patient is very sensitive to spinal anesthesia. These effects increase dependence on sympathetic vascular tone in normal pregnancy, and this can be the main cause of spinal anesthesia-induced hypotension in healthy parturients, while damaged vascular epithelium results in persistent vasoconstriction in preeclampsia (8, 16). There is a dramatic increase in the use of spinal anesthesia for cesarean delivery in severe preeclampsia that could be related to the documented safety of subarachnoid block in this group of patients. Therefore, single-shot subarachnoid block may be a good choice for cesarean delivery in patients with severe preeclampsia, since it has been shown to be safe for both the mother and the neonate (28).\n\nBODY.5. DISCUSSION.5.1. CONCLUSION:\nOur results have also confirmed that single-shot low-dose bupivacaine spinal anesthesia is associated with a lower risk of hypotension and vasopressor requirements in comparison to the rates of healthy subjects, and could be safely used in patients with severe preeclampsia undergoing cesarean delivery. However, more studies with the CSE technique using smaller doses of local anesthetics and larger sample sizes are suggested. 
Further research is needed to find the best strategies to optimize hemodynamics and uteroplacental perfusion in severely preeclamptic parturients during spinal anesthesia for cesarean delivery.\n\n**Question:** Compared to Low-dose bupivacaine spinal anesthesia in healthy parturients what was the result of Low-dose bupivacaine spinal anesthesia in pre-eclamptic parturients on IV ephedrine total dosage?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly decreased\n" } ]
significantly decreased
1012
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Comparing the Hemodynamic Effects of Spinal Anesthesia in Preeclamptic and Healthy Parturients During Cesarean Section\n\n ABSTRACT.BACKGROUND:\nDespite controversies about the safest anesthetic technique for cesarean delivery in severely preeclamptic women, there is evidence that supports the use of spinal anesthesia in this group of patients.\n\nABSTRACT.OBJECTIVES:\nThis prospective randomized clinical trial was designed to determine the hemodynamic effects of low-dose spinal bupivacaine and the incidence of spinal anesthesia-associated hypotension in severely preeclamptic and healthy parturients undergoing cesarean sections.\n\nABSTRACT.PATIENTS AND METHODS:\nSpinal anesthesia was performed with 10 mg (= 2 mL) hyperbaric 0.5% bupivacaine plus 2.5 μg sufentanil in two groups of patients after they received 500 mL of IV lactated Ringer's solution. Heart rate and blood pressure were recorded before spinal anesthesia and at two minutes intervals for 15 minutes after the block, and then every five minutes until the end of the surgery. Hypotension was defined as more than 25% of decline in the mean arterial blood pressure compared to the baseline in both groups (or systolic blood pressure < 100 mmHg in healthy parturients) and was treated with 5 mg IV ephedrine. The total amounts of intravenous administered fluid and the total doses of ephedrine were recorded for each patient as well.\n\nABSTRACT.RESULTS:\nThe incidence rate of hypotension among the preeclamptic patients was lower than that of the healthy parturients, despite the former group receiving smaller volumes of intravenous fluids (P < 0.05). The total doses of IV ephedrine for treating hypotension were significantly lower among the preeclamptic patients (3.2 mg in preeclamptic patients versus 7.6 mg in normotensive patients) (P = 0.02). The one-minute Apgar score was significantly lower for the preeclamptic parturients (8.4 ± 0.7 versus 7.2 ± 1.5) (P = 0.001), but there was no significant difference in the five-minute Apgar scores between the two groups.\n\nABSTRACT.CONCLUSIONS:\nOur results confirm that low-dose bupivacaine spinal anesthesia is associated with a lower risk of hypotension than previously believed, and it can therefore be safely used in severe preeclamptic women undergoing cesarean delivery.\n\nBODY.1. BACKGROUND:\nPregnancy-induced hypertension is a major cause of morbidity and mortality in obstetrics, complicating 3% - 8% of pregnancies. Severe preeclampsia poses a dilemma for anesthesiologists, and there is some controversy about the best anesthetic technique for cesarean delivery in such cases (1, 2). Because of the risks related to airway edema, difficulty with the airway or failed intubation, hypertensive response to direct laryngoscopy, and aspiration pneumonitis, general anesthesia is associated with more untoward outcomes in this particular group of patients (3, 4). 
When there is no contraindication for performing regional anesthesia, risk-benefit considerations strongly favor neuraxial techniques over general anesthesia for cesarean delivery in cases of severe preeclampsia. Regional anesthesia techniques have been widely used recently, however, spinal anesthesia, once considered contraindicated due to the common belief that the sudden and extensive sympathetic blockade following the subarachnoid block would result in severe hypotension and compromise uteroplacental blood flow in this group of patients (5-8). Although controversial, some studies have shown the effectiveness of colloid loading on reducing the incidence of hypotension in spinal anesthesia (9, 10), but vasopressor agents and volume loading, which are commonly used to manage spinal anesthesia-induced hypotension, could put the preeclamptic patients at increased risk of hypertension and pulmonary edema (6). Recent evidence has challenged this view, suggesting that spinal anesthesia may in fact be an appropriate choice for preeclamptic women when cesarean delivery is planned, as long as neuraxial anesthesia is not contraindicated (e.g., coagulopathy, eclampsia with persistent neurologic deficits) (2, 5, 8). Although the relative safety of the subarachnoid block in these patients has been demonstrated, there are few studies that compare the differences in the hemodynamic changes and newborn well-being after single-shot spinal anesthesia between preeclamptic and healthy parturients\n\nBODY.2. OBJECTIVES:\nThis prospective randomized clinical trial was designed to compare the hemodynamic effects and the incidence of spinal anesthesia-associated hypotension after spinal anesthesia with bupivacaine plus sufentanil in severely preeclamptic versus healthy parturients undergoing cesarean sections.\n\nBODY.3. PATIENTS AND METHODS:\nAfter obtaining institutional ethics committee approval, 80 parturients (37 healthy and 43 severely preeclamptic parturients) that were being cared for in our unit from April 2011 to July 2012 were enrolled in the study after providing informed consent. Severe preeclampsia was defined as a systolic arterial blood pressure of 160 mmHg or higher, or a diastolic blood pressure of 110 mmHg or higher, associated with proteinuria > 5 g in 24 hours. Patients who were excluded were those with coagulopathy (including those with platelet counts < 50,000), placental abruption, severe fetal distress, a history of allergy to local anesthetics, oliguria of less than 500 mL in 24 hours or persistently < 30 mL/hour, cerebral or visual disturbances, pulmonary edema, hemodynamic instability, local infection of the spinal injection site, or refusal of a spinal block. All patients in the preeclampsia group received a 4 g loading dose of intravenous magnesium sulfate (Mg SO4) followed by a 1 g/hour infusion for 24 hours for seizure prophylaxis. Intravenous hydralazine of 5 mg was given at 20-minute intervals to decrease diastolic blood pressure to approximately 90 mmHg. Before performing spinal anesthesia on each patient, preoperative fluid administration equal to 10 mL/kg of Ringer's lactate solution was administered over the course of 15 - 20 minutes. All patients received 1500 - 2000 mL lactated Ringer's solution after spinal anesthesia and during the operation. 
The volume of administered fluid was not restricted in the preeclamptic patients because of the contracted intravascular volume in this group of patients and the high incidence of hypotension caused by spinal anesthesia-induced sympathetic blockade. Patients were monitored with standard monitoring devices including automated blood pressure cuffs, electrocardiogram, and pulse oximetry. Spinal anesthesia was performed with 10 mg hyperbaric 0.5% bupivacaine plus 2.5 - 3 μg sufentanil (2.5 mL volumes) in two groups in the sitting position in the L3-L4 or L4-L5 vertebral interspaces. Each patient was then placed in the supine position with a left lateral tilt of 15-20 degrees. The height of the sensory block was assessed using a pinprick test, and a 10°-15° head down-tilt (Trendelenburg position) was initiated if a T4 sensory level was not achieved at 10 minutes after the spinal injection. After achieving an adequate sensory block (T4 level), the patient was prepared for surgery. Heart rate and blood pressure were recorded before performing spinal anesthesia at two-minute intervals for 15 minutes after the block, and then every five minutes until the end of the surgery. Hypotension was defined as more than 25% decline in mean arterial blood pressure (MAP) compared to the baseline in both groups (or systolic blood pressure (SBP) < 100 mmHg in healthy parturients) and was treated with 5 mg IV ephedrine. The total amounts of intravenous administered fluid and the total doses of ephedrine were recorded as well. Based on the findings of previous studies, we calculated that at least 38 patients per group were required to show a 25% difference in the incidence of hypotension, with 80% power at the 5% level. Data are presented as number, median and range, mean ± SD, or percentage as appropriate. Fisher's exact test was used for intergroup comparisons of the incidence of hypotension. Mean values of most of the quantitative study variables were compared by using the unpaired Student's t-test. A P value of less than 0.05 was considered to indicate statistical significance. Data entry and analysis were performed using SPSS software version 16.\n\nBODY.4. RESULTS:\nEighty patients (severe preeclampsia = 43 and healthy = 37) were studied. Demographic data, the times from spinal injection to delivery, the median sensory blocked levels at the time of incision, the volumes of estimated blood loss, and the surgical durations were similar between the two groups (Table 1). Table 1. Maternal, Neonatal, and Anesthetic Considerations a Variable Healthy Preeclampsia P Value \n n \n 37 43 \n Age, y \n 28.1 ± 5 29.3 ± 6.6 0.12 \n Gestational age, week \n 38.8 ± 1.2 34.3 ± 3.8 0.00 \n Base line MAP \n 101.1±10.1 119.5±15.9 0.02 \n Upper Sensory Level (Median, Range) \n T4 (T2-T5) T4 (T2-T5) 0.32 \n IV fluid, mL \n 2500 ± 0.1 2400 ± 0.2 0.000 \n Spinal puncture to delivery interval, minute \n 17.5 ± 2.5 17.1 ± 2.5 0.83 \n Ephedrine dose, mg \n 7.5 ± 9.2 3.2 ± 7.8 0.04 \n APGAR score 1-min (Range) \n 8.4 ± 0.7, (5 - 10) 7.2 ± 1.5, (2 - 10) 0.001 \n APGAR score 5-min (Range) \n 9.4 ± 0.7, (7 - 10) 8.6 ± 1.1, (5 - 10) 0.35 a Values are expressed as mean ± SD unless otherwise indicated. The mean gestational age and mean one-minute Apgar scores in the patients with severe preeclampsia were significantly lower than those of the healthy parturients (Table 1). However, there was no significant difference in the five-minute APGAR score between the two groups. 
The incidence rate of hypotension in the preeclamptic patients (55.8%) was less than that of the healthy parturients (89.2%) (Table 2), despite the former receiving smaller volumes of intravenous fluids (2.4 versus 2.5 lit) (Table 1) (P = 0.001). Table 2. Incidence of Hypotension and Changes in Blood Pressure After Spinal Anesthesia in the Two Groups Variable Healthy Preeclampsia P Value \n Incidence of MAP hypotension (%) \n 33 (89.2%) 24 (55.8%) 0.001 \n Lowest SBP mmHg \n 98 ± 12.9 116 ± 18.6 0.02 \n Lowest DBP mmHg \n 47.5 ± 9.5 66.3 ± 14.1 0.03 \n Lowest MAP mmHg \n 64.9 ± 10.1 83.7 ± 14.9 0.008 The SBP, DBP, and MAP measured at the baseline were higher for the patients with preeclampsia, and the mean lowest SBP and MAP measured among the preeclamptic patients were consistently higher than the corresponding values among the healthy parturients (Table 2). Furthermore, the total doses of IV ephedrine for treating hypotension were significantly lower for the preeclamptic patients than for the normotensive patients (3.2 ± 7.8 mg versus 7.5 ± 9.2 mg) (P = 0.04).\n\nBODY.5. DISCUSSION:\nSpinal anesthesia-associated hypotension may occur in up to 64% - 100% of pregnant women undergoing cesarean delivery (2). Severely preeclamptic patients have been considered to be at higher risk of severe hypotension (1, 2, 5-8), and the concern of severe hypotension caused by subarachnoid block may often deter the anesthesiologist from choosing this technique for this group of patients. Epidural anesthesia has traditionally been regarded to be safer for preeclamptic parturients as it does not produce sudden hypotension. However, some studies have shown that the two techniques produce a similar incidence and severity of hypotension in preeclamptic parturients (6, 7, 11). There is growing interest in using spinal anesthesia on preeclamptic patients because of its simplicity, faster onset, lower dose of injected local anesthetic (which decreases the probability of systemic toxicity), and less tissue trauma caused by the use of a smaller gauge spinal needle (12-14). As a result of this interest, a number of studies have been conducted to show the hemodynamic consequences of spinal anesthesia in patients with preeclampsia. A prospective study by Aya et al. found that the risk of hypotension following spinal anesthesia in preeclamptic patients was significantly lower than the risk among healthy term parturients (17% in preeclamptic parturients and 53% in healthy parturients) (2). In another study, the same author suggested that the risk of hypotension following a subarachnoid block in preeclampsia was related to preeclampsia-associated factors rather than a small uterine size (15). Similar to the studies by Aya et al., the incidence of hypotension in severely preeclamptic patients undergoing spinal anesthesia for cesarean delivery was found to be significantly lower in comparison to the rate among healthy parturients (55% versus 89%) in our study. Factors such as difference in gestational age, the carrying of a smaller fetus, less aortocaval compression, sympathetic hyperactivity, and high vascular tone might have led to this finding (1, 2, 8, 16). The injection of different doses of bupivacaine (10 mg versus 8 - 12 mg) for the induction of the subarachnoid block and the different criteria for defining hypotension (a 25% versus 30% decline to baseline MAP) might explain why the incidence of hypotension was higher in both groups in our study compared to the corresponding rates in Aya et al.'s study. 
According to two other studies conducted by Mendes et al. (17) and Saha et al. (8), the hemodynamic changes and newborn well-being appeared to be comparable in severely preeclamptic and healthy parturients submitted to spinal anesthesia for cesarean section, and spinal anesthesia seemed to be a safe option for patients with severe preeclampsia. The ephedrine requirement for treatment of spinal anesthesia-induced hypotension in preeclampsia has been reported to be lower than that required by healthy parturients (2, 18). Preeclamptics have been reported as requiring significantly less phenylephrine to treat hypotension as well (8). These results were comparable to our findings, in that the total doses of IV ephedrine for treating hypotension were significantly lower for the preeclamptic patients (3.2 ± 7.8 mg) than for the normotensive patients (7.5 ± 9.2 mg) (P = 0.04). There is little evidence in the current literature supporting the use of phenylephrine as the vasopressor of choice in high-risk pregnancies such as those involving preeclampsia (19), so we chose to use ephedrine for treating hypotension in our patients. More studies are needed to investigate the effects of vasopressors while considering the influence on feto-maternal physiology in patients with preeclampsia. The results of a review by Dyer et al. showed that after spinal anesthesia for cesarean section, patients with preeclampsia had a lower susceptibility to hypotension and less impairment of cardiac output than healthy parturients (20). In a prospective observational study on 15 parturients with severe preeclampsia, no clinically significant change in cardiac output was shown after the subarachnoid block (21). The focus of our study was on the blood pressure changes during spinal anesthesia in the preeclamptic patients, and therefore we did not measure the cardiac output fluctuations in our patients. Further studies with larger sample sizes evaluating cardiac output are needed for better understanding of hemodynamic changes during spinal anesthesia in this group of patients. It is believed that the incidence of spinal anesthesia-induced hypotension is related to the local anesthetic dose, so one particular strategy to minimize the hemodynamic disruption after spinal anesthesia involves using small intrathecal local anesthetic doses. In a pilot study which compared the hemodynamic consequences of two doses of spinal bupivacaine (7.5 mg versus 10 mg) for cesarean delivery in those with severe preeclampsia, pre-delivery MAP was lower and the ephedrine requirements were greater in the 10 mg group (22). In another study, Roofthoof and Van de Velde have shown that when low-dose spinal anesthesia (6.5 mg bupivacaine) was administered with sufentanil as part of a combined spinal-epidural (CSE) technique in shorter surgeries (less than 60 minutes), the need for epidural supplementation was rare (23). The ED95 of intrathecal bupivacaine coadministered with intrathecal 2.5 μg sufentanil using CSE anesthesia for cesarean section in severely preeclamptic patients was reported to be 8.82 mg in another study, and using smaller doses of intrathecal bupivacaine in the patients resulted in a decrease of incidences of maternal hypotension and vasopressor requirements (24). However, no studies have compared CSE with single-shot spinal anesthesia in severe preeclampsia, and further research is needed to elucidate the best strategy to optimize the hemodynamics and uteroplacental perfusion in this particular group of patients. 
Considering the neonatal outcomes after various anesthesia techniques in cesarean delivery among preeclamptic patients, no statistically significant difference was found in the one- and five-minute Apgar scores and the umbilical artery blood gas markers between the two groups of patients receiving spinal or general anesthesia (25). Other studies in support of subarachnoid block have also shown that transient neonatal depression and birth asphyxia are more common among preeclamptic women who have received general anesthesia (26). Comparing umbilical arterial fetal base deficit and other markers of maternal and neonatal well-being in 70 preeclamptic patients undergoing cesarean delivery who were randomized into groups receiving either spinal or general anesthesia, the spinal group had a higher mean umbilical arterial base deficit and a lower median umbilical arterial pH, but other markers of a compromised neonatal condition, including the requirement for neonatal resuscitation, an Apgar score < 7, an umbilical arterial pH < 7.2, and the need for neonatal intermittent positive pressure ventilation were the same among the two groups (27). In comparison with healthy subjects, patients with severe preeclampsia had a younger gestational age (34 weeks versus 39 weeks) in our study, which is one of the likely causes of the lower one-minute Apgar scores of the neonates among the first group. Although there was evidence as early as 1950 that preeclampsia attenuates spinal anesthesia-induced hypotension, it has taken a long time for clinical trials to demonstrate the safety of spinal and CSE anesthesia in preeclamptic parturients. Recently, after five decades of research, the relationship between spinal anesthesia, pre-eclampsia, and hypotension can be properly acknowledged and put into clinical practice (28). Because of an altered balance of vascular tone, reduced responses to endogenous pressors, and increased synthesis of vasodilator prostaglandins and nitric oxide, the normal pregnant patient is very sensitive to spinal anesthesia. These effects increase dependence on sympathetic vascular tone in normal pregnancy, and this can be the main cause of spinal anesthesia-induced hypotension in healthy parturients, while damaged vascular epithelium results in persistent vasoconstriction in preeclampsia (8, 16). There is a dramatic increase in the use of spinal anesthesia for cesarean delivery in severe preeclampsia that could be related to the documented safety of subarachnoid block in this group of patients. Therefore, single-shot subarachnoid block may be a good choice for cesarean delivery in patients with severe preeclampsia, since it has been shown to be safe for both the mother and the neonate (28).\n\nBODY.5. DISCUSSION.5.1. CONCLUSION:\nOur results have also confirmed that single-shot low-dose bupivacaine spinal anesthesia is associated with a lower risk of hypotension and vasopressor requirements in comparison to the rates of healthy subjects, and could be safely used in patients with severe preeclampsia undergoing cesarean delivery. However, more studies with the CSE technique using smaller doses of local anesthetics and larger sample sizes are suggested. 
Further research is needed to find the best strategies to optimize hemodynamics and uteroplacental perfusion in severely preeclamptic parturients during spinal anesthesia for cesarean delivery.\n\n**Question:** Compared to Low-dose bupivacaine spinal anesthesia in healthy parturients what was the result of Low-dose bupivacaine spinal anesthesia in pre-eclamptic parturients on One-minute Apgar score?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly decreased\n" } ]
significantly decreased
1021
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Management of \n\n ABSTRACT.INTRODUCTION:\nPrevalence of diabetes and its complications have been a burden to the society from the ancient times, in the present and also will be in the future unless proper measures are taken to prevent its manifestation. There have been an increasing number of death associated-amputation cases which are mainly caused by nonhealing wounds. These facts urge researchers to develop new, more effective wound treatments for diabetic patients.\n\nABSTRACT.AIM:\nTo evaluate and compare the effect of Katupila Kalka (Securinega leucopyrus [Willd.] Muell. leaf paste) and Tila Taila (Sesamum indicum oil) in Madhumehajanya Vrana (diabetic wounds/ulcers) with Betadine ointment.\n\nABSTRACT.MATERIALS AND METHODS:\nA total of 23 patients of Madhumehajanya Vrana were chosen and randomly divided into two groups (Group A and B). Patients of Group A were treated with local application of Katupila Kalka with Tila Taila, whereas, in Group B, Betadine ointment was applied on the affected parts, once a day in the morning for 30 days. The relief in signs and symptoms were assessed on scoring pattern.\n\nABSTRACT.RESULTS:\nIn Group A, diabetic wounds treated with Katupila paste got healed within 28 days with minimal scar formation without any complications, whereas in Group B, wound was healed completely only in two patients within 28 days. In both the groups, no patients reported any adverse drug reaction during the entire course of treatment as well as in follow-up period.\n\nABSTRACT.CONCLUSION:\nStudy concluded that the drug Katupila Kalka possesses Vrana Ropana (wound healing) activity with fine scaring.\n\nBODY.INTRODUCTION:\nIn 2011, it was estimated that 366 million people worldwide had diabetes, representing roughly 8.3% of the adult population (20–79 years of age group). The number is expected to reach 552 million by 2030, representing 9.9% of the adult population. Diabetes is associated with various short and long-term complications, many of which, if left untreated, can be fatal. All these also have the potential to reduce the quality of life of people with diabetes and their families. The most common long-term complications affect the heart, eyes, kidneys, and feet.[1] Foot infections in persons with diabetes are a common and complex.[23] In addition to causing severe morbidities, they now account for the largest number of diabetes-related hospital bed-days[4] and are the most common proximate, nontraumatic cause of amputations.[56] Diabetic foot infections require careful attention and coordinated management, preferably by a multidisciplinary foot-care team.[7] Ayurveda has its own principles, but it is incorporating new theories and drugs in it and is presenting them according to its principles. Number of drugs known to Ayurveda since Vedic era is increasing with the time. 
Acharya Charaka says that a physician can add new drugs to these Mahakashayas by his own intelligence that will help him to treat the patients.[8] He also states that one should get the knowledge about plants from tribal people or cowherds as they know and use the plants in a better way.[9] However, after Acharya Bhavamishra, the process of addition of new drugs to science has diminished or somewhat ceased. It is said that the wounds/ulcers of the diabetic patient are cured with difficulty.[10] Even with the latest technology and modern medicine in hand, highly trained medical team around, yet the majority of the diabetic ulcers end up with more or less amputation of the concerned major or minor part of the lower limb. Securinega leucopyrus (Willd.) Muell. is known as Humari in India, Katupila in Sri Lanka and Spinous Fluggea in English. It is a desert climatic plant used topically in paste form for the healing of chronic and nonhealing wounds. Hence, in the light of the above references from classics, Katupila Kalka (paste) was selected for this study with an aim to assess its effect in the management of Madhumehajanya Vrana (diabetic ulcer).\n\nBODY.MATERIALS AND METHODS:\nDiagnosed patients of diabetic wounds were selected from outpatient and inpatient departments of Shalya Tantra irrespective of the age, sex, religion, caste, occupation, etc. The study was started after obtaining approval from the Institutional Ethics Committee (PGT/7-A/2012-2013/1964 dated 21/09/2012) and study has been registered under Clinical Trial Registry of India (No.: CTRI/2013/09/004020). Informed written consent from each patient was obtained before starting treatment course.\n\nBODY.MATERIALS AND METHODS.INCLUSION CRITERIA:\nDiagnosed cases of Madhumehajanya Vrana with Type I and Type II diabetes mellitus of 18–70 years age of either sex.\n\nBODY.MATERIALS AND METHODS.EXCLUSION CRITERIA:\nThe patients with a history of tuberculosis, wounds other than diabetic wounds, uncontrolled hypertension, cardiac diseases, such as myocardial infarction, coronary artery disease, and ischemic heart disease were excluded in this study. Patients with human immune virus (HIV), venereal disease research laboratory (VDRL) and hepatitis-B test positive were also excluded from the study.\n\nBODY.MATERIALS AND METHODS.LABORATORY INVESTIGATIONS:\nRoutine hematological parameters such as hemoglobin percentage, total leukocyte count, differential leucocyte count, bleeding time, clotting time, erythrocyte sedimentation rate; blood biochemical parameters such as fasting blood sugar, postprandial blood sugar, hepatitis B surface antigen, VDRL, and HIV; urine examination (routine and microscopic) was done to find out any other pathological condition.\n\nBODY.MATERIALS AND METHODS.RADIOLOGICAL EXAMINATION:\nX-ray of the affected foot (dorsoventral and lateral view) was done to rule out the osteomylitis of bone.\n\nBODY.MATERIALS AND METHODS.GROUPING AND POSOLOGY:\nTotally 23 patients of the diabetic wound were allocated randomly into two groups (Group A and B). \nGroup A (n = 13): In this group, Katupila Kalka (paste of S. leucopyrus leaves) with Tila Taila (sesame oil) was applied locally on the affected part. 
The dressing was done once in the morning up to 1 month or healing of wound which is earlierGroup B (n = 10): In this group, patients were treated with dressing of Betadine ointment once daily in the morning and treated as the standard control group.\n The leaves of Katupila were obtained from Sri Lanka and authenticated[11] at Pharmacognosy Laboratory, IPGT and RA, Jamnagar. Patients were advised to continue their anti-diabetic and anti-hypertensive medications (Allopathic/Ayurvedic) during treatment.\n\nBODY.MATERIALS AND METHODS.ASSESSMENT OF THERAPY:\nThe assessment of results was carried out mainly on the basis of relief in the signs and symptoms of the ulcers that are odor (Gandha), peri-wound skin (Varna), exudates level (Srava), type of exudates, pain (Vedana), wound measurement in cm (Akruti), and signs of infection [Table 1]. Table 1 Gradation for assessment of results Effect of therapy was statistically analysed by using student 't' test.\n\nBODY.MATERIALS AND METHODS.OVERALL EFFECT OF THERAPY:\n\nCured: 100% improvement in subjective and objective parametersMarked improvement: 70–99%Moderate improvement: 50–69%Mild improvement: 30–49%Unchanged: <30%.\n\n\nBODY.OBSERVATIONS:\nThe demographic data reveal that maximum patients were from the age group of 46 to 65 years (57%). Male patients were more that is 78.3% may be due to their irregular eating habits, addiction, excess standing work, and less care. Maximum patients were following Hindu religion (78%) and married (87%). Socioeconomically middle-class patients were in the majority (78.2%). The 78.27% of patients were having a history of diabetes and 21.73% of patients found with a history of both diabetes and hypertension. The 69.5% of patients had family history of diabetes. For the management of diabetic wound, 39.1% of patients had received medical treatment in past and 52.1% of patients had undergone surgery, whereas 8.6% of patients had both medical and surgical treatment. The majority of the patients that is 56.5% were suffering from wounds since 1 to 6 months, 34.7% of patients noted traumatic onset and maximum 73.9% were found with single wound. The 82.6% of patients were having Puya Srava (purulent exudates) and 95.6% of patients had Ghana Srava with 43.4% were having moderate Srava (discharge). Maximum numbers of patients presented without pain whereas only 34.9% of patients were found with pain. Bad odor was noted in all wounds and fever was found in maximum 73.91% of patients. In maximum 47.81% of patients the location of wound was at foot, as this part of the body more often gets trauma.\n\nBODY.RESULTS:\nIn this study, infection was controlled within 3–4 days and foul smell was controlled after 3 days in patients of Group A. In Group B, foul smell from ulcer continues until the wound was healed. The surrounded skin color became normal skin color within 7 days in maximum patients (76%) of Group A, whereas only 32% patients showed normal peri-wound skin color in Group B. In Group A, 55% of patients showed a complete cessation of discharge within 7 days whereas in Group B, exudation found reduced in 34% of patients at the end of 28th day. In all patients, pain sensation at wound site regained by 7th day in Group A, due to neo-vascularization and restored pain sensation in nerve fibers. In Group A, 12 patients wound size was reduced completely within 28 days, whereas in Group B, only 29% of patients wound size was reduced completely within 28 days. 
In Group A, signs of infections relieved 61% by the 7th day and totally relieved at the end of 28th day, whereas in Group B, patients infection was relieved 41% by the end of 28th day [Tables 2 and 3]. Table 2 Effect of therapy on signs and symptoms of wound in Group A Table 3 Effect of therapy on signs and symptoms of wound in Group B In Group A, 92.3% of patients were completely cured, whereas in Group B, 20% of patients were completely cured [Figure 1]. Wound healing effect of Katupila in four cases is depicted in Figure 2. Figure 1Overall effect of therapy Figure 2Before and after treatment wound status of Katupila treated cases\n\nBODY.DISCUSSION:\nIn this study, foul smell was controlled after 3 days in Group A patients, which was found statistically significant (P < 0.001). In Group B, foul smell from ulcer continues until the wound was healed. The Krimighna, Vishaghna, Rakshoghna (antimicrobial) properties of the Katupila helped to control the local infection and ultimately the bad odor.[12] In Group A, 100% normal skin coloration was achieved which was found statistically significant (P < 0.001). In 32% of patients of Group B, peri-wound skin color became normal which was found statistically significant (P > 0.05). Katupila may have Rakta Shodhana, Pittashamana, Varnya, Twaka Prasadana properties which helped to improve the surrounding wound skin color. Complete cessation of discharge within 7 days was observed in 55% of patients of Group A, while in remaining patients discharge was reduced by 28th day which was statistically significant (P < 0.001). Krimighna property of drug helped to control the infection. Hence, it can be said that Katupila paste had a better role than Betadine to reduce exudates level. In Group A, 77% of patients presented without pain while the pain was reported in 23% of patients due to diabetic neuropathy. After treating with Katupila, pain sensation regained by 7th day due to neo-vascularization and restored pain sensation in nerve fibers. The statistically insignificant result in pain relief was seen due to the restoration of pain sensation by correcting neuropathy. The Snigdha Guna of Tila Taila helped in control the vitiation of Vata and helped to relief in pain.[13] In Group B, there was minimal neovascularization, but the pain was relieved in 25% of patients due to the healed wounds which were statistically significant (P > 0.05). Hence, it can be inferred that Katupila is better than Betadine ointment to regain pain sensation (initial stage) as well as to relieve pain (in later stage). Complete wound healing was observed within 28 days in 92% cases so wound size reduction was found statistically highly significant (P < 0.001) which may be due to Katupila which may have Lekhana (scraping), Shodhana (cleaning) and Ropana (healing) properties. Madhura Rasa, Prinana (nourishment), Dhatuvardhana (tissue strength) and Dhatuposhana (nutrition) actions of Katupila helped to reduce the wound size by promoting healing and the rate of contraction. The infection at the wound site was assessed on the basis of odor, exudate level, local temperature, and swelling in both the groups. In Group A, signs of infections relieved 61% by the 7th day and totally relieved at the end of 28th day. Local application of Katupila reduces the discharge, swelling, pain by controlling the bacterial load, so wound became Shuddha (clean or healthy). Whereas in Group B, infection was relieved in 41% by the end of 28th day. 
Both drugs have shown statistically highly significant (P < 0.001) result in controlling infection. However, on the comparison, the duration of infection control by Katupila paste (that is 1st week) was found earlier than Betadine which took longer duration in controlling infections. The strength and positive finding of this study is Katupila Kalka had the potential to cure the diabetic wounds early than Betadine ointment. Findings in previously published case reports also showed the good potential in wound healing by Katupila.[1415] None of the adverse drug reaction noted during the treatment period. A reverse pharmacological study in the suitable animal model is a further scope of this work. As this drug is being used by traditional people and of herbal origin, there is a need to generate safety and toxicity profile of Katupila. The limitation of this research work is less number of patients due to the limited time of the postgraduate study.\n\nBODY.CONCLUSION:\nThe study concluded that Katupila Kalka (paste) possesses highly qualitative efficacy in \"Vrana Ropana\" with fine scaring. Therefore, it may be recommended that this low cost, easily preparable application may be prescribed for continuous use for the healing purposes.\n\nBODY.CONCLUSION.FINANCIAL SUPPORT AND SPONSORSHIP:\nIPGT & RA, Gujarat Ayurved University, Jamnagar.\n\nBODY.CONCLUSION.CONFLICTS OF INTEREST:\nThere are no conflicts of interest.\n\n**Question:** Compared to Betadine ointment what was the result of Katupila Kalka with Tila Taila on Adverse reactions?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
1284
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Ticagrelor vs. clopidogrel in patients with non-ST-elevation acute coronary syndrome with or without revascularization: results from the PLATO trial\n\n ABSTRACT.AIMS:\nThe optimal platelet inhibition strategy for ACS patients managed without revascularization is unknown. We aimed to evaluate efficacy and safety of ticagrelor vs. clopidogrel in the non-ST-elevation acute coronary syndrome (NSTE-ACS) subgroup of the PLATO trial, in the total cohort, and in the subgroups managed with and without revascularization within 10 days of randomization.\n\nABSTRACT.METHODS AND RESULTS:\nWe performed a retrospective analysis of the primary endpoint of cardiovascular death/myocardial infarction/stroke. Among 18 624 PLATO patients, 11 080 (59%) were categorized as NSTE-ACS at randomization. During the initial 10 days, 74% had angiography, 46% PCI, and 5% CABG. In NSTE-ACS patients, the primary endpoint was reduced with ticagrelor vs. clopidogrel [10.0 vs. 12.3%; hazard ratio (HR) 0.83; 95% confidence interval (CI) = 0.74–0.93], as was myocardial infarction (6.6 vs. 7.7%; HR 0.86; 95% CI = 0.74–0.99), cardiovascular death (3.7 vs. 4.9%; HR 0.77; 95% CI = 0.64–0.93), and all-cause death (4.3 vs. 5.8%; HR 0.76; 95% CI = 0.64–0.90). Major bleeding rate was similar between treatment groups (13.4 vs. 12.6%; HR 1.07; 95% CI = 0.95–1.19), but ticagrelor was associated with an increase in non-CABG major bleeding (4.8 vs. 3.8%; HR 1.28; 95% CI = 1.05–1.56). Within the first 10 days, 5366 (48.4%) patients were managed without revascularization. Regardless of revascularization or not, ticagrelor consistently reduced the primary outcome (HR 0.86 vs. 0.85, interaction P = 0.93), and all-cause death (HR 0.75 vs. 0.73, interaction P = 0.89) with no significant increase in overall major bleeding.\n\nABSTRACT.CONCLUSION:\nIn patients with NSTE-ACS, benefit of ticagrelor over clopidogrel in reducing ischaemic events and total mortality was consistent with the overall PLATO trial, independent of actually performed revascularization during the initial 10 days.\n\nBODY:\nSee page 2055 for the editorial comment on this article (doi:10.1093/eurheartj/ehu202)\n\nBODY.INTRODUCTION:\nDual antiplatelet therapy with aspirin and a P2Y12 inhibitor is standard of care in acute coronary syndromes (ACS).1,2 Ticagrelor is the first reversibly binding direct P2Y12 inhibitor. As opposed to clopidogrel and prasugrel, it does not require enzymatic activation, and causes faster, greater, and more consistent platelet inhibition compared with clopidogrel.3,4 The Platelet Inhibition and Patient Outcomes (PLATO) trial compared ticagrelor with clopidogrel in patients with ACS. Ticagrelor was superior in preventing ischaemic events (the composite of death from vascular causes, myocardial infarction, and stroke), as well as death from any cause, without a significant increase in all-cause major bleeding. 
PLATO included both invasively and non-invasively managed patients. The decision on which management strategy to pursue was made by the investigator.5 The third-generation thienopyridine prasugrel is, like clopidogrel, an irreversible P2Y12 inhibitor, but with faster and more consistent platelet inhibition.6 Prasugrel, when compared with clopidogrel, reduced ischaemic events in patients with ACS planned for PCI in the Trial to Assess Improvement in Therapeutic Outcomes by Optimizing Platelet Inhibition with Prasugrel (TRITON),7 but in non-ST-elevation acute coronary syndrome (NSTE-ACS) patients planned for management without revascularization, prasugrel showed no benefit over clopidogrel in the Targeted Platelet Inhibition to Clarify the Optimal Strategy to Medically Manage Acute Coronary Syndromes (TRILOGY-ACS) trial.8 The aim of the present study was to explore the effect of ticagrelor vs. clopidogrel in the total NSTE-ACS subgroup of the PLATO trial and also stratified by initial management with or without revascularization.\n\nBODY.METHODS.STUDY DESIGN:\nThe PLATO trial (ClinicalTrials.gov No. NCT00391872) was an international randomized double-blind, double-dummy phase III study comparing ticagrelor with clopidogrel in ACS. Details of the study design and overall results have been published previously.5,9 Briefly, a total of 18 624 patients were randomized to receive either ticagrelor or clopidogrel on background treatment with aspirin. The study included patients with ST-elevation MI intended for primary PCI, as well as patients with non-ST-elevation ACS, regardless if aimed for an initial invasive or non-invasive treatment strategy. It adhered to the Declaration of Helsinki, to the specifications of the International Conference of Harmonization, and to Good Clinical Practice; and was approved by national and institutional regulatory authorities and ethics committees. All participants provided written informed consent.\n\nBODY.METHODS.PATIENTS:\nTwo or more of the following inclusion criteria were required for patients enrolled without ST-elevation ACS at admission: (1) ST-segment changes on ECG indicating ischaemia [ST-segment depression or transient elevation (≥1 mm) in at least two contiguous leads]; (2) positive biomarker indicating myocardial necrosis (troponin I or T or CK-MB above the upper limit of normal); (3) one of the following: ≥60 years of age, previous myocardial infarction or coronary artery bypass surgery, coronary artery disease with ≥50% stenosis in ≥2 vessels, previous ischaemic stroke, transient ischaemic attack (TIA), carotid stenosis, cerebral revascularization, diabetes mellitus, peripheral artery disease, or chronic renal dysfunction. In the current analysis, NSTE-ACS was in the data-base defined as absence of either persistent ST-segment elevation of 1 mV for 20 min in two contiguous leads or new (or presumed new) left bundle branch block in entry ECG. In addition to the overall NSTE-ACS population, we also analysed outcomes in the subgroups of NSTE-ACS patients who initially underwent revascularization and those who were treated without early revascularization. Both cohorts required endpoint-free survival for 10 days post-randomization to determine revascularization status. 
The no-revascularization subgroup was defined as NSTE-ACS patients who did not undergo any revascularization procedure (PCI or CABG) with or without angiography during the first 10 days.\n\nBODY.METHODS.ENDPOINTS:\nThe primary efficacy endpoint of the present study was the composite of cardiovascular death, myocardial infarction, (excluding silent infarctions) and stroke. Each component alone and all-cause death were secondary efficacy endpoints. Major bleeding by PLATO criteria9 and life-threatening/fatal bleeding were the primary safety endpoints. Secondary safety endpoints were CABG-related major bleeding, non-CABG-related major bleeding, minor bleeding, intracranial bleeding, and fatal bleeding. Additional secondary safety outcomes were bleeding events as defined by TIMI (major, minor, and non-CABG related major) and GUSTO (severe and moderate). An independent central adjudication committee, unaware of treatment assignments, assessed all endpoints. Major bleeding as defined by TIMI was recorded from the electronic case report form, where a drop of 50 g/L in haemoglobin was used as a cut-off, but this did not necessarily require clinical evidence of bleeding.\n\nBODY.METHODS.STATISTICAL ANALYSES:\nPatient characteristics and medical history, in-hospital procedures and medications, discharge ACS status, and post-discharge medications are presented by treatment in the NSTE-ACS patient cohort as well as by revascularization status subgroup. Continuous variables are presented as median and 25th–75th percentiles; categorical variables are presented as number and percentage. The treatment effect of ticagrelor vs. clopidogrel on the primary and secondary endpoints was compared within the subgroup of NSTE-ACS patients. Kaplan–Meier estimated event rates at 360 days and total number of observed events during the study were presented for each endpoint. Hazard ratios (HR), confidence intervals (CI), and P-values from unadjusted Cox proportional hazards regression models were presented. Kaplan–Meier estimated event rates were plotted by treatment for the primary efficacy endpoint, all-cause death, major bleeding, and non-CABG related major bleeding. To examine whether the effect of ticagrelor in patients with NSTE-ACS differed based on early revascularization, Cox proportional hazards models were fitted for each endpoint using a treatment-by-revascularization interaction term. In these models, we adjusted for region of the world to account for differences in revascularization practice and the time-to-event was measured from a landmark at 10 days post-randomization. Kaplan–Meier rates 350 days post-landmark are presented for each treatment/revascularization category along with HR for ticagrelor vs. clopidogrel in the revascularization and medical management subgroups. Interaction P-values assess whether the treatment effect is different depending on revascularization status. Kaplan–Meier event rates adjusted for region are plotted by the four treatment/revascularization categories for the primary efficacy endpoint, all-cause death, major bleeding, and non-CABG related major bleeding. For consistency with the PLATO design paper,9 we also performed a 30-day landmark analysis; and as a sensitivity analysis, we also performed a 10-day landmark analysis of the full study population. 
Finally, the association of treatment with the primary efficacy endpoint was evaluated within several patient cohorts using Cox proportional hazards models, separately within the revascularization and non-revascularization subgroups. Forest plots present the HR for ticagrelor vs. clopidogrel in each subgroup cohort. Endpoints are defined using intent to treat and a P-value of 0.05 is used to denote statistical significance. Data were analysed using SAS version 9.2 for all analyses.\n\nBODY.RESULTS.PATIENT CHARACTERISTICS:\nThe PLATO trial included 18 624 patients with ACS, of which 11 080 patients were classified as NSTE-ACS at randomization. Of these, 5581 were randomized to ticagrelor and 5499 to clopidogrel (Figure 1). The baseline and in-hospital characteristics were similar between groups in the overall NSTE-ACS population (Table 1). In the overall NSTE-ACS group, 74% of patients had a coronary angiography performed, 46% underwent PCI, and 5% CABG. The discharge diagnosis (in this population categorized as NSTE-ACS at admission) was NSTEMI in 65%, STEMI in 8%, and unstable angina/other in 27%.\nTable 1Baseline characteristics and invasive procedures by randomized treatment in the overall NSTE-ACS population, and by initial treatment strategy (within the first 10 days after randomization) Overall NSTE-ACSNSTE-ACS with revascularizationNSTE-ACS without revascularizationTicagrelor (N = 5581)Clopidogrel (N = 5499)Ticagrelor (N = 2873)Clopidogrel (N = 2841)Ticagrelor (N = 2708)Clopidogrel (N = 2658)Demographics Age, median (25th–75th percentile), years64 (56–72)64 (56–72)63 (55–71)63 (55–71)65 (57–73)65 (57–73) Age ≥ 75 years, n (%)955 (17.1)1024 (18.6)420 (14.6)460 (16.2)535 (19.8)564 (21.2) Female gender, n (%)1746 (31.3)1746 (31.8)706 (24.6)716 (25.2)1040 (38.4)1030 (38.8) Body weight <60 kg, n (%)398 (7.2)389 (7.1)165 (5.8)172 (6.1)233 (8.6)217 (8.2) Body mass index, median (25th–75th percentile), kg/m227.5 (24.8–30.8)27.4 (24.8–30.5)27.5 (24.9–30.8)27.5 (24.9–30.5)27.5 (24.7–30.6)27.3 (24.6–30.5) GRACE risk score nomogram130 (112–150)130 (112–149)128 (110–145)127 (110–145)133 (114–154)134 (116–153)Cardiovascular risk factors, n (%) Current smoking1636 (29.4)1640 (29.9)1000 (34.8)960 (33.8)636 (23.6)680 (25.6) Hypertension3915 (70.2)3835 (69.8)1881 (65.5)1886 (66.4)2034 (75.3)1949 (73.4) Dyslipidaemia2885 (51.8)2852 (51.9)1515 (52.8)1569 (55.2)1370 (50.7)1283 (48.3) Diabetes mellitus1608 (28.9)1520 (27.7)756 (26.3)747 (26.3)852 (31.6)773 (29.1)Medical history, n (%) Angina pectoris2932 (52.6)2890 (52.6)1337 (46.5)1356 (47.7)1595 (59.1)1534 (57.8) Myocardial infarction1400 (25.1)1410 (25.7)609 (21.2)608 (21.4)791 (29.3)802 (30.2) Congestive heart failure397 (7.1)429 (7.8)99 (3.4)103 (3.6)298 (11.0)326 (12.3) Percutaneous coronary intervention (PCI)944 (16.9)918 (16.7)537 (18.7)517 (18.2)407 (15.1)401 (15.1) Coronary artery bypass graft (CABG)434 (7.8)474 (8.6)214 (7.4)218 (7.7)220 (8.1)256 (9.6) Transient ischaemic attack185 (3.3)189 (3.4)82 (2.9)74 (2.6)103 (3.8)115 (4.3) Non-hemorrhagic stroke246 (4.4)243 (4.4)81 (2.8)98 (3.5)165 (6.1)145 (5.5) Peripheral arterial disease400 (7.2)413 (7.5)174 (6.1)204 (7.2)226 (8.4)209 (7.9) Chronic renal disease273 (4.9)279 (5.1)117 (4.1)108 (3.8)156 (5.8)171 (6.4)Physical findings, median (25th–75th percentile) Heart rate (bpm)72 (64–80)72 (64–81)71 (62–80)72 (63–80)72 (64–82)72 (64–82) Systolic blood pressure (mmHg)135 (120–150)134 (120–150)135 (120–150)135 (120–150)134 (120–150)132 (120–150) Diastolic blood pressure (mmHg)80 (70–89)80 (70–87)80 
(70–89)80 (70–87)80 (70–89)80 (70–87)Risk indicators, n (%) Troponin positive, n (%)4356 (80.8)4323 (81.3)2522 (89.6)2486 (89.3)1834 (71.3)1837 (72.4) ST depression (≥1 mm)3158 (56.8)3201 (58.4)1547 (54.0)1535 (54.2)1611 (59.8)1666 (62.8) TIMI risk score > 24838 (89.7)4785 (89.8)2584 (91.3)2566 (91.8)2254 (88.0)2219 (87.7)Type of ACS at discharge, n (%) STEMI449 (8.1)437 (8.0)330 (11.5)305 (10.7)119 (4.4)132 (5.0) NSTEMI3605 (64.8)3525 (64.3)2045 (71.2)2025 (71.3)1560 (58.0)1500 (56.7) UA/other1509 (27.1)1524 (27.8)497 (17.3)510 (18.0)1012 (37.6)1014 (38.3)Antithrombotic treatment during index hospitalization, n (%) Aspirin5386 (96.6)5316 (96.8)2797 (97.4)2779 (97.8)2589 (95.9)2537 (95.8) Unfractionated heparin2910 (52.1)2856 (51.9)1845 (64.2)1845 (64.9)1065 (39.3)1011 (38.0) Low molecular weight heparin3181 (57.0)3084 (56.1)1568 (54.6)1501 (52.8)1613 (59.6)1583 (59.6) Fondaparinux194 (3.5)191 (3.5)87 (3.0)76 (2.7)107 (4.0)115 (4.3) Bivalirudin140 (2.5)133 (2.4)135 (4.7)122 (4.3)5 (0.2)11 (0.4) GP IIb/IIIa inhibitor1170 (21.0)1117 (20.3)982 (34.2)963 (33.9)188 (6.9)154 (5.8)Other drug from randomization to end of study, n (%) Beta-blocker4768 (85.4)4685 (85.2)2536 (88.3)2512 (88.4)2232 (82.4)2173 (81.8) ACE-inhibitor and/or ARB4693 (84.2)4602 (83.8)2447 (85.2)2433 (85.6)2246 (83.2)2169 (81.9) Statin5196 (93.1)5109 (92.9)2777 (96.7)2745 (96.6)2419 (89.3)2364 (88.9) Proton-pump inhibitor2776 (49.7)2613 (47.5)1618 (56.3)1536 (54.1)1158 (42.8)1077 (40.5) Calcium channel inhibitor1553 (27.8)1515 (27.6)720 (25.1)723 (25.4)833 (30.8)792 (29.8) Diuretic2393 (42.9)2280 (41.5)1073 (37.3)1037 (36.5)1320 (48.9)1243 (46.9)Invasive procedures Coronary angiography During first 10 days, n (%)4143 (74.5)4072 (74.2)2873 (100.0)2841 (100.0)1270 (47.2)1231 (46.6) After first 10 days, n (%)357 (6.4)335 (6.1)0 (0)0 (0)357 (13.3)335 (12.7) PCI During first 10 days, n (%)2590 (46.4)2550 (46.4)2590 (90.1)2550 (89.8)0 (0)0 (0) After first 10 days, n (%)279 (5.0)291 (5.3)11 (0.4)3 (0.1)268 (9.9)288 (10.8) CABG During first 10 days, n (%)296 (5.3)305 (5.5)296 (10.3)305 (10.7)0 (0)0 (0) After first 10 days, n (%)382 (6.8)375 (6.8)55 (1.9)55 (1.9)327 (12.1)320 (12.0)\nFigure 1Trial profile—patients classified by entry ECG and by treatment during the first 10 days after randomization. At 10 days post-randomization, 5366 patients were alive and had not undergone revascularization. There were regional differences in the proportion of patients undergoing revascularization during the initial 10 days (Asia/Australia 44%, Central/South America 40.3%, Europe/Middle East/Africa 49.8%, North America 71.6%). Non-revascularized and revascularized patients were of similar age (63 vs. 65), but patients who did not undergo revascularization were more likely to be of female gender (39 vs. 25%), and more likely to have comorbidities (e.g. previous myocardial infarction, heart failure, renal disease). Although non-revascularized patients had more comorbidities, they were less likely to be troponin I positive (Table 1). The proportion of patients with TIMI risk score >2 was slightly higher in the revascularization group (92 vs. 88%). During the first 10 days, 47% of non-revascularized patients underwent coronary angiography. When including only those who had angiography during the initial 10 days, female gender was more common in the non-revascularized (36%) than revascularized (25%). 
No significant coronary artery disease at angiography was noted in 32 and 0.7% of the non-revascularized and revascularized, respectively.\n\nBODY.RESULTS.EFFICACY IN THE OVERALL NON-ST-ELEVATION ACUTE CORONARY SYNDROME POPULATION:\nEfficacy and safety outcomes of the overall NSTE-ACS population are summarized in Table 2, and Kaplan–Meier curves are shown in Supplementary material online. The incidence of the primary composite endpoint was reduced with ticagrelor vs. clopidogrel (10.0 vs. 12.3%; HR 0.83; 95% CI = 0.74–0.93; P = 0.0013) (see Supplementary material online, Figure S1A). Cardiovascular death occurred less often in the ticagrelor group than in the clopidogrel group (3.7 vs. 4.9%; HR 0.77; 95% CI = 0.64–0.93; P = 0.0070), and myocardial infarction was also less common with ticagrelor vs. clopidogrel (6.6 vs. 7.7%; HR 0.86; 95% CI = 0.74–0.99; P = 0.0419), whereas stroke incidence did not differ significantly between treatment arms (1.3 vs. 1.4%; HR 0.95; 95% CI 0.69–1.33; P = 0.79). All-cause death was reduced in those treated with ticagrelor vs. clopidogrel (4.3 vs. 5.8%; HR 0.76; 95% CI = 0.64–0.90; P = 0.0020) (see Supplementary material online, Figure S1B).\nTable 2Efficacy and safety outcomes in patients with NSTE-ACSTicagrelor % (n)Clopidogrel % (n)HR (95% CI)P-valueEfficacy endpoints CV death/MI (excluding silent)/stroke10.0 (533)12.3 (630)0.83 (0.74, 0.93)0.0013 All-cause death/MI(excl. silent)/stroke10.5 (557)13.0 (664)0.82 (0.73, 0.92)0.0006 CV death/MI(all)/stroke/severe recurrent ischaemia/recurrent ischaemia/TIA/arterial thrombotic event15.5 (824)17.8 (918)0.88 (0.80, 0.96)0.0058 Myocardial infarction (excluding silent)6.6 (345)7.7 (392)0.86 (0.74, 0.99)0.0419 Cardiovascular death (includes vascular and unknown deaths)3.7 (194)4.9 (247)0.77 (0.64, 0.93)0.0070 Stroke1.3 (69)1.4 (71)0.95 (0.69, 1.33)0.79 All-cause death4.3 (224)5.8 (290)0.76 (0.64, 0.90)0.0020Safety endpoints Major bleeding (study criteria)13.4 (660)12.6 (618)1.07 (0.95, 1.19)0.26 Major or minor bleeding (study criteria)18.2 (900)16.3 (794)1.14 (1.03, 1.25)0.0078 Non-CABG related major bleeding (study criteria)4.8 (225)3.8 (176)1.28 (1.05, 1.56)0.0139 Fatal bleeding0.3 (13)0.4 (18)0.72 (0.35, 1.47)0.37 Life threatening or fatal bleeding (study criteria)6.6 (331)6.5 (315)1.05 (0.90, 1.22)0.56 Intracranial bleeding0.3 (14)0.2 (7)2.01 (0.81, 4.99)0.13 Other major bleeding7.2 (344)6.6 (318)1.08 (0.93, 1.25)0.34 Major bleeding (TIMI criteria)9.2 (452)8.7 (422)1.07 (0.94, 1.22)0.33 Major or minor bleeding (TIMI criteria)13.2 (653)12.3 (602)1.08 (0.97, 1.21)0.16 Non-CABG related major bleeding (TIMI criteria)2.9 (137)2.2 (99)1.39 (1.07, 1.80)0.0131 GUSTO severe bleeding3.1 (146)3.2 (151)0.96 (0.77, 1.21)0.74 GUSTO moderate or severe bleeding8.6 (416)7.8 (382)1.08 (0.94, 1.25)0.25Each treatment group is summarized as Kaplan–Meier rates at 360 days and total number of events during the study. P-values and hazard ratios (95% CI) come from unadjusted Cox models testing ticagrelor vs. clopidogrel.\n\nBODY.RESULTS.SAFETY:\nWith ticagrelor when compared with clopidogrel, there was no significant difference in PLATO major bleeding (13.4 vs. 12.6%; HR 1.07; 95% CI = 0.95–1.19; P = 0.26), but a higher rate of non-CABG-related major bleeding (4.8 vs. 3.8%; HR 1.28; 95% CI = 1.05–1.56; P = 0.0139) (see Supplementary material online, Figure S2A and B). There was no significant difference in the rate of life-threatening or fatal bleeding (6.6 vs. 
6.5%; HR 1.05; 95% CI = 0.90–1.22, P = 0.56), nor any significant difference in the rate of intracranial bleeding with ticagrelor compared with clopidogrel (0.3 vs. 0.2%; HR 2.01; 95% CI = 0.81–4.99; P = 0.13). The composite of major or minor bleeding (by PLATO criteria) occurred more often in the ticagrelor group (18.2 vs. 16.3%; HR 1.14; 95% CI = 1.03–1.25, P = 0.0078). When assessed by TIMI criteria, there was no significant difference in major or minor bleeding (13.2 vs. 12.3%; HR 1.08; 95% CI = 0.97–1.21; P = 0.16). TIMI major bleeding, GUSTO severe, and GUSTO moderate or severe bleeding also did not appear to differ significantly between ticagrelor and clopidogrel, whereas TIMI non-CABG-related major bleeding was more common in the ticagrelor group (Table 2).\n\nBODY.RESULTS.EFFICACY AND SAFETY ACCORDING TO TREATMENT STRATEGY:\nEvent rates were considerably higher in NSTE-ACS patients treated without revascularization compared with patients undergoing revascularization during the initial 10 days. For both revascularized and non-revascularized patients, there were similar proportional reductions of the primary endpoint with ticagrelor compared with clopidogrel (HR 0.86 vs. 0.85, interaction P = 0.93) (Figure 2A, Table 3) consistent with the overall trial. There was also a consistent reduction in all-cause death (HR 0.75 vs. 0.73; interaction P = 0.89) (Figure 2B). No significant difference in overall major bleeding was seen with ticagrelor vs. clopidogrel within each treatment strategy (revascularization/no revascularization) (Figure 3A). There was a higher incidence of non-CABG-related major bleeding with ticagrelor vs. clopidogrel in patients with NSTE-ACS with no significant interaction by invasive treatment strategy (HR 1.32 vs. 1.07; interaction P = 0.43) (Figure 3B). The primary outcome was reduced in major subgroups in both revascularized and non-revascularized patients (see Supplementary material online, Figures S3 and S4). The results were consistent with a 30-day landmark for revascularization (see Supplementary material online, Table S1), as well as in the full study population (see Supplementary material online, Table S2). In patients who underwent angiography during the initial 10 days with or without significant coronary disease, the effect of ticagrelor vs. clopidogrel was consistent (see Supplementary material online, Table S3).\nTable 3Interaction of ticagrelor treatment and revascularization within 10 days (adjusting for region) NSTE-ACS with revascularizationNSTE-ACS without revascularizationInteraction PNTicagrelor KM rateClopidogrel KM rateHR (95% CI)NTicagrelor KM rateClopidogrel KM rateHR (95% CI)Efficacy endpoints CV death/MI (excluding silent)/stroke54165.116.100.86 (0.68, 1.09)51899.6311.600.85 (0.72, 1.01)0.93 All-cause death/MI (excl. 
silent)/stroke54165.446.600.85 (0.67, 1.06)518910.1512.530.84 (0.71, 0.99)0.94 CV death/MI (all)/stroke/severe recurrent ischaemia/recurrent ischaemia/TIA/arterial thrombotic event52908.7310.310.86 (0.71, 1.03)510914.1415.160.97 (0.84, 1.13)0.29 Myocardial infarction (excluding silent)54383.523.880.90 (0.68, 1.21)52016.046.680.94 (0.75, 1.17)0.85 Cardiovascular death (includes vascular and unknown deaths)56481.642.330.76 (0.52, 1.13)52174.075.440.75 (0.58, 0.98)0.95 Stroke56320.670.591.18 (0.60, 2.34)52091.481.690.92 (0.58, 1.46)0.56 All-cause death56482.032.880.75 (0.53, 1.07)52174.776.650.73 (0.57, 0.93)0.89Safety endpoints Major bleeding (study criteria)49835.254.681.10 (0.84, 1.44)493111.8311.431.05 (0.88, 1.26)0.82 Major or minor bleeding (study criteria)48427.766.351.22 (0.97, 1.54)484714.5913.961.07 (0.91, 1.25)0.34 Non-CABG major bleeding (study criteria)52703.142.381.32 (0.92, 1.90)49332.782.791.07 (0.74, 1.56)0.43 Fatal or life-threatening major bleeding (study criteria)51732.252.011.18 (0.79, 1.76)49625.776.110.95 (0.75, 1.22)0.37 Other major bleeding (study criteria)51783.102.851.02 (0.72, 1.45)49456.505.681.16 (0.91, 1.49)0.55 Major bleeding (TIMI criteria)51023.422.791.21 (0.86, 1.70)49528.048.390.97 (0.79, 1.20)0.28 Major or minor bleeding (TIMI criteria)49905.244.561.12 (0.85, 1.47)493311.7811.161.08 (0.91, 1.29)0.85 Non-CABG major bleeding (TIMI criteria)53161.881.191.66 (1.01, 2.72)49522.051.841.19 (0.76, 1.87)0.34 GUSTO severe bleeding52861.161.490.75 (0.45, 1.26)49462.752.461.13 (0.77, 1.65)0.22 GUSTO moderate or severe bleeding51343.933.301.13 (0.82, 1.55)49457.185.961.19 (0.95, 1.51)0.78Kaplan–Meier (KM) rates 350 days after day 10 post-randomization.\nFigure 2Efficacy endpoints stratified by management strategy—Kaplan–Meier estimates of time to first occurrence of: (A) primary endpoint, (B) all-cause death, from 10 days post-randomization onward.\nFigure 3Bleeding stratified by revascularization—Kaplan–Meier estimate of (A) time to major bleeding according to the PLATO criteria from day 10 post-randomization, and (B) time to non-CABG major bleeding.\n\nBODY.DISCUSSION:\nIn this subgroup analysis, ticagrelor compared with clopidogrel reduced the composite endpoint of cardiovascular death, myocardial infarction, and stroke as well as the individual endpoints of cardiovascular death, myocardial infarction, and all-cause death without any significant difference in major bleeding in patients with an entry diagnosis of NSTE-ACS. The event curves for the primary composite endpoint and total mortality separated continuously for the duration of the trial. The benefits with ticagrelor were observed both in patients who underwent and in those who did not undergo early revascularization, regardless whether angiography was performed or not. These results are consistent with the previously reported subgroup analysis of all ACS patients with an intended strategy (prior to randomization) of no revascularization, where about one-fifth still underwent PCI and about 4% had CABG surgery before discharge.10 Although current guidelines advocate early invasive management in NSTE-ACS,1,2 a large proportion of patients are managed non-invasively.11,12 Patients who are managed without revascularization usually have more comorbidities, higher risk of bleeding, and inferior outcome than patients who are revascularized.13 The optimal platelet inhibition strategy in these patients has been uncertain. 
P2Y12 inhibition has previously been shown to reduce ischaemic events in NSTE-ACS patients managed without revascularization. In the Clopidogrel in Unstable Angina to Prevent Recurrent Events (CURE) trial, which randomized NSTE-ACS patients to receive either clopidogrel or placebo (on background aspirin treatment), 64% of patients did not undergo revascularization after randomization. There were almost identical relative reductions in CV death/MI/stroke with clopidogrel in the non-invasive and invasive subgroups.14 The Clopidogrel for High Atherothrombotic Risk and Ischemic Stabilization, Management and Avoidance (CHARISMA) trial studied the addition of clopidogrel to aspirin in a stable, more heterogeneous population at risk for atherothrombotic events. Overall, the combination was not more effective than aspirin monotherapy, but a trend towards benefit was noted in those with symptomatic atherosclerotic disease, whereas a trend towards harm was seen in those included based only on multiple cardiovascular risk factors.15 This issue was also recently studied in the TRILOGY-ACS trial, in which NSTE-ACS patients intended for management without revascularization were prospectively randomized to receive either prasugrel or clopidogrel (both irreversible P2Y12 inhibitors). The results showed no significant overall benefit of prasugrel over clopidogrel during 24 months, even though the Kaplan–Meier curves for the efficacy endpoints tended to separate after 1 year,8 and with a reduction of the primary outcome in the subgroup who underwent angiography.16 In the no revascularization subgroup of the current PLATO NSTE-ACS substudy, there were consistent benefits with ticagrelor when compared with clopidogrel concerning both mortality and non-fatal ischaemic events. Although the no revascularization subgroup of the present PLATO substudy seems similar to the TRILOGY-ACS population, it is impossible to compare the effect of the respective new P2Y12 inhibitor vs. clopidogrel across the two studies. Patients with intention for treatment without revascularization were prospectively studied in TRILOGY-ACS, while the present PLATO substudy was a post hoc stratification with subgroups of revascularization/no revascularization defined post randomization and post procedures. The TRILOGY-ACS population also was of higher risk with more prevalent comorbidities such as hypertension, dyslipidaemia, and diabetes, and had higher rates of previous myocardial infarction and revascularization procedures. In ACS patients managed with revascularization, more potent P2Y12 inhibition has been associated with better outcomes, as shown in the intention for invasive management subgroup analysis of PLATO for ticagrelor17 and in TRITON for prasugrel.7 In the Clopidogrel and Aspirin Optimal Dose Usage to Reduce Recurrent Events–Seventh Organization to Assess Strategies in Ischemic Syndromes (CURRENT OASIS-7) trial also, intensified P2Y12 inhibition with double dose clopidogrel for the first 7 days showed no significant difference in outcome in the overall population,18 but a reduction in cardiovascular events, including stent thrombosis in the pre-specified (albeit post-randomization) subgroup undergoing PCI.19 With more potent platelet inhibition, bleeding complications have usually been increasing. In TRITON-TIMI 38, where prasugrel vs. 
clopidogrel was evaluated in ACS, prasugrel demonstrated a reduction in thrombotic events, but at the cost of significantly increased rates of major bleeding, including fatal bleeding, particularly in patients at high bleeding risk defined as high age and low body weight.7 In TRILOGY-ACS on the other hand, which included a lower maintenance dose, prasugrel did not cause any increased major bleeding rate, including patients >75 years of age. In the present analysis, there was no significant difference in PLATO-defined total major bleeding with ticagrelor compared with clopidogrel. However, the incidence of non-CABG-related major bleeding was significantly higher in the ticagrelor group. There was no significant difference in life-threatening or fatal bleeding or major bleeding as defined by the TIMI criteria, but the composite of PLATO major/minor bleeding was increased with ticagrelor.\n\nBODY.DISCUSSION.STUDY LIMITATIONS:\nThere are several limitations to this work. The sample size is large, which gives high power to detect even relatively small differences in effect that may or may not be clinically important. The revascularization/no revascularization analyses were post hoc investigations of subgroups identified post-randomization, which makes the analyses subject to potential bias. Because landmark analyses were used, the risk of time-dependent confounding is acknowledged. As sensitivity analyses, we also performed landmark analyses at 30 days instead of 10 days, with consistent results. Nevertheless, the findings reported with regard to revascularization status should be interpreted strictly as exploratory and hypothesis generating. The present results based on actual revascularization strategy support and complement those of our previous analysis based on pre-randomization intention to treat with invasive or conservative management, and the results based on performed revascularization or not are consistent with the overall NSTE-ACS results, as well as the overall PLATO results.\n\nBODY.CONCLUSIONS:\nIn this substudy of the PLATO trial, ticagrelor compared with clopidogrel consistently reduced the rates of ischaemic events and mortality without any difference in overall major bleeding in patients with an entry diagnosis of NSTE-ACS, and this effect was independent of whether or not early revascularization was performed. These results harmonize with the European Society of Cardiology (ESC) NSTE-ACS guidelines, which recommend ticagrelor in all patients at moderate-to-high risk of ischaemic events, regardless of initial treatment strategy.1\n\nBODY.SUPPLEMENTARY MATERIAL:\nSupplementary material is available at European Heart Journal online.\n\nBODY.FUNDING:\nThis work was supported by AstraZeneca, who funded the PLATO trial. Funding to pay the Open Access publication charges for this article was provided by Astra Zeneca. Conflicts of interest: D.L.: Institutional research grant from AstraZeneca to perform this research; and lecture fees from AstraZeneca. C.V.: research grant from AstraZeneca to perform this research; member of the Speakers' Bureaus for AstraZeneca, Eli Lilly & Company, and The Medicines Company. C.P.C.: research grants/support from Accumetrics, AstraZeneca, CSL Behring, Essentialis, GlaxoSmithKline, Merck, Regeneron, Sanofi, and Takeda; on advisory boards for Alnylam, Bristol-Myers Squibb, Lipimedix, and Pfizer (funds donated to charity); and holds equity in Automedics Medical Systems. 
R.A.H.: consulting/advisory board fees from Bristol-Myers Squibb, Sanofi, Portola Pharmaceuticals, Johnson & Johnson and Merck; grant support from Eli Lilly/Daiichi Sankyo., Merck, Portola Pharmaceuticals, Sanofi, Johnson & Johnson, Bristol-Myers Squibb, The Medicines Company, and AstraZeneca. A.H.: reports being an employee of AstraZeneca. J.M.: reports being an employee of AstraZeneca and having equity ownership. S.H.: advisory board member for AstraZeneca, Bristol-Myers Squibb, Pfizer, and Bayer; research support from GlaxoSmithKline, Pfizer, and Sanofi-Aventis. P.G.S.: research grant (to INSERM U698): NYU School of Medicine, Sanofi, Servier. Speaking or consulting: Amarin, AstraZeneca, Bayer, Boehringer-Ingelheim, Bristol-Myers-Squibb, Daiichi-Sankyo, GSK, Lilly, Medtronic, Otsuka, Pfizer, Roche, sanofi, Servier, The Medicines Company, Vivus. Stockholding: Aterovax. J.H.C.: advisory board fees from BMS, AstraZeneca, Eli Lilly/Daiichi Sankyo; consultancy fees from Merck and Servier. R.F.S.: research grants from AstraZeneca, Eli Lilly/Daiichi Sankyo, and Merck; research support from Accumetrics; honoraria from AstraZeneca, Eli Lilly/Daiichi Sankyo, Merck, Iroko, Accumetrics, and Medscape; consultancy fees from AstraZeneca, Merck, Novartis, Accumetrics, Sanofi-Aventis/Regeneron, Bristol-Myers Squibb, Eisai, Roche and Daiichi Sankyo. S.R.S.: reports no conflict of interest. L.W.: research grants from AstraZeneca, Merck & Co, Boehringer-Ingelheim, Bristol-Myers Squibb/Pfizer, GlaxoSmithKline; consultant for Merck & Co, Regado Biosciences, Evolva, Portola, C.S.L. Behring, Athera Biotechnologies, Boehringer-Ingelheim, AstraZeneca, GlaxoSmithKline, and Bristol-Myers Squibb/Pfizer; lecture fees from AstraZeneca, Boehringer-Ingelheim, Bristol-Myers Squibb/Pfizer, GlaxoSmithKline, and Merck & Co.; honoraria from Boehringer Ingelheim, AstraZeneca, Bristol-Myers Squibb/Pfizer, GlaxoSmithKline, and Merck & Co.; travel support from AstraZeneca and Bristol-Myers Squibb/Pfizer. S.K.J.: research grant from AstraZeneca, Eli Lilly, Bristol-Myers Squibb, Terumo Inc, Medtronic, and Vascular Solutions; honoraria from The Medicines Company, AstraZeneca, Eli Lilly, Bristol-Myers Squibb, and IROKO; consultant/advisory board from AstraZeneca, Eli Lilly, Merck, Medtronic, and Sanofi.\n\nBODY.SUPPLEMENTARY MATERIAL:\n\n\nSupplementary Data\n\n**Question:** Compared to clopidogrel what was the result of ticagrelor on stroke incidence?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
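The methods text embedded in the row above describes fitting Cox proportional-hazards models with a treatment-by-revascularization interaction term, adjusting for region, with time measured from a 10-day landmark. Purely as an illustrative sketch of that kind of model, and not as part of the dataset or the PLATO analysis code, the following Python example (assuming a recent lifelines version with formula support; all column names and data are hypothetical and simulated) shows how such an interaction test is typically set up:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # assumed dependency; not part of the dataset

# Hypothetical landmark cohort: patients event-free at day 10, followed from the landmark.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "ticagrelor":     rng.integers(0, 2, n),  # 1 = ticagrelor, 0 = clopidogrel
    "revascularized": rng.integers(0, 2, n),  # revascularization within the first 10 days
    "region_europe":  rng.integers(0, 2, n),  # crude stand-in for a region adjustment
})

# Simulated exponential event times with a modest treatment effect, censored at day 350.
hazard = 0.002 * np.exp(-0.15 * df["ticagrelor"].to_numpy())
event_time = rng.exponential(1.0 / hazard, size=n)
censor_time = np.full(n, 350.0)
df["time_from_landmark"] = np.minimum(event_time, censor_time)
df["event"] = (event_time <= censor_time).astype(int)

# Main effects plus the treatment-by-revascularization interaction;
# the p-value of the interaction coefficient plays the role of an "interaction P".
cph = CoxPHFitter()
cph.fit(df, duration_col="time_from_landmark", event_col="event",
        formula="ticagrelor * revascularized + region_europe")
cph.print_summary()
```

In this sketch, exp(coef) for the ticagrelor term is interpretable as a hazard ratio, analogous to the HRs quoted in the row's results tables.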
1286
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Ticagrelor vs. clopidogrel in patients with non-ST-elevation acute coronary syndrome with or without revascularization: results from the PLATO trial\n\n ABSTRACT.AIMS:\nThe optimal platelet inhibition strategy for ACS patients managed without revascularization is unknown. We aimed to evaluate efficacy and safety of ticagrelor vs. clopidogrel in the non-ST-elevation acute coronary syndrome (NSTE-ACS) subgroup of the PLATO trial, in the total cohort, and in the subgroups managed with and without revascularization within 10 days of randomization.\n\nABSTRACT.METHODS AND RESULTS:\nWe performed a retrospective analysis of the primary endpoint of cardiovascular death/myocardial infarction/stroke. Among 18 624 PLATO patients, 11 080 (59%) were categorized as NSTE-ACS at randomization. During the initial 10 days, 74% had angiography, 46% PCI, and 5% CABG. In NSTE-ACS patients, the primary endpoint was reduced with ticagrelor vs. clopidogrel [10.0 vs. 12.3%; hazard ratio (HR) 0.83; 95% confidence interval (CI) = 0.74–0.93], as was myocardial infarction (6.6 vs. 7.7%; HR 0.86; 95% CI = 0.74–0.99), cardiovascular death (3.7 vs. 4.9%; HR 0.77; 95% CI = 0.64–0.93), and all-cause death (4.3 vs. 5.8%; HR 0.76; 95% CI = 0.64–0.90). Major bleeding rate was similar between treatment groups (13.4 vs. 12.6%; HR 1.07; 95% CI = 0.95–1.19), but ticagrelor was associated with an increase in non-CABG major bleeding (4.8 vs. 3.8%; HR 1.28; 95% CI = 1.05–1.56). Within the first 10 days, 5366 (48.4%) patients were managed without revascularization. Regardless of revascularization or not, ticagrelor consistently reduced the primary outcome (HR 0.86 vs. 0.85, interaction P = 0.93), and all-cause death (HR 0.75 vs. 0.73, interaction P = 0.89) with no significant increase in overall major bleeding.\n\nABSTRACT.CONCLUSION:\nIn patients with NSTE-ACS, benefit of ticagrelor over clopidogrel in reducing ischaemic events and total mortality was consistent with the overall PLATO trial, independent of actually performed revascularization during the initial 10 days.\n\nBODY:\nSee page 2055 for the editorial comment on this article (doi:10.1093/eurheartj/ehu202)\n\nBODY.INTRODUCTION:\nDual antiplatelet therapy with aspirin and a P2Y12 inhibitor is standard of care in acute coronary syndromes (ACS).1,2 Ticagrelor is the first reversibly binding direct P2Y12 inhibitor. As opposed to clopidogrel and prasugrel, it does not require enzymatic activation, and causes faster, greater, and more consistent platelet inhibition compared with clopidogrel.3,4 The Platelet Inhibition and Patient Outcomes (PLATO) trial compared ticagrelor with clopidogrel in patients with ACS. Ticagrelor was superior in preventing ischaemic events (the composite of death from vascular causes, myocardial infarction, and stroke), as well as death from any cause, without a significant increase in all-cause major bleeding. 
PLATO included both invasively and non-invasively managed patients. The decision on which management strategy to pursue was made by the investigator.5 The third-generation thienopyridine prasugrel is, like clopidogrel, an irreversible P2Y12 inhibitor, but with faster and more consistent platelet inhibition.6 Prasugrel, when compared with clopidogrel, reduced ischaemic events in patients with ACS planned for PCI in the Trial to Assess Improvement in Therapeutic Outcomes by Optimizing Platelet Inhibition with Prasugrel (TRITON),7 but in non-ST-elevation acute coronary syndrome (NSTE-ACS) patients planned for management without revascularization, prasugrel showed no benefit over clopidogrel in the Targeted Platelet Inhibition to Clarify the Optimal Strategy to Medically Manage Acute Coronary Syndromes (TRILOGY-ACS) trial.8 The aim of the present study was to explore the effect of ticagrelor vs. clopidogrel in the total NSTE-ACS subgroup of the PLATO trial and also stratified by initial management with or without revascularization.\n\nBODY.METHODS.STUDY DESIGN:\nThe PLATO trial (ClinicalTrials.gov No. NCT00391872) was an international randomized double-blind, double-dummy phase III study comparing ticagrelor with clopidogrel in ACS. Details of the study design and overall results have been published previously.5,9 Briefly, a total of 18 624 patients were randomized to receive either ticagrelor or clopidogrel on background treatment with aspirin. The study included patients with ST-elevation MI intended for primary PCI, as well as patients with non-ST-elevation ACS, regardless if aimed for an initial invasive or non-invasive treatment strategy. It adhered to the Declaration of Helsinki, to the specifications of the International Conference of Harmonization, and to Good Clinical Practice; and was approved by national and institutional regulatory authorities and ethics committees. All participants provided written informed consent.\n\nBODY.METHODS.PATIENTS:\nTwo or more of the following inclusion criteria were required for patients enrolled without ST-elevation ACS at admission: (1) ST-segment changes on ECG indicating ischaemia [ST-segment depression or transient elevation (≥1 mm) in at least two contiguous leads]; (2) positive biomarker indicating myocardial necrosis (troponin I or T or CK-MB above the upper limit of normal); (3) one of the following: ≥60 years of age, previous myocardial infarction or coronary artery bypass surgery, coronary artery disease with ≥50% stenosis in ≥2 vessels, previous ischaemic stroke, transient ischaemic attack (TIA), carotid stenosis, cerebral revascularization, diabetes mellitus, peripheral artery disease, or chronic renal dysfunction. In the current analysis, NSTE-ACS was in the data-base defined as absence of either persistent ST-segment elevation of 1 mV for 20 min in two contiguous leads or new (or presumed new) left bundle branch block in entry ECG. In addition to the overall NSTE-ACS population, we also analysed outcomes in the subgroups of NSTE-ACS patients who initially underwent revascularization and those who were treated without early revascularization. Both cohorts required endpoint-free survival for 10 days post-randomization to determine revascularization status. 
The no-revascularization subgroup was defined as NSTE-ACS patients who did not undergo any revascularization procedure (PCI or CABG) with or without angiography during the first 10 days.\n\nBODY.METHODS.ENDPOINTS:\nThe primary efficacy endpoint of the present study was the composite of cardiovascular death, myocardial infarction, (excluding silent infarctions) and stroke. Each component alone and all-cause death were secondary efficacy endpoints. Major bleeding by PLATO criteria9 and life-threatening/fatal bleeding were the primary safety endpoints. Secondary safety endpoints were CABG-related major bleeding, non-CABG-related major bleeding, minor bleeding, intracranial bleeding, and fatal bleeding. Additional secondary safety outcomes were bleeding events as defined by TIMI (major, minor, and non-CABG related major) and GUSTO (severe and moderate). An independent central adjudication committee, unaware of treatment assignments, assessed all endpoints. Major bleeding as defined by TIMI was recorded from the electronic case report form, where a drop of 50 g/L in haemoglobin was used as a cut-off, but this did not necessarily require clinical evidence of bleeding.\n\nBODY.METHODS.STATISTICAL ANALYSES:\nPatient characteristics and medical history, in-hospital procedures and medications, discharge ACS status, and post-discharge medications are presented by treatment in the NSTE-ACS patient cohort as well as by revascularization status subgroup. Continuous variables are presented as median and 25th–75th percentiles; categorical variables are presented as number and percentage. The treatment effect of ticagrelor vs. clopidogrel on the primary and secondary endpoints was compared within the subgroup of NSTE-ACS patients. Kaplan–Meier estimated event rates at 360 days and total number of observed events during the study were presented for each endpoint. Hazard ratios (HR), confidence intervals (CI), and P-values from unadjusted Cox proportional hazards regression models were presented. Kaplan–Meier estimated event rates were plotted by treatment for the primary efficacy endpoint, all-cause death, major bleeding, and non-CABG related major bleeding. To examine whether the effect of ticagrelor in patients with NSTE-ACS differed based on early revascularization, Cox proportional hazards models were fitted for each endpoint using a treatment-by-revascularization interaction term. In these models, we adjusted for region of the world to account for differences in revascularization practice and the time-to-event was measured from a landmark at 10 days post-randomization. Kaplan–Meier rates 350 days post-landmark are presented for each treatment/revascularization category along with HR for ticagrelor vs. clopidogrel in the revascularization and medical management subgroups. Interaction P-values assess whether the treatment effect is different depending on revascularization status. Kaplan–Meier event rates adjusted for region are plotted by the four treatment/revascularization categories for the primary efficacy endpoint, all-cause death, major bleeding, and non-CABG related major bleeding. For consistency with the PLATO design paper,9 we also performed a 30-day landmark analysis; and as a sensitivity analysis, we also performed a 10-day landmark analysis of the full study population. 
Finally, the association of treatment with the primary efficacy endpoint was evaluated within several patient cohorts using Cox proportional hazards models, separately within the revascularization and non-revascularization subgroups. Forest plots present the HR for ticagrelor vs. clopidogrel in each subgroup cohort. Endpoints are defined using intent to treat and a P-value of 0.05 is used to denote statistical significance. Data were analysed using SAS version 9.2 for all analyses.\n\nBODY.RESULTS.PATIENT CHARACTERISTICS:\nThe PLATO trial included 18 624 patients with ACS, of which 11 080 patients were classified as NSTE-ACS at randomization. Of these, 5581 were randomized to ticagrelor and 5499 to clopidogrel (Figure 1). The baseline and in-hospital characteristics were similar between groups in the overall NSTE-ACS population (Table 1). In the overall NSTE-ACS group, 74% of patients had a coronary angiography performed, 46% underwent PCI, and 5% CABG. The discharge diagnosis (in this population categorized as NSTE-ACS at admission) was NSTEMI in 65%, STEMI in 8%, and unstable angina/other in 27%.\nTable 1Baseline characteristics and invasive procedures by randomized treatment in the overall NSTE-ACS population, and by initial treatment strategy (within the first 10 days after randomization) Overall NSTE-ACSNSTE-ACS with revascularizationNSTE-ACS without revascularizationTicagrelor (N = 5581)Clopidogrel (N = 5499)Ticagrelor (N = 2873)Clopidogrel (N = 2841)Ticagrelor (N = 2708)Clopidogrel (N = 2658)Demographics Age, median (25th–75th percentile), years64 (56–72)64 (56–72)63 (55–71)63 (55–71)65 (57–73)65 (57–73) Age ≥ 75 years, n (%)955 (17.1)1024 (18.6)420 (14.6)460 (16.2)535 (19.8)564 (21.2) Female gender, n (%)1746 (31.3)1746 (31.8)706 (24.6)716 (25.2)1040 (38.4)1030 (38.8) Body weight <60 kg, n (%)398 (7.2)389 (7.1)165 (5.8)172 (6.1)233 (8.6)217 (8.2) Body mass index, median (25th–75th percentile), kg/m227.5 (24.8–30.8)27.4 (24.8–30.5)27.5 (24.9–30.8)27.5 (24.9–30.5)27.5 (24.7–30.6)27.3 (24.6–30.5) GRACE risk score nomogram130 (112–150)130 (112–149)128 (110–145)127 (110–145)133 (114–154)134 (116–153)Cardiovascular risk factors, n (%) Current smoking1636 (29.4)1640 (29.9)1000 (34.8)960 (33.8)636 (23.6)680 (25.6) Hypertension3915 (70.2)3835 (69.8)1881 (65.5)1886 (66.4)2034 (75.3)1949 (73.4) Dyslipidaemia2885 (51.8)2852 (51.9)1515 (52.8)1569 (55.2)1370 (50.7)1283 (48.3) Diabetes mellitus1608 (28.9)1520 (27.7)756 (26.3)747 (26.3)852 (31.6)773 (29.1)Medical history, n (%) Angina pectoris2932 (52.6)2890 (52.6)1337 (46.5)1356 (47.7)1595 (59.1)1534 (57.8) Myocardial infarction1400 (25.1)1410 (25.7)609 (21.2)608 (21.4)791 (29.3)802 (30.2) Congestive heart failure397 (7.1)429 (7.8)99 (3.4)103 (3.6)298 (11.0)326 (12.3) Percutaneous coronary intervention (PCI)944 (16.9)918 (16.7)537 (18.7)517 (18.2)407 (15.1)401 (15.1) Coronary artery bypass graft (CABG)434 (7.8)474 (8.6)214 (7.4)218 (7.7)220 (8.1)256 (9.6) Transient ischaemic attack185 (3.3)189 (3.4)82 (2.9)74 (2.6)103 (3.8)115 (4.3) Non-hemorrhagic stroke246 (4.4)243 (4.4)81 (2.8)98 (3.5)165 (6.1)145 (5.5) Peripheral arterial disease400 (7.2)413 (7.5)174 (6.1)204 (7.2)226 (8.4)209 (7.9) Chronic renal disease273 (4.9)279 (5.1)117 (4.1)108 (3.8)156 (5.8)171 (6.4)Physical findings, median (25th–75th percentile) Heart rate (bpm)72 (64–80)72 (64–81)71 (62–80)72 (63–80)72 (64–82)72 (64–82) Systolic blood pressure (mmHg)135 (120–150)134 (120–150)135 (120–150)135 (120–150)134 (120–150)132 (120–150) Diastolic blood pressure (mmHg)80 (70–89)80 (70–87)80 
(70–89)80 (70–87)80 (70–89)80 (70–87)Risk indicators, n (%) Troponin positive, n (%)4356 (80.8)4323 (81.3)2522 (89.6)2486 (89.3)1834 (71.3)1837 (72.4) ST depression (≥1 mm)3158 (56.8)3201 (58.4)1547 (54.0)1535 (54.2)1611 (59.8)1666 (62.8) TIMI risk score > 24838 (89.7)4785 (89.8)2584 (91.3)2566 (91.8)2254 (88.0)2219 (87.7)Type of ACS at discharge, n (%) STEMI449 (8.1)437 (8.0)330 (11.5)305 (10.7)119 (4.4)132 (5.0) NSTEMI3605 (64.8)3525 (64.3)2045 (71.2)2025 (71.3)1560 (58.0)1500 (56.7) UA/other1509 (27.1)1524 (27.8)497 (17.3)510 (18.0)1012 (37.6)1014 (38.3)Antithrombotic treatment during index hospitalization, n (%) Aspirin5386 (96.6)5316 (96.8)2797 (97.4)2779 (97.8)2589 (95.9)2537 (95.8) Unfractionated heparin2910 (52.1)2856 (51.9)1845 (64.2)1845 (64.9)1065 (39.3)1011 (38.0) Low molecular weight heparin3181 (57.0)3084 (56.1)1568 (54.6)1501 (52.8)1613 (59.6)1583 (59.6) Fondaparinux194 (3.5)191 (3.5)87 (3.0)76 (2.7)107 (4.0)115 (4.3) Bivalirudin140 (2.5)133 (2.4)135 (4.7)122 (4.3)5 (0.2)11 (0.4) GP IIb/IIIa inhibitor1170 (21.0)1117 (20.3)982 (34.2)963 (33.9)188 (6.9)154 (5.8)Other drug from randomization to end of study, n (%) Beta-blocker4768 (85.4)4685 (85.2)2536 (88.3)2512 (88.4)2232 (82.4)2173 (81.8) ACE-inhibitor and/or ARB4693 (84.2)4602 (83.8)2447 (85.2)2433 (85.6)2246 (83.2)2169 (81.9) Statin5196 (93.1)5109 (92.9)2777 (96.7)2745 (96.6)2419 (89.3)2364 (88.9) Proton-pump inhibitor2776 (49.7)2613 (47.5)1618 (56.3)1536 (54.1)1158 (42.8)1077 (40.5) Calcium channel inhibitor1553 (27.8)1515 (27.6)720 (25.1)723 (25.4)833 (30.8)792 (29.8) Diuretic2393 (42.9)2280 (41.5)1073 (37.3)1037 (36.5)1320 (48.9)1243 (46.9)Invasive procedures Coronary angiography During first 10 days, n (%)4143 (74.5)4072 (74.2)2873 (100.0)2841 (100.0)1270 (47.2)1231 (46.6) After first 10 days, n (%)357 (6.4)335 (6.1)0 (0)0 (0)357 (13.3)335 (12.7) PCI During first 10 days, n (%)2590 (46.4)2550 (46.4)2590 (90.1)2550 (89.8)0 (0)0 (0) After first 10 days, n (%)279 (5.0)291 (5.3)11 (0.4)3 (0.1)268 (9.9)288 (10.8) CABG During first 10 days, n (%)296 (5.3)305 (5.5)296 (10.3)305 (10.7)0 (0)0 (0) After first 10 days, n (%)382 (6.8)375 (6.8)55 (1.9)55 (1.9)327 (12.1)320 (12.0)\nFigure 1Trial profile—patients classified by entry ECG and by treatment during the first 10 days after randomization. At 10 days post-randomization, 5366 patients were alive and had not undergone revascularization. There were regional differences in the proportion of patients undergoing revascularization during the initial 10 days (Asia/Australia 44%, Central/South America 40.3%, Europe/Middle East/Africa 49.8%, North America 71.6%). Non-revascularized and revascularized patients were of similar age (63 vs. 65), but patients who did not undergo revascularization were more likely to be of female gender (39 vs. 25%), and more likely to have comorbidities (e.g. previous myocardial infarction, heart failure, renal disease). Although non-revascularized patients had more comorbidities, they were less likely to be troponin I positive (Table 1). The proportion of patients with TIMI risk score >2 was slightly higher in the revascularization group (92 vs. 88%). During the first 10 days, 47% of non-revascularized patients underwent coronary angiography. When including only those who had angiography during the initial 10 days, female gender was more common in the non-revascularized (36%) than revascularized (25%). 
No significant coronary artery disease at angiography was noted in 32 and 0.7% of the non-revascularized and revascularized, respectively.\n\nBODY.RESULTS.EFFICACY IN THE OVERALL NON-ST-ELEVATION ACUTE CORONARY SYNDROME POPULATION:\nEfficacy and safety outcomes of the overall NSTE-ACS population are summarized in Table 2, and Kaplan–Meier curves are shown in Supplementary material online. The incidence of the primary composite endpoint was reduced with ticagrelor vs. clopidogrel (10.0 vs. 12.3%; HR 0.83; 95% CI = 0.74–0.93; P = 0.0013) (see Supplementary material online, Figure S1A). Cardiovascular death occurred less often in the ticagrelor group than in the clopidogrel group (3.7 vs. 4.9%; HR 0.77; 95% CI = 0.64–0.93; P = 0.0070), and myocardial infarction was also less common with ticagrelor vs. clopidogrel (6.6 vs. 7.7%; HR 0.86; 95% CI = 0.74–0.99; P = 0.0419), whereas stroke incidence did not differ significantly between treatment arms (1.3 vs. 1.4%; HR 0.95; 95% CI 0.69–1.33; P = 0.79). All-cause death was reduced in those treated with ticagrelor vs. clopidogrel (4.3 vs. 5.8%; HR 0.76; 95% CI = 0.64–0.90; P = 0.0020) (see Supplementary material online, Figure S1B).\nTable 2Efficacy and safety outcomes in patients with NSTE-ACSTicagrelor % (n)Clopidogrel % (n)HR (95% CI)P-valueEfficacy endpoints CV death/MI (excluding silent)/stroke10.0 (533)12.3 (630)0.83 (0.74, 0.93)0.0013 All-cause death/MI(excl. silent)/stroke10.5 (557)13.0 (664)0.82 (0.73, 0.92)0.0006 CV death/MI(all)/stroke/severe recurrent ischaemia/recurrent ischaemia/TIA/arterial thrombotic event15.5 (824)17.8 (918)0.88 (0.80, 0.96)0.0058 Myocardial infarction (excluding silent)6.6 (345)7.7 (392)0.86 (0.74, 0.99)0.0419 Cardiovascular death (includes vascular and unknown deaths)3.7 (194)4.9 (247)0.77 (0.64, 0.93)0.0070 Stroke1.3 (69)1.4 (71)0.95 (0.69, 1.33)0.79 All-cause death4.3 (224)5.8 (290)0.76 (0.64, 0.90)0.0020Safety endpoints Major bleeding (study criteria)13.4 (660)12.6 (618)1.07 (0.95, 1.19)0.26 Major or minor bleeding (study criteria)18.2 (900)16.3 (794)1.14 (1.03, 1.25)0.0078 Non-CABG related major bleeding (study criteria)4.8 (225)3.8 (176)1.28 (1.05, 1.56)0.0139 Fatal bleeding0.3 (13)0.4 (18)0.72 (0.35, 1.47)0.37 Life threatening or fatal bleeding (study criteria)6.6 (331)6.5 (315)1.05 (0.90, 1.22)0.56 Intracranial bleeding0.3 (14)0.2 (7)2.01 (0.81, 4.99)0.13 Other major bleeding7.2 (344)6.6 (318)1.08 (0.93, 1.25)0.34 Major bleeding (TIMI criteria)9.2 (452)8.7 (422)1.07 (0.94, 1.22)0.33 Major or minor bleeding (TIMI criteria)13.2 (653)12.3 (602)1.08 (0.97, 1.21)0.16 Non-CABG related major bleeding (TIMI criteria)2.9 (137)2.2 (99)1.39 (1.07, 1.80)0.0131 GUSTO severe bleeding3.1 (146)3.2 (151)0.96 (0.77, 1.21)0.74 GUSTO moderate or severe bleeding8.6 (416)7.8 (382)1.08 (0.94, 1.25)0.25Each treatment group is summarized as Kaplan–Meier rates at 360 days and total number of events during the study. P-values and hazard ratios (95% CI) come from unadjusted Cox models testing ticagrelor vs. clopidogrel.\n\nBODY.RESULTS.SAFETY:\nWith ticagrelor when compared with clopidogrel, there was no significant difference in PLATO major bleeding (13.4 vs. 12.6%; HR 1.07; 95% CI = 0.95–1.19; P = 0.26), but a higher rate of non-CABG-related major bleeding (4.8 vs. 3.8%; HR 1.28; 95% CI = 1.05–1.56; P = 0.0139) (see Supplementary material online, Figure S2A and B). There was no significant difference in the rate of life-threatening or fatal bleeding (6.6 vs. 
6.5%; HR 1.05; 95% CI = 0.90–1.22, P = 0.56), nor any significant difference in the rate of intracranial bleeding with ticagrelor compared with clopidogrel (0.3 vs. 0.2%; HR 2.01; 95% CI = 0.81–4.99; P = 0.13). The composite of major or minor bleeding (by PLATO criteria) occurred more often in the ticagrelor group (18.2 vs. 16.3%; HR 1.14; 95% CI = 1.03–1.25, P = 0.0078). When assessed by TIMI criteria, there was no significant difference in major or minor bleeding (13.2 vs. 12.3%; HR 1.08; 95% CI = 0.97–1.21; P = 0.16). TIMI major bleeding, GUSTO severe, and GUSTO moderate or severe bleeding also did not appear to differ significantly between ticagrelor and clopidogrel, whereas TIMI non-CABG-related major bleeding was more common in the ticagrelor group (Table 2).\n\nBODY.RESULTS.EFFICACY AND SAFETY ACCORDING TO TREATMENT STRATEGY:\nEvent rates were considerably higher in NSTE-ACS patients treated without revascularization compared with patients undergoing revascularization during the initial 10 days. For both revascularized and non-revascularized patients, there were similar proportional reductions of the primary endpoint with ticagrelor compared with clopidogrel (HR 0.86 vs. 0.85, interaction P = 0.93) (Figure 2A, Table 3) consistent with the overall trial. There was also a consistent reduction in all-cause death (HR 0.75 vs. 0.73; interaction P = 0.89) (Figure 2B). No significant difference in overall major bleeding was seen with ticagrelor vs. clopidogrel within each treatment strategy (revascularization/no revascularization) (Figure 3A). There was a higher incidence of non-CABG-related major bleeding with ticagrelor vs. clopidogrel in patients with NSTE-ACS with no significant interaction by invasive treatment strategy (HR 1.32 vs. 1.07; interaction P = 0.43) (Figure 3B). The primary outcome was reduced in major subgroups in both revascularized and non-revascularized patients (see Supplementary material online, Figures S3 and S4). The results were consistent with a 30-day landmark for revascularization (see Supplementary material online, Table S1), as well as in the full study population (see Supplementary material online, Table S2). In patients who underwent angiography during the initial 10 days with or without significant coronary disease, the effect of ticagrelor vs. clopidogrel was consistent (see Supplementary material online, Table S3).\nTable 3Interaction of ticagrelor treatment and revascularization within 10 days (adjusting for region) NSTE-ACS with revascularizationNSTE-ACS without revascularizationInteraction PNTicagrelor KM rateClopidogrel KM rateHR (95% CI)NTicagrelor KM rateClopidogrel KM rateHR (95% CI)Efficacy endpoints CV death/MI (excluding silent)/stroke54165.116.100.86 (0.68, 1.09)51899.6311.600.85 (0.72, 1.01)0.93 All-cause death/MI (excl. 
silent)/stroke54165.446.600.85 (0.67, 1.06)518910.1512.530.84 (0.71, 0.99)0.94 CV death/MI (all)/stroke/severe recurrent ischaemia/recurrent ischaemia/TIA/arterial thrombotic event52908.7310.310.86 (0.71, 1.03)510914.1415.160.97 (0.84, 1.13)0.29 Myocardial infarction (excluding silent)54383.523.880.90 (0.68, 1.21)52016.046.680.94 (0.75, 1.17)0.85 Cardiovascular death (includes vascular and unknown deaths)56481.642.330.76 (0.52, 1.13)52174.075.440.75 (0.58, 0.98)0.95 Stroke56320.670.591.18 (0.60, 2.34)52091.481.690.92 (0.58, 1.46)0.56 All-cause death56482.032.880.75 (0.53, 1.07)52174.776.650.73 (0.57, 0.93)0.89Safety endpoints Major bleeding (study criteria)49835.254.681.10 (0.84, 1.44)493111.8311.431.05 (0.88, 1.26)0.82 Major or minor bleeding (study criteria)48427.766.351.22 (0.97, 1.54)484714.5913.961.07 (0.91, 1.25)0.34 Non-CABG major bleeding (study criteria)52703.142.381.32 (0.92, 1.90)49332.782.791.07 (0.74, 1.56)0.43 Fatal or life-threatening major bleeding (study criteria)51732.252.011.18 (0.79, 1.76)49625.776.110.95 (0.75, 1.22)0.37 Other major bleeding (study criteria)51783.102.851.02 (0.72, 1.45)49456.505.681.16 (0.91, 1.49)0.55 Major bleeding (TIMI criteria)51023.422.791.21 (0.86, 1.70)49528.048.390.97 (0.79, 1.20)0.28 Major or minor bleeding (TIMI criteria)49905.244.561.12 (0.85, 1.47)493311.7811.161.08 (0.91, 1.29)0.85 Non-CABG major bleeding (TIMI criteria)53161.881.191.66 (1.01, 2.72)49522.051.841.19 (0.76, 1.87)0.34 GUSTO severe bleeding52861.161.490.75 (0.45, 1.26)49462.752.461.13 (0.77, 1.65)0.22 GUSTO moderate or severe bleeding51343.933.301.13 (0.82, 1.55)49457.185.961.19 (0.95, 1.51)0.78Kaplan–Meier (KM) rates 350 days after day 10 post-randomization.\nFigure 2Efficacy endpoints stratified by management strategy—Kaplan–Meier estimates of time to first occurrence of: (A) primary endpoint, (B) all-cause death, from 10 days post-randomization onward.\nFigure 3Bleeding stratified by revascularization—Kaplan–Meier estimate of (A) time to major bleeding according to the PLATO criteria from day 10 post-randomization, and (B) time to non-CABG major bleeding.\n\nBODY.DISCUSSION:\nIn this subgroup analysis, ticagrelor compared with clopidogrel reduced the composite endpoint of cardiovascular death, myocardial infarction, and stroke as well as the individual endpoints of cardiovascular death, myocardial infarction, and all-cause death without any significant difference in major bleeding in patients with an entry diagnosis of NSTE-ACS. The event curves for the primary composite endpoint and total mortality separated continuously for the duration of the trial. The benefits with ticagrelor were observed both in patients who underwent and in those who did not undergo early revascularization, regardless whether angiography was performed or not. These results are consistent with the previously reported subgroup analysis of all ACS patients with an intended strategy (prior to randomization) of no revascularization, where about one-fifth still underwent PCI and about 4% had CABG surgery before discharge.10 Although current guidelines advocate early invasive management in NSTE-ACS,1,2 a large proportion of patients are managed non-invasively.11,12 Patients who are managed without revascularization usually have more comorbidities, higher risk of bleeding, and inferior outcome than patients who are revascularized.13 The optimal platelet inhibition strategy in these patients has been uncertain. 
P2Y12 inhibition has previously been shown to reduce ischaemic events in NSTE-ACS patients managed without revascularization. In the Clopidogrel in Unstable Angina to Prevent Recurrent Events (CURE) trial, which randomized NSTE-ACS patients to receive either clopidogrel or placebo (on background aspirin treatment), 64% of patients did not undergo revascularization after randomization. There were almost identical relative reductions in CV death/MI/stroke with clopidogrel in the non-invasive and invasive subgroups.14 The Clopidogrel for High Atherothrombotic Risk and Ischemic Stabilization, Management and Avoidance (CHARISMA) trial studied the addition of clopidogrel to aspirin in a stable, more heterogeneous population at risk for atherothrombotic events. Overall, the combination was not more effective than aspirin monotherapy, but a trend towards benefit was noted in those with symptomatic atherosclerotic disease, whereas a trend towards harm was seen in those included based only on multiple cardiovascular risk factors.15 This issue was also recently studied in the TRILOGY-ACS trial, in which NSTE-ACS patients intended for management without revascularization were prospectively randomized to receive either prasugrel or clopidogrel (both irreversible P2Y12 inhibitors). The results showed no significant overall benefit of prasugrel over clopidogrel during 24 months, even though the Kaplan–Meier curves for the efficacy endpoints tended to separate after 1 year,8 and with a reduction of the primary outcome in the subgroup who underwent angiography.16 In the no revascularization subgroup of the current PLATO NSTE-ACS substudy, there were consistent benefits with ticagrelor when compared with clopidogrel concerning both mortality and non-fatal ischaemic events. Although the no revascularization subgroup of the present PLATO substudy seems similar to the TRILOGY-ACS population, it is impossible to compare the effect of the respective new P2Y12 inhibitor vs. clopidogrel across the two studies. Patients with intention for treatment without revascularization were prospectively studied in TRILOGY-ACS, while the present PLATO substudy was a post hoc stratification with subgroups of revascularization/no revascularization defined post randomization and post procedures. The TRILOGY-ACS population also was of higher risk with more prevalent comorbidities such as hypertension, dyslipidaemia, and diabetes, and had higher rates of previous myocardial infarction and revascularization procedures. In ACS patients managed with revascularization, more potent P2Y12 inhibition has been associated with better outcomes, as shown in the intention for invasive management subgroup analysis of PLATO for ticagrelor17 and in TRITON for prasugrel.7 In the Clopidogrel and Aspirin Optimal Dose Usage to Reduce Recurrent Events–Seventh Organization to Assess Strategies in Ischemic Syndromes (CURRENT OASIS-7) trial also, intensified P2Y12 inhibition with double dose clopidogrel for the first 7 days showed no significant difference in outcome in the overall population,18 but a reduction in cardiovascular events, including stent thrombosis in the pre-specified (albeit post-randomization) subgroup undergoing PCI.19 With more potent platelet inhibition, bleeding complications have usually been increasing. In TRITON-TIMI 38, where prasugrel vs. 
clopidogrel was evaluated in ACS, prasugrel demonstrated a reduction in thrombotic events, but at the cost of significantly increased rates of major bleeding, including fatal bleeding, particularly in patients at high bleeding risk defined as high age and low body weight.7 In TRILOGY-ACS on the other hand, which included a lower maintenance dose, prasugrel did not cause any increased major bleeding rate, including patients >75 years of age. In the present analysis, there was no significant difference in PLATO-defined total major bleeding with ticagrelor compared with clopidogrel. However, the incidence of non-CABG-related major bleeding was significantly higher in the ticagrelor group. There was no significant difference in life-threatening or fatal bleeding or major bleeding as defined by the TIMI criteria, but the composite of PLATO major/minor bleeding was increased with ticagrelor.\n\nBODY.DISCUSSION.STUDY LIMITATIONS:\nThere are several limitations to this work. The sample size is large, which gives high power to detect even relatively small differences in effect that may or may not be clinically important. The revascularization/no revascularization analyses were post hoc investigations of subgroups identified post-randomization, which makes the analyses subject to potential bias. Because landmark analyses were used, the risk of time-dependent confounding is acknowledged. As sensitivity analyses, we also performed landmark analyses at 30 days instead of 10 days, with consistent results. Nevertheless, the findings reported with regard to revascularization status should be interpreted strictly as exploratory and hypothesis generating. The present results based on actual revascularization strategy support and complement those of our previous analysis based on pre-randomization intention to treat with invasive or conservative management, and the results based on performed revascularization or not are consistent with the overall NSTE-ACS results, as well as the overall PLATO results.\n\nBODY.CONCLUSIONS:\nIn this substudy of the PLATO trial, ticagrelor compared with clopidogrel consistently reduced the rates of ischaemic events and mortality without any difference in overall major bleeding in patients with an entry diagnosis of NSTE-ACS, and this effect was independent of whether or not early revascularization was performed. These results harmonize with the European Society of Cardiology (ESC) NSTE-ACS guidelines, which recommend ticagrelor in all patients at moderate-to-high risk of ischaemic events, regardless of initial treatment strategy.1\n\nBODY.SUPPLEMENTARY MATERIAL:\nSupplementary material is available at European Heart Journal online.\n\nBODY.FUNDING:\nThis work was supported by AstraZeneca, who funded the PLATO trial. Funding to pay the Open Access publication charges for this article was provided by Astra Zeneca. Conflicts of interest: D.L.: Institutional research grant from AstraZeneca to perform this research; and lecture fees from AstraZeneca. C.V.: research grant from AstraZeneca to perform this research; member of the Speakers' Bureaus for AstraZeneca, Eli Lilly & Company, and The Medicines Company. C.P.C.: research grants/support from Accumetrics, AstraZeneca, CSL Behring, Essentialis, GlaxoSmithKline, Merck, Regeneron, Sanofi, and Takeda; on advisory boards for Alnylam, Bristol-Myers Squibb, Lipimedix, and Pfizer (funds donated to charity); and holds equity in Automedics Medical Systems. 
R.A.H.: consulting/advisory board fees from Bristol-Myers Squibb, Sanofi, Portola Pharmaceuticals, Johnson & Johnson and Merck; grant support from Eli Lilly/Daiichi Sankyo., Merck, Portola Pharmaceuticals, Sanofi, Johnson & Johnson, Bristol-Myers Squibb, The Medicines Company, and AstraZeneca. A.H.: reports being an employee of AstraZeneca. J.M.: reports being an employee of AstraZeneca and having equity ownership. S.H.: advisory board member for AstraZeneca, Bristol-Myers Squibb, Pfizer, and Bayer; research support from GlaxoSmithKline, Pfizer, and Sanofi-Aventis. P.G.S.: research grant (to INSERM U698): NYU School of Medicine, Sanofi, Servier. Speaking or consulting: Amarin, AstraZeneca, Bayer, Boehringer-Ingelheim, Bristol-Myers-Squibb, Daiichi-Sankyo, GSK, Lilly, Medtronic, Otsuka, Pfizer, Roche, sanofi, Servier, The Medicines Company, Vivus. Stockholding: Aterovax. J.H.C.: advisory board fees from BMS, AstraZeneca, Eli Lilly/Daiichi Sankyo; consultancy fees from Merck and Servier. R.F.S.: research grants from AstraZeneca, Eli Lilly/Daiichi Sankyo, and Merck; research support from Accumetrics; honoraria from AstraZeneca, Eli Lilly/Daiichi Sankyo, Merck, Iroko, Accumetrics, and Medscape; consultancy fees from AstraZeneca, Merck, Novartis, Accumetrics, Sanofi-Aventis/Regeneron, Bristol-Myers Squibb, Eisai, Roche and Daiichi Sankyo. S.R.S.: reports no conflict of interest. L.W.: research grants from AstraZeneca, Merck & Co, Boehringer-Ingelheim, Bristol-Myers Squibb/Pfizer, GlaxoSmithKline; consultant for Merck & Co, Regado Biosciences, Evolva, Portola, C.S.L. Behring, Athera Biotechnologies, Boehringer-Ingelheim, AstraZeneca, GlaxoSmithKline, and Bristol-Myers Squibb/Pfizer; lecture fees from AstraZeneca, Boehringer-Ingelheim, Bristol-Myers Squibb/Pfizer, GlaxoSmithKline, and Merck & Co.; honoraria from Boehringer Ingelheim, AstraZeneca, Bristol-Myers Squibb/Pfizer, GlaxoSmithKline, and Merck & Co.; travel support from AstraZeneca and Bristol-Myers Squibb/Pfizer. S.K.J.: research grant from AstraZeneca, Eli Lilly, Bristol-Myers Squibb, Terumo Inc, Medtronic, and Vascular Solutions; honoraria from The Medicines Company, AstraZeneca, Eli Lilly, Bristol-Myers Squibb, and IROKO; consultant/advisory board from AstraZeneca, Eli Lilly, Merck, Medtronic, and Sanofi.\n\nBODY.SUPPLEMENTARY MATERIAL:\n\n\nSupplementary Data\n\n**Question:** Compared to clopidogrel what was the result of ticagrelor on major bleeding?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
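The completions in these rows compress the trial's hazard-ratio reporting into one of three labels. As a minimal, hypothetical illustration of that mapping (not the dataset authors' actual labelling procedure), the rule sketched below treats an effect as significant when the 95% CI of the hazard ratio excludes 1, matching the P < 0.05 convention used throughout the article; the function name is an assumption for illustration only.

```python
# Illustrative only: map a reported hazard ratio (ticagrelor vs. clopidogrel)
# and its 95% confidence interval to one of the three labels used in the
# "Label" column.  The rule mirrors the article's significance convention
# (CI excluding 1 corresponds to P < 0.05); the function name is hypothetical.

def label_from_hazard_ratio(hr: float, ci_low: float, ci_high: float) -> str:
    """Return one of the three answer labels for a ticagrelor-vs-clopidogrel outcome."""
    assert ci_low <= hr <= ci_high, "point estimate should lie inside its CI"
    if ci_low > 1.0:       # entire CI above 1 -> more events with ticagrelor
        return "significantly increase"
    if ci_high < 1.0:      # entire CI below 1 -> fewer events with ticagrelor
        return "significantly decrease"
    return "no significant difference"   # CI spans 1 -> not significant


# Worked examples using figures quoted in the article's Table 2:
print(label_from_hazard_ratio(1.07, 0.95, 1.19))  # major bleeding -> "no significant difference"
print(label_from_hazard_ratio(1.28, 1.05, 1.56))  # non-CABG major bleeding -> "significantly increase"
print(label_from_hazard_ratio(0.76, 0.64, 0.90))  # all-cause death -> "significantly decrease"
```

The three worked examples reproduce the conclusions the article itself draws for major bleeding, non-CABG major bleeding, and all-cause death, which is the same reasoning the labels in this dataset encode.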
1288
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Ticagrelor vs. clopidogrel in patients with non-ST-elevation acute coronary syndrome with or without revascularization: results from the PLATO trial\n\n ABSTRACT.AIMS:\nThe optimal platelet inhibition strategy for ACS patients managed without revascularization is unknown. We aimed to evaluate efficacy and safety of ticagrelor vs. clopidogrel in the non-ST-elevation acute coronary syndrome (NSTE-ACS) subgroup of the PLATO trial, in the total cohort, and in the subgroups managed with and without revascularization within 10 days of randomization.\n\nABSTRACT.METHODS AND RESULTS:\nWe performed a retrospective analysis of the primary endpoint of cardiovascular death/myocardial infarction/stroke. Among 18 624 PLATO patients, 11 080 (59%) were categorized as NSTE-ACS at randomization. During the initial 10 days, 74% had angiography, 46% PCI, and 5% CABG. In NSTE-ACS patients, the primary endpoint was reduced with ticagrelor vs. clopidogrel [10.0 vs. 12.3%; hazard ratio (HR) 0.83; 95% confidence interval (CI) = 0.74–0.93], as was myocardial infarction (6.6 vs. 7.7%; HR 0.86; 95% CI = 0.74–0.99), cardiovascular death (3.7 vs. 4.9%; HR 0.77; 95% CI = 0.64–0.93), and all-cause death (4.3 vs. 5.8%; HR 0.76; 95% CI = 0.64–0.90). Major bleeding rate was similar between treatment groups (13.4 vs. 12.6%; HR 1.07; 95% CI = 0.95–1.19), but ticagrelor was associated with an increase in non-CABG major bleeding (4.8 vs. 3.8%; HR 1.28; 95% CI = 1.05–1.56). Within the first 10 days, 5366 (48.4%) patients were managed without revascularization. Regardless of revascularization or not, ticagrelor consistently reduced the primary outcome (HR 0.86 vs. 0.85, interaction P = 0.93), and all-cause death (HR 0.75 vs. 0.73, interaction P = 0.89) with no significant increase in overall major bleeding.\n\nABSTRACT.CONCLUSION:\nIn patients with NSTE-ACS, benefit of ticagrelor over clopidogrel in reducing ischaemic events and total mortality was consistent with the overall PLATO trial, independent of actually performed revascularization during the initial 10 days.\n\nBODY:\nSee page 2055 for the editorial comment on this article (doi:10.1093/eurheartj/ehu202)\n\nBODY.INTRODUCTION:\nDual antiplatelet therapy with aspirin and a P2Y12 inhibitor is standard of care in acute coronary syndromes (ACS).1,2 Ticagrelor is the first reversibly binding direct P2Y12 inhibitor. As opposed to clopidogrel and prasugrel, it does not require enzymatic activation, and causes faster, greater, and more consistent platelet inhibition compared with clopidogrel.3,4 The Platelet Inhibition and Patient Outcomes (PLATO) trial compared ticagrelor with clopidogrel in patients with ACS. Ticagrelor was superior in preventing ischaemic events (the composite of death from vascular causes, myocardial infarction, and stroke), as well as death from any cause, without a significant increase in all-cause major bleeding. 
PLATO included both invasively and non-invasively managed patients. The decision on which management strategy to pursue was made by the investigator.5 The third-generation thienopyridine prasugrel is, like clopidogrel, an irreversible P2Y12 inhibitor, but with faster and more consistent platelet inhibition.6 Prasugrel, when compared with clopidogrel, reduced ischaemic events in patients with ACS planned for PCI in the Trial to Assess Improvement in Therapeutic Outcomes by Optimizing Platelet Inhibition with Prasugrel (TRITON),7 but in non-ST-elevation acute coronary syndrome (NSTE-ACS) patients planned for management without revascularization, prasugrel showed no benefit over clopidogrel in the Targeted Platelet Inhibition to Clarify the Optimal Strategy to Medically Manage Acute Coronary Syndromes (TRILOGY-ACS) trial.8 The aim of the present study was to explore the effect of ticagrelor vs. clopidogrel in the total NSTE-ACS subgroup of the PLATO trial and also stratified by initial management with or without revascularization.\n\nBODY.METHODS.STUDY DESIGN:\nThe PLATO trial (ClinicalTrials.gov No. NCT00391872) was an international randomized double-blind, double-dummy phase III study comparing ticagrelor with clopidogrel in ACS. Details of the study design and overall results have been published previously.5,9 Briefly, a total of 18 624 patients were randomized to receive either ticagrelor or clopidogrel on background treatment with aspirin. The study included patients with ST-elevation MI intended for primary PCI, as well as patients with non-ST-elevation ACS, regardless if aimed for an initial invasive or non-invasive treatment strategy. It adhered to the Declaration of Helsinki, to the specifications of the International Conference of Harmonization, and to Good Clinical Practice; and was approved by national and institutional regulatory authorities and ethics committees. All participants provided written informed consent.\n\nBODY.METHODS.PATIENTS:\nTwo or more of the following inclusion criteria were required for patients enrolled without ST-elevation ACS at admission: (1) ST-segment changes on ECG indicating ischaemia [ST-segment depression or transient elevation (≥1 mm) in at least two contiguous leads]; (2) positive biomarker indicating myocardial necrosis (troponin I or T or CK-MB above the upper limit of normal); (3) one of the following: ≥60 years of age, previous myocardial infarction or coronary artery bypass surgery, coronary artery disease with ≥50% stenosis in ≥2 vessels, previous ischaemic stroke, transient ischaemic attack (TIA), carotid stenosis, cerebral revascularization, diabetes mellitus, peripheral artery disease, or chronic renal dysfunction. In the current analysis, NSTE-ACS was in the data-base defined as absence of either persistent ST-segment elevation of 1 mV for 20 min in two contiguous leads or new (or presumed new) left bundle branch block in entry ECG. In addition to the overall NSTE-ACS population, we also analysed outcomes in the subgroups of NSTE-ACS patients who initially underwent revascularization and those who were treated without early revascularization. Both cohorts required endpoint-free survival for 10 days post-randomization to determine revascularization status. 
The no-revascularization subgroup was defined as NSTE-ACS patients who did not undergo any revascularization procedure (PCI or CABG) with or without angiography during the first 10 days.\n\nBODY.METHODS.ENDPOINTS:\nThe primary efficacy endpoint of the present study was the composite of cardiovascular death, myocardial infarction, (excluding silent infarctions) and stroke. Each component alone and all-cause death were secondary efficacy endpoints. Major bleeding by PLATO criteria9 and life-threatening/fatal bleeding were the primary safety endpoints. Secondary safety endpoints were CABG-related major bleeding, non-CABG-related major bleeding, minor bleeding, intracranial bleeding, and fatal bleeding. Additional secondary safety outcomes were bleeding events as defined by TIMI (major, minor, and non-CABG related major) and GUSTO (severe and moderate). An independent central adjudication committee, unaware of treatment assignments, assessed all endpoints. Major bleeding as defined by TIMI was recorded from the electronic case report form, where a drop of 50 g/L in haemoglobin was used as a cut-off, but this did not necessarily require clinical evidence of bleeding.\n\nBODY.METHODS.STATISTICAL ANALYSES:\nPatient characteristics and medical history, in-hospital procedures and medications, discharge ACS status, and post-discharge medications are presented by treatment in the NSTE-ACS patient cohort as well as by revascularization status subgroup. Continuous variables are presented as median and 25th–75th percentiles; categorical variables are presented as number and percentage. The treatment effect of ticagrelor vs. clopidogrel on the primary and secondary endpoints was compared within the subgroup of NSTE-ACS patients. Kaplan–Meier estimated event rates at 360 days and total number of observed events during the study were presented for each endpoint. Hazard ratios (HR), confidence intervals (CI), and P-values from unadjusted Cox proportional hazards regression models were presented. Kaplan–Meier estimated event rates were plotted by treatment for the primary efficacy endpoint, all-cause death, major bleeding, and non-CABG related major bleeding. To examine whether the effect of ticagrelor in patients with NSTE-ACS differed based on early revascularization, Cox proportional hazards models were fitted for each endpoint using a treatment-by-revascularization interaction term. In these models, we adjusted for region of the world to account for differences in revascularization practice and the time-to-event was measured from a landmark at 10 days post-randomization. Kaplan–Meier rates 350 days post-landmark are presented for each treatment/revascularization category along with HR for ticagrelor vs. clopidogrel in the revascularization and medical management subgroups. Interaction P-values assess whether the treatment effect is different depending on revascularization status. Kaplan–Meier event rates adjusted for region are plotted by the four treatment/revascularization categories for the primary efficacy endpoint, all-cause death, major bleeding, and non-CABG related major bleeding. For consistency with the PLATO design paper,9 we also performed a 30-day landmark analysis; and as a sensitivity analysis, we also performed a 10-day landmark analysis of the full study population. 
Finally, the association of treatment with the primary efficacy endpoint was evaluated within several patient cohorts using Cox proportional hazards models, separately within the revascularization and non-revascularization subgroups. Forest plots present the HR for ticagrelor vs. clopidogrel in each subgroup cohort. Endpoints are defined using intent to treat and a P-value of 0.05 is used to denote statistical significance. Data were analysed using SAS version 9.2 for all analyses.\n\nBODY.RESULTS.PATIENT CHARACTERISTICS:\nThe PLATO trial included 18 624 patients with ACS, of which 11 080 patients were classified as NSTE-ACS at randomization. Of these, 5581 were randomized to ticagrelor and 5499 to clopidogrel (Figure 1). The baseline and in-hospital characteristics were similar between groups in the overall NSTE-ACS population (Table 1). In the overall NSTE-ACS group, 74% of patients had a coronary angiography performed, 46% underwent PCI, and 5% CABG. The discharge diagnosis (in this population categorized as NSTE-ACS at admission) was NSTEMI in 65%, STEMI in 8%, and unstable angina/other in 27%.\nTable 1Baseline characteristics and invasive procedures by randomized treatment in the overall NSTE-ACS population, and by initial treatment strategy (within the first 10 days after randomization) Overall NSTE-ACSNSTE-ACS with revascularizationNSTE-ACS without revascularizationTicagrelor (N = 5581)Clopidogrel (N = 5499)Ticagrelor (N = 2873)Clopidogrel (N = 2841)Ticagrelor (N = 2708)Clopidogrel (N = 2658)Demographics Age, median (25th–75th percentile), years64 (56–72)64 (56–72)63 (55–71)63 (55–71)65 (57–73)65 (57–73) Age ≥ 75 years, n (%)955 (17.1)1024 (18.6)420 (14.6)460 (16.2)535 (19.8)564 (21.2) Female gender, n (%)1746 (31.3)1746 (31.8)706 (24.6)716 (25.2)1040 (38.4)1030 (38.8) Body weight <60 kg, n (%)398 (7.2)389 (7.1)165 (5.8)172 (6.1)233 (8.6)217 (8.2) Body mass index, median (25th–75th percentile), kg/m227.5 (24.8–30.8)27.4 (24.8–30.5)27.5 (24.9–30.8)27.5 (24.9–30.5)27.5 (24.7–30.6)27.3 (24.6–30.5) GRACE risk score nomogram130 (112–150)130 (112–149)128 (110–145)127 (110–145)133 (114–154)134 (116–153)Cardiovascular risk factors, n (%) Current smoking1636 (29.4)1640 (29.9)1000 (34.8)960 (33.8)636 (23.6)680 (25.6) Hypertension3915 (70.2)3835 (69.8)1881 (65.5)1886 (66.4)2034 (75.3)1949 (73.4) Dyslipidaemia2885 (51.8)2852 (51.9)1515 (52.8)1569 (55.2)1370 (50.7)1283 (48.3) Diabetes mellitus1608 (28.9)1520 (27.7)756 (26.3)747 (26.3)852 (31.6)773 (29.1)Medical history, n (%) Angina pectoris2932 (52.6)2890 (52.6)1337 (46.5)1356 (47.7)1595 (59.1)1534 (57.8) Myocardial infarction1400 (25.1)1410 (25.7)609 (21.2)608 (21.4)791 (29.3)802 (30.2) Congestive heart failure397 (7.1)429 (7.8)99 (3.4)103 (3.6)298 (11.0)326 (12.3) Percutaneous coronary intervention (PCI)944 (16.9)918 (16.7)537 (18.7)517 (18.2)407 (15.1)401 (15.1) Coronary artery bypass graft (CABG)434 (7.8)474 (8.6)214 (7.4)218 (7.7)220 (8.1)256 (9.6) Transient ischaemic attack185 (3.3)189 (3.4)82 (2.9)74 (2.6)103 (3.8)115 (4.3) Non-hemorrhagic stroke246 (4.4)243 (4.4)81 (2.8)98 (3.5)165 (6.1)145 (5.5) Peripheral arterial disease400 (7.2)413 (7.5)174 (6.1)204 (7.2)226 (8.4)209 (7.9) Chronic renal disease273 (4.9)279 (5.1)117 (4.1)108 (3.8)156 (5.8)171 (6.4)Physical findings, median (25th–75th percentile) Heart rate (bpm)72 (64–80)72 (64–81)71 (62–80)72 (63–80)72 (64–82)72 (64–82) Systolic blood pressure (mmHg)135 (120–150)134 (120–150)135 (120–150)135 (120–150)134 (120–150)132 (120–150) Diastolic blood pressure (mmHg)80 (70–89)80 (70–87)80 
(70–89)80 (70–87)80 (70–89)80 (70–87)Risk indicators, n (%) Troponin positive, n (%)4356 (80.8)4323 (81.3)2522 (89.6)2486 (89.3)1834 (71.3)1837 (72.4) ST depression (≥1 mm)3158 (56.8)3201 (58.4)1547 (54.0)1535 (54.2)1611 (59.8)1666 (62.8) TIMI risk score > 24838 (89.7)4785 (89.8)2584 (91.3)2566 (91.8)2254 (88.0)2219 (87.7)Type of ACS at discharge, n (%) STEMI449 (8.1)437 (8.0)330 (11.5)305 (10.7)119 (4.4)132 (5.0) NSTEMI3605 (64.8)3525 (64.3)2045 (71.2)2025 (71.3)1560 (58.0)1500 (56.7) UA/other1509 (27.1)1524 (27.8)497 (17.3)510 (18.0)1012 (37.6)1014 (38.3)Antithrombotic treatment during index hospitalization, n (%) Aspirin5386 (96.6)5316 (96.8)2797 (97.4)2779 (97.8)2589 (95.9)2537 (95.8) Unfractionated heparin2910 (52.1)2856 (51.9)1845 (64.2)1845 (64.9)1065 (39.3)1011 (38.0) Low molecular weight heparin3181 (57.0)3084 (56.1)1568 (54.6)1501 (52.8)1613 (59.6)1583 (59.6) Fondaparinux194 (3.5)191 (3.5)87 (3.0)76 (2.7)107 (4.0)115 (4.3) Bivalirudin140 (2.5)133 (2.4)135 (4.7)122 (4.3)5 (0.2)11 (0.4) GP IIb/IIIa inhibitor1170 (21.0)1117 (20.3)982 (34.2)963 (33.9)188 (6.9)154 (5.8)Other drug from randomization to end of study, n (%) Beta-blocker4768 (85.4)4685 (85.2)2536 (88.3)2512 (88.4)2232 (82.4)2173 (81.8) ACE-inhibitor and/or ARB4693 (84.2)4602 (83.8)2447 (85.2)2433 (85.6)2246 (83.2)2169 (81.9) Statin5196 (93.1)5109 (92.9)2777 (96.7)2745 (96.6)2419 (89.3)2364 (88.9) Proton-pump inhibitor2776 (49.7)2613 (47.5)1618 (56.3)1536 (54.1)1158 (42.8)1077 (40.5) Calcium channel inhibitor1553 (27.8)1515 (27.6)720 (25.1)723 (25.4)833 (30.8)792 (29.8) Diuretic2393 (42.9)2280 (41.5)1073 (37.3)1037 (36.5)1320 (48.9)1243 (46.9)Invasive procedures Coronary angiography During first 10 days, n (%)4143 (74.5)4072 (74.2)2873 (100.0)2841 (100.0)1270 (47.2)1231 (46.6) After first 10 days, n (%)357 (6.4)335 (6.1)0 (0)0 (0)357 (13.3)335 (12.7) PCI During first 10 days, n (%)2590 (46.4)2550 (46.4)2590 (90.1)2550 (89.8)0 (0)0 (0) After first 10 days, n (%)279 (5.0)291 (5.3)11 (0.4)3 (0.1)268 (9.9)288 (10.8) CABG During first 10 days, n (%)296 (5.3)305 (5.5)296 (10.3)305 (10.7)0 (0)0 (0) After first 10 days, n (%)382 (6.8)375 (6.8)55 (1.9)55 (1.9)327 (12.1)320 (12.0)\nFigure 1Trial profile—patients classified by entry ECG and by treatment during the first 10 days after randomization. At 10 days post-randomization, 5366 patients were alive and had not undergone revascularization. There were regional differences in the proportion of patients undergoing revascularization during the initial 10 days (Asia/Australia 44%, Central/South America 40.3%, Europe/Middle East/Africa 49.8%, North America 71.6%). Non-revascularized and revascularized patients were of similar age (63 vs. 65), but patients who did not undergo revascularization were more likely to be of female gender (39 vs. 25%), and more likely to have comorbidities (e.g. previous myocardial infarction, heart failure, renal disease). Although non-revascularized patients had more comorbidities, they were less likely to be troponin I positive (Table 1). The proportion of patients with TIMI risk score >2 was slightly higher in the revascularization group (92 vs. 88%). During the first 10 days, 47% of non-revascularized patients underwent coronary angiography. When including only those who had angiography during the initial 10 days, female gender was more common in the non-revascularized (36%) than revascularized (25%). 
No significant coronary artery disease at angiography was noted in 32 and 0.7% of the non-revascularized and revascularized, respectively.\n\nBODY.RESULTS.EFFICACY IN THE OVERALL NON-ST-ELEVATION ACUTE CORONARY SYNDROME POPULATION:\nEfficacy and safety outcomes of the overall NSTE-ACS population are summarized in Table 2, and Kaplan–Meier curves are shown in Supplementary material online. The incidence of the primary composite endpoint was reduced with ticagrelor vs. clopidogrel (10.0 vs. 12.3%; HR 0.83; 95% CI = 0.74–0.93; P = 0.0013) (see Supplementary material online, Figure S1A). Cardiovascular death occurred less often in the ticagrelor group than in the clopidogrel group (3.7 vs. 4.9%; HR 0.77; 95% CI = 0.64–0.93; P = 0.0070), and myocardial infarction was also less common with ticagrelor vs. clopidogrel (6.6 vs. 7.7%; HR 0.86; 95% CI = 0.74–0.99; P = 0.0419), whereas stroke incidence did not differ significantly between treatment arms (1.3 vs. 1.4%; HR 0.95; 95% CI 0.69–1.33; P = 0.79). All-cause death was reduced in those treated with ticagrelor vs. clopidogrel (4.3 vs. 5.8%; HR 0.76; 95% CI = 0.64–0.90; P = 0.0020) (see Supplementary material online, Figure S1B).\nTable 2Efficacy and safety outcomes in patients with NSTE-ACSTicagrelor % (n)Clopidogrel % (n)HR (95% CI)P-valueEfficacy endpoints CV death/MI (excluding silent)/stroke10.0 (533)12.3 (630)0.83 (0.74, 0.93)0.0013 All-cause death/MI(excl. silent)/stroke10.5 (557)13.0 (664)0.82 (0.73, 0.92)0.0006 CV death/MI(all)/stroke/severe recurrent ischaemia/recurrent ischaemia/TIA/arterial thrombotic event15.5 (824)17.8 (918)0.88 (0.80, 0.96)0.0058 Myocardial infarction (excluding silent)6.6 (345)7.7 (392)0.86 (0.74, 0.99)0.0419 Cardiovascular death (includes vascular and unknown deaths)3.7 (194)4.9 (247)0.77 (0.64, 0.93)0.0070 Stroke1.3 (69)1.4 (71)0.95 (0.69, 1.33)0.79 All-cause death4.3 (224)5.8 (290)0.76 (0.64, 0.90)0.0020Safety endpoints Major bleeding (study criteria)13.4 (660)12.6 (618)1.07 (0.95, 1.19)0.26 Major or minor bleeding (study criteria)18.2 (900)16.3 (794)1.14 (1.03, 1.25)0.0078 Non-CABG related major bleeding (study criteria)4.8 (225)3.8 (176)1.28 (1.05, 1.56)0.0139 Fatal bleeding0.3 (13)0.4 (18)0.72 (0.35, 1.47)0.37 Life threatening or fatal bleeding (study criteria)6.6 (331)6.5 (315)1.05 (0.90, 1.22)0.56 Intracranial bleeding0.3 (14)0.2 (7)2.01 (0.81, 4.99)0.13 Other major bleeding7.2 (344)6.6 (318)1.08 (0.93, 1.25)0.34 Major bleeding (TIMI criteria)9.2 (452)8.7 (422)1.07 (0.94, 1.22)0.33 Major or minor bleeding (TIMI criteria)13.2 (653)12.3 (602)1.08 (0.97, 1.21)0.16 Non-CABG related major bleeding (TIMI criteria)2.9 (137)2.2 (99)1.39 (1.07, 1.80)0.0131 GUSTO severe bleeding3.1 (146)3.2 (151)0.96 (0.77, 1.21)0.74 GUSTO moderate or severe bleeding8.6 (416)7.8 (382)1.08 (0.94, 1.25)0.25Each treatment group is summarized as Kaplan–Meier rates at 360 days and total number of events during the study. P-values and hazard ratios (95% CI) come from unadjusted Cox models testing ticagrelor vs. clopidogrel.\n\nBODY.RESULTS.SAFETY:\nWith ticagrelor when compared with clopidogrel, there was no significant difference in PLATO major bleeding (13.4 vs. 12.6%; HR 1.07; 95% CI = 0.95–1.19; P = 0.26), but a higher rate of non-CABG-related major bleeding (4.8 vs. 3.8%; HR 1.28; 95% CI = 1.05–1.56; P = 0.0139) (see Supplementary material online, Figure S2A and B). There was no significant difference in the rate of life-threatening or fatal bleeding (6.6 vs. 
6.5%; HR 1.05; 95% CI = 0.90–1.22, P = 0.56), nor any significant difference in the rate of intracranial bleeding with ticagrelor compared with clopidogrel (0.3 vs. 0.2%; HR 2.01; 95% CI = 0.81–4.99; P = 0.13). The composite of major or minor bleeding (by PLATO criteria) occurred more often in the ticagrelor group (18.2 vs. 16.3%; HR 1.14; 95% CI = 1.03–1.25, P = 0.0078). When assessed by TIMI criteria, there was no significant difference in major or minor bleeding (13.2 vs. 12.3%; HR 1.08; 95% CI = 0.97–1.21; P = 0.16). TIMI major bleeding, GUSTO severe, and GUSTO moderate or severe bleeding also did not appear to differ significantly between ticagrelor and clopidogrel, whereas TIMI non-CABG-related major bleeding was more common in the ticagrelor group (Table 2).\n\nBODY.RESULTS.EFFICACY AND SAFETY ACCORDING TO TREATMENT STRATEGY:\nEvent rates were considerably higher in NSTE-ACS patients treated without revascularization compared with patients undergoing revascularization during the initial 10 days. For both revascularized and non-revascularized patients, there were similar proportional reductions of the primary endpoint with ticagrelor compared with clopidogrel (HR 0.86 vs. 0.85, interaction P = 0.93) (Figure 2A, Table 3) consistent with the overall trial. There was also a consistent reduction in all-cause death (HR 0.75 vs. 0.73; interaction P = 0.89) (Figure 2B). No significant difference in overall major bleeding was seen with ticagrelor vs. clopidogrel within each treatment strategy (revascularization/no revascularization) (Figure 3A). There was a higher incidence of non-CABG-related major bleeding with ticagrelor vs. clopidogrel in patients with NSTE-ACS with no significant interaction by invasive treatment strategy (HR 1.32 vs. 1.07; interaction P = 0.43) (Figure 3B). The primary outcome was reduced in major subgroups in both revascularized and non-revascularized patients (see Supplementary material online, Figures S3 and S4). The results were consistent with a 30-day landmark for revascularization (see Supplementary material online, Table S1), as well as in the full study population (see Supplementary material online, Table S2). In patients who underwent angiography during the initial 10 days with or without significant coronary disease, the effect of ticagrelor vs. clopidogrel was consistent (see Supplementary material online, Table S3).\nTable 3Interaction of ticagrelor treatment and revascularization within 10 days (adjusting for region) NSTE-ACS with revascularizationNSTE-ACS without revascularizationInteraction PNTicagrelor KM rateClopidogrel KM rateHR (95% CI)NTicagrelor KM rateClopidogrel KM rateHR (95% CI)Efficacy endpoints CV death/MI (excluding silent)/stroke54165.116.100.86 (0.68, 1.09)51899.6311.600.85 (0.72, 1.01)0.93 All-cause death/MI (excl. 
silent)/stroke54165.446.600.85 (0.67, 1.06)518910.1512.530.84 (0.71, 0.99)0.94 CV death/MI (all)/stroke/severe recurrent ischaemia/recurrent ischaemia/TIA/arterial thrombotic event52908.7310.310.86 (0.71, 1.03)510914.1415.160.97 (0.84, 1.13)0.29 Myocardial infarction (excluding silent)54383.523.880.90 (0.68, 1.21)52016.046.680.94 (0.75, 1.17)0.85 Cardiovascular death (includes vascular and unknown deaths)56481.642.330.76 (0.52, 1.13)52174.075.440.75 (0.58, 0.98)0.95 Stroke56320.670.591.18 (0.60, 2.34)52091.481.690.92 (0.58, 1.46)0.56 All-cause death56482.032.880.75 (0.53, 1.07)52174.776.650.73 (0.57, 0.93)0.89Safety endpoints Major bleeding (study criteria)49835.254.681.10 (0.84, 1.44)493111.8311.431.05 (0.88, 1.26)0.82 Major or minor bleeding (study criteria)48427.766.351.22 (0.97, 1.54)484714.5913.961.07 (0.91, 1.25)0.34 Non-CABG major bleeding (study criteria)52703.142.381.32 (0.92, 1.90)49332.782.791.07 (0.74, 1.56)0.43 Fatal or life-threatening major bleeding (study criteria)51732.252.011.18 (0.79, 1.76)49625.776.110.95 (0.75, 1.22)0.37 Other major bleeding (study criteria)51783.102.851.02 (0.72, 1.45)49456.505.681.16 (0.91, 1.49)0.55 Major bleeding (TIMI criteria)51023.422.791.21 (0.86, 1.70)49528.048.390.97 (0.79, 1.20)0.28 Major or minor bleeding (TIMI criteria)49905.244.561.12 (0.85, 1.47)493311.7811.161.08 (0.91, 1.29)0.85 Non-CABG major bleeding (TIMI criteria)53161.881.191.66 (1.01, 2.72)49522.051.841.19 (0.76, 1.87)0.34 GUSTO severe bleeding52861.161.490.75 (0.45, 1.26)49462.752.461.13 (0.77, 1.65)0.22 GUSTO moderate or severe bleeding51343.933.301.13 (0.82, 1.55)49457.185.961.19 (0.95, 1.51)0.78Kaplan–Meier (KM) rates 350 days after day 10 post-randomization.\nFigure 2Efficacy endpoints stratified by management strategy—Kaplan–Meier estimates of time to first occurrence of: (A) primary endpoint, (B) all-cause death, from 10 days post-randomization onward.\nFigure 3Bleeding stratified by revascularization—Kaplan–Meier estimate of (A) time to major bleeding according to the PLATO criteria from day 10 post-randomization, and (B) time to non-CABG major bleeding.\n\nBODY.DISCUSSION:\nIn this subgroup analysis, ticagrelor compared with clopidogrel reduced the composite endpoint of cardiovascular death, myocardial infarction, and stroke as well as the individual endpoints of cardiovascular death, myocardial infarction, and all-cause death without any significant difference in major bleeding in patients with an entry diagnosis of NSTE-ACS. The event curves for the primary composite endpoint and total mortality separated continuously for the duration of the trial. The benefits with ticagrelor were observed both in patients who underwent and in those who did not undergo early revascularization, regardless whether angiography was performed or not. These results are consistent with the previously reported subgroup analysis of all ACS patients with an intended strategy (prior to randomization) of no revascularization, where about one-fifth still underwent PCI and about 4% had CABG surgery before discharge.10 Although current guidelines advocate early invasive management in NSTE-ACS,1,2 a large proportion of patients are managed non-invasively.11,12 Patients who are managed without revascularization usually have more comorbidities, higher risk of bleeding, and inferior outcome than patients who are revascularized.13 The optimal platelet inhibition strategy in these patients has been uncertain. 
P2Y12 inhibition has previously been shown to reduce ischaemic events in NSTE-ACS patients managed without revascularization. In the Clopidogrel in Unstable Angina to Prevent Recurrent Events (CURE) trial, which randomized NSTE-ACS patients to receive either clopidogrel or placebo (on background aspirin treatment), 64% of patients did not undergo revascularization after randomization. There were almost identical relative reductions in CV death/MI/stroke with clopidogrel in the non-invasive and invasive subgroups.14 The Clopidogrel for High Atherothrombotic Risk and Ischemic Stabilization, Management and Avoidance (CHARISMA) trial studied the addition of clopidogrel to aspirin in a stable, more heterogeneous population at risk for atherothrombotic events. Overall, the combination was not more effective than aspirin monotherapy, but a trend towards benefit was noted in those with symptomatic atherosclerotic disease, whereas a trend towards harm was seen in those included based only on multiple cardiovascular risk factors.15 This issue was also recently studied in the TRILOGY-ACS trial, in which NSTE-ACS patients intended for management without revascularization were prospectively randomized to receive either prasugrel or clopidogrel (both irreversible P2Y12 inhibitors). The results showed no significant overall benefit of prasugrel over clopidogrel during 24 months, even though the Kaplan–Meier curves for the efficacy endpoints tended to separate after 1 year,8 and with a reduction of the primary outcome in the subgroup who underwent angiography.16 In the no revascularization subgroup of the current PLATO NSTE-ACS substudy, there were consistent benefits with ticagrelor when compared with clopidogrel concerning both mortality and non-fatal ischaemic events. Although the no revascularization subgroup of the present PLATO substudy seems similar to the TRILOGY-ACS population, it is impossible to compare the effect of the respective new P2Y12 inhibitor vs. clopidogrel across the two studies. Patients with intention for treatment without revascularization were prospectively studied in TRILOGY-ACS, while the present PLATO substudy was a post hoc stratification with subgroups of revascularization/no revascularization defined post randomization and post procedures. The TRILOGY-ACS population also was of higher risk with more prevalent comorbidities such as hypertension, dyslipidaemia, and diabetes, and had higher rates of previous myocardial infarction and revascularization procedures. In ACS patients managed with revascularization, more potent P2Y12 inhibition has been associated with better outcomes, as shown in the intention for invasive management subgroup analysis of PLATO for ticagrelor17 and in TRITON for prasugrel.7 In the Clopidogrel and Aspirin Optimal Dose Usage to Reduce Recurrent Events–Seventh Organization to Assess Strategies in Ischemic Syndromes (CURRENT OASIS-7) trial also, intensified P2Y12 inhibition with double dose clopidogrel for the first 7 days showed no significant difference in outcome in the overall population,18 but a reduction in cardiovascular events, including stent thrombosis in the pre-specified (albeit post-randomization) subgroup undergoing PCI.19 With more potent platelet inhibition, bleeding complications have usually been increasing. In TRITON-TIMI 38, where prasugrel vs. 
clopidogrel was evaluated in ACS, prasugrel demonstrated a reduction in thrombotic events, but at the cost of significantly increased rates of major bleeding, including fatal bleeding, particularly in patients at high bleeding risk defined as high age and low body weight.7 In TRILOGY-ACS on the other hand, which included a lower maintenance dose, prasugrel did not cause any increased major bleeding rate, including patients >75 years of age. In the present analysis, there was no significant difference in PLATO-defined total major bleeding with ticagrelor compared with clopidogrel. However, the incidence of non-CABG-related major bleeding was significantly higher in the ticagrelor group. There was no significant difference in life-threatening or fatal bleeding or major bleeding as defined by the TIMI criteria, but the composite of PLATO major/minor bleeding was increased with ticagrelor.\n\nBODY.DISCUSSION.STUDY LIMITATIONS:\nThere are several limitations to this work. The sample size is large, which gives high power to detect even relatively small differences in effect that may or may not be clinically important. The revascularization/no revascularization analyses were post hoc investigations of subgroups identified post-randomization, which makes the analyses subject to potential bias. Because landmark analyses were used, the risk of time-dependent confounding is acknowledged. As sensitivity analyses, we also performed landmark analyses at 30 days instead of 10 days, with consistent results. Nevertheless, the findings reported with regard to revascularization status should be interpreted strictly as exploratory and hypothesis generating. The present results based on actual revascularization strategy support and complement those of our previous analysis based on pre-randomization intention to treat with invasive or conservative management, and the results based on performed revascularization or not are consistent with the overall NSTE-ACS results, as well as the overall PLATO results.\n\nBODY.CONCLUSIONS:\nIn this substudy of the PLATO trial, ticagrelor compared with clopidogrel consistently reduced the rates of ischaemic events and mortality without any difference in overall major bleeding in patients with an entry diagnosis of NSTE-ACS, and this effect was independent of whether or not early revascularization was performed. These results harmonize with the European Society of Cardiology (ESC) NSTE-ACS guidelines, which recommend ticagrelor in all patients at moderate-to-high risk of ischaemic events, regardless of initial treatment strategy.1\n\nBODY.SUPPLEMENTARY MATERIAL:\nSupplementary material is available at European Heart Journal online.\n\nBODY.FUNDING:\nThis work was supported by AstraZeneca, who funded the PLATO trial. Funding to pay the Open Access publication charges for this article was provided by Astra Zeneca. Conflicts of interest: D.L.: Institutional research grant from AstraZeneca to perform this research; and lecture fees from AstraZeneca. C.V.: research grant from AstraZeneca to perform this research; member of the Speakers' Bureaus for AstraZeneca, Eli Lilly & Company, and The Medicines Company. C.P.C.: research grants/support from Accumetrics, AstraZeneca, CSL Behring, Essentialis, GlaxoSmithKline, Merck, Regeneron, Sanofi, and Takeda; on advisory boards for Alnylam, Bristol-Myers Squibb, Lipimedix, and Pfizer (funds donated to charity); and holds equity in Automedics Medical Systems. 
R.A.H.: consulting/advisory board fees from Bristol-Myers Squibb, Sanofi, Portola Pharmaceuticals, Johnson & Johnson and Merck; grant support from Eli Lilly/Daiichi Sankyo., Merck, Portola Pharmaceuticals, Sanofi, Johnson & Johnson, Bristol-Myers Squibb, The Medicines Company, and AstraZeneca. A.H.: reports being an employee of AstraZeneca. J.M.: reports being an employee of AstraZeneca and having equity ownership. S.H.: advisory board member for AstraZeneca, Bristol-Myers Squibb, Pfizer, and Bayer; research support from GlaxoSmithKline, Pfizer, and Sanofi-Aventis. P.G.S.: research grant (to INSERM U698): NYU School of Medicine, Sanofi, Servier. Speaking or consulting: Amarin, AstraZeneca, Bayer, Boehringer-Ingelheim, Bristol-Myers-Squibb, Daiichi-Sankyo, GSK, Lilly, Medtronic, Otsuka, Pfizer, Roche, sanofi, Servier, The Medicines Company, Vivus. Stockholding: Aterovax. J.H.C.: advisory board fees from BMS, AstraZeneca, Eli Lilly/Daiichi Sankyo; consultancy fees from Merck and Servier. R.F.S.: research grants from AstraZeneca, Eli Lilly/Daiichi Sankyo, and Merck; research support from Accumetrics; honoraria from AstraZeneca, Eli Lilly/Daiichi Sankyo, Merck, Iroko, Accumetrics, and Medscape; consultancy fees from AstraZeneca, Merck, Novartis, Accumetrics, Sanofi-Aventis/Regeneron, Bristol-Myers Squibb, Eisai, Roche and Daiichi Sankyo. S.R.S.: reports no conflict of interest. L.W.: research grants from AstraZeneca, Merck & Co, Boehringer-Ingelheim, Bristol-Myers Squibb/Pfizer, GlaxoSmithKline; consultant for Merck & Co, Regado Biosciences, Evolva, Portola, C.S.L. Behring, Athera Biotechnologies, Boehringer-Ingelheim, AstraZeneca, GlaxoSmithKline, and Bristol-Myers Squibb/Pfizer; lecture fees from AstraZeneca, Boehringer-Ingelheim, Bristol-Myers Squibb/Pfizer, GlaxoSmithKline, and Merck & Co.; honoraria from Boehringer Ingelheim, AstraZeneca, Bristol-Myers Squibb/Pfizer, GlaxoSmithKline, and Merck & Co.; travel support from AstraZeneca and Bristol-Myers Squibb/Pfizer. S.K.J.: research grant from AstraZeneca, Eli Lilly, Bristol-Myers Squibb, Terumo Inc, Medtronic, and Vascular Solutions; honoraria from The Medicines Company, AstraZeneca, Eli Lilly, Bristol-Myers Squibb, and IROKO; consultant/advisory board from AstraZeneca, Eli Lilly, Merck, Medtronic, and Sanofi.\n\nBODY.SUPPLEMENTARY MATERIAL:\n\n\nSupplementary Data\n\n**Question:** Compared to clopidogrel what was the result of ticagrelor on the rate of life-threatening or fatal bleeding or the rate of intracranial bleeding?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
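The article text in row 1288 describes the substudy's central statistical device: Cox proportional-hazards models fitted from a 10-day landmark with a treatment-by-revascularization interaction term, whose P-value appears as the "interaction P" column of Table 3. The sketch below shows what such an interaction model looks like in code. It is an assumption-laden illustration, not the PLATO analysis: it relies on the Python lifelines package and its bundled Rossi recidivism dataset as stand-in data (patient-level PLATO data are not public), with 'fin' standing in for randomized treatment and 'wexp' for the post-randomization subgroup.

```python
# Sketch of a Cox proportional-hazards model with a treatment-by-subgroup
# interaction term, the construct behind the "interaction P" values in Table 3.
# Stand-in data only: the Rossi dataset bundled with lifelines, where 'fin'
# (financial aid) plays the role of randomized treatment and 'wexp' (work
# experience) the role of the post-randomization subgroup.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()[["week", "arrest", "fin", "wexp"]].copy()
df["fin_x_wexp"] = df["fin"] * df["wexp"]   # treatment-by-subgroup interaction

cph = CoxPHFitter()
# In the PLATO substudy, duration would instead be measured from the 10-day
# landmark, only endpoint-free survivors at day 10 would enter the model,
# and the model additionally adjusted for region of the world.
cph.fit(df, duration_col="week", event_col="arrest")
cph.print_summary()   # the 'fin_x_wexp' row is the analogue of "interaction P"
```

A non-significant interaction term is read the same way as the interaction P = 0.93 quoted for the primary endpoint: the treatment effect does not differ detectably between the two management strategies.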
1290
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Ticagrelor vs. clopidogrel in patients with non-ST-elevation acute coronary syndrome with or without revascularization: results from the PLATO trial\n\n ABSTRACT.AIMS:\nThe optimal platelet inhibition strategy for ACS patients managed without revascularization is unknown. We aimed to evaluate efficacy and safety of ticagrelor vs. clopidogrel in the non-ST-elevation acute coronary syndrome (NSTE-ACS) subgroup of the PLATO trial, in the total cohort, and in the subgroups managed with and without revascularization within 10 days of randomization.\n\nABSTRACT.METHODS AND RESULTS:\nWe performed a retrospective analysis of the primary endpoint of cardiovascular death/myocardial infarction/stroke. Among 18 624 PLATO patients, 11 080 (59%) were categorized as NSTE-ACS at randomization. During the initial 10 days, 74% had angiography, 46% PCI, and 5% CABG. In NSTE-ACS patients, the primary endpoint was reduced with ticagrelor vs. clopidogrel [10.0 vs. 12.3%; hazard ratio (HR) 0.83; 95% confidence interval (CI) = 0.74–0.93], as was myocardial infarction (6.6 vs. 7.7%; HR 0.86; 95% CI = 0.74–0.99), cardiovascular death (3.7 vs. 4.9%; HR 0.77; 95% CI = 0.64–0.93), and all-cause death (4.3 vs. 5.8%; HR 0.76; 95% CI = 0.64–0.90). Major bleeding rate was similar between treatment groups (13.4 vs. 12.6%; HR 1.07; 95% CI = 0.95–1.19), but ticagrelor was associated with an increase in non-CABG major bleeding (4.8 vs. 3.8%; HR 1.28; 95% CI = 1.05–1.56). Within the first 10 days, 5366 (48.4%) patients were managed without revascularization. Regardless of revascularization or not, ticagrelor consistently reduced the primary outcome (HR 0.86 vs. 0.85, interaction P = 0.93), and all-cause death (HR 0.75 vs. 0.73, interaction P = 0.89) with no significant increase in overall major bleeding.\n\nABSTRACT.CONCLUSION:\nIn patients with NSTE-ACS, benefit of ticagrelor over clopidogrel in reducing ischaemic events and total mortality was consistent with the overall PLATO trial, independent of actually performed revascularization during the initial 10 days.\n\nBODY:\nSee page 2055 for the editorial comment on this article (doi:10.1093/eurheartj/ehu202)\n\nBODY.INTRODUCTION:\nDual antiplatelet therapy with aspirin and a P2Y12 inhibitor is standard of care in acute coronary syndromes (ACS).1,2 Ticagrelor is the first reversibly binding direct P2Y12 inhibitor. As opposed to clopidogrel and prasugrel, it does not require enzymatic activation, and causes faster, greater, and more consistent platelet inhibition compared with clopidogrel.3,4 The Platelet Inhibition and Patient Outcomes (PLATO) trial compared ticagrelor with clopidogrel in patients with ACS. Ticagrelor was superior in preventing ischaemic events (the composite of death from vascular causes, myocardial infarction, and stroke), as well as death from any cause, without a significant increase in all-cause major bleeding. 
PLATO included both invasively and non-invasively managed patients. The decision on which management strategy to pursue was made by the investigator.5 The third-generation thienopyridine prasugrel is, like clopidogrel, an irreversible P2Y12 inhibitor, but with faster and more consistent platelet inhibition.6 Prasugrel, when compared with clopidogrel, reduced ischaemic events in patients with ACS planned for PCI in the Trial to Assess Improvement in Therapeutic Outcomes by Optimizing Platelet Inhibition with Prasugrel (TRITON),7 but in non-ST-elevation acute coronary syndrome (NSTE-ACS) patients planned for management without revascularization, prasugrel showed no benefit over clopidogrel in the Targeted Platelet Inhibition to Clarify the Optimal Strategy to Medically Manage Acute Coronary Syndromes (TRILOGY-ACS) trial.8 The aim of the present study was to explore the effect of ticagrelor vs. clopidogrel in the total NSTE-ACS subgroup of the PLATO trial and also stratified by initial management with or without revascularization.\n\nBODY.METHODS.STUDY DESIGN:\nThe PLATO trial (ClinicalTrials.gov No. NCT00391872) was an international randomized double-blind, double-dummy phase III study comparing ticagrelor with clopidogrel in ACS. Details of the study design and overall results have been published previously.5,9 Briefly, a total of 18 624 patients were randomized to receive either ticagrelor or clopidogrel on background treatment with aspirin. The study included patients with ST-elevation MI intended for primary PCI, as well as patients with non-ST-elevation ACS, regardless if aimed for an initial invasive or non-invasive treatment strategy. It adhered to the Declaration of Helsinki, to the specifications of the International Conference of Harmonization, and to Good Clinical Practice; and was approved by national and institutional regulatory authorities and ethics committees. All participants provided written informed consent.\n\nBODY.METHODS.PATIENTS:\nTwo or more of the following inclusion criteria were required for patients enrolled without ST-elevation ACS at admission: (1) ST-segment changes on ECG indicating ischaemia [ST-segment depression or transient elevation (≥1 mm) in at least two contiguous leads]; (2) positive biomarker indicating myocardial necrosis (troponin I or T or CK-MB above the upper limit of normal); (3) one of the following: ≥60 years of age, previous myocardial infarction or coronary artery bypass surgery, coronary artery disease with ≥50% stenosis in ≥2 vessels, previous ischaemic stroke, transient ischaemic attack (TIA), carotid stenosis, cerebral revascularization, diabetes mellitus, peripheral artery disease, or chronic renal dysfunction. In the current analysis, NSTE-ACS was in the data-base defined as absence of either persistent ST-segment elevation of 1 mV for 20 min in two contiguous leads or new (or presumed new) left bundle branch block in entry ECG. In addition to the overall NSTE-ACS population, we also analysed outcomes in the subgroups of NSTE-ACS patients who initially underwent revascularization and those who were treated without early revascularization. Both cohorts required endpoint-free survival for 10 days post-randomization to determine revascularization status. 
The no-revascularization subgroup was defined as NSTE-ACS patients who did not undergo any revascularization procedure (PCI or CABG) with or without angiography during the first 10 days.\n\nBODY.METHODS.ENDPOINTS:\nThe primary efficacy endpoint of the present study was the composite of cardiovascular death, myocardial infarction, (excluding silent infarctions) and stroke. Each component alone and all-cause death were secondary efficacy endpoints. Major bleeding by PLATO criteria9 and life-threatening/fatal bleeding were the primary safety endpoints. Secondary safety endpoints were CABG-related major bleeding, non-CABG-related major bleeding, minor bleeding, intracranial bleeding, and fatal bleeding. Additional secondary safety outcomes were bleeding events as defined by TIMI (major, minor, and non-CABG related major) and GUSTO (severe and moderate). An independent central adjudication committee, unaware of treatment assignments, assessed all endpoints. Major bleeding as defined by TIMI was recorded from the electronic case report form, where a drop of 50 g/L in haemoglobin was used as a cut-off, but this did not necessarily require clinical evidence of bleeding.\n\nBODY.METHODS.STATISTICAL ANALYSES:\nPatient characteristics and medical history, in-hospital procedures and medications, discharge ACS status, and post-discharge medications are presented by treatment in the NSTE-ACS patient cohort as well as by revascularization status subgroup. Continuous variables are presented as median and 25th–75th percentiles; categorical variables are presented as number and percentage. The treatment effect of ticagrelor vs. clopidogrel on the primary and secondary endpoints was compared within the subgroup of NSTE-ACS patients. Kaplan–Meier estimated event rates at 360 days and total number of observed events during the study were presented for each endpoint. Hazard ratios (HR), confidence intervals (CI), and P-values from unadjusted Cox proportional hazards regression models were presented. Kaplan–Meier estimated event rates were plotted by treatment for the primary efficacy endpoint, all-cause death, major bleeding, and non-CABG related major bleeding. To examine whether the effect of ticagrelor in patients with NSTE-ACS differed based on early revascularization, Cox proportional hazards models were fitted for each endpoint using a treatment-by-revascularization interaction term. In these models, we adjusted for region of the world to account for differences in revascularization practice and the time-to-event was measured from a landmark at 10 days post-randomization. Kaplan–Meier rates 350 days post-landmark are presented for each treatment/revascularization category along with HR for ticagrelor vs. clopidogrel in the revascularization and medical management subgroups. Interaction P-values assess whether the treatment effect is different depending on revascularization status. Kaplan–Meier event rates adjusted for region are plotted by the four treatment/revascularization categories for the primary efficacy endpoint, all-cause death, major bleeding, and non-CABG related major bleeding. For consistency with the PLATO design paper,9 we also performed a 30-day landmark analysis; and as a sensitivity analysis, we also performed a 10-day landmark analysis of the full study population. 
Finally, the association of treatment with the primary efficacy endpoint was evaluated within several patient cohorts using Cox proportional hazards models, separately within the revascularization and non-revascularization subgroups. Forest plots present the HR for ticagrelor vs. clopidogrel in each subgroup cohort. Endpoints are defined using intent to treat and a P-value of 0.05 is used to denote statistical significance. Data were analysed using SAS version 9.2 for all analyses.\n\nBODY.RESULTS.PATIENT CHARACTERISTICS:\nThe PLATO trial included 18 624 patients with ACS, of which 11 080 patients were classified as NSTE-ACS at randomization. Of these, 5581 were randomized to ticagrelor and 5499 to clopidogrel (Figure 1). The baseline and in-hospital characteristics were similar between groups in the overall NSTE-ACS population (Table 1). In the overall NSTE-ACS group, 74% of patients had a coronary angiography performed, 46% underwent PCI, and 5% CABG. The discharge diagnosis (in this population categorized as NSTE-ACS at admission) was NSTEMI in 65%, STEMI in 8%, and unstable angina/other in 27%.\nTable 1Baseline characteristics and invasive procedures by randomized treatment in the overall NSTE-ACS population, and by initial treatment strategy (within the first 10 days after randomization) Overall NSTE-ACSNSTE-ACS with revascularizationNSTE-ACS without revascularizationTicagrelor (N = 5581)Clopidogrel (N = 5499)Ticagrelor (N = 2873)Clopidogrel (N = 2841)Ticagrelor (N = 2708)Clopidogrel (N = 2658)Demographics Age, median (25th–75th percentile), years64 (56–72)64 (56–72)63 (55–71)63 (55–71)65 (57–73)65 (57–73) Age ≥ 75 years, n (%)955 (17.1)1024 (18.6)420 (14.6)460 (16.2)535 (19.8)564 (21.2) Female gender, n (%)1746 (31.3)1746 (31.8)706 (24.6)716 (25.2)1040 (38.4)1030 (38.8) Body weight <60 kg, n (%)398 (7.2)389 (7.1)165 (5.8)172 (6.1)233 (8.6)217 (8.2) Body mass index, median (25th–75th percentile), kg/m227.5 (24.8–30.8)27.4 (24.8–30.5)27.5 (24.9–30.8)27.5 (24.9–30.5)27.5 (24.7–30.6)27.3 (24.6–30.5) GRACE risk score nomogram130 (112–150)130 (112–149)128 (110–145)127 (110–145)133 (114–154)134 (116–153)Cardiovascular risk factors, n (%) Current smoking1636 (29.4)1640 (29.9)1000 (34.8)960 (33.8)636 (23.6)680 (25.6) Hypertension3915 (70.2)3835 (69.8)1881 (65.5)1886 (66.4)2034 (75.3)1949 (73.4) Dyslipidaemia2885 (51.8)2852 (51.9)1515 (52.8)1569 (55.2)1370 (50.7)1283 (48.3) Diabetes mellitus1608 (28.9)1520 (27.7)756 (26.3)747 (26.3)852 (31.6)773 (29.1)Medical history, n (%) Angina pectoris2932 (52.6)2890 (52.6)1337 (46.5)1356 (47.7)1595 (59.1)1534 (57.8) Myocardial infarction1400 (25.1)1410 (25.7)609 (21.2)608 (21.4)791 (29.3)802 (30.2) Congestive heart failure397 (7.1)429 (7.8)99 (3.4)103 (3.6)298 (11.0)326 (12.3) Percutaneous coronary intervention (PCI)944 (16.9)918 (16.7)537 (18.7)517 (18.2)407 (15.1)401 (15.1) Coronary artery bypass graft (CABG)434 (7.8)474 (8.6)214 (7.4)218 (7.7)220 (8.1)256 (9.6) Transient ischaemic attack185 (3.3)189 (3.4)82 (2.9)74 (2.6)103 (3.8)115 (4.3) Non-hemorrhagic stroke246 (4.4)243 (4.4)81 (2.8)98 (3.5)165 (6.1)145 (5.5) Peripheral arterial disease400 (7.2)413 (7.5)174 (6.1)204 (7.2)226 (8.4)209 (7.9) Chronic renal disease273 (4.9)279 (5.1)117 (4.1)108 (3.8)156 (5.8)171 (6.4)Physical findings, median (25th–75th percentile) Heart rate (bpm)72 (64–80)72 (64–81)71 (62–80)72 (63–80)72 (64–82)72 (64–82) Systolic blood pressure (mmHg)135 (120–150)134 (120–150)135 (120–150)135 (120–150)134 (120–150)132 (120–150) Diastolic blood pressure (mmHg)80 (70–89)80 (70–87)80 
(70–89)80 (70–87)80 (70–89)80 (70–87)Risk indicators, n (%) Troponin positive, n (%)4356 (80.8)4323 (81.3)2522 (89.6)2486 (89.3)1834 (71.3)1837 (72.4) ST depression (≥1 mm)3158 (56.8)3201 (58.4)1547 (54.0)1535 (54.2)1611 (59.8)1666 (62.8) TIMI risk score > 24838 (89.7)4785 (89.8)2584 (91.3)2566 (91.8)2254 (88.0)2219 (87.7)Type of ACS at discharge, n (%) STEMI449 (8.1)437 (8.0)330 (11.5)305 (10.7)119 (4.4)132 (5.0) NSTEMI3605 (64.8)3525 (64.3)2045 (71.2)2025 (71.3)1560 (58.0)1500 (56.7) UA/other1509 (27.1)1524 (27.8)497 (17.3)510 (18.0)1012 (37.6)1014 (38.3)Antithrombotic treatment during index hospitalization, n (%) Aspirin5386 (96.6)5316 (96.8)2797 (97.4)2779 (97.8)2589 (95.9)2537 (95.8) Unfractionated heparin2910 (52.1)2856 (51.9)1845 (64.2)1845 (64.9)1065 (39.3)1011 (38.0) Low molecular weight heparin3181 (57.0)3084 (56.1)1568 (54.6)1501 (52.8)1613 (59.6)1583 (59.6) Fondaparinux194 (3.5)191 (3.5)87 (3.0)76 (2.7)107 (4.0)115 (4.3) Bivalirudin140 (2.5)133 (2.4)135 (4.7)122 (4.3)5 (0.2)11 (0.4) GP IIb/IIIa inhibitor1170 (21.0)1117 (20.3)982 (34.2)963 (33.9)188 (6.9)154 (5.8)Other drug from randomization to end of study, n (%) Beta-blocker4768 (85.4)4685 (85.2)2536 (88.3)2512 (88.4)2232 (82.4)2173 (81.8) ACE-inhibitor and/or ARB4693 (84.2)4602 (83.8)2447 (85.2)2433 (85.6)2246 (83.2)2169 (81.9) Statin5196 (93.1)5109 (92.9)2777 (96.7)2745 (96.6)2419 (89.3)2364 (88.9) Proton-pump inhibitor2776 (49.7)2613 (47.5)1618 (56.3)1536 (54.1)1158 (42.8)1077 (40.5) Calcium channel inhibitor1553 (27.8)1515 (27.6)720 (25.1)723 (25.4)833 (30.8)792 (29.8) Diuretic2393 (42.9)2280 (41.5)1073 (37.3)1037 (36.5)1320 (48.9)1243 (46.9)Invasive procedures Coronary angiography During first 10 days, n (%)4143 (74.5)4072 (74.2)2873 (100.0)2841 (100.0)1270 (47.2)1231 (46.6) After first 10 days, n (%)357 (6.4)335 (6.1)0 (0)0 (0)357 (13.3)335 (12.7) PCI During first 10 days, n (%)2590 (46.4)2550 (46.4)2590 (90.1)2550 (89.8)0 (0)0 (0) After first 10 days, n (%)279 (5.0)291 (5.3)11 (0.4)3 (0.1)268 (9.9)288 (10.8) CABG During first 10 days, n (%)296 (5.3)305 (5.5)296 (10.3)305 (10.7)0 (0)0 (0) After first 10 days, n (%)382 (6.8)375 (6.8)55 (1.9)55 (1.9)327 (12.1)320 (12.0)\nFigure 1Trial profile—patients classified by entry ECG and by treatment during the first 10 days after randomization. At 10 days post-randomization, 5366 patients were alive and had not undergone revascularization. There were regional differences in the proportion of patients undergoing revascularization during the initial 10 days (Asia/Australia 44%, Central/South America 40.3%, Europe/Middle East/Africa 49.8%, North America 71.6%). Non-revascularized and revascularized patients were of similar age (63 vs. 65), but patients who did not undergo revascularization were more likely to be of female gender (39 vs. 25%), and more likely to have comorbidities (e.g. previous myocardial infarction, heart failure, renal disease). Although non-revascularized patients had more comorbidities, they were less likely to be troponin I positive (Table 1). The proportion of patients with TIMI risk score >2 was slightly higher in the revascularization group (92 vs. 88%). During the first 10 days, 47% of non-revascularized patients underwent coronary angiography. When including only those who had angiography during the initial 10 days, female gender was more common in the non-revascularized (36%) than revascularized (25%). 
No significant coronary artery disease at angiography was noted in 32 and 0.7% of the non-revascularized and revascularized, respectively.\n\nBODY.RESULTS.EFFICACY IN THE OVERALL NON-ST-ELEVATION ACUTE CORONARY SYNDROME POPULATION:\nEfficacy and safety outcomes of the overall NSTE-ACS population are summarized in Table 2, and Kaplan–Meier curves are shown in Supplementary material online. The incidence of the primary composite endpoint was reduced with ticagrelor vs. clopidogrel (10.0 vs. 12.3%; HR 0.83; 95% CI = 0.74–0.93; P = 0.0013) (see Supplementary material online, Figure S1A). Cardiovascular death occurred less often in the ticagrelor group than in the clopidogrel group (3.7 vs. 4.9%; HR 0.77; 95% CI = 0.64–0.93; P = 0.0070), and myocardial infarction was also less common with ticagrelor vs. clopidogrel (6.6 vs. 7.7%; HR 0.86; 95% CI = 0.74–0.99; P = 0.0419), whereas stroke incidence did not differ significantly between treatment arms (1.3 vs. 1.4%; HR 0.95; 95% CI 0.69–1.33; P = 0.79). All-cause death was reduced in those treated with ticagrelor vs. clopidogrel (4.3 vs. 5.8%; HR 0.76; 95% CI = 0.64–0.90; P = 0.0020) (see Supplementary material online, Figure S1B).\nTable 2Efficacy and safety outcomes in patients with NSTE-ACSTicagrelor % (n)Clopidogrel % (n)HR (95% CI)P-valueEfficacy endpoints CV death/MI (excluding silent)/stroke10.0 (533)12.3 (630)0.83 (0.74, 0.93)0.0013 All-cause death/MI(excl. silent)/stroke10.5 (557)13.0 (664)0.82 (0.73, 0.92)0.0006 CV death/MI(all)/stroke/severe recurrent ischaemia/recurrent ischaemia/TIA/arterial thrombotic event15.5 (824)17.8 (918)0.88 (0.80, 0.96)0.0058 Myocardial infarction (excluding silent)6.6 (345)7.7 (392)0.86 (0.74, 0.99)0.0419 Cardiovascular death (includes vascular and unknown deaths)3.7 (194)4.9 (247)0.77 (0.64, 0.93)0.0070 Stroke1.3 (69)1.4 (71)0.95 (0.69, 1.33)0.79 All-cause death4.3 (224)5.8 (290)0.76 (0.64, 0.90)0.0020Safety endpoints Major bleeding (study criteria)13.4 (660)12.6 (618)1.07 (0.95, 1.19)0.26 Major or minor bleeding (study criteria)18.2 (900)16.3 (794)1.14 (1.03, 1.25)0.0078 Non-CABG related major bleeding (study criteria)4.8 (225)3.8 (176)1.28 (1.05, 1.56)0.0139 Fatal bleeding0.3 (13)0.4 (18)0.72 (0.35, 1.47)0.37 Life threatening or fatal bleeding (study criteria)6.6 (331)6.5 (315)1.05 (0.90, 1.22)0.56 Intracranial bleeding0.3 (14)0.2 (7)2.01 (0.81, 4.99)0.13 Other major bleeding7.2 (344)6.6 (318)1.08 (0.93, 1.25)0.34 Major bleeding (TIMI criteria)9.2 (452)8.7 (422)1.07 (0.94, 1.22)0.33 Major or minor bleeding (TIMI criteria)13.2 (653)12.3 (602)1.08 (0.97, 1.21)0.16 Non-CABG related major bleeding (TIMI criteria)2.9 (137)2.2 (99)1.39 (1.07, 1.80)0.0131 GUSTO severe bleeding3.1 (146)3.2 (151)0.96 (0.77, 1.21)0.74 GUSTO moderate or severe bleeding8.6 (416)7.8 (382)1.08 (0.94, 1.25)0.25Each treatment group is summarized as Kaplan–Meier rates at 360 days and total number of events during the study. P-values and hazard ratios (95% CI) come from unadjusted Cox models testing ticagrelor vs. clopidogrel.\n\nBODY.RESULTS.SAFETY:\nWith ticagrelor when compared with clopidogrel, there was no significant difference in PLATO major bleeding (13.4 vs. 12.6%; HR 1.07; 95% CI = 0.95–1.19; P = 0.26), but a higher rate of non-CABG-related major bleeding (4.8 vs. 3.8%; HR 1.28; 95% CI = 1.05–1.56; P = 0.0139) (see Supplementary material online, Figure S2A and B). There was no significant difference in the rate of life-threatening or fatal bleeding (6.6 vs. 
6.5%; HR 1.05; 95% CI = 0.90–1.22, P = 0.56), nor any significant difference in the rate of intracranial bleeding with ticagrelor compared with clopidogrel (0.3 vs. 0.2%; HR 2.01; 95% CI = 0.81–4.99; P = 0.13). The composite of major or minor bleeding (by PLATO criteria) occurred more often in the ticagrelor group (18.2 vs. 16.3%; HR 1.14; 95% CI = 1.03–1.25, P = 0.0078). When assessed by TIMI criteria, there was no significant difference in major or minor bleeding (13.2 vs. 12.3%; HR 1.08; 95% CI = 0.97–1.21; P = 0.16). TIMI major bleeding, GUSTO severe, and GUSTO moderate or severe bleeding also did not appear to differ significantly between ticagrelor and clopidogrel, whereas TIMI non-CABG-related major bleeding was more common in the ticagrelor group (Table 2).\n\nBODY.RESULTS.EFFICACY AND SAFETY ACCORDING TO TREATMENT STRATEGY:\nEvent rates were considerably higher in NSTE-ACS patients treated without revascularization compared with patients undergoing revascularization during the initial 10 days. For both revascularized and non-revascularized patients, there were similar proportional reductions of the primary endpoint with ticagrelor compared with clopidogrel (HR 0.86 vs. 0.85, interaction P = 0.93) (Figure 2A, Table 3) consistent with the overall trial. There was also a consistent reduction in all-cause death (HR 0.75 vs. 0.73; interaction P = 0.89) (Figure 2B). No significant difference in overall major bleeding was seen with ticagrelor vs. clopidogrel within each treatment strategy (revascularization/no revascularization) (Figure 3A). There was a higher incidence of non-CABG-related major bleeding with ticagrelor vs. clopidogrel in patients with NSTE-ACS with no significant interaction by invasive treatment strategy (HR 1.32 vs. 1.07; interaction P = 0.43) (Figure 3B). The primary outcome was reduced in major subgroups in both revascularized and non-revascularized patients (see Supplementary material online, Figures S3 and S4). The results were consistent with a 30-day landmark for revascularization (see Supplementary material online, Table S1), as well as in the full study population (see Supplementary material online, Table S2). In patients who underwent angiography during the initial 10 days with or without significant coronary disease, the effect of ticagrelor vs. clopidogrel was consistent (see Supplementary material online, Table S3).\nTable 3Interaction of ticagrelor treatment and revascularization within 10 days (adjusting for region) NSTE-ACS with revascularizationNSTE-ACS without revascularizationInteraction PNTicagrelor KM rateClopidogrel KM rateHR (95% CI)NTicagrelor KM rateClopidogrel KM rateHR (95% CI)Efficacy endpoints CV death/MI (excluding silent)/stroke54165.116.100.86 (0.68, 1.09)51899.6311.600.85 (0.72, 1.01)0.93 All-cause death/MI (excl. 
silent)/stroke54165.446.600.85 (0.67, 1.06)518910.1512.530.84 (0.71, 0.99)0.94 CV death/MI (all)/stroke/severe recurrent ischaemia/recurrent ischaemia/TIA/arterial thrombotic event52908.7310.310.86 (0.71, 1.03)510914.1415.160.97 (0.84, 1.13)0.29 Myocardial infarction (excluding silent)54383.523.880.90 (0.68, 1.21)52016.046.680.94 (0.75, 1.17)0.85 Cardiovascular death (includes vascular and unknown deaths)56481.642.330.76 (0.52, 1.13)52174.075.440.75 (0.58, 0.98)0.95 Stroke56320.670.591.18 (0.60, 2.34)52091.481.690.92 (0.58, 1.46)0.56 All-cause death56482.032.880.75 (0.53, 1.07)52174.776.650.73 (0.57, 0.93)0.89Safety endpoints Major bleeding (study criteria)49835.254.681.10 (0.84, 1.44)493111.8311.431.05 (0.88, 1.26)0.82 Major or minor bleeding (study criteria)48427.766.351.22 (0.97, 1.54)484714.5913.961.07 (0.91, 1.25)0.34 Non-CABG major bleeding (study criteria)52703.142.381.32 (0.92, 1.90)49332.782.791.07 (0.74, 1.56)0.43 Fatal or life-threatening major bleeding (study criteria)51732.252.011.18 (0.79, 1.76)49625.776.110.95 (0.75, 1.22)0.37 Other major bleeding (study criteria)51783.102.851.02 (0.72, 1.45)49456.505.681.16 (0.91, 1.49)0.55 Major bleeding (TIMI criteria)51023.422.791.21 (0.86, 1.70)49528.048.390.97 (0.79, 1.20)0.28 Major or minor bleeding (TIMI criteria)49905.244.561.12 (0.85, 1.47)493311.7811.161.08 (0.91, 1.29)0.85 Non-CABG major bleeding (TIMI criteria)53161.881.191.66 (1.01, 2.72)49522.051.841.19 (0.76, 1.87)0.34 GUSTO severe bleeding52861.161.490.75 (0.45, 1.26)49462.752.461.13 (0.77, 1.65)0.22 GUSTO moderate or severe bleeding51343.933.301.13 (0.82, 1.55)49457.185.961.19 (0.95, 1.51)0.78Kaplan–Meier (KM) rates 350 days after day 10 post-randomization.\nFigure 2Efficacy endpoints stratified by management strategy—Kaplan–Meier estimates of time to first occurrence of: (A) primary endpoint, (B) all-cause death, from 10 days post-randomization onward.\nFigure 3Bleeding stratified by revascularization—Kaplan–Meier estimate of (A) time to major bleeding according to the PLATO criteria from day 10 post-randomization, and (B) time to non-CABG major bleeding.\n\nBODY.DISCUSSION:\nIn this subgroup analysis, ticagrelor compared with clopidogrel reduced the composite endpoint of cardiovascular death, myocardial infarction, and stroke as well as the individual endpoints of cardiovascular death, myocardial infarction, and all-cause death without any significant difference in major bleeding in patients with an entry diagnosis of NSTE-ACS. The event curves for the primary composite endpoint and total mortality separated continuously for the duration of the trial. The benefits with ticagrelor were observed both in patients who underwent and in those who did not undergo early revascularization, regardless whether angiography was performed or not. These results are consistent with the previously reported subgroup analysis of all ACS patients with an intended strategy (prior to randomization) of no revascularization, where about one-fifth still underwent PCI and about 4% had CABG surgery before discharge.10 Although current guidelines advocate early invasive management in NSTE-ACS,1,2 a large proportion of patients are managed non-invasively.11,12 Patients who are managed without revascularization usually have more comorbidities, higher risk of bleeding, and inferior outcome than patients who are revascularized.13 The optimal platelet inhibition strategy in these patients has been uncertain. 
P2Y12 inhibition has previously been shown to reduce ischaemic events in NSTE-ACS patients managed without revascularization. In the Clopidogrel in Unstable Angina to Prevent Recurrent Events (CURE) trial, which randomized NSTE-ACS patients to receive either clopidogrel or placebo (on background aspirin treatment), 64% of patients did not undergo revascularization after randomization. There were almost identical relative reductions in CV death/MI/stroke with clopidogrel in the non-invasive and invasive subgroups.14 The Clopidogrel for High Atherothrombotic Risk and Ischemic Stabilization, Management and Avoidance (CHARISMA) trial studied the addition of clopidogrel to aspirin in a stable, more heterogeneous population at risk for atherothrombotic events. Overall, the combination was not more effective than aspirin monotherapy, but a trend towards benefit was noted in those with symptomatic atherosclerotic disease, whereas a trend towards harm was seen in those included based only on multiple cardiovascular risk factors.15 This issue was also recently studied in the TRILOGY-ACS trial, in which NSTE-ACS patients intended for management without revascularization were prospectively randomized to receive either prasugrel or clopidogrel (both irreversible P2Y12 inhibitors). The results showed no significant overall benefit of prasugrel over clopidogrel during 24 months, even though the Kaplan–Meier curves for the efficacy endpoints tended to separate after 1 year,8 and with a reduction of the primary outcome in the subgroup who underwent angiography.16 In the no revascularization subgroup of the current PLATO NSTE-ACS substudy, there were consistent benefits with ticagrelor when compared with clopidogrel concerning both mortality and non-fatal ischaemic events. Although the no revascularization subgroup of the present PLATO substudy seems similar to the TRILOGY-ACS population, it is impossible to compare the effect of the respective new P2Y12 inhibitor vs. clopidogrel across the two studies. Patients with intention for treatment without revascularization were prospectively studied in TRILOGY-ACS, while the present PLATO substudy was a post hoc stratification with subgroups of revascularization/no revascularization defined post randomization and post procedures. The TRILOGY-ACS population also was of higher risk with more prevalent comorbidities such as hypertension, dyslipidaemia, and diabetes, and had higher rates of previous myocardial infarction and revascularization procedures. In ACS patients managed with revascularization, more potent P2Y12 inhibition has been associated with better outcomes, as shown in the intention for invasive management subgroup analysis of PLATO for ticagrelor17 and in TRITON for prasugrel.7 In the Clopidogrel and Aspirin Optimal Dose Usage to Reduce Recurrent Events–Seventh Organization to Assess Strategies in Ischemic Syndromes (CURRENT OASIS-7) trial also, intensified P2Y12 inhibition with double dose clopidogrel for the first 7 days showed no significant difference in outcome in the overall population,18 but a reduction in cardiovascular events, including stent thrombosis in the pre-specified (albeit post-randomization) subgroup undergoing PCI.19 With more potent platelet inhibition, bleeding complications have usually been increasing. In TRITON-TIMI 38, where prasugrel vs. 
clopidogrel was evaluated in ACS, prasugrel demonstrated a reduction in thrombotic events, but at the cost of significantly increased rates of major bleeding, including fatal bleeding, particularly in patients at high bleeding risk defined as high age and low body weight.7 In TRILOGY-ACS on the other hand, which included a lower maintenance dose, prasugrel did not cause any increased major bleeding rate, including patients >75 years of age. In the present analysis, there was no significant difference in PLATO-defined total major bleeding with ticagrelor compared with clopidogrel. However, the incidence of non-CABG-related major bleeding was significantly higher in the ticagrelor group. There was no significant difference in life-threatening or fatal bleeding or major bleeding as defined by the TIMI criteria, but the composite of PLATO major/minor bleeding was increased with ticagrelor.\n\nBODY.DISCUSSION.STUDY LIMITATIONS:\nThere are several limitations to this work. The sample size is large, which gives high power to detect even relatively small differences in effect that may or may not be clinically important. The revascularization/no revascularization analyses were post hoc investigations of subgroups identified post-randomization, which makes the analyses subject to potential bias. Because landmark analyses were used, the risk of time-dependent confounding is acknowledged. As sensitivity analyses, we also performed landmark analyses at 30 days instead of 10 days, with consistent results. Nevertheless, the findings reported with regard to revascularization status should be interpreted strictly as exploratory and hypothesis generating. The present results based on actual revascularization strategy support and complement those of our previous analysis based on pre-randomization intention to treat with invasive or conservative management, and the results based on performed revascularization or not are consistent with the overall NSTE-ACS results, as well as the overall PLATO results.\n\nBODY.CONCLUSIONS:\nIn this substudy of the PLATO trial, ticagrelor compared with clopidogrel consistently reduced the rates of ischaemic events and mortality without any difference in overall major bleeding in patients with an entry diagnosis of NSTE-ACS, and this effect was independent of whether or not early revascularization was performed. These results harmonize with the European Society of Cardiology (ESC) NSTE-ACS guidelines, which recommend ticagrelor in all patients at moderate-to-high risk of ischaemic events, regardless of initial treatment strategy.1\n\nBODY.SUPPLEMENTARY MATERIAL:\nSupplementary material is available at European Heart Journal online.\n\nBODY.FUNDING:\nThis work was supported by AstraZeneca, who funded the PLATO trial. Funding to pay the Open Access publication charges for this article was provided by Astra Zeneca. Conflicts of interest: D.L.: Institutional research grant from AstraZeneca to perform this research; and lecture fees from AstraZeneca. C.V.: research grant from AstraZeneca to perform this research; member of the Speakers' Bureaus for AstraZeneca, Eli Lilly & Company, and The Medicines Company. C.P.C.: research grants/support from Accumetrics, AstraZeneca, CSL Behring, Essentialis, GlaxoSmithKline, Merck, Regeneron, Sanofi, and Takeda; on advisory boards for Alnylam, Bristol-Myers Squibb, Lipimedix, and Pfizer (funds donated to charity); and holds equity in Automedics Medical Systems. 
R.A.H.: consulting/advisory board fees from Bristol-Myers Squibb, Sanofi, Portola Pharmaceuticals, Johnson & Johnson and Merck; grant support from Eli Lilly/Daiichi Sankyo., Merck, Portola Pharmaceuticals, Sanofi, Johnson & Johnson, Bristol-Myers Squibb, The Medicines Company, and AstraZeneca. A.H.: reports being an employee of AstraZeneca. J.M.: reports being an employee of AstraZeneca and having equity ownership. S.H.: advisory board member for AstraZeneca, Bristol-Myers Squibb, Pfizer, and Bayer; research support from GlaxoSmithKline, Pfizer, and Sanofi-Aventis. P.G.S.: research grant (to INSERM U698): NYU School of Medicine, Sanofi, Servier. Speaking or consulting: Amarin, AstraZeneca, Bayer, Boehringer-Ingelheim, Bristol-Myers-Squibb, Daiichi-Sankyo, GSK, Lilly, Medtronic, Otsuka, Pfizer, Roche, sanofi, Servier, The Medicines Company, Vivus. Stockholding: Aterovax. J.H.C.: advisory board fees from BMS, AstraZeneca, Eli Lilly/Daiichi Sankyo; consultancy fees from Merck and Servier. R.F.S.: research grants from AstraZeneca, Eli Lilly/Daiichi Sankyo, and Merck; research support from Accumetrics; honoraria from AstraZeneca, Eli Lilly/Daiichi Sankyo, Merck, Iroko, Accumetrics, and Medscape; consultancy fees from AstraZeneca, Merck, Novartis, Accumetrics, Sanofi-Aventis/Regeneron, Bristol-Myers Squibb, Eisai, Roche and Daiichi Sankyo. S.R.S.: reports no conflict of interest. L.W.: research grants from AstraZeneca, Merck & Co, Boehringer-Ingelheim, Bristol-Myers Squibb/Pfizer, GlaxoSmithKline; consultant for Merck & Co, Regado Biosciences, Evolva, Portola, C.S.L. Behring, Athera Biotechnologies, Boehringer-Ingelheim, AstraZeneca, GlaxoSmithKline, and Bristol-Myers Squibb/Pfizer; lecture fees from AstraZeneca, Boehringer-Ingelheim, Bristol-Myers Squibb/Pfizer, GlaxoSmithKline, and Merck & Co.; honoraria from Boehringer Ingelheim, AstraZeneca, Bristol-Myers Squibb/Pfizer, GlaxoSmithKline, and Merck & Co.; travel support from AstraZeneca and Bristol-Myers Squibb/Pfizer. S.K.J.: research grant from AstraZeneca, Eli Lilly, Bristol-Myers Squibb, Terumo Inc, Medtronic, and Vascular Solutions; honoraria from The Medicines Company, AstraZeneca, Eli Lilly, Bristol-Myers Squibb, and IROKO; consultant/advisory board from AstraZeneca, Eli Lilly, Merck, Medtronic, and Sanofi.\n\nBODY.SUPPLEMENTARY MATERIAL:\n\n\nSupplementary Data\n\n**Question:** Compared to clopidogrel what was the result of ticagrelor on major or minor bleeding (by TIMI criteria)?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
1295
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A Comparison of Two Delivery Modalities of a Mobile Phone-Based Assessment for Serious Mental Illness: Native Smartphone Application vs Text-Messaging Only Implementations\n\n ABSTRACT.BACKGROUND:\nMobile phone–based assessment may represent a cost-effective and clinically effective method of monitoring psychotic symptoms in real-time. There are several software options, including the use of native smartphone applications and text messages (short message service, SMS). Little is known about the strengths and limitations of these two approaches in monitoring symptoms in individuals with serious mental illness.\n\nABSTRACT.OBJECTIVE:\nThe objective of this study was to compare two different delivery modalities of the same diagnostic assessment for individuals with non-affective psychosis—a native smartphone application employing a graphical, touch user interface against an SMS text-only implementation. The overall hypothesis of the study was that patient participants with sewrious mental illness would find both delivery modalities feasible and acceptable to use, measured by the quantitative post-assessment feedback questionnaire scores, the number of data points completed, and the time taken to complete the assessment. It was also predicted that a native smartphone application would (1) yield a greater number of data points, (2) take less time, and (3) be more positively appraised by patient participant users than the text-based system.\n\nABSTRACT.METHODS:\nA randomized repeated measures crossover design was employed. Participants with currently treated Diagnostic and Statistical Manual (Fourth Edition) schizophrenia or related disorders (n=24) were randomly allocated to completing 6 days of assessment (four sets of questions per day) with a native smartphone application or the SMS text-only implementation. There was then a 1-week break before completing a further 6 days with the alternative delivery modality. Quantitative feedback questionnaires were administered at the end of each period of sampling.\n\nABSTRACT.RESULTS:\nA greater proportion of data points were completed with the native smartphone application in comparison to the SMS text-only implementation (β = -.25, SE=.11, P=.02), which also took significantly less time to complete (β =.78, SE= .09, P<.001). Although there were no significant differences in participants' quantitative feedback for the two delivery modalities, most participants reported preferring the native smartphone application (67%; n=16) and found it easier to use (71%; n=16). 33% of participants reported that they would be willing to complete mobile phone assessment for 5 weeks or longer.\n\nABSTRACT.CONCLUSIONS:\nNative smartphone applications and SMS text are both valuable methods of delivering real-time assessment in individuals with schizophrenia. 
However, a more streamlined graphical user interface may lead to better compliance and shorter entry times. Further research is needed to test the efficacy of this technology within clinical services, to assess validity over longer periods of time and when delivered on patients' own phones.\n\nBODY.INTRODUCTION:\nSchizophrenia is a major public health problem affecting approximately 1% of the general population. It represents both a substantive emotional burden for those involved and a socioeconomic burden in the United Kingdom [1]. There is now increasing emphasis on treating patients within the community [2]. However, symptoms, mood, and functioning can change suddenly, potentially leading to relapse, unscheduled acute care, or self-injury. Monitoring of psychotic symptoms usually relies on interview assessments infrequently conducted by clinical staff, with limited resources, reducing the capacity to detect sudden change. Additionally, interviews relying on retrospective accounts introduces bias and averaging, thereby losing clinical information [3]. A more sensitive approach would be to closely monitor patients during the course of their everyday lives. Real-time assessment of psychotic symptoms is attractive in that it can lead to early and immediate intervention, potentially preventing the deterioration of an individual's mental state. Over the past decade, assessment technologies have been increasingly employed in the monitoring of psychosis [4]. Typically, patients complete self-report questions at several different times of the day in real-world settings [5]. Early research in this field focused on the use of Personal Digital Assistants (PDAs) [6,7], but these technologies are becoming increasingly obsolescent and now occupy a very small share of the commercial market. Mobile phones have the advantage that they allow the automatic and wireless uploading of information to a central computer. As mobile phone technology becomes increasingly widespread, software applications can operate from users' own phones, omitting the need for them to carry an additional device. In a recent study, we observed that 83% (n=30/36) of a sample with psychosis currently owned a mobile phone, suggesting that usage in individuals with psychosis is comparable to the general population in the United Kingdom. Other studies have also reported high levels of familiarity with and access to mobile phone technology in individuals with serious mental illness [8]. There are multiple ways in which mobile phone-based assessment can be employed in health care. Text messages have the advantage that they are relatively easy and inexpensive to deliver and are not limited by the model or make of an individual's phone [9]. Individuals will likely be familiar with sending and receiving text messages. To date, two studies have used text messaging in the clinical care of psychotic illness. Španiel and colleagues [10] used weekly text-based assessments in order to alert consulting psychiatrists to early signs of relapse in patients with psychosis, which significantly reduced the number of inpatient admissions over one year. Additionally, Granholm and colleagues [11] have used text messages to promote cognitive behavioral therapy techniques and health-promoting behaviors. Pilot data showed a significant increase in social interaction and a reduction in hallucinations, but no significant effect on medication use. 
Native software applications, with graphical user interfaces, uploaded onto smartphone devices pose another viable option for conducting real-time assessment. For the purpose of this research, a smartphone will be defined as having (1) computing power, (2) a touch screen, (3) third party application development and distribution, and (4) the option of 3G connectivity and high-speed data transfer. Native smartphone applications can be purpose built allowing for greater flexibility and ease of use [9] and are now increasingly being used in the treatment of physical health problems (eg, diabetes) [12]. Our research group has recently developed one such software application for use on Android smartphones. This software enables users to respond to self-report questions relating to their symptoms on touch-screen analogue scales. Initial piloting of this technology showed little dropout and high levels of data points completed in 44 individuals experiencing psychosis, sampled for 1 week. 13 self-assessment scales representing different dimensions of psychotic illness were validated against commonly employed, gold-standard clinical interviews. However, further information is required to assess the benefits and limitations of native smartphone applications in comparison to other methods of mobile phone-based data collection. The objective of this study was to compare two different delivery modalities of the same diagnostic assessment for individuals with nonaffective psychosis—a native smartphone application employing a graphical, touch user interface against an short message service (SMS) text-only implementation. It was thought that both approaches would have certain advantages in the real-time assessment of psychosis but that the native smartphone application would display greater usability and functionality. Understanding the appropriate technology, user interface, procedures, and methods for administering mobile phone–based assessment is an important step in making it both cost- and clinically effective, with the eventual aim of integrating it into long-term illness management. The overall hypothesis of the study was that patient participants with serious mental illness would find both the smartphone application and SMS text-only implementation feasible and acceptable to use, measured by quantitative postassessment feedback questionnaire scores, the number of data points completed, and the time taken to complete the assessment. We also had three more specific predictions: (1) that the participants would complete a significantly greater number of entries on a native smartphone application when compared to a SMS text-only implementation, (2) that entries would take significantly longer to complete on a SMS text-only implementation, and (3) that the smartphone application would be appraised more favorably in quantitative feedback scores. We also asked participants about the maximum length of time that individuals thought they would be willing to complete assessment over for each delivery modality.\n\nBODY.METHODS.PARTICIPANTS:\n24 community-based patients meeting or having met the criteria for a Diagnostic and Statistical Manual (Fourth Edition) diagnosis of schizophrenia (n=22) or schizoaffective disorder (n=2) as made by the clinical service and checked against DSM-IV criteria by the researcher were admitted to the study. All participants were aged 18-50 and provided informed consent to take part. 
Participants were required to own and have access to a mobile phone with which to use the SMS text-only implementation. Organic or substance induced psychosis were exclusion criteria. The sample was predominantly male (79%; n=19) and white British (71%; n=17) with a mean age of 33.0 (SD 9.5, min=18, max=49). Patients were recruited through Community Mental Health Teams (63%; n=15), Early Intervention Services (33%; n=8) and through a supported-living organization (4%; n=1). The mean number of past hospital admissions in the sample was 2.3 (SD 2.5, min = 0, max = 9) and the majority of the sample were currently taking antipsychotic medication (n=20). The Positive and Negative Syndrome Scale (PANSS [13]) is taken as a gold standard-clinical measure commonly used to assess psychosis. PANSS scores in this sample (at baseline) varied considerably between individuals, suggesting a range of symptom severities (positive subscale: mean 15.0 (SD 4.5, min = 7, max = 28); negative subscale: mean 12.3 (SD 3.5, min = 7, max = 21); general subscale: mean 28.8 (SD 6.8, min = 18, max = 50).\n\nBODY.METHODS.EQUIPMENT:\nThe two different modalities for delivering real-time assessment, namely a smartphone application and an SMS text-only implementation, were designed such that protocol for the monitoring procedure was common to each. The core functionality for the protocol that is common to both modalities is shown in Table 1. From the end user's perspective, the two modalities differ only in the way the participant interacts with the device. These differences are summarized in Table 2. The native smartphone application was specifically developed for Android mobile phones (Figures 1 and 2). Android is an operating system created by Google, which runs on mobile phones from different manufacturers. Although this software was developed to run on any Android phone, for this study we used the Orange San Francisco device. For the purpose of this study, it was not wirelessly enabled and all answers were stored on the mobile phone handset for downloading at the end of the sampling procedure. However, this made no difference to the user's perspective of the software. The SMS text-only implementation was driven by openCDMS [14], an open source, secure online platform, which facilitated both the sending of questions and the storing of responses (Figure 3). openCDMS is a Web-enabled server-side application written in Java developed for electronic data collection in clinical studies and trials. It had pre-existing capabilities for SMS text messaging, individual participant records, and study questionnaires. Where new functionality was needed for the text message system, it was specifically developed.\n\nBODY.METHODS.SEMISTRUCTURED INTERVIEWS:\nThe PANSS [13] was conducted by an experienced administrator before and after each period of sampling. The PANSS is a semistructured interview assessing positive (7 items), negative (7 items), and global (16 items) symptoms of psychosis and has been extensively validated [13,15]. In this study, the PANSS was employed to provide convergent validity to the adapted mobile phone assessment depression items and to determine which form of delusions should be assessed.\n\nBODY.METHODS.QUANTITATIVE FEEDBACK QUESTIONNAIRE:\nA purpose-designed quantitative feedback questionnaire was developed to assess the acceptability and feasibility of the native smartphone application and SMS text-only implementations (see Tables 3 and 4). 
This included reactivity to the methodology and whether it had been successfully integrated into an individual's everyday routine. Three items were taken from a previous study assessing the use of PDAs in individuals with psychosis [16]. These were \"Overall, this was stressful\", \"Overall, this was challenging\", and \"Overall, this was pleasing\". All items were rated from 1 (Not at all) to 7 (Very much so). In order to gauge the feasibility of the two methods of mobile phone assessment in the long-term management of patient's symptoms, participants were also asked to the maximum length of time within which they hypothetically would be willing to complete questions with each delivery modality. At the end of the study, participants were also asked which delivery modality they found easier to use and which they preferred (smartphone application, SMS text-only implementation, or no preference). Table 1 Core functionality common to both the native smartphone application and text-message systems. Configurable number and times of question sets each day At the start of the study, this can be configured for the desired number of questions sets and the times of these questions. During this study, 4 question sets per day were used. Configurable questions The wording of the questions is set up at the start of the study. It is easily configurable to support multiple studies with different questions. In addition, delusion questions are configured at the point when the researcher meets with the participant, through an administrative dialogue. Multiple question sets Multiple sets of questions are supported, and the software will switch between these at each consecutive alarm. So, if there are 2 question sets, it will ask set 1 at the first alarm, set 2 at the second, set 1 at the third, etc. The only exception to this is if the participant fails to answer some of the questions. In that case, the same question set is presented again at the next alarm. Question branching The next question displayed to the user can depend on the answer to one or more previous questions. This allows the questions asked to match the participant’s symptoms or situation. For example, if a participant does not endorse the first question about a particular psychotic symptom, all remaining questions about it will be skipped. This means that the participant does not waste time answering unnecessary questions. Questionnaire timeout A time window is enforced, within which the questionnaire has to be completed. No further answers will be accepted outside of this time window. Logging The time at which each question was answered is recorded to the nearest second. This can be used to analyze the time taken to answer each question as well as the time to complete the whole questionnaire. Branching logic A range of different branching logic types is available. In the following list, the first three items apply to branching based on an answer to a single question. All other items are applicable when branching is based on answers to multiple questions. \n Less than Greater than Equal to Greater than sum Less than sum One is less One is greater All are less All are greater \n Table 2 Human-machine interface difference between native smartphone application and SMS text-only implementations of the common diagnostic assessment. 
\n \n Native smartphone application SMS text message Alerts Reuses Android’s Alarm Manager so user definable alerts are available; delivered at semirandom intervals during the data collection period; users can snooze the alert to be reminded 5 minutes later. Only a single alert for each question set. Alert is the phone’s SMS alert, triggered when a SMS question is received; delivered at semirandom intervals during the data collection period. As each question is delivered as an SMS text message, an alert is triggered for each question. A reminder SMS is sent after 5 minutes if no response is received. Questions Presented as one question per page in the application. User is able to navigate through the pages of questions. Delivered as SMS text messages one question per SMS. User must respond with an SMS text message and wait for the next question in the set. Data input Continuous slider bar, user slides with finger touching screen. Position of slider mapped to 7-point Likert scale. User enters number between 1 and 7 as a response. Saving data Fully automatic, no user input required User must send SMS text message containing the response value in reply to the question to record their response. Figure 1A typical question from the Android app implementation, showing the full screen with the question and the slider for data entry. Figure 2The start screen shown to the user in the Android app at the start of each set of questions (from which the users may proceed or delay [\"snooze\"] for a further 10 minutes they wish). Figure 3SMS configuration screen implemented in openCDMS.\n\nBODY.METHODS.DIAGNOSTIC ASSESSMENT ITEMS:\nSeven different symptom dimensions were assessed using previously validated mobile phone assessment scales. In order to minimize the burden on participants, these were split into two alternating sets. In set one, hopelessness (2-4 items), depression (2-6 items), and hallucinations (2-8 items) were assessed; and in set two, anxiety (1-4 items), grandiosity (2-3 items), paranoia (3-6 items), and delusions (0-8 items) were examined. Therefore, although there were four assessment points, each set of questions was assessed only twice per day. All questions related to the period of time that had elapsed since the last entry. Items were branched so that the questions changed depending on an individual's previous responses. The depression subscale was adapted from the original validation study in an effort to increase its association with the PANSS depression scale, since the correlation was moderately low. The new subscale consisted of the items: \"I have felt miserable\" (new), \"I have had no interest in seeing other people\" (new), \"I have felt worthless\" (new), \"I have felt sad\", \"My mood has affected my appetite or sleep\", and \"I have had thoughts about harming myself\". As a result, the correlation between the mobile phone assessment scale and the PANSS was increased from rho = 0.45 (P=.01) in the original research to rho = 0.56 (P=.01) in the current study (calculated for the week of sampling with the native smartphone application). A strong correlation indicates that the interview and mobile phone–based assessments are tapping into similar concepts. The adapted depression assessment scale also showed considerable variability across time (mean squared successive difference: 2.2 (SD 2.9); within participant standard deviation: 0.8 (SD 0.6), suggesting that it was sensitive to subtle fluctuations in mood. 
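To make the two variability indices quoted above explicit, the sketch below computes a per-participant mean squared successive difference and within-participant standard deviation from long-format momentary ratings. It is a minimal illustration, not the authors' code; the file and column names (participant, depression) are assumptions, and entries are assumed to be sorted in time order within each participant.

```python
# Rough sketch of the two variability indices reported in the text:
# mean squared successive difference (MSSD) and within-participant SD,
# computed per participant from long-format momentary ratings.
# File and column names are assumptions.
import pandas as pd

def mssd(series: pd.Series) -> float:
    """Mean of squared differences between successive entries."""
    diffs = series.dropna().diff().dropna()
    return float((diffs ** 2).mean())

ratings = pd.read_csv("momentary_ratings.csv")  # hypothetical long-format file
per_person = ratings.groupby("participant")["depression"].agg(
    mssd=mssd,
    within_sd="std",
)

# Sample-level mean and SD of the per-participant indices, in the same
# form as the figures quoted in the text.
print(per_person.mean())
print(per_person.std())
```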
A wide range of delusions have been reported in the psychosis literature [17,18]. Therefore, during the initial briefing session, the researcher selected which delusion items were presented to the participant through the administrative dialogue. The choice of items was based on consultation with clinical staff and an initial PANSS interview to assess symptoms. For those individuals experiencing three or greater delusions, those two with the greatest conviction and distress were entered. The frequency of the different delusions were: \"I have felt like other people were reading my thoughts\" (n=5), \"I have felt that my thoughts were being controlled or influenced\" (n=4), \"I have felt like I could read other people's thoughts\" (n=3), \"I have felt like things on the TV, in books or magazines had a special meaning for me\" (n=3), \"I have felt like something bad was about to happen\" (n=2), \"I have felt distinctly concerned about my physical health\" (n=2), and \"I have felt like my thoughts were alien to me in some way\" (n=1). The delusion items were kept constant for each participant across the two conditions of the trial.\n\nBODY.METHODS.PROCEDURE:\nA randomized repeated measures crossover design was employed. Participants were randomly allocated to either completing 6 days of sampling using the native smartphone application implementation on a smartphone provided to them for the purpose of the study, or via SMS text-only implementation using their own phone. There was then a 7-day rest period in order to reduce carryover effects before the individual completed a further 6 days of sampling with the alternate delivery modality. Each participant, therefore, completed two periods of sampling: 1 week with the native smartphone application implementation and 1 week with the SMS text-only implementation. Randomization was achieved through the openCDMS software. openCDMS uses a permuted block randomizer, and in this case we used a minimum block size of 4 and maximum of 6. The researcher initially met with participants to obtain written consent and demographic information, complete clinical interviews, provide training in the first delivery modality, and administer practice questions. The participant number and delusion items were set by the researcher on either a password-protected administrators' page (on the native smartphone application) or through the openCMDS website (on the SMS text-only implementation). The volume of the alarm prompts was also set according to the preference of the participant when using the native smartphone application implementation. On each day of sampling, participants could complete a maximum of four sets of questions, available at pseudorandom times (selected by a random number generator at least 1 hour apart) when prompted by the mobile phone device between 09:00 and 21:00 hours. All participants had 15 minutes from the first alarm within which to complete the questions. A forced entry time was thought to prevent a self-selection bias (ie, answering the questions only when asymptomatic) [19]. The researcher telephoned once or twice (as per the participant's preference) during the week in order to encourage compliance, answer questions, and to ascertain any problems with the software. The researcher attempted to keep the number of calls made to participants balanced over the two conditions: smartphone: mean, 1.7 (SD 0.5); text-messages: mean, 1.7 (SD 0.6). 
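As an illustration of the pseudorandom prompt scheduling described in the procedure (four alerts per day between 09:00 and 21:00, drawn at least 1 hour apart), a simple rejection-sampling sketch is shown below. This is only a plausible reconstruction under those stated constraints, not the scheduling code actually used in the study software.

```python
# Illustrative sketch (not the study's implementation) of drawing four
# daily prompt times uniformly between 09:00 and 21:00 with at least
# 60 minutes between consecutive prompts, via rejection sampling.
import random
from datetime import datetime, timedelta

def draw_prompt_times(n_prompts=4, start_hour=9, end_hour=21, min_gap_minutes=60):
    window = (end_hour - start_hour) * 60  # sampling window in minutes
    while True:
        minutes = sorted(random.uniform(0, window) for _ in range(n_prompts))
        gaps = [b - a for a, b in zip(minutes, minutes[1:])]
        if all(gap >= min_gap_minutes for gap in gaps):
            base = datetime.now().replace(hour=start_hour, minute=0,
                                          second=0, microsecond=0)
            return [base + timedelta(minutes=m) for m in minutes]

for t in draw_prompt_times():
    print(t.strftime("%H:%M"))
```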
Upon completion of the first week of sampling, the researcher met with the participant to re-administer the clinical interviews and to gather quantitative feedback on the first device. This procedure was then repeated the following week with the alternate device. At the end of the study, participants rated which device they preferred and found easier to use. Qualitative interviews were also conducted (the results of which are provided in a separate manuscript). Participants on pay-as-you-go tariffs received £50 worth of phone credit, which they topped up prior to the week of text-message questions. If the participant was on a contract tariff (ie, direct debit) but did not have unlimited free texts, then they were reimbursed £50 at the end of the study. All participants were reimbursed an additional £30 upon completion of both sampling procedures.\n\nBODY.METHODS.STATISTICS:\nAll analysis was conducted in Stata 10.0 [20]. First, it was important to determine whether there was an interaction between period and delivery modality allocation. Delivery modality order (smartphone application then SMS text-only implementation or SMS text-only implementation then smartphone application) was entered into regression analysis as a predictor of the total score for each of the three outcome measures (mean time taken for each entry, number of completed data points, or quantitative feedback score) summed across the two conditions. A Spearman correlation was used to assess the similarity between scores across the two conditions. Subsequent analysis was performed in a nested (long) form of the data. This data structure violates the assumption of independent observations meaning that additional constraints need to be placed on the statistical models. Multilevel modelling (xtreg) was therefore used to investigate whether delivery modality (smartphone application or SMS text-only implementation) or time-point (week 1 or week 2) significantly predicted any of the outcome variables. The outcomes were the mean time taken to complete each entry, the number of completed data points, and quantitative feedback scores. As the highest level of clustering, participant number was entered as the random effect. Bootstrapping was used to account for the use of non-normal distributed variables in all analysis. This has been suggested as an appropriate alternative when parametric assumptions are not met [21]. The variables were manually standardized in order to aid the interpretation of the results.\n\nBODY.RESULTS:\nWe were not able to keep a systematic record of how many individuals were approached by their care coordinators for participation and declined. However, it was the impression of clinical teams that the refusal rate was around 30%. Of the 38 individuals who were referred to the research team, 8 changed their mind, 3 were unable to be contacted, and 3 were deemed ineligible prior to consent. This provided a final sample of 24 individuals, all of whom completed the feedback assessments (ie, no participants withdrew from the study). One individual, however, did ask for the SMS text-only assessment to be ended 2 days early because she found it was making her ruminative.\n\nBODY.RESULTS.TESTING FOR AN INTERACTION BETWEEN SAMPLING PERIOD AND METHOD OF ASSESSMENT:\nFirst, we assessed whether there was an interaction between the sampling period and device type. 
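Before the results, a brief aside on the multilevel model described in the Statistics section: a rough Python approximation is sketched below. The study itself used Stata's xtreg with bootstrapped standard errors; the statsmodels version here omits the bootstrap step, and the data file and column names (participant, modality, week, n_entries) are assumptions.

```python
# Hedged approximation of the multilevel model described above:
# a random intercept per participant, with delivery modality and study
# week as fixed effects. Not the authors' Stata code, and the bootstrap
# used in the paper is omitted here.
import pandas as pd
import statsmodels.formula.api as smf

long_df = pd.read_csv("outcomes_long.csv")  # hypothetical: 2 rows per participant

model = smf.mixedlm(
    "n_entries ~ modality + week",      # fixed effects
    data=long_df,
    groups=long_df["participant"],      # random effect: participant
)
result = model.fit()
print(result.summary())
```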
Regression analysis showed that the order of the two conditions did not significantly predict the total number of entries an individual completed (β =.06, SE=.22, P=.78), nor the length of time it took to complete each entry (β = -.06, SE=.20, P=.78). However, starting with the SMS text-only implementation did show nonsignificantly increased negative appraisals of the devices (β = -.35, SE=.20, P=.08).\n\nBODY.RESULTS.COMPARING NATIVE SMARTPHONE APPLICATION AND SMS TEXT-ONLY IMPLEMENTATION:\nThe number of entries completed during sampling was strongly correlated between the two delivery modalities (rho = 0.61, P=.002). When controlling for order effect, participants completed significantly greater numbers of entries on the native smartphone application when compared to the SMS text-only implementation (Tables 3 and 4). Past studies have defined compliance as completing at least one-third of all possible data points [22]. Using this criterion, 88% (n=21) of individuals were compliant when using the native smartphone application, whereas 71% (n=19) were compliant when using the SMS text-only implementation. Participants completed a significantly greater number of entries in week one (mean = 16.4, SD 16.4) when compared to week two of sampling (mean = 13.7, SD 6.5; β = -.22, SE=.11, P=.04). When broken down by day, participants completed a mean of 3.4 (SD 0.8) entries on day 1, 3.8 (SD 0.5) on day 2, 3.0 (SD 1.2) on day 3, 3.4 (SD 1.1) on day 4, 3.0 (SD 1.3) on day 5, and 3.1 (SD 1.2) entries on day 6 when using the smartphone application. When using the SMS system, this was lower: 2.3 (SD 1.4) entries on day 1, 2.4 (SD 1.3) on day 2, 2.6 (SD 1.5) on day 3, 2.6 (SD 1.3) on day 4, 2.0 (SD 1.4) on day 5, and 1.9 (SD 1.7) on day 6. The length of time taken to complete each entry was highly correlated between the delivery modalities (rho = 0.62, P=.02), suggesting that those individuals who took longer on the native smartphone application also took longer to complete the SMS text-only implementation. However, as can be seen in Tables 3 and 4, individuals took an average of 68.4 seconds (SD 39.5) to complete a full set of questions on the native smartphone application and 325.5 seconds (SD 145.6) on the SMS text-only implementation (β = 0.78, SE=.09, P<.001). Thus, question sets took 4.8 times longer on the SMS text-only implementation than on the native smartphone application. There was no significant difference in the length of time (seconds) it took participants to complete entries in week 1 (mean = 210.4, SD 171.3) when compared to week 2 (mean = 183.5, SD 166.3; β = -.08, SE=.09, P=.36) of sampling. The total quantitative feedback scores for each device were also strongly correlated (rho = 0.48, P=.02), suggesting that appraisals of the procedure were similar across both delivery modalities. No significant difference was observed for the total quantitative feedback score when comparing the two delivery modalities. Although none of the individual quantitative feedback items were significantly different, nonsignificantly higher scores (suggesting greater agreement) were observed for the SMS text-only implementation on the items \"Were there times where you had to stop doing something in order to answer the questions?\", \"Were there times when you felt like not answering?\", and \"Was filling in the questions inconvenient?\" (Table 3). 
When considering an order effect, participants scored higher on the items \"Were there times when you felt like not answering?\" (week 1 mean 2.2, SD 1.1; week 2 mean 3.1, SD 2.2; β = 0.26, SE=.12, P=.03) and \"Was it difficult to keep the device with you or carry it around?\" (week 1 mean 1.8, SD 1.2; week 2 mean 2.6, SD 1.9; β = 0.26, SE=.12, P=.03) in the second week of sampling when compared to the first. No other significant order effects were observed on the quantitative feedback scores. Participants also reported the maximum length of time within which they would hypothetically be willing to complete questions on each delivery modality, which is displayed in Table 5. At the end of the final period of sampling, 67% (n=16) of individuals preferred the native smartphone application, 13% (n=3) of people preferred the SMS text-only implementation, and 21% (n=5) of people had no preference. Additionally, 71% (n=17) of individuals found the native smartphone application easier to use, 17% (n=4) of people found the SMS text-only implementation easier to use, and 13% (n=3) of people had no preference. Worthy of note is that 2 out of the 3 individuals who preferred their own phone currently owned a smartphone, which they used to complete the SMS text-only implementation.

Table 3. Quantitative feedback scores for the native smartphone application and SMS text-only implementation, and summary statistics.

| Item | Native mean | Native SD | Native min | Native max | SMS mean | SMS SD | SMS min | SMS max | β | P | SE |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Time taken to complete questions (seconds) | 68.4 | 39.5 | 18.8 | 179.7 | 325.5 | 145.6 | 118.8 | 686.9 | 0.78 | <.001 | 0.09 |
| Number of entries completed | 16.5 | 5.5 | 4.0 | 24.0 | 13.5 | 6.6 | 0.0 | 24.0 | -0.25 | .02 | 0.11 |
| Did answering the questions take a lot of work? | 1.8 | 1.1 | 1 | 5 | 2.3 | 1.6 | 1 | 6 | 0.17 | – | 0.13 |
| Were there times when you felt like not answering? | 2.3 | 1.3 | 1 | 5 | 3.0 | 2.1 | 1 | 7 | 0.21 | .07 | 0.12 |
| Did answering the questions take up a lot of time? | 1.7 | 0.9 | 1 | 4 | 2.3 | 1.6 | 1 | 7 | 0.24 | – | 0.14 |
| Were there times where you had to stop doing something in order to answer the questions? | 3.4 | 1.7 | 1 | 7 | 4.1 | 1.7 | 1 | 7 | 0.20 | .10 | 0.12 |
| Was it difficult to keep track of what the questions were asking you? | 1.6 | 1.2 | 1 | 7 | 1.9 | 1.7 | 1 | 7 | 0.11 | – | 0.15 |
| Were you familiar with using this type of technology? | 4.7 | 2.3 | 1 | 7 | 5.3 | 2.2 | 1 | 7 | 0.14 | – | 0.14 |
| Was it difficult to keep the device with you or carry it around? | 1.9 | 1.4 | 1 | 6 | 2.4 | 1.8 | 1 | 6 | 0.16 | – | 0.12 |
| Did you ever lose or forget the device? | 1.7 | 0.9 | 1 | 4 | 1.8 | 1.4 | 1 | 6 | 0.06 | – | 0.13 |
| Was using the key pad/touch screen difficult to use? | 2.0 | 1.3 | 1 | 5 | 1.8 | 1.4 | 1 | 6 | -0.08 | – | 0.15 |
| Do you think other people would find the software easy to use? | 5.3 | 1.8 | 2 | 7 | 5.9 | 1.4 | 3 | 7 | 0.19 | – | 0.16 |
| Do you think you could make use of this approach in your everyday life? | 4.0 | 1.8 | 1 | 7 | 3.9 | 2.2 | 1 | 7 | -0.02 | – | 0.13 |
| Do you think that this approach could help you or other service users? | 5.3 | 1.9 | 1 | 7 | 5.6 | 1.2 | 3 | 7 | 0.11 | – | 0.15 |
| Overall, this experience was stressful. | 1.8 | 1.1 | 1 | 5 | 1.8 | 1.3 | 1 | 6 | -0.04 | – | 0.18 |
| Overall, this experience was challenging. | 2.2 | 1.6 | 1 | 7 | 2.7 | 1.7 | 1 | 6 | 0.16 | – | 0.13 |
| Overall, this experience was pleasing. | 3.7 | 2.0 | 1 | 7 | 3.7 | 1.7 | 1 | 7 | 0.01 | – | 0.11 |
| Did filling in the questions make you feel worse? | 1.8 | 1.1 | 1 | 5 | 2.1 | 1.4 | 1 | 5 | 0.14 | – | 0.15 |
| Did filling in the questions make you feel better? | 2.8 | 1.5 | 1 | 6 | 3.0 | 1.6 | 1 | 7 | 0.08 | – | 0.13 |
| Did you find the questions intrusive? | 2.2 | 1.2 | 1 | 4 | 2.6 | 1.8 | 1 | 7 | 0.15 | – | 0.14 |
| Was filling in the questions inconvenient? | 2.0 | 1.0 | 1 | 4 | 2.5 | 1.4 | 1 | 5 | 0.23 | .09 | 0.14 |
| Did you enjoy filling in the questions? | 3.6 | 2.0 | 1 | 7 | 3.7 | 1.6 | 1 | 7 | 0.01 | – | 0.12 |
| Total quantitative feedback score (positive items reversed) | 53.0 | 11.2 | 33 | 76 | 56.2 | 14.2 | 27 | 88 | 0.13 | – | 0.10 |

Table 4. Quantitative feedback scores for the native smartphone application and SMS text-only implementation: momentary assessment symptom scores.

| Momentary assessment symptom scores | Native mean | Native SD | Native min | Native max | SMS mean | SMS SD | SMS min | SMS max |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Hallucinations | 2.7 | 1.8 | 1.0 | 6.5 | 2.5 | 1.8 | 1.0 | 6.0 |
| Anxiety | 2.8 | 2.0 | 1.0 | 6.3 | 2.1 | 1.4 | 1.0 | 6.4 |
| Grandiosity | 2.3 | 1.5 | 1.0 | 6.0 | 2.3 | 1.4 | 1.0 | 6.0 |
| Delusions | 2.0 | 1.3 | 1.0 | 5.1 | 1.9 | 1.1 | 1.0 | 4.7 |
| Paranoia | 2.9 | 1.8 | 1.1 | 6.5 | 2.6 | 1.7 | 1.0 | 7.0 |
| Hopelessness | 3.2 | 1.4 | 1.0 | 5.7 | 3.0 | 1.3 | 1.0 | 5.1 |

Table 5. Maximum length of time willing to complete questions on the two implementations.

| Maximum length of time willing to complete questions | Smartphone, n (%) | Text messages, n (%) |
| --- | --- | --- |
| <2 weeks | 2 (8%) | 3 (13%) |
| 2-3 weeks | 10 (42%) | 10 (42%) |
| 3-4 weeks | 1 (4%) | 5 (21%) |
| 4-5 weeks | 3 (13%) | 1 (4%) |
| 5+ weeks | 8 (33%) | 5 (21%) |

\n\nBODY.DISCUSSION:\nThe objective of this study was to compare two different delivery modalities of the same diagnostic assessment for individuals with nonaffective psychosis—a native smartphone application employing a graphical, touch user interface against an SMS text-only implementation. The overall hypothesis of the study was that participants with serious mental illness would find both systems feasible and acceptable to use, measured by the quantitative postassessment feedback questionnaire scores, the number of data points completed, and the time taken to complete the assessment. We also predicted that participants would complete a significantly greater number of data points, in less time, when using a native smartphone application when compared to the text messages and that the former software would be more positively appraised. The length of time that participants would hypothetically be willing to complete questions was also assessed in order to gauge the feasibility of longer-term assessment. In line with the hypotheses, participants completed a significantly greater number of entries on the native smartphone application (69%, mean = 16.5), when compared to the text message interface (56%, mean = 13.5). It is desirable to keep missing data to a minimum in order to generate a representative picture of an individual's symptoms over time. Other studies have observed similarly high rates of compliance to the native smartphone application when using PDAs (ie, 69-72%) [7,23]. The increased usability and streamlined, graphical interface seen in purpose-built software applications may encourage greater levels of compliance than those of SMS text-only implementations. On average, sets of questions on the SMS text-only implementation took 4.8 times longer to complete when compared to the native smartphone application, which may have contributed to the reduced rates of compliance seen with this method of sampling. Although just failing to reach statistical significance, the quantitative feedback highlighted greater disruption to activities and inconvenience and less inclination to complete the questions when participants used the SMS text-only implementation. This is perhaps understandable given the greater time investment incurred by this method. Past research has suggested that the movement of mobile technology from the foreground to the background of an individual's life represents an important step in the accommodation of these technologies [24], which may be less likely if it is perceived as burdensome.
Somewhat surprisingly, the two delivery modalities did not differ in any of the other quantitative feedback items or the total quantitative feedback score. The mean scores suggest that these appraisals were generally positive across both conditions. Thus, both forms of technology were deemed acceptable and well integrated and may represent suitable methods for facilitating real-time assessment. However, in the forced choice questions, the majority of the sample stated that they preferred the native smartphone application and that they found it easier to use, suggesting that this may be a more attractive delivery modality. Of the three individuals who preferred the SMS text-only implementation, two currently owned a smartphone, which they used for completing the text messages. These mobile phones may have allowed for a more streamlined text system (eg, threaded messages) reducing the differences between the two types of devices. While the mean quantitative feedback scores suggested generally positive appraisals of both delivery modalities, these varied considerably between individuals. For example, some participants stated that they felt this technology could help them, whereas others were more skeptical about its advantages. Some participants reported mild negative reactivity to the method. It is possible that mobile phone assessment is less suitable in certain subgroups of patients, and in its future application, reactivity should be carefully monitored. As this technology makes the transition from research to real-world clinical application, it will be vital to assess the feasibility and uptake of this software over longer periods of time and the factors influencing nonparticipation and withdrawal. In this study, participants completed fewer entries in the second week of sampling. Additionally, only a third of participants said they would be willing to complete the procedure for 5 or more weeks with the native smartphone application, with an even lower percentage for the SMS text-only implementation (21%). In the future, it may be necessary to employ machine learning in order to tailor the choice of questions and sampling rates to the service user [25], while placing particular emphasis on symptoms of primary concern to their clinical team. Person-tailored sampling could increase the feasibility of conducting longer-term real-time assessment. Automated and clinician-delivered feedback, or monetary incentives, may also promote acceptance and compliance. In conclusion, this study provides data to suggest that both native smartphone applications and SMS text-only implementation represent acceptable technologies for facilitating real-time assessment in individuals with nonaffective psychosis. However, the native smartphone application was found to be preferable to SMS text-only implementation in terms of greater data point completion and shorter response times. Limitations of this study include the relatively modest length of the sampling procedure and moderate rates of nonparticipation by those approached to take part. 
In the future, it will be important to upload software applications onto individuals' own phones rather than issuing them with an additional device.\n\n**Question:** Compared to completing 6 days of assessment (four sets of questions per day) with the SMS text-only implementation what was the result of completing 6 days of assessment (four sets of questions per day) with a native smartphone application on proportion of data points?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ]
significantly increased
1305
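The record above reports compliance as the share of participants completing at least one-third of the 24 possible entries (6 days with 4 question sets per day), plus Spearman correlations between the two delivery modalities. The sketch below is purely illustrative: it is not the trial's analysis code, and the per-participant entry counts are hypothetical placeholders used only to show how those two quantities could be computed.

```python
# Illustrative sketch only: compliance criterion and between-modality rank
# correlation as described in the record above; the entry counts are made up.
from scipy.stats import spearmanr

POSSIBLE_ENTRIES = 24                      # 6 days x 4 question sets
COMPLIANCE_CUTOFF = POSSIBLE_ENTRIES / 3   # "at least one-third of all possible data points"

app_entries = [24, 18, 20, 9, 23, 16]      # hypothetical per-participant counts (app)
sms_entries = [20, 15, 17, 4, 22, 13]      # hypothetical per-participant counts (SMS)

def compliance_rate(entries, cutoff=COMPLIANCE_CUTOFF):
    """Proportion of participants completing at least `cutoff` entries."""
    return sum(n >= cutoff for n in entries) / len(entries)

# Spearman correlation between the number of entries on the two modalities
rho, p_value = spearmanr(app_entries, sms_entries)

print(f"App compliance: {compliance_rate(app_entries):.0%}")
print(f"SMS compliance: {compliance_rate(sms_entries):.0%}")
print(f"Spearman rho = {rho:.2f}, P = {p_value:.3f}")
```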
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Oxaliplatin and Bolus-Modulated 5-Fluorouracil as a Second-Line Treatment for Advanced Pancreatic Cancer: Can Bolus Regimens Replace FOLFOX When Considered for Second Line?\n\n ABSTRACT:\n\nObjective. Comparing the activity of 2 regimens combining oxaliplatin with bolus-modulated fluorouracil as second-line treatment in advanced pancreatic adenocarcinoma pretreated with a gemcitabine-containing schedule. Methods. Forty-eight patients with advanced pancreatic adenocarcinoma were randomly assigned to receive either FU 500 mg/m2 IV bolus weekly ×6 weeks plus leucovorin 500 mg/m2 IV weekly for 6 weeks during each 8-week cycle plus oxaliplatin 85 mg/m2 IV on weeks 1, 3, and 5 of each 8-week (FLOX) OR receive weekly intravenous infusions of oxaliplatin 40 mg/m2, 5-FU 500 mg/m2, and leucovorin 250 mg/m2 (3 weeks on, 1 week off). Results. Non-progression (PR+SD) was found in 33.5% for the first regimen and 29% for the second regimen, and 37.5% had clinical benefit (FLOX regimen) compared to 50% in the 3-week regimen. The median TTP was 3.9 and 4 months, respectively. Median OS was 8 and 9 months for the two regimens, respectively. Only one case in the 3-week arm suffered from grade IV diarrhea. Two cases of neutropenia > grade 2 were observed, one in each treatment group. Grade 3 anemia was recorded in 3 patients (2 in the FLOX arm, one in the 3-week arm). Conclusions. Both regimens showed encouraging efficacy, acceptable toxicity, and clinical benefit.\n\nBODY.1. INTRODUCTION:\nDue to the fact that the majority of pancreatic cancers are unresectable upon diagnosis, curative intent is rarely a goal of treatment; rather, increasing survival, time to progression, and quality of life are more realistic goals. Without treatment, median survival for patients with an advanced stage of disease ranges from 3 to 4 months, whereas in patients receiving chemotherapy with single-agent gemcitabine, median survival times between 4.9 and 7.2 months have been reported in randomized phase III studies [1]. Gemcitabine has been the solo player in the field of pancreatic cancer treatment since replacing 5-FU in 1997, and is still regarded as one standard of care for the first-line systemic chemotherapeutic treatment of patients with advanced pancreatic cancer worldwide. So far, only two randomized phase III trials have demonstrated a significant prolongation of survival with the use of gemcitabine-based combination therapy with either erlotinib or capecitabine [2]. Eventually, progression will occur and the real challenge will be how to treat a patient with advanced pancreatic cancer failing to respond or progressing after gemcitabine. There is no evidence-based treatment recommendation for these patients.
The National Comprehensive Cancer Network guidelines for pancreatic adenocarcinoma currently recommend second-line chemotherapeutic treatment after gemcitabine failure in selected patients using, for example, single-agent capecitabine or a combination therapy of fluorouracil, leucovorin and oxaliplatin (FOLFOX-) like regimen [3]. To date, there is no large randomized trial confirming the survival advantages of second-line chemotherapy over best supportive care, yet the preliminary results from a small randomized German study comparing BSC alone versus 5-FU, folinic acid, and oxaliplatin plus BSC after gemcitabine failure showed a prolongation of median survival by approximately 2.6 months with the use of chemotherapy (2.3 versus 4.9 months) [4]. These data were supported by a Japanese study that reported a median survival time of approximately 1.9 months after failure of first-line gemcitabine in 74 patients with pancreatic cancer (of whom 97% received no second-line treatment) [5]. Many protocols containing oxaliplatin, 5FU, and leucovorin, as FOLFOX, FLOX, and 3-week bolus 5FU plus leucovorin and oxaliplatin, are known. Preclinical data suggested that the 5-FU plus oxaliplatin combination is more cytotoxic when 5-FU is given as a short exposure [6], which gives a rationale for exploring the toxicity and efficacy of such protocols in advanced pancreatic cancer. In the current study, we conducted a randomized trial to compare two protocols; FLOX and the 3-weeek bolus protocol regarding toxicity, response rate, and time to progression as primary end points, then overall survival as secondary endpoints.\n\nBODY.2. PATIENTS AND METHODS:\nPatients with advanced unresectable or metastatic pancreatic adenocarcinoma were enrolled under the following Eligibility criteria. \nInclusion Criteria. (I) Patients with histologically or cytologically proven locally advanced or metastatic pancreatic adenocarcinoma, (II) with at least 1 bidimensionally measurable lesion (World Health Organization (WHO) criteria); (III) Eastern Cooperative Oncology Group (ECOG) PS of 1-2; (IV) tumor progression after first line gemcitabine (whether gemcitabine pretreated or gemcitabine resistance); (V) absence of severe uncontrolled cardiovascular, metabolic, infectious, or neurological diseases; (VI) adequate bone marrow reserve (neutrophil count > 1.5 × 109/L, platelet count > 100.000/mm3 and Hb > 10 g/dL); (VII) adequate liver function (serum bilirubin < 1.5 mg/dL, serum transaminases < 2x the upper limit of normal); (VIII) adequate renal function (serum creatinine < 1.5 mg/dL); (IX) and age between 18 and 75 years. All participating patients were required to give written informed consent, and ethical approval from MOC committee was obtained before the start of the whole procedure. 
\nExclusion Criteria\nHistologic types other than adenocarcinoma. Neuropathy ≥ CTCAE grade 1. Ototoxicity > CTCAE grade 2. Serious, active comorbidity, including any of the following: unstable angina and/or NYHA class II–IV congestive heart failure requiring hospitalization within the past 12 months, transmural myocardial infarction within the past 12 months, acute bacterial or fungal infection requiring IV antibiotics, chronic obstructive pulmonary disease exacerbation or other respiratory illness requiring hospitalization or precluding study therapy, hepatic insufficiency resulting in clinical jaundice and/or coagulation defects, active gastrointestinal (GI) ulcers, GI bleeding, inflammatory bowel disease, or GI obstruction. Inadequately controlled hypertension, defined as systolic BP > 150 mm Hg and/or diastolic BP > 90 mm Hg on antihypertensive medications, serious cardiac arrhythmia on medication (well-controlled atrial fibrillation on medication allowed), and history of hypertensive crisis or hypertensive encephalopathy.\n\n\nBODY.2. PATIENTS AND METHODS.2.1. PRETREATMENT EVALUATION:\nAll patients were subjected to staging procedures consisting of medical history, physical examination, echocardiography, serum chemistry panel, complete blood picture, CEA, and CA 19-9. Extent of disease was determined by chest X-rays, computed tomography and/or nuclear magnetic resonance, and endoscopy as needed. Patients underwent follow-up examinations until death.\n\nBODY.2. PATIENTS AND METHODS.2.2. RANDOMIZATION PROCEDURES:\nPatients were randomly assigned to one of the treatment regimens (block randomization at 4), where 24 patients were enrolled for each treatment group.\n\nBODY.2. PATIENTS AND METHODS.2.3. TREATMENT.2.3.1. FLOX REGIMEN:\nOxaliplatin 85 mg/m2 was administered as a 2-hour infusion before LV and FU on days 1, 15, and 29 of the treatment cycle. LV 500 mg/m2 was administered as a 2-hour intravenous infusion weekly for 6 consecutive weeks (on days 1, 8, 15, 22, 29, and 36 of the treatment cycle), followed by a 2-week rest period. FU 500 mg/m2 was administered as an intravenous bolus 1 hour after the LV infusion was begun and was administered weekly for 6 weeks (on days 1, 8, 15, 22, 29, and 36 of the treatment cycle), followed by a 2-week rest period.\n\nBODY.2. PATIENTS AND METHODS.2.3. TREATMENT.2.3.2. 3-WEEK BOLUS REGIMEN:\nA 2-hour intravenous infusion of oxaliplatin 40 mg/m2 was followed by bolus leucovorin 250 mg/m2 and bolus 5-FU 500 mg/m2. Each course consisted of weekly administrations for 3 consecutive weeks followed by a week of rest. Therapy continued until disease progression, unacceptable toxicity, patient's refusal, or a maximum of 6 courses. All patients received intravenous dexamethasone 8 mg and ondansetron 8 mg as antiemetic prophylaxis. Therapy was withheld in case of a platelet count of less than 100.000/mm3 or a neutrophil count of less than 1.500/mm3 or for bilirubin greater than 1.5 times the upper reference level (URL) or transaminases greater than 3 times the URL. During the entire study period, patients received full supportive care to control pain or other symptoms, with careful recording of the treatment.\n\nBODY.2. PATIENTS AND METHODS.2.4. TOXICITY:\nAdverse events were graded according to the National Cancer Institute common toxicity criteria. Oxaliplatin was reduced in the event of persistent paresthesia/dysesthesia between cycles or with pain lasting for >7 days according to the staff physician's decision.
When paresthesia/dysesthesia with either pain or functional impairment persisted between cycles, Oxaliplatin was discontinued.\n\nBODY.2. PATIENTS AND METHODS.2.5. EVALUATION AND STATISTICAL METHODS:\nMeasurable disease response was assessed by RECIST criteria [7]. Partial response (PR), stable disease (SD), and progressive disease (PD) were determined according to these criteria. The sum of PR and SD was reported as disease control rate (DCR). OS was estimated from the date of first treatment to the date of death or the last followup. Clinical benefit assessment was based on patients and physician-reported improvement of cancer-related symptoms and/or stabilization of improvement of PS. The TTP was calculated from the first treatment infusion to the first objective evidence of disease progression assessed by CT scan measurements or early death or date of clinical deterioration and patient not assessable for response. All patients with at least 1 chemotherapy administration were assessed for toxicity. Efficacy assessments were performed on patients who received at least 1 course of therapy. TTP and OS since the start of treatment were estimated on an intent-to-treat basis and analyzed according to the Kaplan-Meier method. Comparison between survival curves was done through log rank test to estimate P value utilizing GraphPad prism version 5 software. The required number of patients for this phase II study was determined according to a Jehan phase II optimal design [8] for a goal of 20% true clinical benefit; with α- and β-error probability of 0.05 and 0.20, respectively, an accrual of 24 patients assessable for response was planned.\n\nBODY.3. RESULTS:\nForty-eight patients with unresectable or metastatic pancreatic cancer pretreated with gemcitabine (including gemcitabine resistance or gemcitabine pretreated) in Ain Shams University Hospitals were included along the period between October 2008 and September 2011. The patients' characteristics encountered in the current study were outlined in Table 1. Median age for both groups was 56 years and 54 years, respectively. Sixteen males out of total twenty-four cases were encountered in FLOX arm compared to 17 in 3-week bolus arm. Thirty patients in both groups received prior Gemcitabine as a single agent.\n\nBODY.3. RESULTS.3.1. TOXICITIES:\nGrade 3 or 4 toxicities experienced by at least 5% of patients according to treatment arm are summarized in Table 2. Only one case in 3-week arm suffered from grade IV diarrhea. Two cases of neutropenia exceeding grade 2 (but no febrile neutropenia) were observed; one in each treatment groups. Grade 3 anemia was recorded in 3 patients (2 in FLOX arm, one in 3-week arm). Most nonhematological side effects were less than grade 3.\n\nBODY.3. RESULTS.3.2. EFFICACY:\nNo complete response was registered among all assessable 48 patients throughout the study duration for FLOX regimen, three patients (12.5%) had partial response, five patients (21%) had stable disease, and three out of 8 patients with pain at presentation (37.5%) had clinical benefit. The median time to progression was 3.9 months (95% CI, 2–4.6) (range: 1.5–5.5). Median survival time was 8 months (95% CI, 4.0–12). \nFor 3-week regimen, two patients (8%) had partial response, five patients (21%) had stable disease, and four out of 8 patients with pain at presentation (50%) had clinical benefit as shown in Table 3. The median time to progression was 4 months (95% CI, 1.8–5) (range: 1.2–6). 
Median survival time was 9 months (95% CI, 3.5–13) as shown in Figures 1(a) and 1(b). There was no statistical significance in progression-free survival between the 2 regimens (P value by log rank test = .4619), and so was the situation in overall survival (P-value by log rank test = .5248).\n\nBODY.3. RESULTS.3.3. COST COMPARISON:\nAlthough it was not planned as a target for the current study, yet it was an interesting issue to compare cost of chemotherapy per patient for every 8 weeks of treatment for each regimen. For FLOX regimen this cost was approximately 1200 USD versus 1400 USD for the 3-weeks regimen (due to mainly the amount of discarded oxaliplatin in every injection time that was more in the second regimen) as in Table 4.\n\nBODY.4. DISCUSSION:\nAdvanced pancreatic cancer remains a rapidly lethal cancer, with a median survival of 6 months with currently approved therapies [8]. The role of second-line chemotherapy after failure of first-line therapy in such cases is not well established, but a theoretical possibility exists in which salvage chemotherapy after the failure of first-line treatment may influence the survival. For the scale of patients with good performance status, progressing after first-line gemcitabine therapy, NCCN recommends fluoropyrimidine-based chemotherapy [9]. But for time being there is still a debate to treat or best supportive care? A phase III trial after failure of first-line gemcitabine compared BSC plus with biweekly oxaliplatin combined with weekly 5-FU as 24 hours infusion plus leucovorin, versus BSC alone [4]. After the first 46 patients out of 165 planned, the BSC arm had to be closed because BSC alone was no longer accepted by participating centers, with a possible survival benefit for second-line chemotherapy: 21 weeks (95% CI; .7; 23.3) versus 10 weeks (95% CI; 7.7; 12.3). So a second question is as the following: what is the best option of treatment? Adding oxaliplatin to continuous infusion fluoropyrimidine as a second-line salvage therapy for this category has been investigated in some phase II trials [10–12]. In a series of unselected patients the FOLFOX4 regimen yielded a 14% PR rate with 38% of patients showing SD for a DCR of 57%. Median duration of PR was 5.2 months, while median time to progression and overall survival was 4 and 6.7 months, respectively [10], but all the regimens of continuous infusion necessitate either hospitalization or pump application with their financial load upon health care system. This was the rationale to investigate regimens including oxaliplatin and bolus fluorouracil, with the theoretical premise of being as active as continuous infusion regimens, as well simpler in administration, less in cost, and better in toxicity profile. In the current study, two regimens of oxaliplatin and bolus fluorouracil have been investigated, FLOX regimen that was used in metastatic colorectal cancer with adequate efficacy and acceptable toxicity profile [12]. The current study revealed nonprogression (PR + SD) in 33.5% for first regimen and 29% for second regimen, and 37.5% had clinical benefit (FLOX regimen) compared to 50% in 3-week regimen. The median time to progression was 3.9 months and 4 months, respectively. Median survival time was 8 months and 9 months for both regimens, respectively, with no statistically significant difference in progression-free or overall survival. Regarding toxicity, only one case in 3-week arm suffered from grade IV diarrhea. 
Two cases of neutropenia exceeding grade 2 (but no febrile neutropenia) were observed; one in each treatment group. Grade 3 anemia was recorded in 3 patients (2 in FLOX arm, one in 3-week arm). Most nonhematological side effects were less than grade 3. So it is the time for the 3rd question; are these results comparable to those of FOLFOX regimens (infusion fluorouracil)? Gebbia et al., 2007, carried out a retrospective study including 42 patients who received standard FOLFOX4 regimen biweekly until progression or unacceptable toxicity. The study revealed six partial responses (14%) and 16 stabilizations (38%) were recorded for a tumor growth control rate of 57%. The median time to progression (TtP) was 4 months (range 1–7 months), and median overall survival (OS) was 6.7 months (range 2–9 months). A stabilization of performance status (PS) and a subjective improvement of cancer-related symptoms was recorded in 27 patients [13]. The good nonprogression rate in this study may be attributed to the high percentage of responding patients in this study to the first line therapy (50%) compared to 12% in our study. Tsavaris et al., 2005, in a prospective Phase II study evaluated a second-line combination regimen of oxaliplatin together with leucovorin-modulated 5-FU in 30 patients and revealed an encouraging response rate of 23% with a corresponding disease-control rate of 53%. Median overall survival was 5.8 months in this patient population [14]. These data correlate well with the retrospective analysis from Italy, which found a response rate of 14% together with a disease-control rate of 52% with the use of a FOLFOX-4 regimen in gemcitabine-pretreated patients [14]. Another phase II trial of oxaliplatin plus capecitabine in a series of 41 patients reported a PR in one case and SD in eight patients with a median OS of 5.8 months, and a 6-month and 1-year survival rate of 48% and 22%, respectively [9]. Toxicity was, however, significant. Preliminary results of another trial of OXP/5-FU in a series of 23 patients have shown an OS of 4 months [15]. Novarino et al., 2009, in a study on 23 gemcitabine pretreated patients with advanced pancreatic cancer revealed no objective response in all 17 assessable patients and 4 patients had stable disease, whereas 13 had tumor progression. Median duration of stable disease was 14 weeks. Median time to progression of disease (TTP) was 11.6 weeks. Seven patients experienced grade 3-to-4 toxicity. The regimen was associated with 36% clinical benefit [16]. In conclusion, combining oxaliplatin to bolus fluorouracil (either in FLOX or 3-week regimens) as a second line in gemcitabine pretreated patients with advanced or metastatic pancreatic adenocarcinoma showed encouraging efficacy, acceptable toxicity, and some clinical benefit specially when palliation or good quality of life is a target keeping in mind the simplicity in administration, the no need for hospitalization, and the less financial load specially with FLOX. 
Further studies with a large number of patients investigating the efficacy and tolerability of such bolus regimens in gemcitabine-pretreated pancreatic cancer patients are warranted.\n\n**Question:** Compared to receive weekly intravenous infusions of oxaliplatin 40 mg/m2, 5-FU 500 mg/m2, and leucovorin 250 mg/m2 (3 weeks on, 1 week off) what was the result of receive FU 500 mg/m2 IV bolus weekly ×6 weeks plus leucovorin 500 mg/m2 IV weekly for 6 weeks during each 8-week cycle plus oxaliplatin 85 mg/m2 IV on weeks 1, 3, and 5 of each 8-week (FLOX) on progression-free survival and overall survival?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
1318
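The statistical methods in the record above state that TTP and OS were estimated with the Kaplan-Meier method and that survival curves were compared with a log-rank test (the authors used GraphPad Prism 5). As a minimal sketch of an equivalent comparison, assuming the Python `lifelines` package and hypothetical survival times rather than the trial data, the analysis could look like this:

```python
# Illustrative sketch only: Kaplan-Meier estimation and a log-rank test for
# two arms; the survival times and event flags below are hypothetical.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Overall survival in months (event=1 means death observed, 0 means censored)
flox_months = [3, 5, 8, 8, 10, 14]
flox_events = [1, 1, 1, 0, 1, 0]
bolus_months = [4, 6, 9, 9, 12, 15]
bolus_events = [1, 1, 1, 1, 0, 0]

km_flox, km_bolus = KaplanMeierFitter(), KaplanMeierFitter()
km_flox.fit(flox_months, event_observed=flox_events, label="FLOX")
km_bolus.fit(bolus_months, event_observed=bolus_events, label="3-week bolus")

print("Median OS (FLOX):", km_flox.median_survival_time_)
print("Median OS (3-week bolus):", km_bolus.median_survival_time_)

# Log-rank comparison of the two survival curves
result = logrank_test(flox_months, bolus_months,
                      event_observed_A=flox_events,
                      event_observed_B=bolus_events)
print(f"Log-rank P = {result.p_value:.3f}")
```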
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A controlled study of community-based exercise training in patients with moderate COPD\n\n ABSTRACT.BACKGROUND:\nThe effectiveness of clinic-based pulmonary rehabilitation in advanced COPD is well established, but few data exist for less severe patients treated in alternative settings. The purpose of this study was to investigate whether a novel, community-based exercise program (CBE) was feasible and effective for patients with moderate COPD.\n\nABSTRACT.METHODS:\nNineteen patients with moderate COPD (mean FEV1 62%) and self-reported exercise impairment were randomized to 12-weeks of progressive endurance and strength training at a local health club under the guidance of a certified personal trainer, or to continuation of unsupervised habitual physical activity. Outcomes assessed at baseline and 12 weeks included session compliance, intensity adherence, treadmill endurance time, muscle strength, dyspnea, and health status.\n\nABSTRACT.RESULTS:\nCompliance was 94% and adherence was 83%. Comparisons between CBE and control groups yielded the following mean (SEM) differences in favor of CBE: endurance time 134 (74) seconds versus -59 (49) seconds (P = 0.041) and TDI 5.1 (0.8) versus -0.2 (0.5) (P < 0.001). The CBE group increased muscle strength (weight lifted) by 11.8 kilograms per subject per week of training (P < 0.001). SGRQ was not significantly changed.\n\nABSTRACT.CONCLUSIONS:\nWe demonstrated the feasibility and effectiveness of a novel community-based exercise program involving health clubs and personal trainers for patients with moderate COPD.\n\nABSTRACT.TRIAL REGISTRATION:\nClinicalTrials.gov Identifier NCT01985529.\n\nBODY.BACKGROUND:\nChronic obstructive pulmonary disease (COPD) is a progressive debilitating disease that leads to worsening symptoms and declining exercise capacity [1]. Progressive weight loss is also common [2] and associated with increased mortality and compromised quality of life [3]. Skeletal muscle dysfunction is recognized to account for an appreciable portion of the exercise impairment in COPD patients [4]. Evidence for this particular abnormality includes reduced muscle aerobic enzyme content [5,6] and a tendency to develop premature lactic acidosis during exercise [1]. Furthermore, calf muscle cross-sectional area is reduced compared with controls [7] and muscle strength is also impaired [8]. Importantly, COPD is now regarded as preventable and treatable [9]. One could argue that successful treatment requires earlier recognition of the decline in exercise performance and implementation of a structured exercise program to reverse its deleterious effects. 
Structured exercise training is strongly endorsed for COPD in the context of a pulmonary rehabilitation program recognizing that these programs increase functional capacity, decrease symptoms, reduce utilization of health-care resources and improve quality of life [10-12]. The Global Initiative on Obstructive Lung Disease (GOLD) recommends that patients with COPD should be referred for pulmonary rehabilitation once their FEV1 falls below normal [13]. This equates to the transition from mild to moderate disease and corresponds with the time when patients begin to significantly reduce activities of daily living [14]. Furthermore, the American Thoracic Society/European Respiratory Society Statement on Pulmonary Rehabilitation suggests all patients suffering from respiratory disease associated with diminished functional capacity and reduced health related quality of life may benefit from pulmonary rehabilitation [12]. While pulmonary rehabilitation is practiced as a multi-disciplinary therapy, evidence-based analysis identifies exercise training as the most effective component [15]. Despite guidelines, a national survey of 283 pulmonary rehabilitation programs in 1995 identified considerable differences in program content [16]. Lack of consistency in the approach to aerobic and resistance training in chronic pulmonary disease is likely to result in sub-optimal outcomes in many cases. A paradigm shift is needed to make exercise training programs for COPD patients more scientifically rigorous and cost-effective. We have studied the feasibility of a community-based exercise program under the supervision of personal trainers appropriately certified in exercise physiology and trained to manage pulmonary patients safely and with confidence. We examined the hypothesis that in patients with moderate COPD and reduced aerobic capacity, the combination of aerobic exercise training plus nutritional counseling would result in improvement in exercise endurance, muscle strength, activity levels and quality of life compared with control group having only nutritional counseling. A secondary aim was to demonstrate the feasibility of community-based exercise rehabilitation delivered through health clubs and personal trainers at an earlier stage in the progression of COPD.\n\nBODY.METHODS.SUBJECTS:\nWe recruited 19 men and women with moderate COPD as defined by the GOLD [17] criteria (FEV1/FVC, % <70%; FEV1 < 70% and >50% predicted), a 10 pack-year smoking history and self-reported functional impairment. Exclusion criteria included current smokers, pulmonary diseases other than COPD, use of supplemental oxygen, musculoskeletal disease that impaired exercise performance and unstable coronary artery disease or congestive heart failure. The study was approved by the UCLA Office for the Protection of Research Subjects (IRB#11-000175) and all subjects gave written informed consent.\n\nBODY.METHODS.STUDY DESIGN:\nThis was a small parallel-group, randomized controlled study to assess the feasibility of implementing a community-based exercise program for patients with moderate COPD (see Figure 1). At the screening visit, spirometry was performed to assess eligibility in terms of the GOLD staging followed by body plethysmography, diffusing capacity and a maximal, symptom-limited, incremental exercise test on a treadmill ergometer using a standardized protocol [18]. 
Then, after a resting period of 2 hours, they performed a constant work-rate exercise test at 90% of the maximum work rate calculated from the incremental test. Figure 1Study design showing time frame and procedures at each visit. Within two weeks of the screening visit, subjects returned to repeat the constant work-rate exercise test also on a treadmill at the same constant work rate as at the first visit. This was intended to overcome any learning effect, and the best value for endurance time from the two constant load tests was used for future comparison. Subjects had assessments of body composition, dyspnea (Baseline Dyspnea Index, BDI) and quality of life (St. George's Respiratory Questionnaire, SGRQ) [19,20]. Although not felt to be malnourished, all subjects were given nutritional guidance from a certified nutritionist by way of best clinical practice and also so as to be able to offer some benefit to those subjects randomized to the control group. The nutritional counseling was based on a Food Frequency Questionnaire [21] administered every two months to assess diet and specifically to ascertain the habitual protein intake of the subject. Detailed, written recommendations were then given on an individualized basis with the goal of achieving a daily protein intake of 1.5 g per kilogram of body weight plus a daily energy intake of 35 kilocalories per kilogram of body weight. Subjects were then randomized 1:1 into two groups: (a) community based exercise program (CBE) or (b) usual habitual activity (control). Both groups continued standard COPD management based on published guidelines and appropriate for the stage of their disease. Subjects randomized to exercise training had measures of lower extremity strength by 1-repetition maximum (1-RM) for leg press at the time of the first training session. At 12 weeks, all patients were asked to repeat baseline measurements. Minor exacerbations, defined as an increase in two or more symptoms (dyspnea, sputum volume and sputum purulence) for three or more days [22] but not requiring hospitalization, were referred to the subjects' primary care physician for intensification of bronchodilator therapy and/or an antibiotic. Details of minor exacerbations were recorded but subjects continued in the study. Major exacerbations, requiring use of systemic corticosteroids or hospital admission, resulted in withdrawal from the study.\n\nBODY.METHODS.COMMUNITY-BASED EXERCISE PROGRAM:\nFollowing randomization, those subjects assigned to the exercise program were assigned to a personal trainer operating out of one of three local health clubs. They met at mutually convenient times, twice per week for 12 weeks. Each of the clubs provided facilities for aerobic exercise and resistance training. Two facilities provided two trainers each and a fifth trainer was in solo practice. All five trainers possessed Bachelors and/or Masters degrees in exercise science or related fields. Three were -Certified Strength and Conditioning Specialist (CSCS) from the National Strength and Conditioning Association and two held Exercise Specialist certifications from the American College of Sports Medicine. Specific methodologies were thoroughly reviewed with each personal trainer prior to the study along with general information regarding COPD and its effects on exercise tolerance and physical function. 
Initial, individualized aerobic and exercise prescriptions were developed for each subject based on findings from the maximal incremental exercise test performed at the screening visit. Information derived from this test was also used to define a safe ceiling for exercise intensity for each individual subject. Each session included a warm-up and cool-down period. Subjects then performed 30 minutes of continuous aerobic exercise in the target heart rate range based upon an initial exercise prescription developed by the investigators. The trainers were entrusted with progressing the exercise prescription at reinforcement visits in order to maintain the appropriate exercise intensity. Subjects also performed resistance training designed upon basic principles, starting with one set of each of eight conventional free-weight or machine-based exercises for the large muscle groups of the upper and lower body and progressing to two sets (typically after 10-12 sessions) at the discretion of the personal trainer. The stipulated goal was to maintain 12 repetitions of each exercise in correct form before muscle fatigue (typically 70-75% of the maximal 1-RM weight). Subjects assigned to the control group were told to continue their activities of daily living. They were contacted by telephone every month to see assess their progress.\n\nBODY.METHODS.OUTCOMES:\nThe co-primary outcome measures were endurance time for the constant work rate exercise test and change in muscle strength as calculated by total weight lifted per week. Secondary outcomes were lean body mass, percentage body fat, dyspnea (Transition Dyspnea Index, TDI) and SGRQ. Compliance was calculated as the percentage of sessions attended (out of 24), and adherence was calculated as the percentage of time subjects spent exercising in or above their target heart rate zone.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nThe sample size for this study was estimated using an anticipated effect size for change in exercise endurance time derived from experience with bronchodilator therapy and other interventions in COPD patients. For example, acute administration of inhaled ipratropium bromide increased endurance time by 1.1-2.8 minutes and more recent work with the long-acting bronchodilator tiotropium bromide increased exercise endurance time for a similar work rate protocol by 105 seconds compared with placebo [23]. Importantly, changes in endurance time of this magnitude were felt to be associated with clinically meaningful perceptions of improvement on the part of the investigators and the patients. In a study of repeated constant work rate exercise testing in severe COPD patients (FEV1 = 40%) in clinically stable state, O'Donnell et al [24] showed that exercise endurance time was highly reproducible with a standard deviation of 60 seconds. From our database of submaximal CWR tests in moderate COPD patients repeated during the same week, we found a standard deviation for endurance time of 88 seconds. Our power analysis, based on a standard deviation of 88 seconds, α = 0.05 and 10 subjects in each group indicates a power of 86% (β = 0.14) to detect a 120 second difference in endurance time between the two groups after 3 months. We used a two-sample t-test to evaluate whether exercise endurance time was longer in the subjects treated with exercise training and nutritional counseling versus nutritional counseling alone. We also carried out regression analysis to adjust for baseline covariates including age, gender and habitual activity level. 
Similar statistical methods as employed for the primary outcome measure were also applied to compare body mass index, TDI and SGRQ between groups.\n\nBODY.RESULTS:\nBetween August 2008 and December 2010, 29 patients were consented for the trial. Five patients failed screening. Twenty-four patients were randomized. Five patients in the exercise group withdrew from the study after randomization (one with worsening knee pain, one with an unrelated ankle injury, one with exacerbation of COPD, one with newly diagnosed lung cancer and one who was lost to follow-up). The baseline characteristics for both the control and exercise groups are listed in Table 1. The two groups were balanced in terms of age, anthropometrics, body composition, resting pulmonary function and breathlessness. The average FEV1 of all subjects was 62%, indicating moderate severity of COPD. Those randomized to the exercise group completed 202 of 216 total sessions, resulting in 94% compliance with the CBE. These subjects performed endurance exercise in or above the prescribed target heart rate zone 83% of the time.

Table 1. Baseline characteristics of the control and community-based exercise groups

| Characteristic | Control group | CBE group |
| --- | --- | --- |
| No. of subjects | 10 | 9 |
| Age (year) | 72.0 (10.1) | 66.8 (8.1) |
| Male sex (%) | 60 | 33 |
| Height (m) | 1.70 (0.1) | 1.72 (0.1) |
| Weight (kg) | 76.2 (10.9) | 76.6 (23.4) |
| FEV1 (%) | 60.8 (10.9) | 63.6 (7.6) |
| Resting heart rate (/min) | 82.7 (8.3) | 76.4 (9.9) |
| O2 saturation (%) | 96 (2) | 96 (2) |
| BDI | 6.5 (0.6) | 6.6 (0.9) |
| Body fat (%) | 27.1 (8.2) | 32.5 (3.1) |
| Lean body mass (kg) | 55.8 (10.4) | 56.4 (16.7) |

Values are mean (standard deviation). FEV1: forced expiratory volume in one second. O2 saturation: oxygen saturation. BDI: Baseline Dyspnea Index. m: meters. s: seconds. kg: kilograms.

As shown in Table 2 and Figure 2, at the end of 12 weeks, endurance time increased 134 seconds in the CBE group and decreased 59 seconds in the control group (P = 0.041). Patients who participated in the CBE arm lifted significantly more weight at the end of the exercise program compared with the beginning as shown in Figure 3: 3432 versus 1775 kilograms (P < 0.001). This represents a 93% overall improvement over 12 weeks, averaging an increase of 11.8 kilograms of weight lifted per subject for every week of exercise training. Figure 4 shows that SGRQ total scores decreased by 0.7 in the control group and 4.6 in the CBE group (P = 0.611). Whilst the differences between groups were not significant, the average fall in SGRQ total score within the exercise training group exceeds the MCID of 4 units which was not the case with the control group. TDI focal score, shown in Figure 5, increased by 5.1 in the CBE group versus a decrease of 0.2 in the control group, indicating less breathlessness (P < 0.001). There was no difference in body composition as determined by skin folds between the CBE and control group after intervention (change in body fat -1.1% versus -0.1%, P = 0.285; change in lean body mass +1.2 kilograms versus +0.1 kilograms, P = 0.206).

Table 2. Change in exercise performance, quality of life and dyspnea in control patients versus those in the community-based exercise program

| Outcome | Control pre | Control post | Control change | CBE pre | CBE post | CBE change | P-value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Endurance time (s) | 408 (67) | 350 (89) | -59 | 444 (121) | 578 (147) | +134 | 0.041 |
| SGRQ total score | 35.1 (3.8) | 34.4 (6.2) | -0.7 | 32.1 (5.4) | 27.6 (4.6) | -4.6 | 0.611 |
| TDI focal score | NA | -0.2 (0.5) | -0.2 | NA | 5.1 (0.8) | +5.1 | <0.001 |
| Total weight lifted (kg/week) | NA | NA | NA | 1775 (303) | 3432 (361) | +1657 | <0.001 |

Values are mean (SEM). SGRQ: St.
George’s Respiratory Questionnaire. TDI: Transitional Dyspnea Index. s: seconds. Kg/wk: kilograms per week. NA: not applicable. Figure 2Mean treadmill endurance times for the two groups at baseline and after 12 weeks. Mean changes were a decrease of 59 seconds for the control group compared with an increase of 134 seconds for those completing the community-based exercise program (P = 0.041). Figure 3Changes in total weight lifted for individual subjects from baseline to completion of training. Total weight lifted per week is calculated as the product of load (kilograms) × repetitions × sets for two days in week one and two days at end of study. Diagonal lines represent individual changes. Horizontal lines represent group means and error bars are SEM (P < 0.001). Figure 4Mean SGRQ scores for the two groups at baseline and after 12 weeks. Mean changes were a decrease of 0.7 for the control group and a decrease of 4.6 for those completing the community-based exercise program (P = 0.611). Decreased scores represent improvement in quality of life where a change of 4.0 is considered clinically meaningful. Figure 5Mean TDI focal score for the two groups after 12 weeks. Mean changes were a decrease of 0.2 for the control group and an increase of 5.1 for those completing the community-based exercise program (P < 0.001). Increased scores represent improvement in dyspnea where a change of 1.0 is considered clinically meaningful.\n\nBODY.DISCUSSION:\nWe successfully conducted this small, randomized, feasibility study of a community-based exercise program versus habitual activity in patients with moderate COPD. We found increases in exercise endurance time and muscle strength, along with improved health-related quality of life, and reduced dyspnea in those who underwent exercise training compared to those who received nutritional counseling only. The average improvement in endurance time was 193 seconds at 90% of maximum work rate at baseline which compares favorably with changes in endurance time between 100 and 120 seconds in therapeutic trials of inhaled bronchodilators [23,25-27]. Furthermore, compliance and adherence to this community-based exercise program surpassed published rates for clinic- or hospital-based exercise programs [28]. We were unable to demonstrate a statistically significant and clinically meaningful difference in health status between the two groups as measured by SGRQ total score. There are likely two reasons for this: (1) with relatively small numbers of subjects the study was underpowered to prove the SGRQ changes statistically significant and (2) the difference between the groups was narrowed by a Hawthorne effect in the control group. Taking into account these shortcomings of our analysis, we feel that the reduction of SGRQ of -4.6 units in the training group is likely a real effect and brings our results, in terms of the effects of an exercise program on health status, into better alignment with the results of other studies. Other studies on community based exercise programs have demonstrated improvements in functional capacity, dyspnea, and disease specific quality of life [29-32]. However, these studies have focused on patients with severe COPD (FEV1 < 50%) whereas our subjects were less severe (mean FEV1 62%). We specifically sought to follow GOLD [33] and ATS/ERS [12] guidelines by recruiting patients with less severe disease. 
We reasoned that these patients would find a community-based exercise program, close to their home or workplace, more convenient than the typical clinic-based or hospital-based program. We also surmised that less severe patients would feel comfortable exercising in a health club environment alongside apparently normal individuals. The compliance with recommended training sessions of 99% attests to the success of this strategy. Furthermore, knowing that supervised exercise training is significantly more successful than unsupervised training, we utilized personal trainers to improve subject adherence to exercise prescriptions. The 83% adherence to the target heart rate range for aerobic training is impressive and clearly associated with marked improvements in the prespecified outcome measures of exercise performance. We are aware of two other studies that examined outpatient rehabilitation in patients with moderate COPD. Cambach et al [34] found improved exercise tolerance and quality of life in patients with moderate COPD after 6 months of rehabilitation, however, this sample of patients also included asthmatics whose disease state might have been more fluctuant. Additionally, this rehabilitation program was conducted in various medical practices under the guidance of clinically trained physiotherapists, rather than in a local health club with certified personal trainers as in our community-based program. In another study, van Wetering, et al[35] randomized 199 patients with moderate COPD (mean FEV1 60%) to an interdisciplinary community-based COPD management program (INTERCOM) or usual care. This study was conducted in the Netherlands where community-based physical therapists can be assigned to visit patients in their homes. The investigators reported improvements in health related quality of life, breathlessness, exercise performance, muscle strength, and body composition, but were unable to show improvements in muscle strength. Although this rehabilitation program was conducted in the community, it utilized clinically trained physical therapists and other medical personnel. Despite the promising data in favor of community based rehabilitation programs, it remains clear that there is not yet any consensus regarding the ideal structure for such programs. The traditional medical model of pulmonary rehabilitation is focussed on exercise encouragement in more severely disabled patients. The goals are often limited to tolerance of dyspnea and modest gains in exercise performance are considered acceptable. Our program is the first to use certified personal trainers, based in health clubs, which are becoming increasingly prevalent in the community. This health club model is more directed to maximizing performance using basic principles of reconditioning and muscle strengthening. Whilst there is undoubtedly considerable overlap in these approaches, we think that the personal trainers can potentially bring a new dimension to pulmonary rehabilitation. The major limitation of our study was its small size. However, this was a pilot study and one of the main purposes was to demonstrate the feasibility of CBE training in moderate COPD. Our results should prompt further research to standardize procedures and define optimal interventions. A further limitation is that, although subjects were randomly assigned to two groups (exercise training or usual physical activities), the individual subjects and the trainers obviously could not be blinded to their treatment allocation. 
We attempted to reduce observer bias by having the baseline and final assessments performed in the research laboratory, distant from the training location, with the technician blinded and specifically instructed not to inquire about group allocation. Our exercise program was novel, but there was incomplete standardization of the exercise facilities involved in the study and the variability in the training and experience of the personal coaches. The exercise facilities included a nationally distributed chain, a small privately-owned facility, and a large privately-owned facility. The exercise coaches were all certified personal trainers and they received an educational session and information on exercise limitations in COPD patients. A future study might attempt to standardize the approach to training even further.\n\nBODY.CONCLUSIONS:\nIn summary, our data confirm that community based exercise programs are not only feasible, but also effective, in patients with moderate of COPD. Such programs improve functional capacity, dyspnea, and health related quality of life, and should therefore be considered in the routine management of these patients. Furthermore, our findings contribute to a growing body of evidence that calls for a paradigm shift in the philosophy of pulmonary rehabilitation. As a treatment strategy, community-based exercise programs should be introduced earlier in the progression of COPD so as to forestall the development of comorbidities associated with the lack of regular exercise and deconditioning. Future efforts should be made to establish uniform guidelines to ensure that community-based exercise training programs for COPD patients are scientifically rigorous and cost-effective.\n\nBODY.ABBREVIATIONS:\nCOPD: Chronic obstructive pulmonary disease; GOLD: Global initiative on obstructive lung disease; FEV1: Forced expiratory volume in one second; FVC: Forced vital capacity; BDI: Baseline dyspnea index; SGRQ: St. George's Respiratory Questionnaire; CBE: Community-based exercise; 1-RM: 1-repetition maximum; TDI: Transition dyspnea index; ATS/ERS: American Thoracic Society/European Respiratory Society; INTERCOM: Interdisciplinary Community-based COPD Management Program; SEM: Standard error of the mean.\n\nBODY.COMPETING INTERESTS:\nThe community-based exercise training program was implemented at Equinox Fitness Club, Century City, CA. TWS and CBC serve as an uncompensated members of the Equinox Health Advisory Board. None of the other authors have any financial or non-financial competing interests.\n\nBODY.AUTHORS’ CONTRIBUTIONS:\nSA drafted the manuscript. MA was involved in subject testing, data collection and analysis. MQ was involved in data collection. TWS was involved in the study design, assisted with data collection, and participated in manuscript preparation. C-HT performed the statistical analysis. CBC conceived the study, was involved in its design, assisted with data analysis and participated in manuscript preparation. All authors read and approved the final manuscript.\n\nBODY.PRE-PUBLICATION HISTORY:\nThe pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2466/14/125/prepub\n\n**Question:** Compared to continuation of unsupervised habitual physical activity what was the result of 12-weeks of progressive endurance and strength training at a local health club on Mean SGRQ scores?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. 
Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
1327
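The power analysis described in the record above assumes a standard deviation of 88 seconds for repeated endurance-time tests, a 120-second target difference, alpha = 0.05, and 10 subjects per group, giving a reported power of 86%. A hedged sketch of that style of calculation, assuming a two-sample t-test via `statsmodels` (the article does not state which software was used or whether the test was one- or two-sided), is shown below:

```python
# Illustrative sketch only: reproducing the style of power calculation the
# authors describe; exact output depends on assumptions (e.g. one- vs
# two-sided testing) that the article does not spell out.
from statsmodels.stats.power import TTestIndPower

sd_seconds = 88.0           # reported SD of repeated endurance-time tests
difference_seconds = 120.0  # difference the study aimed to detect
effect_size = difference_seconds / sd_seconds  # standardized difference, about 1.36

power = TTestIndPower().power(effect_size=effect_size,
                              nobs1=10,      # subjects per group
                              ratio=1.0,     # equal group sizes
                              alpha=0.05,
                              alternative="two-sided")
print(f"Estimated power: {power:.2f}")  # in the ballpark of the reported 0.86
```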
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \n\n ABSTRACT:\nRecent evidence suggests that the fat mass and obesity-associated gene (FTO) genotype may interact with dietary intakes in relation to adiposity. We tested the effect of FTO variant on weight loss in response to 2-year diet interventions. FTO rs1558902 was genotyped in 742 obese adults who were randomly assigned to one of four diets differing in the proportions of fat, protein, and carbohydrate. Body composition and fat distribution were measured by dual-energy x-ray absorptiometry and computed tomography. We found significant modification effects for intervention varying in dietary protein on 2-year changes in fat-free mass, whole body total percentage of fat mass, total adipose tissue mass, visceral adipose tissue mass, and superficial adipose tissue mass (for all interactions, P < 0.05). Carriers of the risk allele had a greater reduction in weight, body composition, and fat distribution in response to a high-protein diet, whereas an opposite genetic effect was observed on changes in fat distribution in response to a low-protein diet. Likewise, significant interaction patterns also were observed at 6 months. Our data suggest that a high-protein diet may be beneficial for weight loss and improvement of body composition and fat distribution in individuals with the risk allele of the FTO variant rs1558902.\n\nBODY:\nThe prevalence of overweight and obesity has increased substantially in the U.S. and worldwide, and the health burden of obesity-related complications has grown accordingly (1–3). Obesity is primarily determined by both genetic and lifestyle factors, including diet, as well as their interactions (4). In the past few years, genome-wide association studies (GWASs) have identified a group of genetic loci associated with BMI and obesity risk (5–7). Among them, the fat mass and obesity-associated gene (FTO) locus shows the strongest effect (5,8). Accumulating evidence has suggested that this locus is involved in the hypothalamic regulation of appetite and dietary energy intake (9,10). Recently, several studies have examined the effect of the FTO-diet interaction on body weight, but the results are not entirely consistent. Several cross-sectional studies showed that dietary factors such as low fat intake might modify the genetic effect of FTO on BMI or fat distribution (11–13). However, the gene by diet interaction was not univocally observed in randomized intervention trials (13–17), although one study found that a Mediterranean diet intervention modified the association between the FTO variant and weight changes in a population with high cardiovascular risk (18). These intervention trials, however, largely are limited by relatively small sample size or the short term of follow-up. 
In addition, animal studies have suggested that FTO might differentially affect various body compositions and fat distribution at different depots (19–21). Few studies have evaluated systematically the effect of the FTO variant on these measurements. The POUNDS LOST Trial thus far is the largest 2-year randomized intervention trial that tested the effect of four diets varying in proportions of fat, protein, and carbohydrate on weight loss in overweight or obese subjects (22). By use of the data from this trial, we evaluated whether various weight-loss diets might modify the effect of the FTO variant on weight loss and long-term changes in body composition and fat distribution.\n\nBODY.RESEARCH DESIGN AND METHODS.STUDY POPULATION.:\nThe POUNDS LOST Trial was conducted from October 2004 through December 2007 at two sites as follows: Harvard School of Public Health and Brigham & Women's Hospital in Boston, Massachusetts; and the Pennington Biomedical Research Center of Louisiana State University System, Baton Rouge, Louisiana. The design and sample collection have been described previously in detail (22). In brief, the study population was composed of 811 overweight or obese (BMI ranged from 25 to 40 kg/m2) participants aged 30–70 years. Major criteria for exclusion were the presence of diabetes or unstable cardiovascular disease, the use of medications that affect body weight, and insufficient motivation as assessed by interview and questionnaire. Among the 742 participants who were genotyped successfully, 61% were women, 80% were white, 15% were black, 3% were Hispanic, and 2% were Asian or other ethnic groups by self-report. The participants were assigned randomly to one of four diets constituting a two-by-two factorial design; the target percentages of energy derived from fat, protein, and carbohydrate in the four diets were 20, 15, and 65%; 20, 25, and 55%; 40, 15, and 45%; and 40, 25, and 35%. After 2 years, 645 participants (80% of total population) completed the trial. The study was approved by the human subjects committee at each institution and by a data and safety monitoring board appointed by the National Heart, Lung, and Blood Institute. All participants provided written informed consent.\n\nBODY.RESEARCH DESIGN AND METHODS.MEASUREMENTS.:\nIn the morning before breakfast, body weight and waist circumference (WC) were measured on 2 days at baseline; 6, 12, and 18 months; and 2 years. BMI was calculated as weight by height squared (kg/m2). A dual-energy X-ray absorptiometry (DEXA) scan was performed on 50% of a random sample from the total study participants (n = 424), including 242 (57.1%) women, using a Hologic QDR 4500A (Waltham, MA) (23). Total fat mass (kg), total fat-free mass (FFM; kg), whole body total percentage of fat mass (FM%) and percentage of trunk fat were obtained once at baseline, 6 months, and 2 years. Computed tomography (CT) was used in 50% of a random sample from those participants who had DEXA scans, resulting in a sample of 25% of the total participants (n = 195), including 113 (58.2%) women. Total adipose tissue (TAT) mass, visceral adipose tissue (VAT) mass, deep subcutaneous adipose tissue (DSAT) mass, and superficial adipose tissue (SAT) mass within the abdomen were measured by standard methods (24), once at baseline, 6 months, and 2 years. Because of radiation exposure, premenopausal women would not subject themselves to CT scans. 
A series of eight single-slice images were obtained every 10 cm from 2 below and 5 above the fourth and fifth lumbar vertebrae interspaces. These contiguous cross-sectional images were analyzed, and then the total volume was calculated from the individual slices. In this analysis, we only included data at baseline, 6 months, and 2 years with all the outcomes because the DEXA and CT scans were only performed at these three time points.\n\nBODY.RESEARCH DESIGN AND METHODS.GENOTYPING.:\nDNA was extracted from the buffy coat fraction of centrifuged blood using the QIAmp Blood Kit (Qiagen, Chatsworth, CA). Single nucleotide polymorphism (SNP) rs1558902 was selected because it had emerged as the top variant of FTO locus for BMI and WC in recent obesity-related GWAS (25,26). The SNP was genotyped successfully in 742 of 811 total participants and 603 of 645 participants who completed the trial using the OpenArray SNP Genotyping System (BioTrove, Woburn, MA). Of the 424 participants who received DEXA scans, 391 were genotyped at baseline, and 224 participants who completed the trial were genotyped. Of the 195 participants who received CT scans, 175 were genotyped at baseline and 105 participants who completed the trial were genotyped. The genotype success rate was 99% in available DNA samples. Replicated quality control samples (10%) were included in every genotyping plate with greater than 99% concordance (27). The allele frequency in two major ethnic groups (white and black) was compatible with Hardy-Weinberg equilibrium (P > 0.05).\n\nBODY.RESEARCH DESIGN AND METHODS.STATISTICAL ANALYSIS.:\nThe primary outcomes were changes in body weight and WC. Secondary outcomes were changes in body composition including total fat mass, FFM, FM% and percentage of trunk fat, and fat distribution (TAT, VAT, SAT, and DSAT). Data were pooled from the diets for the two factorial comparisons: low protein versus high protein and low fat versus high fat (22). Because the majority of the study population were white (80%), we also analyzed the main effects and interactions among white participants separately. The Hardy-Weinberg equilibrium and comparison of categorical variables at baseline were assessed with χ2 test. Differences in continuous variables at baseline were tested using ANCOVA, with adjustment for age, sex, and ethnicity. The main effects of genotype and diet intervention on outcome changes at 6 months and 2 years were analyzed using general linear regression models, with adjustment for covariates including age, sex, ethnicity, carbohydrate, the baseline value for the respective outcome, and baseline BMI. We excluded individuals with missing measures at each time point in the analysis. Moreover, to analyze the potential interactions between genotype and diet intervention, an interaction product term of genotype-diet was included in the models. In a secondary analysis, we used linear mixed models, with time as a repeated measurement factor, to test genetic associations with the trajectory of changes in outcomes in the participants who provided measurements at baseline, 6 months, and 2 years in each of four diet groups over the 2-year intervention by including genotype-time interaction terms. Because an additive genetic effect was reported in the original large-scale GWAS in which the SNP was identified (25,26), additive models were analyzed for genotype. All reported P values were two-sided and a P value of 0.05 was considered statistically significant. 
All data were analyzed with SAS version 9.1 (SAS Institute, Inc., Cary, NC).\n\nBODY.RESULTS.CHARACTERISTICS OF STUDY POPULATION.:\nBaseline characteristics of participants according to the FTO rs1558902 genotype are presented in Table 1. The minor allele frequency (MAF; A allele) was 0.402 in the total population. The genotype frequencies were significantly different among the sexes and ethnicities. After adjustment for age, sex, and ethnicity, all variables such as weight, BMI, WC, body composition, and fat distribution had no association with genotype at baseline. Baseline characteristics were similar among participants in the four diet groups (Supplementary Table 1). Likewise, no associations between the FTO genotypes and these measures were observed in the white participants (data not shown). TABLE 1 Baseline characteristics of the study participants according to FTO rs1558902 genotype \n\nBODY.RESULTS.EFFECTS OF :\nAfter adjustment for age, sex, ethnicity, baseline BMI, and diet groups, no main effects of the FTO rs1558902 genotype on changes in weight or WC were found in any participants at 6 months and 2 years (data not shown). We next examined the genetic effects on changes in weight and WC following a two-factorial design: low versus high fat and low versus high protein. We found that the risk allele (A) was significantly associated with a 1.51-kg greater weight loss in the high-protein group (P = 0.010), but not in the low-protein group, by the end of intervention (2 years). The changes in weight and WC were less significant at 6 months (Table 2). In subgroups treated by different proportions of dietary fat, we did not find significant genetic effects on changes in weight and WC (all P > 0.05; Supplementary Table 2). Similarly, in the white participants, we found the risk allele was associated with a 1.38-kg greater weight loss in the high-protein group at 2 years (P = 0.028), but not in other subgroups (data not shown). TABLE 2 The effects of the FTO rs1558902 genotype on weight, body composition, and fat distribution response to dietary protein intervention \n\nBODY.RESULTS.THE :\nConsistent with the observations of change in body weight, we found that the rs1558902 risk allele (A) was associated with greater loss of total fat, FFM, FM%, and percentage of trunk fat at 2 years in the high-protein group, but not in the low-protein group (Table 2). Tests for genotype-diet protein interaction were significant on changes in FFM and FM% (for interactions, P = 0.034 and 0.049, respectively) adjusted for age, sex, ethnicity, carbohydrate, baseline BMI, and the baseline value of body composition (Fig. 1). At 6 months, we only observed gene by protein diet interaction on changes in FFM (P = 0.008 for interaction; Fig. 1). The risk allele carriers in the high-protein group had greater loss of FFM than noncarriers, but those in the low-protein group had less loss of FFM compared with noncarriers (Table 2). FIG. 1. Interaction between the FTO rs1558902 genotype and dietary protein intervention on changes in total fat (A), FFM (B), FM% (C), and percentage of trunk fat (D) at 6 months and 2 years. P values are adjusted for age, sex, ethnicity, carbohydrate, baseline values for respective outcomes, and baseline BMI. 
Data included 52 and 60 (TT), 66 and 73 (TA), and 31 and 28 (AA) participants in the low-protein group and the high-protein group at 6 months, respectively (total n = 310), and 34 and 44 (TT), 46 and 61 (TA), and 19 and 20 (AA) participants in the low-protein group and the high-protein group at 2 years, respectively (total n = 224). We did not find significant genetic effect and interactions between the FTO variant and dietary fat intake on changes in body composition in total participants (Supplementary Table 2 and Supplementary Fig. 1). The results in the white participants were similar (data not shown). We further analyzed body fat distribution measured by CT. At 2 years, we found significant interactions between the FTO rs1558902 genotype and protein diet intervention on changes in TAT, VAT, and SAT (for interactions, P = 0.001, 0.012, and 0.002, respectively; Fig. 2). The risk allele (A) was associated with greater loss of TAT and VAT in the high-protein group but with less loss of TAT and SAT in the low-protein group (Table 2). At 6 months, gene-protein interactions were observed on changes in TAT and SAT (for interactions, P = 0.026 and 0.050, respectively; Fig. 2), and the risk allele carriers in the high-protein group had greater loss of TAT and VAT than noncarriers, but those in the low-protein group had less loss of SAT than noncarriers (Table 2). FIG. 2.Interaction between the FTO rs1558902 genotype and dietary protein intervention on changes in TAT (A), VAT (B), DSAT (C), and SAT (D) at 6 months and 2 years. P values are adjusted for age, sex, ethnicity, carbohydrate, baseline values for respective outcomes, and baseline BMI. Data included 17, 18, 18, and 17 (TT); 30, 35, 35, and 30 (TA); and 10, 13, 13, and 10 (AA) participants in the low-protein group and 18, 22, 22, and 18 (TT); 26, 35, 35, and 26 (TA); and 12, 14, 14, and 12 (AA) in the high-protein group for TAT, VAT, DSAT, and SAT at 6 months (total n = 137); and 12, 15, 15, and 12 (TT); 18, 25, 25, and 18 (TA); and 8, 9, 9, and 8 (AA) participants in the low-protein group and 15, 17, 17, and 15 (TT); 23, 30, 30, and 23 (TA); and 8, 9, 9, and 8 (AA) in the high-protein group for TAT, VAT, DSAT, and SAT at 2 years (total n = 105). We did not find significant genetic effects and interactions on changes in fat distribution in subgroups treated by different dietary fat components (all P > 0.05; Supplementary Table 2 and Supplementary Fig. 2).Similar results were found in the white participants (data not shown).\n\nBODY.RESULTS.THE TRAJECTORY OF CHANGES IN BODY COMPOSITION AND FAT DISTRIBUTION BY :\nIn a secondary analysis, we used linear mixed models to assess the genotype by time effect over the 2-year trial in those treated by the four dietary compositions. We observed significant genotype-time interactions on changes in total fat, FM%, and percentage of trunk fat in response to the high-protein diet. When assigned to the high-protein diet, participants who carried the AA genotype had greater loss in total fat, FM%, and percentage of trunk fat than those without this genotype. No genotype-time interaction on changes in body composition was found in the low-protein group (Fig. 3). FIG. 3.Changes in total fat (A), FFM (B), FM% (C), and percentage of trunk fat (D) in the low-protein and the high-protein diet groups according to the FTO rs1558902 genotype from baseline to 6 months and 2 years. P values are adjusted for age, sex, ethnicity, carbohydrate, baseline values for respective outcomes, and baseline BMI. 
Data included 198, 149, and 99 in the low-protein group and 193, 161, and 125 in the high-protein group for body composition at baseline, 6 months and 2 years, respectively. (A high-quality color representation of this figure is available in the online issue.) We also observed significant genotype-time interactions on changes in fat distribution in response to the low- and high-protein diets. Participants with the AA genotype had greater decrease in fat distribution in response to the high-protein diet compared with those without this genotype. In contrast, participants with the AA genotype were associated with less loss in TAT and SAT in response to the low-protein diet (Fig. 4). FIG. 4.Changes in TAT (A), VAT (B), DSAT (C), and SAT (D) in the low-protein and high-protein diet groups according to the FTO rs1558902 genotype from baseline to 6 months and 2 years. P values are adjusted for age, sex, ethnicity, carbohydrate, baseline values for respective outcomes, and baseline BMI. Data included values at baseline and at 6 months and 2 years for 80, 57, and 38 participants, respectively, for TAT and SAT and 89, 66, and 49 participants, respectively, for VAT and DSAT in the low-protein group; and 71, 56, and 46 participants, respectively, for TAT and SAT and 86, 71, and 56 participants, respectively, for VAT and DSAT in the high-protein group. (A high-quality color representation of this figure is available in the online issue.) We found genotype-time interactions on changes in total fat, FM%, and percentage of trunk fat in response to the low-fat diet (Supplementary Fig. 3), but no genotype-time interactions on changes in fat distribution were found in response to the low- or high-fat diets (Supplementary Fig. 4).A similar trend was observed in the white population (data not shown).\n\nBODY.DISCUSSION:\nIn the POUNDS LOST Trial, a 2-year, randomized weight-loss intervention, we found that dietary protein intake significantly modified the effect of an FTO variant on changes in body composition and fat distribution. Carriers of the risk allele (A allele) of the rs1558902 genotype had a greater loss of weight and regional fat in response to a high-protein diet compared with noncarriers, whereas an opposite genetic effect was observed regarding changes in fat distribution in response to a low-protein diet. Our data indicate that the modification effects of dietary treatment were more evident with prolonged intervention. We did not observe significant modification of dietary fat intake on the genotype effects. The rs1558902 genotype was reported to show the strongest association with obesity in the European (25,26) and other ethnic populations (28), and it has strong linkage disequilibrium with other obesity-associated FTO variants such as the rs9939609 genotype. In this study, the MAF of the polymorphism in all participants was similar to those in the HapMap CEU population (0.45). At baseline, no significant difference was observed in anthropometrics and metabolic estimates, body composition, or fat distribution across genotypes. The lack of association with baseline BMI is probably largely due to the fact that the participants were all overweight or obese, so that the groups had relatively smaller variances in BMI than the general population. 
Several cross-sectional studies showed that diets might modify the effect of the FTO variant on obesity, but the data from randomized diet intervention trials are conflicting and limited by small sample size or short term of follow-up (Supplementary Table 3). Two lifestyle intervention studies with follow-up periods of 9 and 12 months did not find significant influence of FTO polymorphisms (rs8050136 and rs9939609) on changes in body weight or fat distribution related to diet among 200 overweight and obese individuals treated by diets with reduced fat and increased fiber or reduced fat and sugar (13,14). In another 10-week, hypo-energetic diet intervention with either low fat or high fat content, the FTO variant had an effect on only changes in resting energy expenditure, insulin release, and sensitivity, not on weight loss (15). Similarly, in the Finnish Diabetes Prevention Study, the FTO variant did not modify weight change by individualized diet intervention with reduced fat and increased fiber during the 4-year follow-up of 255 individuals with impaired glucose tolerance (16). In our study, when macronutrient components of diets were not considered, we found no main effects of the FTO variant on changes in weight and body composition during the intervention. Grau et al. (15,29) reported that dietary fat/carbohydrate content interacted with some genetic variants including the FTO variant on weight reduction or change in obesity-related phenotypes. In our study, we also found significant gene-diet interactions on changes in body composition and fat distribution. However, our data indicate that it is the dietary protein component, rather than dietary fat, that might drive the observed interactions. In previous studies, high-protein intervention has been found to result in a greater weight loss and abdominal fat mass (30–32). Our results suggest that individuals with a certain genetic background may benefit more in weight loss by following a high-protein diet. The mechanism of how protein intake interacts with FTO genotype is unclear. Our data indicate that the genetic effects on certain fat compositions or depots may be more evident than the effects on overall adiposity. Functional studies have shown that the loss or overexpression of FTO in mice led to different changes in fat distribution at different dissected sites (19–21). FTO mRNA expression was fat depot–specific and was found to differ significantly in subcutaneous fat and in visceral fat (33–35). Epidemiological studies also have shown that FTO variants are significantly associated with distribution of fat depots (13,36,37). Taken together, these data suggest that genetic effects of FTO on the change of fat mass at various sites may be different, and changes in anthropometrics may not adequately reflect the effects of an FTO variant. The genetic effect in our study seemed to be more evident at 2 years than that at 6 months. The results were in line with a recent study by Razquin et al. (18) in which it was found that FTO risk allele carriers had the highest weight reduction after 3 years of intervention with a Mediterranean diet compared with several short-term diet interventions (less than 1 year) in which no influence of an FTO variant on weight loss or change in fat distribution was found (13–15,17). Of note, between 6 months and 2 years of intervention in our trial, the participants regained weight. Therefore, it seems that the genetic variant might affect both the reduction and regain of the adiposity measures. 
These data suggest that the modification effects of diet treatment on an FTO genotype effect are more likely to be identified in long-term interventions. Several limitations need to be considered when interpreting our findings. Even though our study is thus far the largest and longest diet intervention weight-loss trial, the relatively small sample size of the subgroups may limit the power to detect very moderate genetic effects or interactions. We did not adjust for multiple testing according to the recommendation by Rothman (38) and Lai et al. (39) because outcomes and the repeated measurements at 6 months and 2 years were highly correlated in our study. Overadjustment for multiple comparisons may increase the type II error and reduce power to detect significant differences. In addition, the majority of the total participants were white, and further studies are needed to determine whether our findings are generalizable to other ethnic groups. Even though the randomized clinical trial is thought to be the best model to test gene-environment interactions, we acknowledge that replication in diverse populations is needed to verify our findings. In summary, we found that dietary protein intake might modify the FTO variant's effect on changes in body composition and fat distribution. A high-protein diet may be beneficial for weight loss in individuals with the risk allele of an FTO variant. Further studies are warranted to verify our findings and explore the potential mechanisms.\n\n**Question:** Compared to assigned to the dietary fat components without this genotype what was the result of assigned to the dietary fat components who carried the AA genotype of FTO rs1558902 on total fat, FM%, and percentage of trunk fat?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
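The statistical-analysis passage in the record above tests genotype-by-diet interactions by adding a genotype × diet product term to a general linear regression with additive coding of the rs1558902 risk allele. The sketch below is a rough, hypothetical illustration of that modelling pattern on synthetic data; the column names and effect sizes are invented and nothing here comes from the POUNDS LOST trial itself.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300

# Synthetic stand-in for the trial data: additive genotype coding (0/1/2 risk alleles),
# a high- vs. low-protein diet indicator, age, and baseline BMI.
df = pd.DataFrame({
    "risk_alleles": rng.integers(0, 3, n),   # additive coding of the rs1558902 A allele
    "high_protein": rng.integers(0, 2, n),   # 1 = high-protein arm, 0 = low-protein arm
    "age": rng.normal(50, 9, n),
    "baseline_bmi": rng.normal(33, 4, n),
})
# Simulated 2-year weight change with a built-in genotype-by-diet interaction (illustration only).
df["weight_change"] = (
    -1.0
    - 0.7 * df["risk_alleles"] * df["high_protein"]
    + rng.normal(0, 3, n)
)

# 'risk_alleles * high_protein' expands to both main effects plus their product term,
# mirroring the genotype-diet interaction test described in the article.
model = smf.ols(
    "weight_change ~ risk_alleles * high_protein + age + baseline_bmi",
    data=df,
).fit()
print(model.summary().tables[1])
```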
1332
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Rotary instruments in the treatment of chronic periodontitis: A randomized clinical trial\n\n ABSTRACT.BACKGROUND:\nThe study aimed at comparing the effectiveness of rotary instruments with hand instruments in non-surgical treatment of chronic periodontitis.\n\nABSTRACT.MATERIALS AND METHODS:\nThirty-eight patients with chronic periodontitis, enrolled to this randomized, controlled clinical trial, were divided into two groups. Patients in the control group received scaling and root planing with hand instruments, whereas patients in the test group received treatment with rotary instruments. Clinical assessment by plaque index, bleeding on probing, probing depth, gingival recession, and clinical attachment level was made prior to and at 6 months after treatment. Differences in the clinical parameters were analyzed using the Wilcoxon signed rank test and Mann–Whitney U test.\n\nABSTRACT.RESULTS:\nBoth treatments resulted in a significant improvement in all clinical recordings, but no differences in any of the investigated parameters were observed at baseline between the two groups.\n\nABSTRACT.CONCLUSION:\nNon-surgical periodontal therapy with the tested rotary instruments may lead to clinical improvements comparable to those obtained with conventional hand instruments.\n\nBODY.INTRODUCTION:\nThe ultimate objective of all root treatment procedures is to effectively remove plaque and calculus without causing root surface damage and to render the treated root surface biologically compatible with host periodontal tissue. However, complete removal of subgingival calculus with hand or power-driven instruments is difficult to achieve, even when surgical approach is used. Various power-driven and hand instruments are currently available for scaling and root planing. Although hand scalers are frequently used, considerable time and manual dexterity are required for their effective operation.[1] Moreover, hand scalers are unable to reach the deeper root surfaces where the periodontal pockets are more than 4 mm deep.[23] Consequently, ultrasonic scalers have become more widely used in recent years. Although they are simple to use, it is often difficult to achieve smooth and calculus-free root surfaces.[456] Dental plaque adheres more readily to roughened root surface created by the use of ultrasonic scalers.[7] To overcome these challenges associated with the use of ultrasonic scalers and hand instruments, carbide rotary instruments for scaling and root planing have been developed. 
Some studies suggested that hand instruments produce a smoother root surface,[891011] while other studies were in favor of the use of ultrasonic instruments to achieve smooth surfaces, as these were found to be less damaging to the root surfaces than hand curettes.[1213] Clinical studies have shown that attachment gain and pocket probing depth reduction are identical after the use of hand instruments and sonic or ultrasonic scaling.[141516] Most of the data available in the literature compare the efficacy of hand curettes and ultrasonic instruments. However, until now, hardly any study is available concerning the clinical outcome following non-surgical periodontal treatment with rotary carbide instruments. Therefore, the aim of the present study is to assess the clinical effectiveness of the carbide bur when compared to scaling and root planing with hand instruments.\n\nBODY.MATERIALS AND METHODS:\nAll the subjects in the present study were recruited from the Department of Periodontology and Implantology. The ethical committee of Himachal Institute of Dental Sciences approved the study protocol, and all participating patients signed an informed consent at the beginning of the study.\n\nBODY.MATERIALS AND METHODS.SELECTION OF SUBJECTS:\nThirty-eight patients (20 females and 18 males) with advanced chronic periodontitis (Armitage 1999)[17] were recruited for the study. The inclusion criteria for patient selection were as follows: \nAdults between 18 and 70 years of age\nGood level of oral hygiene [as a criterion for good level of oral hygiene, a mean plaque index (PlI) score <1 was chosen] (Löe 1967).[18] The exclusion criteria were: \nPatients seeking treatment of periodontitis for the last 2 years\nPatients using antibiotics for 12 months prior to treatment\nPatients with any systemic disease\nPatients using medicated mouth washes\nPregnancy.\n All subjects were enrolled in a hygiene program 6 weeks prior to the treatment. They received professional supragingival tooth cleaning with ultrasonic scalers (supragingival ultrasonic tip) and oral hygiene instructions. A supragingival scaling and reinforcement of oral hygiene was performed at baseline as well as 4, 8, 12, 16, 20, and 24 weeks after treatment.\n\nBODY.MATERIALS AND METHODS.STUDY DESIGN AND TREATMENT:\nThe present study was conducted between June 2012 and January 2013, and was performed according to a parallel group design. All patients were treated according to a \"one-stage procedure\" with either (1) the rotary instruments, Desmoclean (Hager, Germany) [Figure 1] or (2) Gracey curettes (Hu-Friedy Co., Chicago, IL, USA) [Figure 2]. Figure 1 The rotary instrument along with micro-motor contra-angle handpiece. Figure 2 Gracey curettes. In the test group or rotary instrument group, there were 19 patients (12 females and 7 males). In the test group, carbide burs with non-cutting, elliptical, and hexagonal head were used with a micro-motor contra-angle handpiece at 8000 rpm (rotations per minute) with light pressure and water spraying [Figure 1]. In the control group or Gracey curette group, there were 19 patients (8 females and 11 males). Instrumentation for both groups was performed until the root surface was adequately debrided and planed. All the instrumentation was done under local anesthesia by one experienced operator. 
The curettes were sharpened at the operator's request and no restrictions in instrumentation time were set in any group.\n\nBODY.MATERIALS AND METHODS.CLINICAL MEASUREMENTS:\nThe following parameters were recorded at baseline (after the 6-week pre-treatment phase) and 6 months after therapy: \nFull mouth plaque score (FMPS) (O'Leary et al. 1972)[19]\nProbing depth (PD)\nGingival recession (GR)\nClinical attachment level (CAL)\nBleeding on probing (BOP).\n Presence or absence of bleeding on probing was assessed up to 30 s after probing. The measurements were made at six aspects per tooth – mesio-buccal (mb), mid-buccal (b), disto-buccal (db), mesio-lingual (ml), mid-lingual (l), and disto-lingual (dl) – using a manual periodontal probe.\n\nBODY.MATERIALS AND METHODS.RANDOMIZATION OF THE STUDY:\nSixty-eight patients underwent a screening examination which included full mouth probing, plaque index, and a radiographic examination. Thirty-eight patients with generalized advanced chronic periodontitis, who fulfilled all the prerequisites, were finally recruited for the present study. This screening test and final recruitment were conducted by examiner (A). These 38 patients were randomly assigned to two treatment groups using random tables. The randomization list was kept by the second examiner (B), and the patients were given sealed envelopes indicating the treatment modality. Examiner (C) then opened these envelopes and performed the therapy. Examiner (A), who was unaware of the treatment received, performed the recording of all clinical parameters. Finally, examiner (B) compiled the data received from examiner (A) according to the randomization list. At all time points, the outcome of the research was assessed blindly.\n\nBODY.MATERIALS AND METHODS.STATISTICAL ANALYSIS:\nThe Wilcoxon signed ranks test was used to compare the baseline data to those at 6 months for each treatment group. Comparisons between treatment groups at baseline and those at 6 months were accomplished with the Mann–Whitney U test. The α-error was set at 0.05. The power of the study, given 1 mm as the significant difference between the groups, was calculated to be 0.99.\n\nBODY.RESULTS.FMPS AND BOP:\nThe FMPS and BOP for both the treatment groups at baseline and 6 months are summarized in Table 1. No statistically significant differences were found between these groups. Table 1 FMPS and BOP: Mean values (in percentage) at baseline and 6 months \n\nBODY.RESULTS.CLINICAL PARAMETERS IN SINGLE-ROOTED TEETH:\nIn the rotary instrument group, 312 single-rooted teeth and in the Gracey curette group, 395 single-rooted teeth were treated. At moderately deep pocket (4-5 mm) sites, in the rotary instrument group, pocket depth at baseline (4.5 ± 0.5) reduced to 3.7 ± 1.2, whereas in the Gracey curette group, the pocket depth reduced from 4.5 ± 0.3 to 3.4 ± 1.1 [Table 2]. At deep pocket sites (initial PD > 6 mm), in the rotary instrument group, PD reduced from 7.2 ± 1.5 to 6.5 ± 1.8 and in the Gracey curette group, it reduced from 6.7 ± 0.9 to 5.4 ± 1.8. But while comparing these two groups, the results were statistically nonsignificant. Similarly, other clinical parameters like GR and CAL gave statistically nonsignificant results [Table 3]. 
Table 2 Single-rooted teeth: Mean PD, GR, and CAL (±SD) at baseline and 6 months at sites with initial pocket depths of 4-5 mm ( n =38) Table 3 Single-rooted teeth: Mean PD, GR, and CAL (±SD) at baseline and 6 months at sites with initial pocket depths >6 mm ( n =38) \n\nBODY.RESULTS.PARAMETERS IN MULTI-ROOTED TEETH:\nIn the rotary instrument group, 256 multi-rooted teeth and in the Gracey curette group, 302 multi-rooted teeth were treated. Clinical improvement was noticed in both moderately deep pocket sites and deep pocket areas. But comparison of groups showed them to be statistically nonsignificant [Tables 4 and 5]. Table 4 Multi-rooted teeth: Mean PD, GR, and CAL (±SD) at baseline and 6 months at sites with initial pocket depths of 4-5 mm ( n =38) Table 5 Multi-rooted teeth: Mean PD, GR, and CAL (±SD) at baseline and 6 months at sites with initial pocket depths >6 mm ( n =38) \n\nBODY.DISCUSSION:\nIn order to provide a root surface compatible for soft tissue healing, the old concept of extensive cementum removal has been questioned by various experimental studies.[2021] Nowadays, the trend in therapeutic methodology in periodontology is minimum aggressive approach in both surgical and non-surgical treatments. In the literature, there are large numbers of studies comparing the effect of hand curettes and ultrasonic scalers on root surface roughness.[456111314] Some studies found that ultrasonic instruments produce rough and irregular root surface as compared to hand curettes,[45] whereas other studies suggested that clean and smooth root surface can be achieved with the use of ultrasonic scalers.[1314] Kishida and Sato stated that ultrasonic instrument produces smooth and regular surface as compared to hand curette.[12] But the use of a diamond bur at high speed increased the risk of root surface damage;[22] similar results were obtained by Evian and Horowitz.[23] Solis Moreno et al. compared the root surface roughness after using Gracey curettes, termination diamond burs (40 μm), a piezoceramic ultrasonic scaler, and a piezosurgery ultrasonic scaler, using confocal microscopy and scanning electron microscopy.[24] They found that the termination diamond burs (40 μm) produced a rougher surface than the ultrasonic instruments and the hand curettes. Whereas Dahiya et al. found a smoother root surface with rotary bur as compared to hand curettes and ultrasonic scalers.[25] The difference in the results of these two studies can be explained on the basis of different characteristics of bur. In the previous study, diamond burs were used, whereas in latter study, non-cutting carbide burs were used. Khatiblou and Ghodossi studied the effect of root roughness on periodontal healing and found that periodontal healing following flap surgery occurred regardless of whether the subgingival root surface was rough or smooth.[26] The present randomized clinical trial evaluated the clinical effectiveness of rotary instrument and hand curettes. In the rotary instrument group, four different carbide burs (I = 23 mm, II = 29 mm, III = 26 mm, and IV = 29 mm) with non-cutting, elliptical, and hexagonal head in four different shaft designs were used according to manufacturer's instructions [Figure 1]. Carbide burs were used at 8000 rpm with light pressure and water spraying, so as to avoid excessive damage due to frictional and thermal insults. Similar parameters were adopted by Dahiya et al. in their study.[25] No restriction in the instrumentation time was set in any group. 
Instrumentation in both groups was performed until a smooth and hard surface was achieved. Limiting the time might result in under- or over-instrumentation of root surfaces. The smoothness of the root surface was assessed using a periodontal probe (Hu-Friedy PCP 11; Hu-Friedy Co.) and explorer (Hu-Friedy Wilkins-Tufts 17/23; Hu-Friedy Co.). This method of evaluating the root surface smoothness was similar to the method used by Ioannou et al.[16] All clinical parameters were assessed at baseline and after 6 months of therapy because healing following non-surgical therapy is almost complete at 3 months. However, a slower ongoing, but limited healing can continue for a longer time (up to 9 or more months) following the treatment.[27] Ioannou et al.[16] and Sculean et al.[28] also evaluated the clinical parameters at baseline and 6 months after therapy. The results of our study suggested that both hand curettes and rotary instruments are effective treatment modalities for periodontal disease. When interpreting the results, it has been noted that sites where probing depths were initially deep showed more GR, more gain of CAL, and deeper residual PD at 6 months examination than the sites with initial moderate PD. Some clinical studies have demonstrated that the reduction of PD and the improvement of CAL after both surgical and non-surgical periodontal treatments are dependent on the initial PD.[2930] But while comparing carbide bur with hand curettes, in terms of periodontal healing by using PD, CAL, and GR clinical parameters, statistically nonsignificant results were found. The possible explanation for such results could be the enrollment of all the patients in supragingival professional tooth cleaning and oral hygiene reinforcement program at baseline as well as 4, 8, 12, 16, 20, and 24 weeks after treatment. Some clinical studies have shown that root surface debridement with either hand curettes or ultrasonic devices leads to similar clinical and microbial improvement of periodontal conditions.[3132] Oberholzer et al. did not find any significant differences in pocket depth reduction and clinical attachment gain after creating rough or smooth surface during flap operation. This indicates that subgingival roughness does not interfere with healing if there is good supragingival plaque control and that long-term success of this treatment is dependent on the quality of maintenance therapy. But a smooth surface may be advantageous near the gingival margin, since a smooth surface is less likely to accumulate plaque than a rough surface.[33] Because of lack of published data regarding the effect of carbide bur on clinical parameters like PD, CAL, and GR, we could not compare the results of the present study with those of other studies.\n\nBODY.CONCLUSION:\nThe present study shows that non-surgical periodontal therapy with both hand curettes and rotary burs was well tolerated as all the treated pockets showed uneventful healing. Moreover, the use of rotary instruments in the treatment of chronic periodontal disease gave comparable results with traditional approach of scaling and root planing carried out with hand curettes. Our study also emphasizes the importance of scaling and root planing for maintenance of good periodontal health. 
Consequently, it is also suggested that more studies evaluating healing outcome and patients' compliance after both treatment modalities are needed in order to draw definitive conclusion about the stability of each therapeutic approach.\n\n**Question:** Compared to received scaling and root planing with hand instruments what was the result of received treatment with rotary instruments on Probing depth (PD), Gingival recession (GR), Clinical attachment level (CAL)?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
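The record above describes within-group changes tested with the Wilcoxon signed-rank test and between-group comparisons with the Mann–Whitney U test at an α-error of 0.05. The snippet below is a generic sketch of that analysis pattern using made-up probing-depth values; it is not the trial's code or data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical probing depths (mm) for 25 patients per arm, at baseline and at 6 months.
rotary_baseline = rng.normal(4.5, 0.5, 25)
rotary_6mo = rotary_baseline - rng.normal(0.8, 0.4, 25)
curette_baseline = rng.normal(4.5, 0.5, 25)
curette_6mo = curette_baseline - rng.normal(0.9, 0.4, 25)

# Within-group change: paired, non-parametric (Wilcoxon signed-rank test).
w_rotary = stats.wilcoxon(rotary_baseline, rotary_6mo)
w_curette = stats.wilcoxon(curette_baseline, curette_6mo)

# Between-group comparison of 6-month values: unpaired, non-parametric (Mann-Whitney U test).
u_between = stats.mannwhitneyu(rotary_6mo, curette_6mo, alternative="two-sided")

alpha = 0.05
print(f"Rotary baseline vs 6 mo:   p = {w_rotary.pvalue:.4f}")
print(f"Curette baseline vs 6 mo:  p = {w_curette.pvalue:.4f}")
print(f"Rotary vs curette at 6 mo: p = {u_between.pvalue:.4f} "
      f"({'significant' if u_between.pvalue < alpha else 'not significant'})")
```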
1336
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Feto-maternal outcomes and Glycemic control in Metformin versus insulin treated Gestational Diabetics\n\n ABSTRACT.OBJECTIVE:\nTo evaluate and compare feto-maternal outcomes and glycemic control in metformin versus insulin treated gestational diabetics.\n\nABSTRACT.METHODS:\nThe study was conducted in 2010- 2012 as a part of M. Phil at Civil hospital, Lyari General Hospital and Mamji Hospital in Karachi. After written informed consent, 71 GDM diagnosed females with WHO criteria were enrolled. They were divided into two groups. Group-A, 32 females were given oral metformin 500 mg TDS while Group-B, 39 females were given insulin 0.8-0.9 mg/kg/day in two divided doses subcutaneously. Patients were followed till term. Feto-maternal outcomes were evaluated in 25 patients in each group who completed the study.\n\nABSTRACT.RESULTS:\nWhen groups were compared, newborns in Group-B were significantly more in weight (p=0.01). Significant numbers of babies were delivered after 38 weeks of pregnancy in Group-B (P=0.021). There were two intrauterine deaths and significantly higher HbA1C at term in Group-B. (P=0.03). FBS at term was non-significant (p=0.079) and there was more number of cesarean sections due to feto-maternal disproportion in Group-B (28% vs.2%). Results analyzed for glycemic control before and after the treatment revealed that FBS was statistically less in Group-A (p=0.00) whereas for Group-B the value of FBS and HbA1C was statistically high. (p=0.002 & 0.04 respectively).\n\nABSTRACT.CONCLUSION:\nMetformin has produced better effects on feto-maternal outcomes and glycemic control in comparison to Insulin in GDM.\n\nBODY.INTRODUCTION:\nGlucose intolerance occurring during the prenatal period, Gestational Diabetes Mellitus (GDM), is usually identified in the second trimester of pregnancy. The causes of pancreatic beta - cell dysfunction in pregnancy, leading to insulin deficiency in GDM, are not fully understood. Main factors are adiposity in pregnancy, autoimmune beta cell dysfunction, decreased insulin secretion and insulin resistance.1 Pathophysiology is strongly suggestive of circulating placental and maternal hormones leading to a decrease in glucose sensitivity by the tissue receptors and subsequent increases in the carbohydrate tolerance. From 1995 to 2005, gestational diabetes had amplified by 45% overall, with the world wide prevalence from 3.0 to 4.4%.2 Women of South Asia are seems to be at the utmost risk for GDM with the prevalence rate being 3.2-3.5%. 
GDM is almost 72% more frequent in patients aged ≥ 30 years, 50% more common in those with a positive family history and 29% more common in those with a raised body mass index.3 Traditionally, insulin has been known as an effective therapy for GDM as it shows adequate control of raised blood glucose levels.4 It was initially prepared as a lifesaving drug for Type-I insulin-deficient diabetic patients but later was also used for gestational diabetes as most of the oral medications were considered to be teratogenic. For decades it has been used for the treatment of diabetes in pregnancy.5 It requires proper storage and is associated with a high incidence of maternal morbidity and mortality such as maternal hypoglycemia, macrosomic babies leading to assisted or surgical deliveries and more weight gain in pregnancy.6 Metformin, an oral antidiabetic drug, has been used since 1960 in patients with Type-2 diabetes mellitus. It has now been upgraded to a Category B drug as it is not associated with any fetal congenital anomalies.7 It was initially used to reduce insulin resistance in females with the polycystic ovarian syndrome and, as it showed remarkable results with no side effects in pregnancy, it was considered a better alternative for the management of GDM.8 It can be a logical treatment for pregnant females as it produces the euglycemic state by improving insulin sensitivity, reducing hepatic gluconeogenesis and increasing peripheral glucose uptake and utilization.9 The present study was designed to evaluate and compare the effects of metformin and insulin on feto-maternal outcomes and glycemic control in GDM patients.\n\nBODY.METHODS:\nThis Karachi-based study was started in Lyari General Hospital, Mamji Hospital and Civil Hospital, after obtaining approval from the Institutional Review Board and Ethical Committee of Dow University, Karachi. It was conducted during 2010-2012 as a part of M.Phil. For this study, high-risk patients were recruited from hospitals after obtaining a written informed consent. High-risk patients include those with a bad obstetric history (previous abortions, intrauterine deaths or still births), strong family history, obese patients and patients with a history of gestational diabetes in previous pregnancies. Seventy-one diabetic pregnant females were enrolled in the study in the second trimester after confirmation of GDM with the Oral Glucose Challenge Test and Oral Glucose Tolerance Test, according to WHO criteria. These were divided randomly into two groups. The odd-numbered patients (GROUP A=32 patients) were given Tab. Metformin, 500mg thrice daily, with dose escalation therapy prescribed as a once-daily dose in the morning and then increased to a thrice-daily dose as required, depending upon the glycemic control of the patients. Even-numbered patients (GROUP B=39 patients) received subcutaneous insulin. A total insulin dose was calculated using 0.8 units/kg/day in the 2nd trimester and 0.9 units/kg/day in the 3rd trimester. 2/3 of the dose was prescribed in the morning and 1/3 of the dose was advised at night. They were counseled well for strict diet control with provided diet charts (1800-2000 kcal/day) and were also advised to walk for 30 minutes at least thrice weekly. These patients were followed fortnightly up to 32 weeks and weekly till term. Maternal outcomes were then re-evaluated at 36 weeks of pregnancy, with fetal outcomes and mode of delivery being noted soon after childbirth.10 Data were subjected to statistical analysis and results were tabulated using SPSS-16. 
Chi square, independent T-test and Fisher exact tests were applied accordingly and p-value of 0.05 or more was considered significant.\n\nBODY.RESULTS:\nResults were compiled for 50 patients who completed the study with 25 patients in each group. By completion of study 21 patients had dropped out due to various reasons or delivered elsewhere and were excluded from the study. When maternal characteristics were evaluated no difference was observed between age and weight of the patients in Group-A versus Group-B (p=0.09 & p>0.99 respectively). There were statistical differences present between the fasting blood sugar (FBS) and random blood sugar level (RBS) between the groups (p=0.039 & 0.001 respectively). When glycated hemoglobin (HbA1C) was compared statistically, results were non-significant (P=0.182) (Table-I). Table-I Baseline Maternal Characteristics Group-A v/s Group-B (n=50). S. No. Characteristics Group-A (n=25) Group-B (n=25) p-value Mean ± S.D Mean ± S.D 1. Age (years) 29.76±3.41 31.60±4.27 0.099 2. Weight(kg) 77.90±7.65 77.9 ±9.03 NA 3. FBS-1(mg/dl) 104.40±13.12 117.9 ±29.06 0.039 * 4. RBS(mg/dl) 171.16±37.44 239.16±69.78 <0.001 * 5. HbA 1 C--1 (%) 5.28±0.42 5.43±0.34 0.182 Group-A: Metformin with diet control and exercise, Group-B: Insulin with diet control and exercise * Statistically significant (Independent T-test applied), FBS1: fasting blood sugar level at enrollment HbA 1 C-1: Glycated hemoglobin at enrollment. When fetal outcomes were evaluated the babies of Group-B were significantly more in weight as compared to Group-A (p=0.01). Apgar score was statistically non-significant between the groups. On comparing delivery age, the results were statistically significant and more numbers of babies were delivered after 38 weeks of pregnancy in Group-B (P=0.021). All the babies were healthy and alive in Group-A whereas there were two intrauterine deaths at term in Group-B. When maternal outcomes were tabulated there were significant differences between Group-A and B for HbA1C at term, significantly higher in Group-B (P=0.03). FBS at term was non-significant between the groups (p=0.079). There were more number of cesarean sections in Group-B due to feto-maternal disproportional ratio as babies were bigger and heavier (28% vs.2%) (Table-II). Table-II Fetal and Maternal outcomes Group A v/s B (n=50). Fetal outcomes Group-A (n=25) Group-B (n=25) P- value Mean±SD Mean±SD Weight of the baby (kg) 3.14±0.32 3.44±0.46 0.01 * Apgar score 8.64±0.56 8.22±2.43 0.39 n (%) n (%) Delivery age 0.021 *   37 weeks 19(76%) 11(44%)   38 weeks 6(24%) 14(56%) Condition of baby 0.49@   Alive baby 25(100%) 23(96%)   Intrauterine death 0 2(8%)   Still birth 0 0 Maternal outcomes Mean±SD Mean±SD HbA 1 C-2 (%) (36weeks) 5.42±0.34 5.72±0.35 0.003 * FBS -2 (36 weeks) 93.48±11.9 102.08±20.63 0.079 n (%) n (%) Mode of delivery NA   Normal vaginal 7(28%) 5(20%)   Assisted delivery 2(8%) 3(12%)   Cesarean section 16(64%) 17(68%)   Cesarean due to bad obstetric history 14(56%) 10(40%)  Cesarean due to Feto–maternal disproportion 2(8%) 7(28%) Group-A: Metformin with diet control and exercise, Group-B: Insulin with diet control and exercise, HbA 1 C-2: Glycated hemoglobin at 36 weeks, FBS-2: fasting blood sugar at 36 weeks, * statistically significant result (chi square and independent T test applied), NA: chi-square test not applicable, @: Fisher exact test applied. 
When results were analyzed for glycemic control before and after the treatment in the same groups it was observed that FBS were statistically less in Metformin treated females (p=0.00). HbAIC values were statistically non-significant before and after the treatment in Group-A. For Group-B the value of FBS was statistically high at term (p=0.002). Even HbA1C levels were statistically more at term in Group-B females (p= 0.04) (Table-III). Table-III HbA 1 C and FBS at enrollment and at 36-37 weeks of pregnancy Group A v/s B (n=50). Groups At enrollment At term P- Value Mean ± SD Mean ± SD Group-A FBS 104.40±13.12 93.43±11.93 0.00 * Group-A Hba 1 c 5.284±0.42 5.420±0.346 0.061 Group-B FBS 117.92±29.06 102.08±20.64 0.002 * Group-B Hba 1 c 5.43±0.345 5.68±0.506 0.04 * Group-A: Metformin with diet control and exercise, Group-B: Insulin with diet control and exercise HbA 1 C: Glycated hemoglobin; FBS: fasting blood sugar, * statistically significant result (Paired T-test applied). \n\nBODY.DISCUSSION:\nGestational diabetes mellitus is a growing health issue in many parts of the world. Our population is at higher risk to develop this disorder because of genetic, social, and environmental factors.11 Gestational diabetes has serious, long-term consequences for both baby and mother. Early diagnosis and management can surely improve the outcomes for these women and infants.11 Base line values for maternal age, maternal weight and HbA1C in both the groups showed no statistical difference though they have different FBS and RBS levels. Similar groups were taken by Balani for his research.12 When fetal outcome was evaluated it was observed that babies born to mothers on insulin treatment were significantly more in weight as compare to mother given oral metformin. (3.14 kg vs. 3.44, p=0.01). A randomized clinical trial conducted by Niromanesh concluded that babies born to mother given insulin were heavier than metformin treated GDMs.13 Recent study by Marques declared that there were no differences in fetal outcomes in metformin and insulin groups when compared together except neonatal hypoglycemia which was more in the insulin treated group.14 This study was done with much large sample size and a different study design could be the reason for non-similarity in the results of this study with ours. There was no difference for apgar score between both the groups in our study. Similar were the results documented by Rowan upon comparison of neonates of insulin and metformin treated GDMs. He also has found non-significant results for Apgar score at 5 minutes.15 In our study females who received insulin delivered late as compared to females on metformin therapy and the results were statistically significant (p=0.021). Kelley stated that there was significantly less maternal weight gain and lower gestational age at delivery with metformin compared with insulin therapy.16 Whereas Territi stated that there was no difference in both the groups when delivery at gestational age was tabulated.17 There were two intrauterine deaths in insulin groups in our study whereas all the babies were delivered healthy in metformin group confirms that metformin is a safer and effective drug. 
Rowan stated that 0.2% of females had intra uterine deaths with insulin whereas none of the patient in metformin group had such an issue.15 However results from Hellmuth provide contradictory results to ours, as more intrauterine deaths were observed by him in metformin treated group then in insulin treated group.18 The probable cause could be that females in his study on metformin started treatment in last trimester, when already a level of irreversible hypoxic changes in placenta and fetus had taken place. Carson stated that insulin infusions for several days to the fetal lamb produce a progressive decline in fetal arterial oxygen content and may be one of the factors for more IUDs with insulin treatment.19 With regard to maternal outcomes, it was observed that mean fasting blood sugar at 36 weeks was lowered numerically and HbAIC was significantly less in metformin treated patients as compared to insulin treated GDMs. So we can say that metformin has fulfilled the demands and needs of an anti-glycemic agent. Results by Riaz are in agreement with ours as he documented that metformin for the glycemic control in gestational diabetes mellitus is significantly superior to insulin.20 With regard to mode of delivery, it is clear from our results that surgical deliveries due to Feto-maternal disproportion were in insulin treated patients than in metformin treated patients with a ratio of 7: 2 respectively. These two females in metformin group had surgical deliveries due to narrow pelvic outlet though the babies even in these two cases were not large in size. Goh has highlighted that the caesarean section rate in women with insulin treated GDM was 45.6% due to Feto-maternal disproportion which is much higher than metformin treated group.21 Kale also stated that patients on insulin therapy for GDM have higher rates for cesarean sections and his results are similar to ours.22 Moore discovered that metformin is an effective alternative to insulin as there was no significant difference present in mode of delivery, fetal weight and neonatal complications between insulin treated and metformin treated female in GDMs which is contradictory to our results.23 Waheed also reported that no significant differences were present in mode of delivery and fetal complications, when patients on insulin and metformin treatment for gestational diabetes were compared.3 We also compared the results in both the groups for glycemic values, at the time of enrollment and after the treatment at 36 weeks to evaluate the effectiveness of the treatment. The results showed that metformin decreases the fasting blood glucose levels before and after the treatment significantly (p=0.00). HbAIC was non-significant when compared between the start of study and at term (p=0.061) clearly indicating that metformin has maintained the sugar levels and there was no predicted rise with the advancement of pregnancy. Whereas insulin group showed significant increase in fasting blood sugar levels as well for HbAIC when compared before and after the treatment, (FBS, p=0.002 &HbAIC, p=0.04) indicating that insulin was unable to show expected results. 
Mesdaghinia declared similar results that not only end term HbAIC but also maternal and fetal complications were much higher in Insulin group and metformin showed better glycemic control and feto-maternal outcome.24 A recent study by Saleh also confirmed the fact that good glycemic control can be achieved using metformin with better fetal and maternal outcomes.25\n\nBODY.CONCLUSION:\nMetformin has produced better effects on feto-maternal outcomes and glycemic control in patients with gestational diabetes in comparison to insulin. Metformin is used through physiological route, has no storage problem and nor does it requires any dependency. With all these features metformin is a better option than insulin in our population for patients having gestational diabetes mellitus. However multi-centric studies with large sample size are open venues for future research.\n\nBODY.CONCLUSION.SOURCE OF FUNDING:\nSelf and Dow University of Health Sciences.\n\nBODY.CONCLUSION.WORK PLACE:\nDow Diagnostic and Research Laboratory, Dow University of Health Sciences, Karachi, Pakistan.\n\nBODY.CONCLUSION.AUTHORS` CONTRIBUTION:\nRA perceived, designed, conducted research and did main write up of manuscript. SK helped in pathological aspects, write up and editing of manuscript. FS helped in pharmacological aspects, data collection and proof reading of manuscript. NK revised it critically for important intellectual content and is supervisor of the study.\n\n**Question:** Compared to insulin 0.8-0.9 mg/kg/day in two divided doses subcutaneously (Group-B) what was the result of oral metformin 500 mg TDS (Group-A) on HbA1C?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly decreased\n" } ]
significantly decreased
1212
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Thromboelastographic changes during laparoscopic fundoplication\n\n ABSTRACT.INTRODUCTION:\nThromboelastography (TEG) is a technique that measures coagulation processes and surveys the properties of a viscoelastic blood clot, from its formation to lysis.\n\nABSTRACT.AIM:\nTo determine the possible hypercoagulability state and the effect of antithrombotic prophylaxis on thromboelastogram results and development of venous thrombosis during laparoscopic fundoplication.\n\nABSTRACT.MATERIAL AND METHODS:\nThe study was performed on 106 patients who were randomized into two groups. The first group received low-molecular-weight heparin (LMWH) 12 h before the operation, and 6 and 30 h after it. The second group received LMWH only 1 h before the laparoscopic fundoplication. The TEG profile was collected before LMWH injection, 1 h after the introduction of the laparoscope and 15 min after the surgery was completed.\n\nABSTRACT.RESULTS:\nThere was no significant difference in thromboelastography R-time between the groups before low-molecular-weight heparin injection. In group I preoperative R-values significantly decreased 1 h after the introduction of the laparoscope, after the end of surgery and on the third postoperative day. K-time values decreased significantly on the third postoperative day compared with the results before low-molecular-weight heparin injection, and after the operation. In group II, preoperative R-values significantly decreased 1 h after the introduction of the laparoscope, and after surgery. K-time values did not change significantly during or after the laparoscopic operation.\n\nABSTRACT.CONCLUSIONS:\nOur study results demonstrated that the hypercoagulation state (according to the TEG results) was observed during and after laparoscopic fundoplication in patients when LMWH was administered 12 h before the operation together with intraoperative intermittent pneumatic compression. The optimal anticoagulation was obtained when LMWH was administered 1 h before fundoplication.\n\nBODY.INTRODUCTION:\nVenous thromboembolism is an important social and health care problem, because 20–30% of patients develop deep vein thrombosis (DVT) after general surgical operations, while 5.5% of patients have this complication when laparoscopic fundoplication is performed without appropriate prophylaxis [1]. The most frequent reason for pulmonary embolism is thrombi forming in the channels of proximal leg veins and deep pelvis veins. Eighty–ninety percent of pulmonary embolism masses are caused by DVT or a thrombus formed in the pelvis [2]. The development of deep venous thrombosis is related to stasis, hypercoagulability, and injury of the venous wall (Virchow's triad). 
Laparoscopic surgery causes variable serum hypercoagulability; there are data suggesting that dependent positioning in combination with intraoperative pneumoperitoneum decreases venous flow from the lower extremities and possibly increases the risk of DVT development [3]. The increased intra-abdominal pressure associated with pneumoperitoneum and the reverse Trendelenburg position during laparoscopic fundoplication generates venous stasis in the lower limb, already present due to general anesthesia, by compressing the retroperitoneal vena cava and iliac veins [4]. Intermittent pneumatic compression of the calf with an external pressure cuff is a well-established measure for the prevention of DVT [5–8]. Methods that have conventionally been used to prevent postoperative deep vein thrombosis during laparoscopic surgery include not only mechanical techniques, but also drug therapy (low-molecular-weight heparin – LMWH). Some uncertainties are still present regarding the timing of LMWH administration before elective surgery. Some guidelines suggest prophylactic LMWH administration 12 h before surgery, others 1–2 h before the operation, and others even 4–6 h after surgery. It is crucial to select the optimal time of LMWH administration in order to obtain the best therapeutic ratio between thrombosis prevention and avoidance of excessive bleeding. For this reason, it is important to examine the effect of LMWH on some parameters of coagulation as surrogate markers of possible thrombosis development during and after surgery, as this will help guide physicians towards the best time of anticoagulant administration. We chose the thromboelastogram in this respect because it is an easily obtained global coagulation assay that can detect the effects of anticoagulants in various clinical scenarios. Thromboelastography (TEG) analysis is more sensitive to qualitative defects in fibrin or platelets than standard laboratory tests [4]. The technique of thromboelastograph hemostasis analysis provides a measurement of dynamic changes in the viscoelastic properties of the blood clot and gives permanent graphic documentation (Figure 1) [9]; the TEG tracing covers all phases of the hemostasis process, from the formation of the first fibrin strands to the fibrinolytic dissolution of the clot [4]. Figure 1 Characteristic thromboelastographic (TEG) tracings. A – Normal TEG tracing. B – Anticoagulation caused by coagulation factor deficiency or inhibition leading to a prolonged R time. C – Platelet dysfunction or pharmacological inhibition leading to a decreased maximum amplitude (MA). D – Hyperfibrinolysis. E – Hypercoagulability leading to shortened R and K times, with a concomitantly elevated α-angle\n\nBODY.AIM:\nThe aim of this study was to evaluate whether TEG parameters are able to detect the hypercoagulability state in patients undergoing laparoscopic fundoplication and to monitor the effect of the antithrombotic prophylaxis before, during and after laparoscopic fundoplication.\n\nBODY.MATERIAL AND METHODS:\nThis was a prospective randomized clinical study in which 106 patients undergoing elective laparoscopic fundoplication because of gastroesophageal reflux disease caused by hiatal hernia were studied. All the patients gave their written informed consent, and the Kaunas Regional Biomedical Research Ethics Committee approved the study (protocol no. BE-2-13). This randomized clinical study was also registered on the ISRCTN registry with trial ID ISRCTN62203940. All the patients were randomized into two groups (Figure 2). 
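For illustration only, the qualitative TEG patterns listed in the Figure 1 caption can be expressed as a small rule set. The sketch below is not part of the trial's methods; the reference ranges used as defaults are the "norm" values quoted later for R-time, K-time, α-angle and MA, and the function name and exact thresholds are assumptions.

```python
# Illustrative sketch only, not the trial's software. Default ranges are the
# "norm" values quoted in the article; names and thresholds are assumptions.

def interpret_teg(r_min, k_min, alpha_deg, ma_mm,
                  r_range=(5.0, 10.0), k_range=(1.0, 3.0),
                  alpha_range=(53.0, 72.0), ma_range=(50.0, 70.0)):
    """Return a coarse qualitative reading of a single TEG trace."""
    findings = []
    if r_min > r_range[1]:
        # Figure 1B: prolonged R suggests factor deficiency or anticoagulant effect
        findings.append("prolonged R: factor deficiency or anticoagulation")
    if ma_mm < ma_range[0]:
        # Figure 1C: reduced MA suggests platelet dysfunction or inhibition
        findings.append("low MA: platelet dysfunction or pharmacological inhibition")
    if r_min < r_range[0] and k_min < k_range[0] and alpha_deg > alpha_range[1]:
        # Figure 1E: shortened R and K with elevated alpha-angle suggests hypercoagulability
        findings.append("shortened R/K with elevated alpha-angle: hypercoagulability")
    return findings or ["within the quoted reference ranges"]

print(interpret_teg(r_min=4.2, k_min=0.8, alpha_deg=75.0, ma_mm=66.0))
```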
The first group of 55 patients received LMWH Bemiparinum (Zibor, Berlin Chemie, Luxembourg) 2500TV 0.2 ml subcutaneously 12 h before the operation, and 6 and 30 h after it. The second group of 51 patients received LMWH Bemiparinum 2500 TV 0.2 ml. subcutaneously 1 h before the operation. Both groups received intermittent pneumatic compression (IPC) during all laparoscopic fundoplication. The IPC was performed using \"Kendall SCD 700 Series\" apparatus. We acquired 5-mm-thick axial CT (GE Light Speed Pro) venograms from the ankle to the iliac wing after injection of 150 ml of 300 mg/ml contrast medium at a flow rate of 3 ml/s through an antecubital vein on the third postoperative day. Figure 2Study design\n\nBODY.MATERIAL AND METHODS.ANESTHESIA:\nAll operations were performed under standardized endotracheal anesthesia. Premedication: all patients received 7.5 mg of midazolam 1 h before transfer to the operating room. In the operating room standard monitoring was established including an electrocardiogram (five electrode system), peripheral oxygen saturation, and non-invasive blood pressure (\"S/5 Compact\" Datex-Ohmeda). Preoxygenation with 100% oxygen via a face mask for 3 min was followed by standardized induction of anesthesia with thiopentone 4–6 mg/kg, fentanyl 2.5 μg/kg, and pipecuronium 0.08 mg/kg, and anesthesia was subsequently maintained with isoflurane 0.6–0.7 of minimal alveolar concentration and fentanyl 1.0–1.25 μg/kg every 20–25 min. Lungs were artificially ventilated with \"S/5 Aespire\" Datex-Ohmeda (FiO2 0.40–0.45), respiration was monitored with \"S/5 Compact\" Datex-Ohmeda, and parameters of artificial lung ventilation were changed according to readings of capnography, keeping end tidal partial pressure of CO2 at 35–40 mm Hg. Neuromuscular blockade was reversed by atropine 1.0 mg and neostigmine 2.0 mg. All patients were extubated in the operating room. Infusion therapy: before insufflation all patients received intravenously 500 ml of Ringer's solution; the total amount of fluid during the operation was 1500 ml of Ringer's solution.\n\nBODY.MATERIAL AND METHODS.OPERATION:\nTwo experienced surgeons performed all the operations. Pneumoperitoneum was induced by inserting a Veress needle above the umbilicus and connecting it to the CO2 insufflator, which achieved and maintained an intraabdominal pressure of 12–14 mm Hg. The patients were moved to the reverse Trendelenburg position (45°) before intraabdominal manipulation. Five trocars were inserted into the peritoneal cavity at the typical sites – at the epigastrium, in the right subcostal area, two in the left subcostal area, and one on the middle abdominal line above the navel. The short gastric arteries were divided; the same procedure was performed on the gastric diaphragm ligament, releasing the gastric fundus. On the side of the small gastric curve, the liver-stomach ligament was cut, and the right diaphragm leg was prepared, followed by the left one. The mobilization of the esophagus was completed through bypassing it in the circular manner, freeing a 3–4 cm-long intraabdominal segment and removing the hernia sack. The diaphragm legs were then sutured using a 3–4-suture non-resorbing Ethibond (Ethicon, Johnson & Johnson Company, USA) behind the esophagus. The next step of surgery was the formation of a 360° (Nissen) antireflux wrap. It was formed by rolling a 2.5 cm long wrap of gastric fundus behind the abdominal portion of the esophagus, thus reinforcing the lower esophageal sphincter. 
The wrap was then fixed to the esophagus and the stomach was sutured to the front of the esophagus.\n\nBODY.MATERIAL AND METHODS.BLOOD COLLECTION AND PROCESSING:\nVenous blood samples for TEG analysis were taken from the forearm vein four times: before LMWH injection and before the operation; 1 h after the introduction of the laparoscope; 15 min after extubation; and on the third postoperative day. Blood was drawn by a clean venipuncture following a double-syringe technique and placed into vacutainer tubes containing 3.9% (0.129 M) trisodium citrate (Becton Dickinson Vacutainer Systems, Plymouth, UK) in the ratio of one part of citrate to nine parts of whole blood. Whole blood was transferred within 5 min to prewarmed cuvettes of the thromboelastograph. The TEG analysis was performed with the TEG 5000 Thromboelastograph Hemostasis Analyzer system (Haemonetics Corporation, UK). The coagulation process was activated using kaolin and 20 μl of calcium chloride as described in the literature. The TEG system's approach to the monitoring of patient hemostasis is based on the premise that the end result of the hemostasis process is a single product, which is the clot [4]. The TEG system measures the clot's physical properties by the use of a cylindrical cup that holds the 0.36 ml of blood as it oscillates through an angle of 4°45', with each rotation lasting 10 s (Figure 3). As the clot begins to form in the presence of the activator, the TEG records the viscoelastic changes that occur in the sample. Figure 3 Technique to perform the TEG test. A pin suspended in the blood within a stationary cup is monitored for motion as fibrin-platelet bonding occurs. This motion is related to the strength of the clot and is analyzed by a computer, which creates a graphical representation. The TEG is a point-of-care whole blood coagulation monitor which provides information on specific aspects of coagulation, including time to production of initial fibrin strands (R-time), time to develop a clot (R-time, K-time), rate of fibrin build-up and cross-linking (α-angle), maximum clot strength (maximum amplitude – MA) and measures of fibrinolysis (decreasing amplitude post-MA) [10] (Figure 4). Figure 4 TEG tracing – normal values of TEG parameters (celite-activated whole blood). R-time – reaction time, in minutes (norm: 5.0–10.0 min) – this denotes the time taken from the placement of the blood sample in the cuvette to the initial fibrin formation. K-time – clotting time (norm: 1.0–3.0 min), representing the time of fixed clot formation. α-angle – degrees; closely related to K (norm: 53.0–72.0°), this represents the rate of clot growth. MA parameter – maximum amplitude, millimeters (norm: 50.0–70.0 mm) – a measurement of the maximum strength of the developed clot. The thromboelastographic trace was analyzed for: R-time (reaction time, in minutes) – the time from the beginning of the trace until an amplitude of 2 mm is reached; this latency period runs from the placement of the blood sample in the cuvette to the initial fibrin formation. K-time (clotting time, in minutes) – it is the time from the end of R until a fixed clot strength is reached, i.e. 
amplitude of the trace is 20 mm.&-angle (clot formation rate) – it is affected primarily by the rate of thrombin generation.MA (maximum amplitude) – it is a measure of maximum strength of the clot and assesses the function of the platelets and to some extent fibrinogen also.\n\nBODY.MATERIAL AND METHODS.DEEP VEIN THROMBOSIS DIAGNOSIS:\nAll the patients underwent color duplex scan examination preoperatively and spiral CT venography with color duplex scan on the third postoperative day in order to detect possible DVT. One experienced radiologist performed all these examinations. Criteria for DVT diagnosis were: intraluminal filling defect or localized nonopacification of the venous segment.\n\nBODY.MATERIAL AND METHODS.STATISTICAL ANALYSIS:\nStatistical evaluation was conducted using descriptive analysis: normality test (Shapiro-Wilk W-test), the unpaired Mann-Whitney U test to compare data from two study groups, and the Wilcoxon test to investigate differences in TEG values in each group of patients. Data are expressed as mean ± SD. P-value < 0.05 was considered significant.\n\nBODY.RESULTS:\nThe patients in both groups were similar in terms of age, weight, height, gender, duration of surgery, and American Society of Anesthesiologists (ASA) class (Table I). There was no massive or minor bleeding during all laparoscopic operations. No drains were left after the operation. Computed tomography venography revealed popliteal vein thrombosis in 2 (3.6%) group I patients on the third postoperative day (Photo 1). All the patients left the hospital after an uneventful 3–5-day stay. Table I Patients’ demographic characteristics Value Group I ( N = 55) Group II ( N = 51) P -value Age [years] 55.27 ±14.25 55.24 ±14.65 0.886 Male/female ( n / n ) 18/37 16/35 0.881 BMI [kg/m 2 ] 27.03 ±5.04 26.72 ±4.82 0.463 Duration of surgery [min] 136.73 ±45.16 129.71 ±36.84 0.638 ASA class:  I 7 6 0.921  II 29 30 0.921  III 18 15 0.921  IV 1 0 0.661 Postoperative stay [days] 3.98 ±0.913 4.04 ±0.979 0.835 Values are mean ± SD. Photo 1Computed tomography venography revealed v. poplitea dex. thrombosis: A – transverse CT scan, B – coronal CT scan TEG values – R-time, K-time, α-angle and maximum amplitude – were not significantly different between the groups before LMWH injection, during and after laparoscopic fundoplication (Tables II–V). Table II R-time value changes in both groups Study groups Before LMWH injection 1 h after introduction of the laparoscope After extubation Third postoperative day I ( N = 55) 7.41 ±1.51 ♦ ■ ● 6.40 ±2.65 ♦ 6.28 ±1.90 ■ 6.09 ±1.44 ● II ( N = 51) 6.36 ±1.84 * ∞ 5.10 ±1.64 * 5.35 ±2.15 ∞ 5.87 ±2.12 ♦ p = 0.002 ■ p < 0.001 ● p < 0.001 * p = 0.010 ∞ p = 0.001. R-time (5.0–10.0 min). Table III K-time value changes in both groups Study groups Before LMWH injection 1 h after introduction of the laparoscope After extubation Third postoperative day I ( N = 55) 2.68 ±1.50 ♦ 2.59 ±1.97 2.82 ±1.61 ■ 2.22 ±1.04 ♦ ■ II ( N = 51) 2.13 ±1.29 2.01 ±1.57 2.12 ±2.25 2.14 ±1.71 ♦ p = 0.020 ■ p = 0.002. K-time (1.0–3.0 min). Table IV Alpha-angle value changes in both groups Study groups Before LMWH injection 1 h after introduction of the laparoscope After extubation Third postoperative day I ( N = 55) 55.12 ±11.35 55.88 ±14.02 54.93 ±13.91 57.25 ±12.14 II ( N = 51) 59.34 ±13.81 ■ ● 65.24 ±12.68 ■ 64.71 ±15.01 ● 62.35 ±15.12 ■ p = 0.010 ● p = 0.012. Alpha-angle (53.0–72.0°). 
Table V Maximum-amplitude value changes in both groups Study groups Before LMWH injection 1 h after introduction of the laparoscope After extubation Third postoperative day I ( N = 55) 64.86 ±8.97 66.13 ±9.33 63.17 ±8.20 ♦ 65.98 ±7.29 ♦ II ( N = 51) 63.13 ±11.85 ■ 61.11 ±22.29 64.41 ±7.21 66.20 ±7.03 ■ ♦ p = 0.015 ■ p = 0.028. Max. amplitude (50.0–70.0 mm). In group I, preoperative TEG R-values significantly decreased already 1 h after the introduction of the laparoscope, and remained at the same level until the third postoperative day (Table II). TEG K-time values decreased significantly only on the third postoperative day compared with the results before LMWH injection, during the surgery or after the operation (Table III). In group II, TEG R-time and K-time values did not change significantly during or after the laparoscopic operation (Tables II and III). There were no significant changes in α-angle and maximum amplitude values during or after laparoscopic fundoplication in both groups (Tables IV and V).\n\nBODY.DISCUSSION:\nThromboelastography is a viscoelastic hemostatic assay that measures the global visco-elastic properties of whole blood clot formation under low shear stress. Thromboelastography also is a method to monitor the hemostasis that is easy to use in the operation theatre, and the anesthesiologist is provided with rapid and accurate results. The formation of the clot is a net result of interaction between the cellular components of blood and coagulation proteins. The overall coagulation profile can be quantitatively and qualitatively interpreted in terms of the hypocoagulable, normal or hypercoagulable state of the sample and degree of lysis [11]. The TEG is a technique that gives a global overview of the coagulation cascade including fibrinolysis. Many workers have tried to correlate TEG parameters with the conventional coagulation profile, but it is not possible as the two techniques are different [11]. Commonly used blood tests are often aspecific, whereas the dosage of all plasma factors involved in coagulation and platelet activity is expensive and mostly not useful to individuate the clinical risk of hypercoagulability [12]. But to some extent, R correlates with activated partial thromboplastin time (APTT), and angle correlates with fibrinogen (and platelet function), MA correlates with platelet function (and fibrinogen) [13–18], and whole blood clot lysis index correlates with euglobulin lysis time and fibrin degradation products (FDP) [11]. It is already proven that laparoscopic surgery leads to postoperative activation of the coagulation system, which is a prerequisite for thromboembolic complications. Many factors can cause postoperative changes of hemostasis. These include the type of operation performed and the type of anesthesia, duration of the operation, and transfusion [19]. In laparoscopic surgery, the pneumoperitoneum reduces venous flow from the lower extremities by approximately 30% to 40% [19]. Therefore, the risk of DVT might be increased after laparoscopic surgery compared with open surgery. Topal et al. [19] showed that high intra-abdominal pressure led to a significant variation in hemostasis and coagulation compared with low intra-abdominal pressure. They used 10, 13 and 16 mm Hg intra-abdominal pressure with the standard anesthetic technique in this study. Conventional tests only monitor a single point of LMWH's effect on the coagulation cascade, whilst TEG can be used to assess the full anticoagulant effects of LMWH [20]. 
The LMWHs are unique in their ability to preferentially inhibit factor Xa more than factor IIa. Low-molecular-weight heparin has multiple potential effects on the coagulation pathway including antifactors IIa, IXa and Xa activity, inhibition of thrombin-induced activation of factors V and VIII, platelet function inhibition, and enhanced fibrinolysis and endothelial secretion of tissue factor pathway inhibitor [20–22]. Klein et al. [22] reported that thromboelastographic R value correlates significantly with the peak and trough levels of anti-Xa activity after the administration of LMWH. Anti-factor Xa activity is not advised as a routine screening test as it does not reliably predict the risk of peri-operative bleeding [23, 24]. Our study compared the occurrence of DVT and thromboelastometric parameter changes during and after laparoscopic fundoplication with different pharmacologic prevention: low-molecular-weight heparin administered 1 h or 12 h before the surgery. Bemiparin is a LMWH indicated for the acute treatment of deep vein thrombosis with or without pulmonary embolism, for the prophylaxis of venous thromboembolism in surgical and nonsurgical patients and for the prevention of clotting in the extracorporeal circuit during hemodialysis [25]. Bemiparin is the second generation LMWH with the lowest molecular weight, the longest half-life [16] and the highest anti-factor Xa/anti-factor IIa activity ratio. Our study results demonstrated that the low-molecular-weight heparin protected from possible operation-induced hypercoagulation when it was administered 1 h before laparoscopic fundoplication – we found no difference in R- and K-time, MA, and α-angle compared with baseline values in thromboelastographic examination. Despite the IPC action, when LMWH was administered 12 h before surgery, significant intraoperative and postoperative shortening of R time and K time on the third postoperative day and no difference in MA and a-angle were detected. These findings suggests that the hypercoagulability state is present, which starts already during laparoscopic fundoplication and continues some days after it. These results of ours nicely correspond with LMWH pharmacokinetic data: bemiparin is rapidly absorbed after subcutaneous administration, attaining maximal plasma anti-Xa activity within 2–6 h. Postoperative administration of bemiparin (6 and 30 h after fundoplication) does not protect from intervention-induced hypercoagulation as R- and K-time in this patient group remained shortened even on the third postoperative day. These results of ours demonstrate that prevention of operation-induced hypercoagulability is of crucial importance and it can be achieved with timely LMWH administration (one hour before laparoscopic surgery). Caprini et al. [26] investigated TEG preoperatively and on the day following the operation in 100 laparoscopic cholecystectomy cases. They found significant postoperative shortening of R and K but no difference in MA. They suggested that postoperative coagulation asthenia should be attributed not to platelets but to coagulation activation. This was considered to reflex excessive enhancement of the activation of coagulation factors. Forfori et al. [12] analyzed the correlation between the variation of normal TEG parameters and the dosage amount of anticoagulant therapy adjusted to body weight. They found no correlation between the variations of all screened TEG parameters from basal to postoperative and pro/kilogram dosage of LMWH administered. 
The authors suggest that LMWH prophylaxis based on weight produces a feasible but not a more efficient anticoagulation profile than standard LMWH therapy. The exact incidence of thromboembolic complications after laparoscopic fundoplication is unknown, and we could not find studies on the dynamics of intraoperative and postoperative coagulation during laparoscopic fundoplications. Our study identified two cases of calf deep vein thrombosis (3.6%) only in the group in which LMWH was administered 12 h before laparoscopic surgery. Keeping in mind that patients after laparoscopic fundoplication are mobile within a few hours after surgery, we can speculate that calf deep vein thrombosis in these two patients occurred during the operation. According our study results, we can also assume with caution that prophylaxis with intraoperative IPC and LMWH 12 h before surgery is not efficient enough to protect against the hypercoagulation state, present during laparoscopic fundoplication. On the other hand, the main weakness of our study is that the number of patients is too small in both groups, so it is impossible to find a significant difference in the rate of DVT between the groups. Venous stasis in laparoscopic operations can be minimized by reducing intraabdominal pressure during the operation. Combination of LMWH 1 h before laparoscopic surgery and intraoperative IPC generates a hypocoagulation effect more effective to prevent deep-vein thrombosis during or after the laparoscopic operation.\n\nBODY.CONCLUSIONS:\nMonitoring TEG parameters can be a feasible practice to observe the coagulation status in patients undergoing laparoscopic surgery. Our first group patients presented a tendency to the hypercoagulability state, determined mostly by the shortening of TEG R-time. Comparison of TEG values between the study groups indicates that prophylaxis with LMWH 1 h before surgery together with IPC protects patients from surgery-induced hypercoagulation, and this lasts at least for three days. Our recommendation is that LMWH, as a DVT prophylactic measure, has to be administered 1 h before the laparoscopic operation to ensure the drug's optimal effect.\n\n**Question:** Compared to Low-molecular-weight heparin only before the surgery what was the result of Low-molecular-weight heparin before and after the surgery on Alpha-angle values?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
831
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Behavioural recovery after treatment for varicose veins\n\n ABSTRACT.BACKGROUND:\nThe aim of this study was to assess behavioural recovery from the patient's perspective as a prespecified secondary outcome in a multicentre parallel‐group randomized clinical trial comparing ultrasound‐guided foam sclerotherapy (UGFS), endovenous laser ablation (EVLA) and surgery for the treatment of primary varicose veins.\n\nABSTRACT.METHODS:\nParticipants were recruited from 11 UK sites as part of the CLASS trial, a randomized trial of UGFS, EVLA or surgery for varicose veins. Patients were followed up 6 weeks after treatment and asked to complete the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This is a 15‐item instrument that covers eight activity behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven participation behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective.\n\nABSTRACT.RESULTS:\nA total of 798 participants were recruited. Both UGFS and EVLA resulted in a significantly quicker recovery compared with surgery for 13 of the 15 behaviours assessed. UGFS was superior to EVLA in terms of return to full‐time work (hazard ratio 1·43, 95 per cent c.i. 1·11 to 1·85), looking after children (1·45, 1·04 to 2·02) and walks of short (1·48, 1·19 to 1·84) and longer (1·32, 1·05 to 1·66) duration.\n\nABSTRACT.CONCLUSION:\nBoth UGFS and EVLA resulted in more rapid recovery than surgery, and UGFS was superior to EVLA for one‐quarter of the behaviours assessed. The BRAVVO questionnaire has the potential to provide important meaningful information to patients about their early recovery and what they may expect to be able to achieve after treatment.\n\nBODY.INTRODUCTION:\nMinimally invasive treatments for varicose veins such as ultrasound‐guided foam sclerotherapy (UGFS) and thermal ablation techniques have become widely used alternatives to surgery for the treatment of varicose veins. One of the advantages of these techniques is the reported quicker return to normal activities, particularly following UGFS1, 2, 3. However, it is unclear whether thermal ablation, in particular endovenous laser ablation (EVLA), is also associated with a clinically significant quicker return to normal activities compared with surgery; some studies4, 5 have shown an earlier return and others2, 6, 7, 8 no difference. Until recently, there was no standard means of assessing recovery from the patient's perspective. This led to the use of varying definitions such as return to 'normal activities', 'full activity', 'daily activity' or 'basic physical activities' and/or 'return to work' in previous studies. 
This lack of standardization led the authors to develop a 15‐item questionnaire to assess distinct aspects of normal activities that were identified as important by patients9 – the Behavioural Recovery After treatment for Varicose Veins (BRAVVO) questionnaire. This paper reports behavioural recovery results from a multicentre parallel‐group randomized clinical trial (CLASS, Comparison of LAser, Surgery and foam Sclerotherapy) that compared the clinical efficacy and cost‐effectiveness of three treatment modalities: UGFS, EVLA with delayed foam sclerotherapy to residual varicosities if required, and surgery. Behavioural recovery was one of the prespecified secondary outcomes of the CLASS trial. The clinical and cost‐effectiveness results have been reported elsewhere10, 11.\n\nBODY.METHODS:\nPatients were recruited from 11 centres in the UK between November 2008 and October 2012. This study (ISRCTN51995477) had research ethics committee and Medicines and Healthcare products Regulatory Authority approval. Eight centres randomized participants to one of three treatment options, and three centres offered only UGFS and surgery. Participants were randomized between the treatments with even allocation, using a minimization algorithm that included centre, age (less than 50 years, 50 years or more), sex, great saphenous vein (GSV) or small saphenous vein (SSV) reflux, and unilateral or bilateral disease. Inclusion criteria were: age over 18 years; primary unilateral or bilateral symptomatic varicose veins (Clinical Etiologic Anatomic Pathophysiological (CEAP) grade C2 or above); GSV and/or SSV involvement; and reflux exceeding 1 s on duplex ultrasonography. Exclusion criteria were: current deep vein thrombosis; acute superficial vein thrombosis; a GSV or SSV diameter smaller than 3 mm or larger than 15 mm; tortuous veins considered unsuitable for EVLA or stripping; and contraindications to UGFS or to general/regional anaesthesia that would be required for surgery.\n\nBODY.METHODS.TREATMENTS:\nThe treatments have been described in detail elsewhere9, 10. For UGFS, foam was produced using the Tessari technique12 using a ratio of 0·5 ml sodium tetradecyl sulphate to 1·5 ml air (3 per cent for GSV/SSV truncal veins, 1 per cent for varicosities; maximum 12 ml foam per session). EVLA of GSVs/SSVs was performed under local anaesthetic, and patients were offered UGFS to any residual varicosities at 6‐week follow‐up if required, with the exception of one centre that performed concurrent phlebectomies. Surgery in the form of proximal GSV/SSV ligation and stripping (all GSV) and concurrent phlebectomies was performed under general or regional anaesthetic as a day‐case procedure. Compression stockings were applied after all three treatments.\n\nBODY.METHODS.POST‐TREATMENT ACTIVITY:\nAll participants were given a study patient information leaflet (PIL), which recommended a return to all normal activities as soon as they were able, but that strenuous activity/contact sport should be avoided for 1–2 weeks. The PIL specifically stated that following EVLA or UGFS 'most people are able to return to work within 2–3 days of treatment, but some people go back the following day or even the same day', and that following surgery 'people can return to office or sedentary work after 2–3 days; and that most people will be back at work within a week after surgery to one leg and 2 weeks after surgery to both legs; but there is no reason to remain off work as long if it can be managed with reasonable comfort'. 
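The Methods above name the minimization factors (centre, age group, sex, GSV or SSV reflux, and laterality) but do not describe the allocation algorithm itself. As a hedged illustration, here is a minimal Pocock-Simon-style sketch assuming equal factor weights and deterministic assignment to the least-imbalanced arm; the data structures and function are hypothetical and are not taken from the CLASS trial.

```python
# Minimal Pocock-Simon-style minimization sketch (an assumption for illustration,
# not the CLASS trial's actual allocation software).
from collections import defaultdict

ARMS = ["UGFS", "EVLA", "surgery"]
FACTORS = ["centre", "age_group", "sex", "vein", "laterality"]

# counts[(factor, level)][arm] = patients already allocated to that arm with that level
counts = defaultdict(lambda: {arm: 0 for arm in ARMS})

def allocate(patient, arms=ARMS):
    """Assign the arm that minimizes total marginal imbalance across the factors."""
    def imbalance_if(arm):
        total = 0
        for factor in FACTORS:
            level_counts = counts[(factor, patient[factor])]
            hypothetical = {a: level_counts[a] + (1 if a == arm else 0) for a in arms}
            total += max(hypothetical.values()) - min(hypothetical.values())
        return total

    chosen = min(arms, key=imbalance_if)
    for factor in FACTORS:
        counts[(factor, patient[factor])][chosen] += 1
    return chosen

print(allocate({"centre": "site01", "age_group": "<50", "sex": "F",
                "vein": "GSV", "laterality": "unilateral"}))
```

Centres that offered only two treatments could pass a restricted arms list, and a real implementation would typically add a random element when arms are tied.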
Participants undergoing UGFS or EVLA were advised to wear compression stocking for 10 days constantly (day and night). Those in the surgery group were advised that bandages would be removed the day after operation, following which they should wear a stocking for 10 days, but that it was reasonable to remove the stocking after 4 or 5 days, providing that they were active.\n\nBODY.METHODS.DATA COLLECTION:\nThe participants were asked to complete the BRAVVO questionnaire along with other study questionnaires (Aberdeen Varicose Vein Questionnaire, EQ‐5DTM (EuroQoL, Rotterdam, The Netherlands) and Short Form 36 (QualityMetric, Lincoln, Rhode Island, USA)) at the 6‐week follow‐up appointment. Participants who failed to attend the 6‐week appointment were sent the questionnaire to complete at home. The BRAVVO questionnaire was developed as an instrument to assess the activity and participation components of the World Health Organization International Classification of Disability and Function model13. Variation in activity and participation is not fully explained by impairment and so these constructs are important additional indicators of health outcome. An interview study involving 17 patients who had recently undergone varicose vein treatment was carried out to identify normal activities and 'milestone' behaviours to incorporate into the questionnaire. In addition to sampling from the three treatment options, diversity sampling was used in an attempt to gain a mix of sex, age and rural–urban location. Seventeen interview transcripts were content‐analysed in four stages to identify appropriate items to include in a questionnaire. Full details of this process have been published previously9. The BRAVVO questionnaire assesses the time taken for patients to return to performing 15 behaviours: eight 'activity' behaviours (tasks or actions an individual is capable of doing in an idealized situation) and seven 'participation' behaviours (what the individual does in an everyday, real‐world situation) that were identified to be important from the patient's perspective9. Fig. 1 shows the question layout. Figure 1Question layoutBJS-10081-FIG-0001-c\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData from the BRAVVO questionnaire were analysed within an interval‐censored time‐to‐event framework using flexible parametric survival models14. For each behaviour item, each participant's response was converted into the number of days to return to that behaviour. If a participant indicated that return to the behaviour was on the day of the procedure, this was assumed to be interval‐censored between day 0 and day 1. If a participant indicated return to the behaviour was after a number of weeks, not days, this was assumed to be interval‐censored between the previous week and the week indicated. For example, if a participant reported 5 weeks, it was assumed that the return to the behaviour took place between 28 and 35 days. A participant who indicated that they had not returned to a behaviour that they usually performed was right‐censored at 42 days. Participants who indicated that they did not normally perform a specific behaviour were not included in analysis of that behaviour. No missing data were imputed. Data are reported as the number of days for 50 and 90 per cent of participants to return to each behaviour, estimated from the parametric survival models (the 50 per cent value represents the median time to return to this behaviour). 
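The censoring rules described in the Statistical analysis section above amount to a small data-preparation step. The sketch below encodes them (return on the day of the procedure becomes the interval 0-1 day; an answer given in weeks becomes the interval from the previous week to the week indicated; a behaviour not yet resumed is right-censored at 42 days). It is an illustrative reconstruction in Python rather than the trial's Stata code, and the input format, function name and the treatment of exact day counts are assumptions.

```python
# Illustrative reconstruction of the censoring rules described in the text;
# not the trial's Stata code. The input convention is an assumption.

def censoring_interval(response):
    """Return (lower, upper) day bounds; upper=None means right-censored at 42 days."""
    kind, value = response
    if kind == "same_day":          # returned on the day of the procedure
        return (0, 1)
    if kind == "days":              # assumption: an exact day count is used as reported
        return (value, value)
    if kind == "weeks":             # e.g. 5 weeks -> between 28 and 35 days
        return ((value - 1) * 7, value * 7)
    if kind == "not_returned":      # behaviour usually performed but not yet resumed
        return (42, None)
    raise ValueError(f"unknown response type: {kind}")

print(censoring_interval(("weeks", 5)))            # (28, 35)
print(censoring_interval(("not_returned", None)))  # (42, None)
```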
Extrapolation beyond the 42‐day cut‐off was performed for behaviours where 90 per cent of participants had not returned to the behaviour by 42 days. Treatment effects are presented as hazard ratios with associated 95 per cent c.i. All analyses were carried out in Stata® 1215. Flexible parametric survival models were fitted using the stpm package16.\n\nBODY.RESULTS:\nSeven hundred and ninety‐eight participants were recruited, of whom 13 were ineligible (for example because they had recurrent veins or veins larger than 15 mm in diameter) after randomization and were considered postrandomization exclusions (Fig. \n2). The groups were well balanced in terms of demographic characteristics at baseline, but there was an increased incidence of deep venous reflux in the foam group compared with the surgery group (P = 0·005) (Table \n1). Of the 670 participants who completed the 6‐week questionnaire, 655 completed at least one of the BRAVVO questions. Completion rates were slightly lower for the questions about going out socially (74·8 per cent) and sporting activity (66·0 per cent), which may not have been relevant to all participants. For all behaviours, except wearing clothes that show the leg, going out socially and sporting activities, over 95 per cent of participants had returned to normal behaviour within 6 weeks of intervention. Figure 2CONSORT diagram for the trial. Reasons for postrandomization exclusion included: recurrent varicose veins and veins larger than 15 mm. Reasons for withdrawal from follow‐up included: patient decided not to proceed with treatment (and also declined follow‐up), declined follow‐up after treatment or did not wish to complete questionnaires. UGFS, ultrasound‐guided foam sclerotherapy; EVLA, endovenous laser ablationBJS-10081-FIG-0002-c Table 1 Demographic details at recruitment EVLA UGFS Surgery ( n  = 210) ( n  = 286) ( n  = 289) Age (years) * \n 49·7 (18–80) 49·0 (19–78) 49·2 (22–85) Sex ratio (F : M) 120 : 90 162 : 124 163 : 126 Body mass index (kg/m 2 ) * \n 27·0 (17–42) 27·1 (17–44) 27·7 (17–44) Unilateral disease 153 (72·9) 215 (75·2) 196 (67·8) Employment status Self‐employed 21 (10·2) 37 (13·0) 29 (10·3) Employed 120 (58·3) 169 (59·3) 179 (63·5) Other 65 (31·6) 79 (27·7) 74 (26·2) Unknown 4 1 7 Saphenous vein involvement Great saphenous 182 (86·7) 232 (81·1) 239 (82·7) Small saphenous 14 (6·7) 21 (7·3) 21 (7·3) Great and small saphenous 14 (6·7) 33 (11·5) 29 (10·0) Deep vein reflux 28 of 205 (13·7) 47 of 280 (16·8) 25 of 282 (8·9) CEAP classification C2, varicose veins over 3 mm 113 (54·1) 169 (59·1) 147 (51·2) C3, oedema 28 (13·4) 35 (12·2) 39 (13·6) C4, skin/subcutaneous changes 56 (26·8) 74 (25·9) 90 (31·4) C5/C6, healed/active venous ulcer 12 (5·7) 8 (2·8) 11 (3·8) Unknown 1 0 2 Values in parentheses are percentages unless indicated otherwise; * values are mean (range). EVLA, endovenous laser ablation; UGFS, ultrasound‐guided foam sclerotherapy; CEAP, Clinical Etiologic Anatomic Pathophysiologic. \n\nBODY.RESULTS.ULTRASOUND‐GUIDED FOAM SCLEROTHERAPY :\nParticipants randomized to UGFS recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n2\n). The two behaviours for which there was no evidence of a difference in the time to recover between the trial arms were 'having a bath or shower' and 'wearing clothes that show the legs'. 
In general, the median time to return to the activity behaviours was 5 days or less for those randomized to UGFS and up to 9 days for those randomized to surgery. In both groups, there was greater variation in the median time to return to the participation behaviours than the activity behaviours. Table 2 Behavioural recovery: ultrasound‐guided foam sclerotherapy versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n UGFS Surgery Activity items Bending the legs without discomfort 50 3·0 4·6 1·38 (1·14, 1·67) 90 14·1 21·3 Lifting heavy objects without discomfort 50 4·8 9·8 1·97 (1·59, 2·44) 90 16·9 34·5 Moving from standing to sitting without discomfort 50 1·9 3·7 1·63 (1·35, 1·97) 90 9·3 17·5 Standing still for a long time (> 15 min ) without discomfort 50 3·9 7·1 1·67 (1·36, 2·05) 90 15·8 28·7 Walking short distances (< 20 min ) without discomfort 50 1·9 4·4 2·00 (1·65, 2·42) 90 8·2 19·1 Walking long distances (> 20 min) 50 4·5 8·0 1·76 (1·45, 2·14) 90 15·2 27·1 Having a bath or shower 50 5·4 4·9 0·85 (0·70, 1·03) 90 11·4 10·3 Driving a car 50 4·1 7·0 1·78 (1·45, 2·19) 90 12·4 21·1 Participation items Doing housework 50 2·1 4·5 2·10 (1·72, 2·56) 90 7·3 15·7 Looking after children 50 1·2 3·5 2·20 (1·61, 3·00) 90 6·2 17·9 Wearing clothes that show the legs 50 12·4 12·8 1·03 (0·78, 1·35) 90 56·6 58·7 Partial return to normal work/employment 50 4·4 9·9 2·16 (1·72, 2·72) 90 15·4 34·2 Full return to normal work/employment 50 4·8 11·7 2·56 (2·05, 3·21) 90 14·9 36·2 Going out socially 50 7·1 9·3 1·29 (1·06, 1·57) 90 25·8 34·0 Sporting activity or exercise 50 15·7 21·8 1·33 (1·05, 1·68) 90 62·6 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.RESULTS.ENDOVENOUS LASER ABLATION :\nParticipants randomized to EVLA recalled being able to carry out 13 of the 15 behaviours significantly more quickly than those randomized to surgery (Table \n3). Return to 'having a bath or shower' was quicker after surgery than after EVLA. There was no difference in time to return to the participation behaviour of 'wearing clothes that show the legs'. 
Table 3 Behavioural recovery: endovenous laser ablation versus surgery Proportion carrying out behaviour (%) Time until specified proportion of participants could carry out behaviour (days) * \n Hazard ratio † \n EVLA Surgery Activity items Bending the legs without discomfort 50 2·7 4·6 1·49 (1·19, 1·75) 90 12·6 21·3 Lifting heavy objects without discomfort 50 5·9 9·8 1·79 (1·39, 2·27) 90 20·5 34·5 Moving from standing to sitting without discomfort 50 2·2 3·7 1·56 (1·27, 1·96) 90 10·4 17·5 Standing still for a long time (> 15 min) without discomfort 50 4·8 7·1 1·41 (1·11, 1·79) 90 20·0 28·7 Walking short distances (< 20 min) without discomfort 50 3·0 4·4 1·30 (1·04, 1·61) 90 13·2 19·1 Walking long distances (> 20 min) 50 5·6 8·0 1·53 (1·06, 1·67) 90 19·8 27·1 Having a bath or shower 50 5·5 4·9 0·74 (0·59, 0·93) 90 12·8 10·3 Driving a car 50 4·4 7·0 1·82 (1·43, 2·33) 90 12·7 21·1 Participation items Doing housework 50 2·5 4·5 1·89 (1·49, 2·38) 90 8·4 15·7 Looking after children 50 1·9 3·5 1·61 (1·15, 2·27) 90 8·8 17·9 Wearing clothes that show the legs 50 14·6 12·8 0·97 (0·69, 1·35) 90 75·1 58·7 Partial return to normal work/employment 50 6·3 9·9 1·75 (1·33, 2·27) 90 21·1 34·2 Full return to normal work/employment 50 7·7 11·7 1·79 (1·37, 2·27) 90 23·5 36·2 Going out socially 50 6·9 9·3 1·41 (1·12, 1·75) 90 23·9 34·0 Sporting activity or exercise 50 14·2 21·8 1·47 (1·12, 1·92) 90 55·5 86·7 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the surgery arm. EVLA, endovenous laser ablation. There were no differences in the time taken to return to 11 of the 15 behaviours between participants randomized to EVLA and those randomized to UGFS (Table \n4). Return to 'walking short distances without discomfort', 'walking long distances', 'looking after children' and 'full return to normal work/employment' took longer for the EVLA group than the UGFS group. Following UGFS or EVLA only one‐third of the specific behaviours could be carried out by 50 per cent of participants by 3 days after treatment. 
Table 4 Behavioural recovery: endovenous laser ablation versus ultrasound‐guided foam sclerotherapy Proportion carrying out behaviour (%) Time until specified proportion of participants can carry out behaviour (days) * \n Hazard ratio † \n EVLA UGFS Activity items Bending the legs without discomfort 50 2·7 3·0 0·94 (0·75, 1·17) 90 12·6 14·1 Lifting heavy objects without discomfort 50 5·9 4·8 1·11 (0·87, 1·42) 90 20·5 16·9 Moving from standing to sitting without discomfort 50 2·2 1·9 1·12 (0·90, 1·40) 90 10·4 9·3 Standing still for a long time (> 15 min) without discomfort 50 4·8 3·9 1·14 (0·90, 1·44) 90 20·0 15·8 Walking short distances (< 20 min) without discomfort 50 3·0 1·9 1·48 (1·19, 1·84) 90 13·2 8·2 Walking long distances (> 20 min) 50 5·6 4·5 1·32 (1·05, 1·66) 90 19·8 15·2 Having a bath or shower 50 5·5 5·4 1·19 (0·96, 1·48) 90 12·8 11·4 Driving a car 50 4·4 4·1 0·95 (0·74, 1·21) 90 12·7 12·4 Participation items Doing housework 50 2·5 2·1 1·03 (0·82, 1·29) 90 8·4 7·3 Looking after children 50 1·9 1·2 1·45 (1·04, 2·02) 90 8·8 6·2 Wearing clothes that show the legs 50 14·6 12·4 1·17 (0·83, 1·64) 90 75·1 56·6 Partial return to normal work/employment 50 6·3 4·4 1·17 (0·89, 1·52) 90 21·1 15·4 Full return to normal work/employment 50 7·7 4·8 1·43 (1·11, 1·85) 90 23·5 14·9 Going out socially 50 6·9 7·1 0·88 (0·70, 1·10) 90 23·9 25·8 Sporting activity or exercise 50 14·2 15·7 0·80 (0·61, 1·04) 90 55·5 62·6 Values in parentheses are 95 per cent c.i. * The 50 per cent value is equivalent to the median time to return to the behaviour. † A hazard ratio greater than 1·00 shows that return to the behaviour took longer in the endovenous laser ablation (EVLA) arm. UGFS, ultrasound‐guided foam sclerotherapy. \n\nBODY.DISCUSSION:\nThis study showed that both UGFS and EVLA resulted in a more rapid recovery compared with surgery for 13 of the 15 behaviours. UGFS was superior to EVLA in terms of return to full time work, looking after children and walking (both short and long distances). Importantly, the specific behaviours assessed were shown to have a range of different recovery trajectories. Previous randomized clinical trials showed behavioural recovery to be more rapid following UGFS compared with surgery1, 2, but the benefit of EVLA over surgery was less clear2, 4, 8 . In this study, for all but two behaviours (wearing clothes that showed the legs and showering/bathing) the recovery was quicker following UGFS or EVLA compared with surgery. These findings may have arisen as a result of information contained in the study PIL, which recommended that compression hosiery was worn continuously for 10 days following UGFS or EVLA but for 4–5 days routinely after surgery. In the comparison between UGFS and EVLA, behavioural recovery was faster following UGFS for four of the 15 behaviours; there was no difference between the groups for the other behaviours. Two previous randomized trials2, 3 showed earlier return to 'normal activities' in patients undergoing UGFS compared with EVLA. Specifically, the present study showed a quicker return to full‐time work following UGFS, similar to the findings of Rasmussen and colleagues2. The median time taken to return to work following EVLA (7·7 days) was within the ranges reported2, 4, 5, 6, 7, 8. However, Rasmussen and colleagues2 reported earlier return to work after UGFS compared with the present study (median 2·9 versus 4·8 days respectively). 
A partial explanation of the difference between the two studies may be that, unlike the previous study, the present analysis did not correct for weekends. For other behaviours, the recalled recovery times following both UGFS and EVLA were longer than might be expected from the literature2, 3, 4, 5, 8. This may be explained by the timing of the questionnaire at 6 weeks, and thus it is the nature of the differences between treatment groups rather than the absolute timings taken to return to these activities that the authors wish to highlight in this paper. The extent of this overall delay in recovery is hard to justify, particularly in light of the standard information and advice given in the study PIL. There may have been a number of external influences affecting participants' recollection of their recovery, including misinformation and fear. Although attitudes to recovery and return to normal behaviours have changed in secondary care, this may not have filtered into primary care or 'public knowledge'. Fear of activity or fear of pain caused by activity has been documented following surgery for other conditions17, 18. It is possible that some people undergoing treatment for varicose veins experience similar fears, and this may limit or restrict their activity following treatment. With regard to return to work, there are clearly a number of additional factors that might play a role, such as a person's employment status (employed or self‐employed), the sickness benefits they are entitled to, the type of work they are employed to do, how long they are 'signed off' by the doctor, and the views of their employer on return to work after an operation. It should be noted that this study distinguished between partial and full return to work, and that no difference was noted in partial return to work following UGFS and EVLA. This finding may be of substantial importance to patients, their employers and the economy as a whole. The main strength of this study is that the behaviours investigated were based on systematic investigation of the recovery milestones that are important to patients following treatment for varicose veins. Hence, the findings are of personal importance from a patient perspective. Distinguishing between the behaviours that contribute to 'normal activity' helps build a profile of recovery that may be particularly useful for patients preparing for, or recovering from, treatment. Furthermore, the methodology used to develop the BRAVVO questionnaire could be used in other conditions to provide normative information about behavioural recovery that is relevant to patients. The BRAVVO questionnaire was pilot tested and found to be acceptable to patients, comprehensible and appropriate for self‐completion. Despite this, a potential weakness of the study is that the level of missing data in the BRAVVO questionnaire was higher for two of the questions. Further work to reformat or rephrase the questions or response options may help minimize levels of missing data. A further potential weakness is the choice of assessment time point (6 weeks after treatment). This may have compromised recall, particularly for behaviours that participants were able to return to a short time after treatment; however, any compromise in recall is likely to have affected the three treatment groups equally. Other study outcomes were assessed at 6 weeks, and behavioural recovery was assessed at the same time point to minimize participant burden. 
Further work is required to determine the optimal timing(s) of this questionnaire. Given that the median time to return to the behaviour was less than 14 days for 13 of the behaviours, and up to 22 days for the other two (wearing clothes that show the legs, sporting activity or exercise), the use of the questionnaire at approximately 2–3 weeks would seem appropriate.\n\n**Question:** Compared to surgery what was the result of Ultrasound?guided foam sclerotherapy on Driving a car?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ]
significantly increased
1222
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Effect of acupressure on fatigue in patients on hemodialysis\n\n ABSTRACT.BACKGROUND:\nFatigue is considered as a major problem in hemodialysis patients and can impair their quality of life. The purpose of this study was to investigate the effectiveness of acupressure on fatigue in hemodialysis patients.\n\nABSTRACT.MATERIALS AND METHODS:\nThis is a clinical trial study in which 96 hemodialysis patients participated. Patients were randomly assigned into acupressure, placebo, and control groups (32 subjects fulfilling the inclusion criteria assigned to each group). The measures included the form of demographic characteristics, visual analog scale of fatigue, and Piper Fatigue Scale. Patients in the acupressure and placebo groups received acupressure intervention during the early 2 h of dialysis on six acupoints with massage for 20 min/day, 3 days per week for 4 weeks. In the placebo group, acupressure intervention was performed as mentioned above with a distance of 1 cm away from the actual intervention site. Patients in the control group received routine unit care only. Chi- quare test, Kruskal-Wallis, paired t-test, one-way analysis of variance (ANOVA), and Duncan test were used for data analysis.\n\nABSTRACT.RESULTS:\nOne-way ANOVA tests showed significant differences in the total mean score of fatigue and fatigue mean scores in the behavioral, emotional, sensory, and cognitive dimensions in the acupressure, placebo, and control groups.\n\nABSTRACT.CONCLUSION:\nThe results of this study showed that acupressure may reduce fatigue in hemodialysis patients, and use of this non-pharmacologic technique for hemodialysis nurses is suggested.\n\nBODY.I:\nOne of the very common chronic diseases is chronic renal failure,[1] which may lead to end-stage renal disease (ESRD) as a part of that.[2] Its yearly incidence has been reported 330 cases out of 1 million in the US and 253 cases out of 1 million people in Iran. Based on the last statistics in 2007, there are 16,600 hemodialysis patients in Iran.[3] ESRD patients need renal replacement therapy to survive,[4] and hemodialysis is the most prevalent conventional renal replacement therapy in Iran.[5] ESRD patients undergoing chronic hemodialysis suffer from numerous complications including fatigue.[6] The range of fatigue prevalence has been reported to be 60%-97% among hemodialysis patients.[7] This sign is a complicated concept and its diagnosis and assessment is difficult for the nurses.[8] In ESRD patients, untreated fatigue may highly affect the quality of life and lead to patients' increased dependency on others, weakness, loss of physical and psychological energy, social isolation, and depression. 
The elements that can affect the level of fatigue include depression, anemia, sleep disorders, and restless leg syndrome.[9] Medicational interventions such as consumption of growth hormone, anti-depressant and anti-anxiety medications, and levocarnitine and erythropoietin stimulating hormone, as well as non-medicational interventions such as nutrition therapy, sleep disorder treatment, stress management, sport, Yoga, depression treatment, drug abuse, and acupressure are used to lower hemodialysis patients' fatigue.[7] Because of medications' side effects and incomplete relief of fatigue after taking anti-fatigue medications, the patients are driven to complementary medicine methods such as acupressure medicine.[10] Based on the reported researches, acupressure medicine is one of the interventions with the highest application by nurses in clinical settings,[11] and is considered as a clinical and comprehensive nursing intervention.[12] This is also among the manual treatments for which the nurses have an excellent position to use.[13] Acupressure medicine, as a healing art, is beautifully integrated with nursing[14] as it is non-invasive and has advantages such as cost efficacy, lack of complications, no need for special tools, and the ease of learning for the patients and their accompanying persons.[12] The only study conducted on hemodialysis patients' fatigue was Cho and Tsay's study (2004) in Taiwan, which showed that acupressure medicine can lower the level of fatigue in patients of experimental group compared to control group.[8] Nasiri et al. (2007) in a study on the association between the effect of acupressure medicine and sleep quality of hemodialysis patients showed a significant difference in the sleep quality of experimental and control groups based on Pittsburgh Sleep Scale.[3] Previous studies include a systematic review on the effect of acupressure medicine on hemodialysis patients' signs including fatigue,[15] and meta-analysis studies on the effect of acupressure medicine on labor pain management in labor unit[16] as well as its effect on post-operation vomiting and nausea in adults.[17] Their results showed that despite poor methodology, high risk of bias, and low number of subjects, it cannot be absolutely indicated that acupressure medicine is effective on patient's signs. Tsay (2004) in a study on the effect of acupressure medicine on hemodialysis patients' fatigue stated that with regard to cultural and lifestyle differences of the patients in various countries, the results of these studies cannot be generalized for the patients in other geographic areas.[18] Therefore, with regard to the difference between the Iranian culture and the culture in other countries,[19] and as no study has been conducted on the effect of acupressure medicine on hemodialysis patients' fatigue on the one hand and due to the high application of acupressure medicine in clinical nursing interventions on the other hand (as this type of medicine can provide the patients with a cost-effective and safe nursing intervention), this study was conducted as there was no need of any special tools but just trained finger tips and it could be conducted within a shorter time. Therefore, the researcher decided to conduct a study with the goal of defining the effect of acupressure on hemodialysis patients' fatigue.\n\nBODY.M:\nThis is a three-group clinical trial. 
Study population included all ESRD patients undergoing chronic hemodialysis in three hemodialysis centers of Nour, Alzahra, and Shariati, hospitals. The subjects were selected from the study population based on the inclusion criteria. Inclusion criteria were age of over 18, diagnosis of ERSD, undergoing hemodialysis at least for 3 months, the patients with chief complaint of fatigue and having fatigue score ≥5 based on fatigue severity visual analogue scale, lack of any wound or fracture, being in complete psychological and mental health to attend the study and fill the questionnaire, and not having undergone complementary medicine treatment in the past 3 months of the study. Exclusion criteria were patient's absence for two sessions of acupressure intervention and lack of interest to continue the study. The sampling was convenient continuous sampling. The number of the subjects was estimated to be 96. After random subjects' allocation through minimization method, 32 subjects were assigned to each group of the study, placebo and control. The data were collected by a three-section questionnaire whose first section contained demographic characteristics including age, sex, education level, marital status, occupation status, and disease etiology. The second section contained Fatigue Severity Scale (FSS) with a visual analogue scale ranked between 0 and 10, which was scored and commented as fatigue (0), minor fatigue (1-3), moderate fatigue (4-6), and severe fatigue (7-10). If the patients had fatigue severity score of ≥5, they attended the study. FSS is a standard scale whose validity and reliability have been confirmed by Gift,[20] and has been frequently used in various studies. The third section included Piper Fatigue Scale with 27 items on four dimensions of mental fatigue including behavioral, emotional, sensory, and cognitive dimensions. Each item was scored from 0 to 10, with score zero given for lack of fatigue, 1-3 for minor fatigue, 4-6 for moderate fatigue, and 7-10 for severe fatigue. Piper Fatigue Scale is a standard scale whose validity (content and concurrent) and reliability have been confirmed. In the study of Masoodi et al.,[21] its scientific validity was confirmed by the professors of Tarbiat Modares, Tehran, and Iran universities. Its reliability was confirmed in the study of Masoodi et al. by a correlation coefficient of r = 0.89. In the present study, its internal validity was α = 0.77 and its reliability was calculated as 81.2% by test retest. This scale was distributed among the patients once before the intervention and once after the intervention to score the subjects' fatigue severity. Intervention was conducted in the first 2 h of hemodialysis in the experimental and placebo groups. This intervention was carried out among the subjects regularly undergoing hemodialysis, in both legs, hands, and the waist in three weekly sessions for 4 weeks.[18] In the experimental group, the intervention was conducted on the major acupoints K1, GB 34, ST 36, SP 6, BL 23, and HT7.[2223] In the placebo group, it was carried out with 1 cm distance from the major above-mentioned acupoints. Each session lasted for 20 min, of which 2 min were devoted for primary superficial stroking of the acupoints[24] and the rest of the time (18 min) was for acupressure of the determined six acupoints (3 min for each acupoint).[25] The researcher and a male co-researcher were trained under the supervision of the second supervisor, and then the intervention started by her approval. 
The researcher (female) and her male co-researcher conducted the intervention separately for the female and male subjects, respectively. Determination of acupoints was made based on the second supervisor's guidance on the acupoints' standard location. The amount of needed pressure was practiced by standard scales of 20 g to 6 kg, and the reliability of pressure level was confirmed as 100% after 40 times of practice with mean of 3-4 kg on the scales for both researchers' hands. The details about the process of research, goals, and conditions were given to the patients, and consequently, those interested to join entered the study after filling the written consent form. Sampling and intervention were conducted after obtaining approval from ethical consideration committee of Nursing and Midwifery School of Isfahan University of Medical Sciences. The data were analyzed by chi-square, Kruskal-Wallis, paired t-test, one-way analysis of variance (ANOVA), and Duncan's test through SPSS version 19.\n\nBODY.R:\nMean age of the subjects in the experimental group was 53.4 (13.9) years, in the placebo group 55.4 (11.5) years, and in the control group 54.3 (13.4) years. In the experimental, placebo, and control groups, there were 14 female (43.75%) and 18 male (56.25%) subjects, respectively. The most prevalent etiology of the disease was diabetes in 11 subjects (34.4%) in the study group, hypertension in 14 subjects (43.8%) in the placebo group, and diabetes in 12 subjects (37.5%) in the control group. Before intervention, subjects' matching was conducted concerning demographic characteristics in the three groups of study, placebo, and control through minimization program, and there was no significant difference in the demographic characteristics between the groups (P > 0.05). One-way ANOVA showed that there was no significant difference between the total mean score of fatigue and the mean scores of Piper Fatigue Scale dimensions before intervention in all the three groups, but these means showed a significant difference after intervention in the three groups (P < 0.001). Post-hoc Duncan test showed that in the study group, total score of fatigue and score of fatigue in emotional, cognitive, and behavioral dimensions were significantly less than in the two groups of placebo and control after intervention (P < 0.05), but there was no significant difference between the control and placebo groups (P > 0.05). Mean score of fatigue in the sensory dimension in the study group was significantly less than in the placebo group (P < 0.05), and in the placebo group, it was significantly less that in the control group (P < 0.05). As observed in Table 1, the total mean score of fatigue and the mean scores of fatigue in sensory, behavioral, cognitive, and emotional dimensions in the two groups of study and placebo were less after intervention compared to before intervention, and paired t-test showed a significant difference (P < 0.05). The total mean score of fatigue and the mean score of fatigue in sensory, behavioral, cognitive, and emotional dimensions showed no significant difference before and after intervention (P > 0.05). As observed in Table 2, there was a significant difference in the total mean score change of fatigue and the mean score change of fatigue severity in all dimensions in the three groups after intervention (P < 0.001). 
Post-hoc Duncan test showed that reduction of mean fatigue score in behavioral and emotional dimensions was more in the study group compared to the other two groups of placebo and control (P < 0.05), but it showed no significant difference in the two groups of control and placebo (P = 0.3). Post-hoc Duncan test showed that mean reduction of total fatigue score and mean reduction of fatigue scores in sensory and cognitive dimensions in the study group were more than in the placebo group, and they were more in the placebo group compared to the control group (P < 0.05). Table 1 Comparison of total mean scores of fatigue, behavioral, emotional, sensory, and cognitive dimensions before and after intervention in the three groups of study, placebo, and control Table 2 Comparison of fatigue score changes and dimensions of piper fatigue scale after intervention in the three groups of study, placebo, and control \n\nBODY.D:\nThe results of acupressure medicine on severity of fatigue in hemodialysis patients showed that the study group experienced less fatigue severity compared to the placebo and control groups. As the subjects' mean age in the present study was over 50 years, it is concluded that ESRD seems to be more prevalent at higher ages. Tsay and Chen (2003) reported more ESRD prevalence at higher ages.[26] The higher number of male subjects in the present study reveal that ESRD prevalence seems to be more prevalent among men, which concords with the findings of Nasiri et al.[3] Most of the subjects were retired or jobless in the present study. Studies show that the rate of joblessness is high among hemodialysis patients.[27] Diabetes and hypertension are the major etiologies for ESRD,[18] which is consistent with the finding of the present study. Stimulation of acupoints results in improvement of vital energy circulation in the body and increase of opioids like endorphin and enkephalin, which may play a role in the reduction of patients' fatigue in all behavioral, sensory, cognitive, and emotional dimensions.[18] The obtained results showed that there was a significant reduction in the total mean score of fatigue in the study group after intervention compared to the placebo and control groups. Although a similar improvement was observed in the placebo group based on Duncan test, the improvement was more in the study group. It seems that in the placebo group, in addition to verbal and psychological communication between the researcher and the patients, due to the physiologic and psychological effects of acupressure, the patients expect the acupressure to have a positive effect on their body, named as Hawthorne phenomenon. So et al. (2007) in a study aimed to define the effect of Transcutaneous Electrical Acupoint Stimulation (TEAS) on improvement of muscular fatigue among athletes and showed a significant difference between TEAS and placebo groups in return of muscular power.[28] This study is in line with the present study except for the lack of a significant difference in the placebo groups after intervention. The difference between these two studies seems to be as a result of the study population, intervention method and the number of subjects. The results showed that the mean score of fatigue in emotional dimension significantly diminished in the study group after intervention compared to the placebo and control groups. 
From psychological aspect, acupressure induces alpha brain wave stimulation that consequently leads to relaxation and reduction of fatigue and anxiety.[28] This medicine has a positive effect on the body and mind and brings about mental joyfulness and health through stimulation of opioid — endogen systems such as endorphin and enkephalin.[29] Molassiotis et al. (2007)[30] reported the significant effect of acupuncture and acupressure on cancer patients' fatigue in three groups of acupressure, acupuncture, and placebo in the dimensions of general fatigue, activity, and physical fatigue, although it had no significant effect on the dimension of motivation and psychological fatigue. The difference, observed even in the control group, seems to be as a result of the effect of researcher's psychological factors. Their results showed that acupuncture was more effective than acupressure and acupressure was more effective than placebo. This finding is not consistent with our study, possibly due to the difference in the study population, the scale used, and the length of intervention. Our obtained results showed a significant reduction in the mean score of fatigue in sensory dimension in the study group after intervention compared to placebo and control groups. The study results of Tsay et al.[29] and the results of the present study are consistent due to use of similar acupoints and length of intervention. The only difference is that there was no placebo group in their study, and in the control group, the mean score of sensory dimension had a reduction, possibly as a result of psychological and mental elements. Our results showed a significant reduction in the mean score of fatigue in cognitive dimension in the study group after intervention compared to the placebo and control groups, which can be due to the effects to acupressure mechanism and researcher's psychological and mental factors during the study. In the study of Harris et al.[31] in which the effect of the two methods of stimulating acupressure medicine and relaxation on students' consciousness and fatigue was studied, stimulating acupressure significantly diminished students' fatigue and increased their daytime consciousness. Their consistent results with the present study may be due to the effect of acupressure mechanism, while there was no difference in the placebo group in this study. Our results showed a significant reduction in the mean score of fatigue in behavioral dimension after intervention in the study group compared to the placebo and control groups, as the acupoints used in the present study could have had a synergic effect to reduce fatigue. Hosseinabadi et al.[32] reported that acupressure was effective on the daily function of Pittsburgh Sleep Quality Scale in the elderly in the study group, and there was a significant difference between the study and placebo groups, which is not consistent with our study possibly due to the difference in study population. The results showed a significant reduction in the mean score of fatigue severity in the study group after intervention compared to the placebo and control groups. In a study of Molassiotis et al.,[30] the mean reduction score of fatigue severity in acupuncture group was more than in acupressure and placebo groups, which is not consistent with the results of the present study. This difference seems to be as a result of the type of intervention, length of time, frequency of intervention, the acupoints used, and/or the measurement scale used. 
Based on most experts' ideas, sham acupoints can also have a treatment effect, since positive effects frequently occur in placebo groups as well.[33] Given that the acupoints used in the placebo group were not therapeutically effective, the reduction of patients' signs in this group may result from their expectation of a positive effect from the intervention, from feeling secure in the presence of the researcher, and from the Hawthorne effect.\n\nBODY.C:\nGiven that fatigue is a common problem in hemodialysis patients and that acupressure is widely applied by nurses in their clinical interventions, the results of the present study showed that this non-invasive, cost-effective, learnable, and complication-free method can be used to lower fatigue in hemodialysis patients. Therefore, this caring method of complementary medicine is suggested to be used by nurses in hemodialysis wards to lower the patients' fatigue.\n\n**Question:** Compared to Placebo what was the result of Routine unit care on Total score of fatigue?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ]
no significant difference
1227
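Each row above pairs a single user message (the task instructions, the article, and the ICO question) with a single assistant message holding the answer, followed by a gold label and a row id. For readers who want to work with these rows programmatically, here is a minimal sketch of how one such record could be parsed. It assumes the fields are named prompt, completion, Label and id and that rows are available as plain Python dictionaries; the example values are abbreviated placeholders rather than a verbatim copy of the row above.

```python
import json
import re
from dataclasses import dataclass

# Illustrative row only: field names follow the dataset schema (prompt,
# completion, Label, id); the content strings are abbreviated placeholders.
EXAMPLE_ROW = {
    "prompt": [{
        "role": "user",
        "content": (
            "You are an expert medical assistant ...\n\n"
            "**Article:** ...\n\n"
            "**Question:** Compared to Placebo what was the result of "
            "Routine unit care on Total score of fatigue?\n\n"
            "Please output \"significantly increase\", \"no significant difference\" "
            "or \"significantly decrease\" ..."
        ),
    }],
    "completion": [{"role": "assistant", "content": "**Answer:** no significant difference\n"}],
    "Label": "no significant difference",
    "id": "0",  # placeholder id for illustration
}


@dataclass
class IcoExample:
    """One ICO classification example: the question, the gold label, the row id."""
    question: str
    label: str
    row_id: str


def parse_row(row: dict) -> IcoExample:
    """Pull the **Question:** line and the gold label out of one row."""
    prompt_text = row["prompt"][0]["content"]  # prompt is a single user message
    match = re.search(r"\*\*Question:\*\*\s*(.+)", prompt_text)
    question = match.group(1).strip() if match else ""
    return IcoExample(question=question, label=row["Label"], row_id=row["id"])


if __name__ == "__main__":
    example = parse_row(EXAMPLE_ROW)
    print(json.dumps(example.__dict__, indent=2))
```

Keeping the question and the label side by side like this makes it easy to spot-check rows such as the one above, where the article reports no significant difference between the placebo and control groups.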
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Managing migraine headaches experienced by patients who self-report with menstrually related migraine: a prospective, placebo-controlled study with oral sumatriptan\n\n ABSTRACT:\nThe objective was to evaluate the efficacy and tolerability of oral sumatriptan (100 mg) in patients who self-reported with menstrually related migraine. A prospective, multicentre, randomised, double-blind, placebo-controlled, two-group crossover study was carried out in 20 UK primary and secondary care surgeries. Of 115 patients with a self-reported history of menstrually related migraine who entered the study, 93 patients completed it. Patients treated all migraine attacks for 2 months with sumatriptan (100 mg) and for 2 months with placebo. The primary endpoint was the proportion of patients reporting headache relief at 4 hours for the first treated attack. Only 11% of patients fulfilled the protocol definition of menstrually related migraine. Patients reported a variable pattern of migraine attacks occurring inside and outside the menstrual window. For the first attack, significantly more patients receiving sumatriptan than placebo reported headache relief for attacks occurring inside (67% vs. 33%, p=0.007) and outside (79% vs. 31%, p<0.001) the menstrual period. Sumatriptan was generally well tolerated. Oral sumatriptan (100 mg) is an effective and well tolerated acute treatment for patients who report menstrually related migraine.\n\n**Question:** Compared to placebo for 2 months what was the result of sumatriptan for 2 months on headache relief for attacks occurring inside and outside the menstrual period (migraine attacks)?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ]
significantly increased
1228
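A small wrinkle worth noting in the rows above: the prompts ask for one of the strings "significantly increase", "no significant difference" or "significantly decrease", while the stored answers and labels use the past-tense form "significantly increased". Any script that scores model outputs against the Label column therefore needs a normalization step before comparison. The sketch below shows one possible way to do that; the canonical class strings and the function names are my own choices for illustration, not something defined by the dataset.

```python
import re

# Canonical class names chosen for this sketch; both "significantly increase"
# and "significantly increased" are mapped onto the same class.
INCREASED = "significantly increased"
NO_DIFFERENCE = "no significant difference"
DECREASED = "significantly decreased"


def normalize_answer(text: str) -> str | None:
    """Map a free-form answer (e.g. an '**Answer:** ...' line) onto one of
    the three classes, or return None if nothing matches."""
    cleaned = re.sub(r"[^a-z ]", " ", text.lower())  # drop markdown and punctuation
    cleaned = " ".join(cleaned.split())
    if NO_DIFFERENCE in cleaned:
        return NO_DIFFERENCE
    if "significantly increase" in cleaned:  # also matches "increased"
        return INCREASED
    if "significantly decrease" in cleaned:  # also matches "decreased"
        return DECREASED
    return None


def is_correct(model_output: str, label: str) -> bool:
    """True if the model output and the stored Label map to the same class."""
    return normalize_answer(model_output) == normalize_answer(label)


if __name__ == "__main__":
    # The two phrasings seen in the rows above count as the same class.
    assert is_correct("**Answer:** significantly increase", "significantly increased")
    assert is_correct("**Answer:** no significant difference\n", "no significant difference")
    print("label normalization ok")
```

Checking the "no significant difference" class first keeps the matching unambiguous, since the increase and decrease patterns are substrings only of their own classes.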
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Impact of handwriting training on fluency, spelling and text quality among third graders\n\n ABSTRACT:\nAs recent studies and theoretical assumptions suggest that the quality of texts composed by children and adolescents is affected by their transcription skills, this experimental field trial aims at investigating the impact of combined handwriting/spelling training on fluency, spelling and text quality among normally developing 3rd graders (N = 175). In addition to the combined handwriting/spelling training group, the sample includes two other intervention groups, a handwriting training group and a spelling training group as well as a reading fluency training control group. The participating teachers (N = 11) and their students were randomly assigned to the different intervention and control conditions, which were scheduled to last 20 units (each unit lasts 15 min) distributed over 5 weeks (4 units/week). Data collection was administered both before (pre-test) and after (post-test) the intervention as well as 3 months later (follow-up). Measures included group-administered tests and tasks (spelling, visuo-motor integration, copy task, and composing) and individually administered tests and tasks (working memory and several handwriting tasks on the digitizing tablet).\nAs handwriting automaticity (measured on the digitizing tablet) was already high at the beginning of the study, the intervention was not able to improve it further. In consequence, an intervention effect on the quality of composed texts was not observed. Instead, text quality was predicted by working memory, fluency, spelling, and gender irrespective of the intervention.\n\nBODY.INTRODUCTION:\nHandwriting is a cultural skill and is taught and used in every school around the world, as well as applied in leisure activities and labour contexts later on despite the fact that keyboard typing is recently becoming more and more prevalent early in life.\n\n\nBODY.INTRODUCTION.MODELS OF HANDWRITING:\nMost scholars understand handwriting as a complex neuromotor skill involving cognitive and motor processes sequentially as well as simultaneously. Contemporary models assume that these processes are controlled by a hierarchic architecture of both central and peripheral components (Kandel, Peereman, & Ghimenton, 2013) operating partially in a parallel mode (e.g. Kandel, Peereman, Grosjacques, & Fayol, 2011; Van Galen, 1991; Van Galen, Meulenbroek, & Hylkema, 1986). According to Hayes (2012a, b), composing (handwriting as well as typing) consists of higher and lower level processes whereby handwriting is conceptualized within the low-level transcription process that follows planning and translating. 
While higher-level processes include ideation, choice of words (semantic coding) and syntax, lower-level processes refer to the retrieval of orthographic and allographic representations as well as the activation of neuromuscular networks and the subsequent execution of fine-motor movements. In general, higher order processes precede lower order processes, however, at least some processes are supposed to be synchronic, i.e. a writer executes an allograph while planning to write the next syllable during continuous handwriting (Kandel, Álvarez, & Vallée, 2006; Kandel et al., 2011). In addition, recent research indicates that some central processes are still ongoing during movement execution, i.e. showing that the writing system functions in a cascading manner (Delattre, Bonin, & Barry, 2006; Kandel et al., 2013; Olive, Alves, & Castro, 2009; Olive & Kellogg, 2002; Roux, McKeff, Grosjacques, Afonso, & Kandel, 2013). A writer can synchronously be engaged in transcribing a word and processing the next word. Therefore, interferences between different levels of handwriting are likely to occur during movement execution (Fayol & Lété, 2012).\n\nBODY.INTRODUCTION.AUTOMATICITY OF HANDWRITING, WORKING MEMORY, AND TEXT QUALITY:\nLow level components of handwriting implicate an integration and coordination of spelling knowledge, allographic representations and the execution of fine-motor movements (Berninger, 2009; Berninger & Swanson, 1994). For young children at the very beginning of their handwriting experience, each of these tasks is attention demanding and working memory resource consuming (Graham, Berninger, Abbott, Abbott, & Whitaker, 1997). The more this integration is automated, the lower the amount of the cognitive load is generally expected to be necessary (Bourdin & Fayol, 1994; Fayol & Lété, 2012) since handwriting automation is assumed to save resources for the remaining processes involved in text production (Grabowski, 2010; Graham et al., 1997; McCutchen, 1996, 2011). Children whose handwriting is not yet automated are easily overwhelmed with the more complex tasks of text production because of the demands on cognitive resources imposed on them, resulting in poor text production outcomes (e.g. Berninger & Swanson, 1994; Christensen & Jones, 1999; Graham, 1990). Accordingly, a training of handwriting over 8 weeks evidenced improvements both of the quality and quantity of written texts among adolescents suffering from poor orthographic-motor integration while the matched control group (whose students were instructed to write a diary) did not improve noticeably (Christensen, 2005). As Kim and Schatschneider (2016) demonstrated that working memory is a strong predictor of text quality already among first graders, handwriting training seems to be a promising method to improve the text quality early in the school career. Over the last years, this assumption could be confirmed for children with and without disabilities: automaticity trainings of transcription skills as handwriting lead to improvements in both text length and quality (Graham, McKeown, Kiuhara, & Harris, 2012). In their intervention study among Portuguese second graders, Alves et al. (2015) showed that a handwriting intervention group reached not only better handwriting fluency and text quality, but performed longer bursts with shorter pauses while writing texts when compared to keyboarding students. 
The handwriting training lasted over 10 weeks and included a pure functional graphomotor \"warm up\", followed by more and more challenging exercises, i.e. using the trained letters within words, sentences and stories. The comparison groups received similar trainings in spelling or keyboarding.\n\nBODY.INTRODUCTION.WORKING MEMORY, AUTOMATICITY OF HANDWRITING AND SPELLING:\nThere is also some evidence that cognitive load is not only devoted to the higher-level processes of writing, but also to transcription such as spelling. Spelling requires the retrieval of orthographic knowledge about the correct letters and their sequences in spoken or written words involving both orthographic long-term memory and working memory resources (Buchwald & Rapp, 2009). According to Tainturier and Rapp (2001), two sets of processes are involved in spelling that do not function independently from one another: among frequent words the letter sequences are directly retrieved from the so-called orthographic lexicon (lexical route), whereas for rare or novel words, the letter sequences are derived by a phonology to orthography conversion system (sublexical route). From this point of view, we assume that spelling capability is affected by the phonological component of working memory. However, it has been demonstrated that the visual-spatial modality of working memory has an additional effect on children's spelling (Bourke, Davies, Sumner, & Green, 2014; Fischbach, Könen, Rietz, & Hasselhorn, 2014). Sumner, Connelly, and Barnett (2013, 2014) have investigated the impact of spelling capabilities on handwriting production in a copy task by comparing children with and without dyslexia. They found many more within-word pauses (and constrained productivity and fluency for that reason) among the dyslectic children compared to the spelling-ability matched children. Interestingly, children with dyslexia were able to write fast, but paused longer than the controls within-words, probably due to cognitive demands required by spelling information processing and poor lexical representations. It can therefore be concluded that handwriting fluency, as well as working memory, are both positively related to spelling capacity.\n\nBODY.INTRODUCTION.HANDWRITING AND VISUAL-MOTOR INTEGRATION:\nVisual-motor integration (VMI) skills proved to be associated with legibility (e.g. Tseng & Murray, 1994; Volman, van Schendel, & Jongmans, 2006), however, research using the Developmental Test of VMI did not find an association with handwriting difficulties in young children (Marr & Cermak, 2002) or with handwriting dysfunction in older children (Goyen & Duff, 2005). Nevertheless, there is some evidence that VMI is related to academic performance in reading and writing (Kulp, 1999; Sortor & Kulp, 2003). Therefore, we included this variable to control for its impact on text quality.\n\nBODY.INTRODUCTION.GENDER EFFECTS:\nHandwriting fluency, automaticity and working memory are not the only factors influencing the quality of texts produced by children. Research has revealed that girls produce better and longer texts than boys (e.g. Kim, Al Otaiba, Wanzek, & Gatlin, 2015), therefore, the gender gap is an additional factor that has to be considered when predicting text quality. Berninger and Fuller (1992) investigated first, second and third graders and found that girls produced more words and clauses in narrative and expository composition than boys. 
Therefore, gender differences in transcription skills could at least partly explain the text quality differences. In addition, regarding the fact that girls have more positive attitudes than boys towards writing (Knudson, 1995; Lee, 2013), such motivational factors have explained additional variance in text quality in some studies (e.g. Knudson, 1995) but not in others (e.g. Graham, Berninger, & Fan, 2007).\n\nBODY.INTRODUCTION.TRAINING OF TRANSCRIPTION SKILLS:\nBecause of its important role as a transcription skill, handwriting has to be trained not only to become as legible as possible, but also to become more and more fluent. Fluency of handwriting means that a person is able to write in an automated way so that he or she does not have to think about how to form a letter or how to join movements of frequently used syllables. This results in an economic and fast way of personal handwriting. Automated movements show a particular shape of a smooth and regular course of velocity control: the ideal handwriter needs only one increase and decrease of speed per stroke unit (Mai & Marquardt, 1999). As shown by Graham, Berninger, Weintraub and Schafer (1998) and by Wicki, Hurschler Lichtsteiner, Saxer Geiger and Müller (2014), the development of both legibility and fluency of handwriting is not yet finished at the end of the second grade. There is more training needed over the following years of schooling. In their meta-analysis, Troia and Graham (2009) published 17 recommendations about the most efficient ways of teaching handwriting, e.g. that in the primary grades, 75–100 min per week should be used for handwriting instruction, that letters sharing common strokes (e.g., a, d, and g) should be grouped together, and that the children should be monitored and immediately get help to form letters better when they are illegible. Teachers are requested to allow children to develop their own handwriting style and to ask them to self-evaluate their handwriting and to improve it continuously. While traditional handwriting lessons used to focus on legibility of handwriting, the importance and the knowhow of teaching handwriting fluency is not yet well established. Santangelo and Graham (2016) recently found that in general, handwriting training is more efficient than no treatment. Individualizing instruction and using digital technology improved legibility, but no effects were found for fluency. A study by Berninger and co-authors (Berninger et al., 2006) on different ways to teach letters is important to understand why it is not useful to let children just copy letters directly from a model: In order to gain an automatized motor program, children need to trace a letter by visual arrows, getting verbal mediation, but afterwards, they are supposed to reproduce it from memory. Troia and Graham (2009) recommend further trainings in fluency by frequent writing and speed trials. Isolated motor trainings are inferior to handwriting training (Santangelo & Graham, 2016), accordingly, insights from promoting motor learning, e.g. the model of differential motor learning by Frank, Michelbrink, Beckmann and Schöllhorn (2008) or the principles of Neuromotor Task Training (Schoemaker & Smits-Engelsman, 2005) have been included in handwriting education (Jurt Betschart & Hurschler Lichtsteiner, 2017) and therapy. However, it has not yet been examined if they are really enhancing handwriting fluency among children with motor disabilities. 
A meta-analysis by Hoy, Egan, and Feder (2011) proved that instructions of at least 20 lessons were effective, but the interventions focused on treatments by occupational therapists for children with handwriting difficulties. So far, there are only a few German studies investigating children's handwriting (e.g. Grabowski, Weinzierl, & Schmitt, 2010; Mahrhofer, 2004; Nottbusch, 2008; Sattler & Marquardt, 2010; Speck-Hamdan, Falmann, Hess, Odersky, & Rüb, 2016) and almost none looking at the link between transcription skills and text quality. This is unsatisfying because on the one hand the German language has a more complex syllabic structure than Romance languages, as depicted by Seymour, Aro, and Erskine (2003), but on the other hand the orthographic depth of the German language is shallower than that of English, for instance, so findings of previous studies in other languages may not be directly transferrable. In regard to written language acquisition in German-speaking countries, there are different ways of learning how to read and write in general and how to write by hand in particular. In the German-speaking part of Switzerland, a new handwriting style has been taught in some schools since 2001. After positive research results confirming that children with the new handwriting type were able to write more legibly and fluently and showed more motivation to write than with the former one (Wicki & Hurschler Lichtsteiner, 2014), the so-called \"Basisschrift\" was established in almost every canton. This handwriting style has changed teaching methods, because the children now start with a beginners' alphabet of printed letters, then learn how to connect the most common letters fluently by garlands (joins on the baseline). Finally, students get support to develop their individual handwriting style (Jurt Betschart & Hurschler Lichtsteiner, 2017). In the German part of Switzerland, no studies about handwriting instruction or its implications for language acquisition and text quality in particular have been published so far.\n\nBODY.INTRODUCTION.HYPOTHESES:\n\n1. With respect to the different trainings carried out over 5 weeks, we expected that the combined handwriting/spelling training would improve the fluency of handwriting (speed and automaticity) and advance correct spelling as well as the text quality (i.e. number of ideas, text organization) to a higher degree than the handwriting only or a spelling only training because of the broader approach of the combined training, which included two transcription skills.\n2. In addition, we expected that the handwriting only and the spelling only training groups would both outperform the reading control group regarding handwriting fluency, spelling and text quality. We also expected that the aforementioned group differences would sustain over a period of 3 months after termination of the intervention.\n3. Irrespective of the trainings, we expected that text quality would be independently affected by handwriting fluency and automaticity, spelling, working memory, and gender. Text quality is assumed to be higher among children showing higher handwriting fluency and automaticity as well as better spelling and working memory capabilities. In addition, text quality is assumed to be higher among girls than among boys.\n\n\nBODY.METHOD.DESIGN:\nA randomized controlled field study with pre-test, post-test and follow-up-test measurements was conducted. 
The sample included one main intervention group (a combined handwriting and spelling training), two comparison groups (handwriting training and spelling training) and a control group (reading fluency). The participants were typically developing third-graders (N = 175). This sample size was higher than the required sample size (N = 158) calculated by means of the GPower software (Faul, Erdfelder, Lang, & Buchner, 2007) assuming an effect size f(V) = 0.26 and alpha error = 0.05 (given 4 groups and 3 measurements, and repeated measures ANOVA). In order to control for cluster effects, each class was randomly divided in two parts, whereby the minority of every class was allocated to the main intervention and the majority randomly to one of the comparison groups or to the control group (Fig. 1). Most measures were assessed three times, however, working memory capacity and visual-motor integration were assessed only during the pre-test.\nFig. 1. Assignment of children to the groups, N = 175\n\n\nBODY.METHOD.PARTICIPANTS:\nPrincipals from elementary schools in the Canton of Lucerne in the German speaking part of Switzerland were asked to help the research team in recruiting the participating classes. The participating teachers had to give their consent of attending to four preparatory meetings and organizing the testing and training activities. They also had to find a second teacher for the training phase who was usually the associated teacher for special needs education. All children in the classes were included in the curriculum-based trainings. 202 children and their parents were asked to give their active consent for the additional tests and the use of the data; in 21 cases, the consent was refused. Because the average number of pupils in each class was lower than expected, it took 11 instead of 9 classes to reach the necessary sample size. 6 of the 181 participants moved away during the intervention time, which resulted in 175 children who attended the whole training program and all of the tests (attrition = 3.3%). Within the participants, there were no exclusions due to learning disabilities or other issues, but in order to control for special effects, all possible variables were noticed. 22 children (12.6%) attended integrative special needs education; 11 children (6.3%) received therapies such as psychomotor therapy. These cases were evenly distributed over the four intervention groups. 20 children (11.4%) attended L2 courses in German with uneven distribution among the groups: 17 of them were in the combined handwriting and spelling or in the handwriting group; due to the low numbers, a statistical test of this difference was not possible. Regarding age (M = 9.0 years, SD = 0.4 years, age range = 7.5–10.2 years), the children of the four intervention groups did not differ significantly. Demographic characteristics such as gender, handedness and language spoken were equally distributed over the four groups (see Table 1).\nTable 1. Demographic characteristics of the sample. Values are n (%) for the total sample (N = 175) and for the Combined (n = 78), Handwriting (n = 34), Spelling (n = 36) and Reading (n = 27) groups; group comparisons by χ2 test.\nGender (χ2 = 0.599, n.s.): Female 89 (50.9), 38 (48.7), 18 (52.9), 20 (55.6), 13 (48.1); Male 86 (49.1), 40 (51.3), 16 (47.1), 16 (44.4), 14 (51.9)\nHandedness (χ2 = 6.076, n.s.): Right handers 157 (89.7), 74 (94.9), 28 (82.4), 30 (83.3), 25 (92.5); Left handers 18 (10.3), 4 (5.1), 6 (17.6), 6 (16.7), 2 (7.4)\nLanguages (χ2 = 4.879, n.s.): Swiss German speaking only 129 (73.7), 61 (78.2), 21 (61.8), 29 (80.6), 18 (66.7); Two or more languages 46 (26.3), 17 (21.8), 13 (38.2), 7 (19.4), 9 (33.3)\n\n\nBODY.METHOD.MATERIALS (DIGITIZER):\nHandwriting measures were recorded and analyzed by means of the software package CSWin (Mai & Marquardt, 2007) on a personal computer (Lenovo ThinkPad T61p Type 6457-7WG), which was connected to a digitizing tablet (Wacom Intuos 4 Pen Tablet PTK-640 and Intuos Inking Pen). The sampling frequency was 200 Hz and the accuracy was 0.1 mm in both x and y directions. Position data were recorded even if the stylus was lifted (less than 10.0 mm) above the tablet due to the inductive method of measurement. Nonparametric regression methods (kernel estimation) are part of the mathematical procedures of the program (Marquardt & Mai, 1994) in order to calculate and to smooth velocity and acceleration signals.\n\nBODY.METHOD.INTERVENTION:\n7 months prior to intervention, four meetings with the participating teachers took place: All participants had to plan the same quantity and quality of instruction and training regarding writing, German orthography and handwriting training throughout the curriculum. In addition, every teacher got an introduction into the didactic structure of the handwriting lessons as well as the materials necessary to prepare standardized lessons in the time period from summer 2015 to the end of January 2016 when the intervention started. The participating teachers (n = 11) and their students were randomly assigned to the different intervention and control conditions, which lasted 20 units (15 min per unit) distributed over 5 weeks (4 units/week). The respective trainings were highly standardized and held during ordinary school hours by the participating teachers who had been instructed in advance. The teachers had to check for an implementation as full as possible. Within the materials, the last page (for the last day) included a feedback sheet for the participating pupils in order to assess their motivation. We know that effective handwriting training should always be followed by meaningful tasks like writing or at least exercises including language actions (playing with words, little learning games like riddles) (Alves et al., 2015; Berninger & Richards, 2002). However, trainings for handwriting skills often include (sometimes unintended) orthographic challenges and spelling training can be very demanding for some children when it takes a lot of handwriting. This is why we tried to separate these training conditions from the beginning. Even the combined training offered a simple start without cognitive overload followed by different transfer opportunities.\n\nBODY.METHOD.INTERVENTION.COMBINED HANDWRITING AND SPELLING TRAINING:\nCompletely new training materials were developed for the combined intervention and for the handwriting intervention as well. Due to the semi-cursive handwriting type used in the German-speaking part of Switzerland, the material did not focus on the repetition of the alphabet (since the production of individual letters is presumed to be already automatized at that age), but on the fluent production of the connections between the letters, specifically on the connections as realized by garlands (joins on the baseline). According to Berninger and Richards (2002), the training was planned to be taught explicitly in a regular manner, by providing short and daily trainings of 15 min followed by writing activities. 
The daily intervention program was presented by a two-sided sheet with two A4-sized pages of material on each side starting with a finger-warm-up exercise (scribbling strokes and circles) followed by a graphomotor training of a selected pair of letters to be joined fluently under different conditions regarding speed, pressure, size and visual control according to Frank, Michelbrink, Beckmann and Schöllhorn (2008). Every week, a group of similar patterns (e.g. day 1 e-n, day 2 e-m, day 3 e-r) was offered. The last day was dedicated to repetitions of this group. These trainings had to be completed in broad colorful bands instead of rulings. In the combined handwriting and spelling condition, the task in the second part was to put these patterns of 2–3 joined letters into use within a spelling task, which was a training mostly focusing on morphemes, or to use these patterns directly by writing words or sentences. To make sure that the children spent enough time on the language-based part the last 5 min had to be strictly spent on the pages of the second part.\n\nBODY.METHOD.INTERVENTION.HANDWRITING ONLY TRAINING GROUP:\nIn the handwriting group, the tasks consisted of a graphomotor training (the same as page 1 and 2 of the combined group supplied with additional tasks of that type) without any combination with spelling training.\n\nBODY.METHOD.INTERVENTION.SPELLING ONLY TRAINING GROUP:\nThe spelling training was based on a training developed by Leemann (2015). We applied the chapter focusing on morphemic structures (Leemann, 2015, pp. 19–44), which had been described as the third of three important principles of German orthography (Eisenberg & Feilke, 2001) and is part of the regular curriculum at that age. Exercises consisted of recognizing morphemic elements, analyzing words and constructing them by using morphemes. In order to avoid handwriting within these tasks, the children had to complete them without handwriting, e.g. by marking morphemes using text markers or connecting questions and solutions by cues.\n\nBODY.METHOD.INTERVENTION.READING TRAINING CONTROL GROUP:\nThe training for reading fluency consisted of two parts (reading skills and reading fluency training) of the training developed by Kruse, Rickli, Riss, and Sommer (2014). Handwriting activities were omitted during these trainings. The intervention protocols indicated a mean number of training units of 19.3 (SD = 1.4) for the total sample. The mean number of training units varied from 19.0 to 19.8 over the four groups and did not differ significantly from each other.\n\nBODY.METHOD.PROCEDURE:\nData collection of the pre-test took place at the end of January 2016. The intervention started midst February and lasted 5 weeks. During the subsequent week, the post-tests were carried out. After a break of 11 weeks, the data collection of the follow-up-test took place midst June 2016. Most tests were group-administered by the teachers (spelling test, copy task, writing a 30'-text, visual motor integration), whereas two tests (Working Memory Test AGTB-5, Hasselhorn et al., 2012; and CSWin, Mai & Marquardt, 2007) were executed as single tests by the data collection team consisting of two researchers and five scientific assistants. 
Legibility of the handwriting was assessed by analyzing the copied texts by a rating system; however, since this variable is not focused on in this paper, we do not describe this measure in detail in this study.\n\nBODY.METHOD.TASKS AND MEASURES.PERSONAL DETAILS OF PARTICIPANTS:\nDetails on children's handedness, gender, age, languages spoken were collected by the participating teachers and submitted to the research team.\n\nBODY.METHOD.TASKS AND MEASURES.HANDWRITING FLUENCY MEASURED ON THE DIGITIZER:\nThe handwriting tasks were administered on a digitizing tablet and included a range of 17 items starting with basic handwriting movements, letter, and combinations of two joined letters, syllables, morphemes, non-sense words and common words. The children also had to copy sentences, carry out a small dictation and write two separate 3-min short texts. The first was about a fantasy theme (\"If I could conjure\", \"If I were rich\", \"If I were a super star\"), the second was a prescription (\"This is my way to school\", \"My favorite game\", \"My favorite pet\"). All tablet items were written with a regular inking pen on a blank sheet of paper placed on the digitizing tablet. Within each task, we derived the number of inversions in velocity (NIV) and the stroke frequency (FREQ) from the digitizer. The number of inversions in velocity (NIV) indicates the average number of velocity changes occurring within writing strokes. Fluent handwriting requires only one velocity change per stroke (acceleration followed by deceleration) and therefore results in a NIV score that is approximately or exactly equal to 1, whereby 1 indicates perfect automaticity (Mai & Marquardt, 1999). When copying a sentence, the mean number of inversions in velocity of 4th graders amounted to 2.7 (SD = 2.1) in an earlier study (Hurschler, Saxer, & Wicki, 2010). Frequency (FREQ) refers to the number of upward and downward movements in 1 s calculated by CSWin (Mai & Marquardt, 1999). This measure seems more appropriate than assessing speed as stroke length per second (mm/s) (the latter strongly depends on a person's writing size). In our earlier study, the mean stroke frequency of 4th graders was f = 2.1 (SD = .6) when copying a sentence (Hurschler et al., 2010).\n\nBODY.METHOD.TASKS AND MEASURES.HANDWRITING MEASURES IRRESPECTIVE OF TABLET TASKS:\nThe children had to execute a 5 min copying task where they were asked to write as fast and as legible as possible. Speed of handwriting was measured as the number of letters written within that task. The respective data of one class had to be excluded from the analysis because of violating the time limit.\n\nBODY.METHOD.TASKS AND MEASURES.VISUAL MOTOR INTEGRATION VMI (BEERY & BEERY, :\nThe Developmental Test of Visual-Motor Integration (VMI) \"is designed to assess the extent to which individuals can integrate their visual and motor abilities\" (Beery & Beery, 2006, p.14). It consists of 30 graphic forms of increasing complexity that are to be copied successively. The scoring procedure follows the instructions as specified in the test manual.\n\nBODY.METHOD.TASKS AND MEASURES.WORKING MEMORY:\nAt pre-test (t1), we administered three working memory sub-tests from the AGTB 5-12 (Hasselhorn et al., 2012). 
In addition to the two visual-spatial tasks (Corsi-Block and Matrix), a third task referring to the phonological loop (word-list recall-test, monosyllabic words) was administered.\n\nBODY.METHOD.TASKS AND MEASURES.SPELLING:\nIn order to get three parallel test versions for spelling, we combined the age-appropriate items of the \"Salzburger Rechtschreibetest: SLRT-II\" (Moll & Landerl, 2014) with some age-appropriate items from the \"Diagnostischer Rechtschreibtest DRT 3\" (Müller, 2003). The resulting test versions were pilot tested in advance revealing comparable test difficulties.\n\nBODY.METHOD.TASKS AND MEASURES.TEXT QUALITY:\nThe children were asked to write short experimental narrative texts according to a prompt (adapted from Kim et al. 2015). The three prompts linked to the pre-, post- and follow-up test were familiar to their everyday life (\"This happened when I got home from school...\", \"This happened in our break...\", \"This happened when I was a child...\") and were presented in a randomized order for each class. Children were allowed to write a brief mind-map of their ideas before writing. They were told that this time, their drafts would be the piece that would be evaluated and that there would be no phases of revision and/or clean copy. Each time, the teacher told the children the principal criteria of the later assessment in advance. The four criteria (1. My story has a gripping beginning, a middle part and a clear ending; 2. It is coherent without disruptive leaps; 3. It is told in a vivid and exciting way; 4. I use appropriate matching words and sentences) were derived from the curricular teaching materials. There was no help allowed by the teacher or by peers during production. After 30 min, all products—drafts and pieces of planning work—were collected and sent to the research team. The text quality was assessed by a rating procedure based on two German studies, Ko-Text (Kruse, Reichardt, Herrmann, Heinzel, & Lipowsky, 2012) and RESTLESS (Wild et al., 2016), meeting our requirements with respect to grade level and text type (narrative texts). The original rating consisted of 10 Likert-type items complemented by anchor examples. After the instruction of 6 raters and preliminary interrater tests, the applied rating system consisted of 7 items (coherence of topic, logic of action, cohesion, implicitness of the text, appropriate language, orientation towards basic patterns of narrative texts, and language risk). These items are outlined in detail in the Appendix. The text quality measure was calculated by two steps. First we combined the three coherence items (coherence of topic, logic of action, cohesion) into one coherence measure. Subsequently we averaged coherence and the remaining four items. In order to assess the interrater reliability, 19 texts (of the pre-test) were rated by all raters. Based on the average ratings, we calculated the intraclass correlation (ICC) average score by applying the two-way model. The resulting ICC of 0.97 indicates a good reliability.\n\nBODY.METHOD.TASKS AND MEASURES.MOTIVATION AND SELF-EVALUATION:\nIntegrated in the last training sheet, the children were asked to complete a short survey of self-evaluation relating to level of program difficulty and their motivation to participate and were given the possibility to write an open formatted feedback.\n\nBODY.METHOD.DATA ANALYSIS:\nThe hypotheses with respect to the intervention effects were examined as group × time interactions within the longitudinal data set (repeated measurements). 
We calculated them by means of repeated measures ANOVAs. Some departures from sphericity were identified by Mauchly's test in SPSS. Since all violations were weak (epsilon > 0.75), we applied the Huynh–Feldt correction. As the handwriting measures derived from the digitizer (number of inversions in velocity and stroke frequency) were skewed and not normally distributed (as confirmed by Shapiro–Wilk tests), we calculated the logarithmic (base 10) values of these measures to meet the requirements for parametric statistics outlined before.\n\nBODY.RESULTS:\nIn this section, we will first describe the level of fluency and automaticity of handwriting revealed in the different tasks used in this study. Secondly, we will report the results concerning the intervention effects, and thirdly we will present our results with respect to the hypotheses that are irrespective of the intervention.\n\nBODY.RESULTS.FLUENCY OF HANDWRITING AMONG THIRD GRADERS:\nAs the original values (before logarithmic transformation) are easier to understand (what these figures really mean), we will depict the means and standard deviations of the untransformed values in Table 2.\nTable 2. Fluency of handwriting across time. Values are M (SD) at t1, t2 and t3.\nNIV 1: 1.54 (1.00), 1.24 (0.47), 1.23 (0.57)\nFREQ 1: 3.34 (1.17), 3.69 (0.95), 4.02 (1.00)\nNIV 2: 2.74 (3.17), 2.04 (2.21), 1.59 (0.88)\nFREQ 2: 1.80 (0.74), 2.06 (0.75), 2.26 (0.85)\nNIV 3: 1.52 (1.28), 1.49 (2.98), 1.10 (0.35)\nFREQ 3: 3.01 (1.14), 3.40 (1.00), 3.62 (0.91)\nNIV 4: 1.26 (0.55), 1.35 (0.69), 1.35 (0.69)\nFREQ 4: 2.91 (1.06), 2.71 (1.07), 2.61 (1.05)\nNIV 5: 1.13 (0.37), 1.13 (0.40), 1.07 (0.17)\nFREQ 5: 3.71 (1.20), 3.71 (1.20), 3.63 (1.03)\nNIV 6: 1.25 (0.65), 1.18 (0.44), 1.23 (0.51)\nFREQ 6: 2.99 (0.76), 3.08 (0.72), 3.05 (0.77)\nNIV 7: 1.09 (0.18), 1.06 (0.12), 1.04 (0.09)\nFREQ 7: 3.63 (0.71), 3.85 (0.70), 3.93 (0.62)\nNIV 8: 2.41 (0.18), 2.32 (1.46), 2.12 (1.31)\nFREQ 8: 1.88 (0.62), 1.82 (0.55), 2.02 (0.57)\nNIV 9: 3.25 (2.25), 2.17 (1.27), 2.37 (1.48)\nFREQ 9: 1.54 (0.53), 1.81 (0.51), 1.86 (0.56)\nNIV 10: 2.56 (1.48), 2.05 (1.26), 1.83 (0.92)\nFREQ 10: 1.82 (0.57), 1.99 (0.58), 2.22 (0.66)\nNIV 11: 2.23 (1.27), 2.10 (1.11), 1.83 (1.00)\nFREQ 11: 2.08 (0.64), 2.12 (0.67), 2.46 (0.75)\nNIV 12: 1.78 (1.02), 1.73 (1.03), 1.55 (0.82)\nFREQ 12: 2.51 (0.80), 2.45 (0.82), 2.80 (0.88)\nNIV 13: 1.69 (0.71), 1.48 (0.53), 1.37 (0.45)\nFREQ 13: 2.49 (0.62), 2.65 (0.59), 2.91 (0.58)\nNIV 14: 1.58 (0.64), 1.48 (0.51), 1.38 (0.37)\nFREQ 14: 2.65 (0.68), 2.70 (0.60), 3.01 (0.58)\nNIV 15: 1.63 (0.66), 1.52 (0.58), 1.38 (0.43)\nFREQ 15: 2.62 (0.66), 2.70 (0.67), 3.03 (0.61)\nNIV 16: 1.24 (0.25), 1.22 (0.25), 1.18 (0.16)\nFREQ 16: 3.52 (0.67), 3.59 (0.67), 3.91 (0.62)\nNIV 17: 1.44 (0.56), 1.40 (0.41), 1.29 (0.30)\nFREQ 17: 2.92 (0.64), 2.94 (0.63), 3.24 (0.56)\nN = 174–175. NIV = Number of Inversions in Velocity; FREQ = Stroke Frequency. Tasks: 1 = scribbling; 2 = finger movements; 3 = finger movements (faster); 4 = wrist movements; 5 = wrist movements (faster); 6 = combined finger and wrist movements; 7 = combined finger and wrist movements (faster); 8 = garlands; 9 = double loops; 10 = syllables; 11 = nonsense words; 12 = meaningful words; 13 = 3′ composition (based on a preliminary version); 14 = 3′ composition (descriptive text, without a draft); 15 = sentence writing (normal pace); 16 = sentence writing (as fast as possible); 17 = sentence dictation\nAccording to Marquardt (2011), a handwriting movement is almost automated if the number of inversions in velocity is lower than 1.5. Regarding the data at the pre-test, it is remarkable that for some tasks, the children did already show automatized movements. As expected, they very well managed to perform the simplest basic movements such as wrist movements (NIV 4, NIV 5) and combined finger and wrist movements (NIV 6, NIV 7). 
This is not the case in regard to patterns like garlands (NIV 8) and double loops (NIV 9) or while writing syllables (NIV 10), words (NIV 11, NIV 12) and composing short texts (NIV 13, NIV 14). However, when asked to write a sentence as fast as possible (NIV 16) or to write a difficult sentence by dictation (NIV 17), they achieved very low values indicating high automaticity. Over time, almost all movements were at an automatized level at t2, even the short texts, except at the first trial of finger movements (NIV 2) and the garlands, double loops, syllables and words. In general, a clear linear improvement over time is discernible by visual inspection only. The stroke frequencies mirror the number of inversions in velocity findings described above: They are in general already high from the beginning of the study at pre-test and further improve from t1 to t3.\n\nBODY.RESULTS.INTERVENTION EFFECTS:\nIn order to avoid a multiple testing problem (accumulation of alpha error), we restricted the testing of the expected intervention effects on fluency to the movements (tasks) that were not yet fully automated at t1, i.e. double loops, a syllable (i.e. \"neu\"), words (i.e. \"akir\" and \"Falle\") and the two composed texts. Thus, twelve repeated measures ANOVAs were separately carried out using the fluency measures (automaticity and frequency) of six of the mentioned tasks as dependent variables. We found reliable time effects in almost all analyses but, with two exceptions, no intervention effects. The three intervention groups, as well as the control group, wrote more fluently (i.e. with improved automaticity, higher stroke frequency) over time. For example, the automaticity of handwriting (measured by the NIV score) when composing a text improved significantly over time (F(1.8,306.9) = 26.9, p < 0.001, ηp2 = 0.14); however, in general, there was no group*time interaction, i.e. the improvements of the combined intervention group neither outperformed those of the spelling only or the handwriting only training group nor differed from those of the reading control group. One of the exceptions mentioned above was a marked increase of fluency in the double loops task observed in the combined and in the pure handwriting group from pre-test to post-test, which outperformed the spelling training and the reading groups as indicated by the significant interaction term intervention*time (F(3,171) = 2.7, p < 0.05, ηp2 = 0.05). As depicted in Fig. 2, the differences did not persist completely until the follow-up, resulting in a nonsignificant interaction term when the third measurement was also included in the model.\nFig. 2. Course of automaticity (double loops) among intervention groups\nThe second exception concerned stroke frequency when completing the task referring to writing the German word \"Falle\". Here, we found an intervention × time interaction (F(6,342) = 3.8, p = 0.001, ηp2 = 0.06) besides an intervention effect (F(3,171) = 3.2, p < 0.05, ηp2 = 0.05) and a time effect (F(2,342) = 12.3, p < 0.001, ηp2 = 0.07). Interestingly enough, and completely unexpectedly, the respective means indicated that the stroke frequency declined among the combined and the handwriting training groups from t1 to t2 and recovered from t2 to t3, i.e. showing a paradoxical intervention effect, while the spelling group and the reading group improved their stroke frequency quite linearly over time. 
With respect to our hypothesis that assumed intervention effects on spelling and text quality, we calculated two additional repeated measures ANOVAs, this time including spelling and text quality, respectively, as dependent measures. Again, these analyses failed to discern any group differences or intervention effects. However, we found a time effect showing that text quality had improved over time (F(2,338) = 6.1, p < 0.01, ηp2 = 0.04).\n\nBODY.RESULTS.PREDICTING TEXT QUALITY IRRESPECTIVE OF THE INTERVENTION:\nOur third hypothesis relates to the prediction of text quality regardless of group membership. We calculated a hierarchical regression analysis on text quality (measured at t3) by entering gender as the first predictor. In the second step, we entered two working memory measures (t1) and visuo-motor integration (t1), and in the final step, we entered the transcription measures, i.e. number of inversions in velocity (t3) and speed (t3) as well as spelling (t3). In order to investigate the influence of a clear-cut low automaticity, the number of inversions in velocity measure used for that analysis was dichotomized above one standard deviation (see Table 1). The final regression equation was significant, indicating that the included variables are able to predict text quality (adj. r2 = 0.21). The results presented in Table 3 reveal that gender, the phonological part of working memory, spelling and the dichotomized NIV contributed to this prediction, while the visual part of working memory, visuo-motor integration and writing speed did not.\nTable 3 Summary of hierarchical regression analysis for variables predicting text quality\nVariable | Model 1: B | SE B | β | Model 2: B | SE B | β | Model 3: B | SE B | β\nGender | −0.27 | 0.07 | −0.28** | −0.29 | 0.07 | −0.31** | −0.23 | 0.07 | −0.25**\nWM word span | | | | 0.20 | 0.05 | 0.26** | 0.15 | 0.05 | 0.20**\nWM visuo-spatial sketch pad | | | | −0.00 | 0.02 | 0.02 | 0.00 | 0.03 | 0.02\nVMI | | | | 0.02 | 0.01 | 0.08 | 0.02 | 0.01 | 0.09\nHandwriting speed | | | | | | | 0.00 | 0.00 | 0.02\nNIV Composing | | | | | | | −0.35 | 0.15 | −0.17*\nSpelling | | | | | | | −0.02 | 0.00 | −0.23**\nR2 | 0.08 | | | 0.16 | | | 0.20 | |\nF for change in R2 | 14.90** | | | 5.35** | | | 6.03** | |\nNIV Composing is a dichotomized variable. WM = working memory; VMI = visuo-motor integration. *p < 0.05; **p < 0.01\n\nBODY.DISCUSSION.INTERVENTION EFFECTS:\nThird graders' handwriting performance is interesting to analyze as they are in the midst of the learning process: they are more advanced than real beginners because they have mastered the first step of using the corresponding link from phonemes to graphemes and they know how to form the letters, but they are not yet fully automatized since they are not able to write words and sentences without paying attention, as skilled writers do. Therefore, this intervention study was designed to investigate the impact of a training program that aims to improve the transcription skills of third graders. Improved fluency is assumed to have a positive impact on text quality, as previous studies have demonstrated. In line with our assumptions, we could only demonstrate a short-term intervention effect on fluency with respect to the automaticity improvements of double loops that, however, did not persist until t3. Besides that, the intervention did not improve fluency among the two handwriting groups to a greater extent than among the remaining groups (spelling and reading). In contrast, we found a paradoxical effect indicating that the handwriting training could also have contributed to a reduction of fluency, as evidenced by the results related to the task of writing a meaningful word. 
It is likely that the handwriting training temporarily promoted more deliberate movements among some children and not more automaticity in the first place. Another explanation for the missing intervention effect is the fact that the children's automaticity e.g. when writing sentences was already very well developed at the beginning of the study. Irrespective of whether they really realized the promoted garlands as a principle to increase handwriting fluency in their own handwriting (which would be an interesting qualitative further investigation), after reaching a certain automaticity in their own personal style, to learn and integrate new patterns could interfere with further improvements of fluency in the short-term. Under this premise, further improvements could not so easily be achieved by means of a training lasting 5 weeks or 20 units. Furthermore, it is quite difficult to prevent a control group like our reading group from handwriting \"training\" over 5 weeks of schooling since handwriting is part of everyday activities beyond the intervention units. However, it is possible that a program lasting longer than 5 weeks and more training units can enhance transcription skills among third graders as was the case in the study conducted by Alves et al. (2015). As the intervention did not yield differential effects on fluency, it is theoretically congruent that we also did not find any intervention effects on text quality since the former was thought to be a precondition of the latter. The fact that we did not find an effect on the spelling training is perhaps less surprising. The improvements in spelling develop slowly and are hardly measurable within several weeks.\n\nBODY.DISCUSSION.AUTOMATICITY OF HANDWRITING, WORKING MEMORY, SPELLING, AND TEXT QUALITY:\nAs outlined above transcription skills facilitate composing especially among young children because at that age handwriting is not yet automated and therefore working memory resource consuming (Graham, Berninger, Abbott, Abbott, & Whitaker, 1997). In line with this assumption the multiple regression analysis revealed that text quality positively was both related to the phonological loop component of working memory and to transcription measures such as fluency and spelling. In some contradiction with previous research, however, text quality was independent from the visuo-spatial component of working memory. With respect to transcription, interestingly, speed writing per se did not predict text quality whereas it was a dichotomized variable consisting of very low automaticity versus normal to good automaticity (measured as number of inversions of velocity) when writing a short text that emerged as an independent predictor of text quality. Thus, children whose cognitive resources are consumed by the transcription process because of very low automaticity and reduced spelling capacity lack that capacity when composing. This, in turn, is one possible reason for low text quality, a finding that is consistent with Prunty, Barnett, Wilmut and Plumb (2016), who investigated children with developmental coordination disorder, although they used other measures of handwriting fluency.\n\nBODY.DISCUSSION.GENDER:\nGender successfully predicted text quality. Referring to gender it is well known that boys' reading fluency competencies are lower than girls' (Bos, Tarelli, Bremerich-Vos, & Schwippert, 2012), and also the text quality of boys seems to be constantly lower than that of girls (Kim et al., 2015). 
Thus, we replicated previous results and controlled for the gender effect as we entered gender first into the regression.\n\nBODY.CONCLUSION:\nAccording to Santangelo and Graham (2016), teaching handwriting is an important duty over all primary school years. Although they advocated an individualized instruction taking into account different stages of graphomotor development and the capabilities to deal with new patterns, our design followed an experimental logic by controlling as many factors as possible. Therefore, we have to admit that our trainings among third graders do not fulfil the demands of an individualized instruction. As a recommendation to the involved schools and to teachers of third graders, we therefore propose to start the teaching on how to join letters by garlands earlier (that means at the beginning of the third class) when children are not yet fully automatized in their handwriting, to observe learning progress carefully and to offer suitable individualized training as it is realized in the official materials in the meanwhile (Jurt Betschart, Hurschler Lichtsteiner, & Reber, 2016).\n\n**Question:** Compared to the spelling training and the reading groups what was the result of the combined and in the pure handwriting group on increase of fluency in the double loops tasks?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly increased\n" } ]
significantly increased
1230
[ { "role": "user", "content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Intrathecal Morphine Plus General Anesthesia in Cardiac Surgery: Effects on Pulmonary Function, Postoperative Analgesia, and Plasma Morphine Concentration\n\n ABSTRACT.OBJECTIVES:\nTo evaluate the effects of intrathecal morphine on pulmonary function, analgesia, and morphine plasma concentrations after cardiac surgery.\n\nABSTRACT.INTRODUCTION:\nLung dysfunction increases morbidity and mortality after cardiac surgery. Regional analgesia may improve pulmonary outcomes by reducing pain, but the occurrence of this benefit remains controversial.\n\nABSTRACT.METHODS:\nForty-two patients were randomized for general anesthesia (control group n=22) or 400 μg of intrathecal morphine followed by general anesthesia (morphine group n=20). Postoperative analgesia was accomplished with an intravenous, patient-controlled morphine pump. Blood gas measurements, forced vital capacity (FVC), forced expiratory volume (FEV), and FVC/FEV ratio were obtained preoperatively, as well as on the first and second postoperative days. Pain at rest, profound inspiration, amount of coughing, morphine solicitation, consumption, and plasma morphine concentration were evaluated for 36 hours postoperatively. Statistical analyses were performed using the repeated measures ANOVA or Mann-Whiney tests (*p<0.05).\n\nABSTRACT.RESULTS:\nBoth groups experienced reduced FVC postoperatively (3.24 L to 1.38 L in control group; 2.72 L to 1.18 L in morphine FEV1 (p=0.085), group), with no significant decreases observed between groups. The two groups also exhibited similar results for FEV1/FVC (p=0.68) and PaO2/FiO2 ratio (p=0.08). The morphine group reported less pain intensity (evaluated using a visual numeric scale), especially when coughing (18 hours postoperatively: control group= 4.73 and morphine group= 1.80, p=0.001). Cumulative morphine consumption was reduced after 18 hours in the morphine group (control group= 20.14 and morphine group= 14.20 mg, p=0.037). 
The plasma morphine concentration was also reduced in the morphine group 24 hours after surgery (control group= 15.87 ng.mL−1 and morphine group= 4.08 ng.mL−1, p=0.029).\n\nABSTRACT.CONCLUSIONS:\nIntrathecal morphine administration did not significantly alter pulmonary function; however, it improved patient analgesia and reduced morphine consumption and morphine plasma concentration.\n\nBODY.INTRODUCTION:\nLung dysfunction after cardiac surgery is the most frequently reported cause of postoperative morbidity.1,2 In fact, postoperative atelectasis occurs more frequently in patients undergoing cardiac surgery with cardiopulmonary bypass (CPB) than in any other type of surgery3 In patients undergoing cardiac surgery, death caused by respiratory complications has been reported as being more frequent than that due to cardiac causes.4 Adequate perioperative analgesia enables full expansion of the chest, thereby contributing to reduce lung collapse in spontaneously breathing patients. This therapeutic intervention may reduce morbidity, hospital costs, and length of stay, thus improving patient quality of life and reducing the incidence of chronic pain.5 The impact of postoperative regional analgesia on major outcomes in non-cardiac surgery has been studied.6–8 Small doses of opioids administered to the central nervous system provide adequate analgesia, reducing the risks of intravenous analgesic administration, such as respiratory depression, pruritus, nausea, and vomiting. However, in the cardiac surgical population, regional anesthesia techniques are not routinely applied due to the scarcity of supporting studies and administration risks associated with preoperative systemic anticoagulation9 Low doses of intrathecal morphine used to control postoperative pain has been shown to promote prolonged analgesia,5 and has been associated with a lower risk of hematoma formation compared to the epidural technique.10 Intrathecal morphine has been studied for several years in patients undergoing cardiac surgery, but these studies were primarily directed towards evaluation of pain control and hemodynamic stability, and did not consider its possible impact of intrathecal morphine on respiratory function recovery.11 The objective of this study was to evaluate the effects of intrathecal morphine on pulmonary function, postoperative analgesia, and morphine plasma concentrations in patients undergoing coronary artery bypass surgery (CABG) with cardiopulmonary bypass (CPB) in comparison to standard intravenous analgesia.\n\nBODY.MATERIALS AND METHODS:\nAfter approval of the hospital ethics committee and collection of free informed consent, forty-two patients undergoing CABG with CPB and ranging in age from 18 to 80 years were included. Exclusion criteria were an ejection fraction below 40%, contraindications to neuraxial blockage, coagulopathy, use of low-weight heparin, warfarin, intrathecal morphine, or a platelet aggregation inhibitor other than aspirin, systemic or local infection, combined procedures, and patients with a specific contraindication to the medication employed in this study. Preoperative aspirin use was not an exclusion criterion. Preoperative surgical risk was evaluated using Higgins Surgical Risk Scale for cardiac surgery, and patients were considered: (1) minimum risk, (2) low risk, (3) moderate, (4) high, (5) extreme risk12. 
The patients were randomly assigned to receive general anesthesia with prior administration of intrathecal morphine at a dosage of 400 μg (morphine group n=20) or general anesthesia alone (control group n=22) according to a simple computer-generated list.\n\nBODY.MATERIALS AND METHODS.ANESTHESIA AND SURGICAL PROCEDURES:\nOn the day of the operation, the patients received 1 to 2 mg kg −1 of midazolam orally 30 minutes before surgery, up to a maximal dose of 15 mg. Patients were monitored by continuous ECG and ST-analysis, pulse oximetry, and an invasive arterial line inserted in the right radial artery. In the morphine group, a 400-μg intrathecal injection of morphine was administered using a 27-gauge spinal needle in the L3-L4 space prior to induction of general anesthesia. If intrathecal puncture was not successful after two attempts, the patient was excluded from the study. After pre-oxygenation, general anesthesia was induced using 0.3 mg/kg of hypnomidate, 0.1 mg kg−1 of pancuroniun bromide, and 0.5 μg kg−1 of sufentanil first as a bolus, followed by a continuous infusion at 0.5 μg kg−1 h−1. Mechanical ventilation was initiated using a Cícero ventilator (Dräger®, Germany) with a tidal volume of 8 ml kg−1, respiratory frequency of 12 ipm, oxygen inspired fraction of 0.6%, and 5 cm of H2O PEEP. A nasoesophageal thermometer, bladder catheter, and central venous catheter were inserted after anesthesia induction. When judged necessary by the attending anesthesiologist, a pulmonary artery catheter was placed in the right internal jugular vein. Immediately after intubation and after CPB weaning, a lung expansion maneuver using an airway pressure of 30 cmH2O for 20 seconds was performed to revert any intraoperative lung collapse. During CPB, hypnosis was maintained with a propofol infusion aimed at maintaining a calculated plasma propofol concentration of 2.5 ug ml−1 according to the Marsh model13. All patients received 1 g of methylprednisolone intravenously before initiation of CPB according to the institutional protocol. Sufentanil infusion was terminated at the moment of skin suture. After arrival in the intensive care unit (ICU), a patient-controlled analgesia pump (PCA) programmed for a 1 mg bolus of morphine with an administration interval locked at 5 minutes and free demand was installed. Dipyrone (30 mg kg−1) was administered if patients presented persistent pain despite the use of PCA. Tracheal extubation was performed when patients were fully awake and responsive to verbal commands, as well as when the peripheral oximetry was greater than 94%, spontaneous respiratory frequency was greater than 10, temperature was greater than 36° C, and the patient was hemodynamically stable such that they were bleeding less than 100 ml/h.\n\nBODY.MATERIALS AND METHODS.VENTILATORY AND PAIN SCORE EVALUATION:\nSpirometry using an Easy One® portable spirometer (Niche Medical, West Leederville, Australia) was also performed preoperatively in the first and second postoperative days (POD) to measure forced vital capacity (FVC), forced expiratory volume in the first second (FEV1), and thus the FEV1/FVC ratio. Arterial blood samples were collected during the preoperative period, immediately before surgery, after anesthesia induction, at the end of surgery, and in the first and second postoperative days to evaluate the PaO2/FiO2 ratio. 
Pain was evaluated using a visual numeric scale from zero to ten at rest, at the time of profound inspiration, and after cough at 3, 6, 12, 18, 24, and 36 hours postoperatively. Morphine cumulative consumption and solicitation was also evaluated at the same times listed above. All postoperative pain, respiratory function, and morphine utilization measurements were obtained by individuals who were not involved in patient randomization and anesthesia administration. Blood samples assessed for plasma morphine concentrations were collected intraoperatively, immediately after stopping sufentanil infusion, after 5, 15, 30, and 60 minutes, and after 3, 6, 12, 18, 24, and 36 hours in the ICU. The blood samples were centrifuged at 10.000 rpm for 10 minutes, and the supernatant was frozen at − 70 oC for later analysis. Plasma morphine concentrations were measured using a mass spectrophotometer.\n\nBODY.MATERIALS AND METHODS.SAMPLE SIZE AND STATISTICAL ANALYSIS:\nA sample size of at least 36 patients was deemed necessary to detect a statistically significant decrease (p < 0.05) in cumulative morphine consumption of 20% between the morphine and control group, assuming a standard deviation of 10 mg in a single-tail paired Student's t-test using G Power 3 software (Heinrich-Heine-Universität Düsseldorf, Düsseldorf, Germany). Statistical analyses were performed using SPSS 16.0 software (SPSS Inc., Chicago, Ilinois). The normal distribution of the collected data was confirmed by means of the Kolmogorov-Smirnov test. Demographic and surgical data were compared between groups using the unpaired Student's t-test. Spirometric variables were compared over time between the groups using repeated measures analysis of variance, followed by the Student-Neumann-Keuls post hoc test when necessary. Pain scores and cumulative morphine consumption were analyzed by means of the Friedman test, followed by the Mann-Whitney test when necessary. All data are presented as mean values ± standard deviation, error means, or as otherwise described. The significance level was fixed at 5%.\n\nBODY.RESULTS:\nPatient characteristics and general variables are presented in Table 1. There were no differences between groups with regard to the variables represented in Table 1. Mean surgery duration and CPB length were similar in both groups (392 ± 70 vs. 379 ± 91 minutes) and (98 ± 26 vs. 95 ± 38 minutes) (p=0.606 and p=0.780, respectively). Minimal temperature values were 31.3 ± 2.7°C in the control group and 30.7 ± 2.7°C in the morphine group (p=0.456). All patients achieved hemodynamic stability in the immediate postoperative period, permitting weaning off of the vasoactive drugs within the first 24 hours. The total dose of sufentanil employed was 257.77 μg ± 81.41 in the control group and 200.87 μg ± 55.11 in the morphine group (p= 0.012), and was related to the duration of surgery. Intrathecal morphine did not shorten the postoperative time to extubation between groups (396 ± 234 min in the control group and 349 min ± 175 in the morphine group, p=0.466). None of the patients in our cohort developed respiratory failure after extubation or needed postoperative non-invasive ventilation. All intrathecal punctures were successful at the first or second attempt. No cases of spinal hematoma or treatable pruritus were observed. 
All patients were discharged from the ICU without postoperative complications.\n\nBODY.RESULTS.GAS EXCHANGE AND SPIROMETRIC ANALYSIS:\nBoth groups presented a significant decrease in FVC and FEV1 during the first and second postoperative days, with no significant difference between the groups (p=0.068 and 0.085, respectively). The FEV1/FVC ratio did not differ between groups (p=0.68), as presented in Table 2. In addition, the PaO2/FiO2 ratio did not differ between groups at any time point (p>0.05), as presented in Figure 1.\n\nBODY.RESULTS.PAIN SCORES, MORPHINE CONSUMPTION, AND PLASMA CONCENTRATIONS:\nThe intensity of pain at rest and with cough was significantly lower in the morphine group at 12, 18, 24, and 36 hours postoperatively when compared to the control group (p<0.05), as shown in panels A and C of Figure 2. With profound inspiration, patients in the morphine group reported pain levels that were significantly less intense at 12, 18, and 24 hours after surgery (panel B of Figure 2). The venous morphine solicitation count was lower in the morphine group at 18 hours (control group= 108.3 ± 213.8 and morphine group= 20.2 ± 30.3) and 24 hours (control group= 144.4 ± 251.2 and morphine group= 32.4 ± 38.05) (p<0.05). Cumulative morphine consumption was also reduced in the morphine group at 18 hours (control group= 20.14 ± 17.73 mg and morphine group= 14.10 ± 26.15 mg), 24 hours (control group= 27.82 ± 22.77 mg and morphine group= 13.55 ± 10.49 mg), and 36 hours (control group= 38.50 ± 26.70 mg and morphine group= 24.30 ± 14.59 mg) postoperatively (Figure 3), as well as upon extubation (control group= 3.32 ± 8.17 mg and morphine group= 0.15 ± 0.49 mg) (p<0.05). Plasma morphine levels were zero at the end of surgery in both groups. However, plasma levels were significantly reduced in the morphine group during the postoperative period 24 hours after the operation (control group= 15.87 ± 18.05 ng mL−1 and morphine group= 4.08 ± 5.28 ng mL−1, p=0.029), as presented in Figure 4. Nine patients in the control group and five patients in the morphine group requested rescue analgesia with dipyrone (p=0.460).\n\nBODY.DISCUSSION:\nIn the present study, intrathecal morphine administration in patients undergoing coronary artery bypass grafting reduced pain scores, postoperative morphine solicitation, and morphine consumption and its plasma concentration. However, despite better analgesic control, no significant improvements in spirometry or gas exchange were observed. Respiratory system dysfunction is the most prevalent organic complication after cardiac surgery, even in the absence of preoperative pulmonary disease1. 
Perioperative atelectasis leads to a pulmonary ventilation/perfusion mismatch that likely serves as a major cause of shunt and hypoxemia after cardiopulmonary bypass.14 The impact of postoperative analgesia techniques on pulmonary function outcome has been well studied in non-cardiac surgery, but few studies to date have examined this matter in cardiac surgery.8 A study that enrolled 113 patients submitted for different types of cardiac surgery observed that thoracic epidural analgesia (TEA) combined with general anesthesia followed by patient-controlled thoracic epidural analgesia offered no major advantages with respect to lung function, length of hospital stay, quality of recovery, or morbidity when compared to general anesthesia alone, with both groups followed by patient-controlled analgesia with intravenous morphine.15 In contrast, a meta-analysis including 15 trials enrolling 1.178 patients suggested that TEA significantly reduced the risk of pulmonary complications (OR 0.41), time to tracheal extubation by 4.5 hours, and analog pain scores at rest and with activity.9 In the present study, intrathecal instead of epidural morphine was used because there is no evidence for analgesia superiority between them. However, the risk of spinal hematoma is lower with the intrathecal technique,16 especially with the required anticoagulation prior to cardiac surgery. The last two meta-analyses used to evaluate the use of intrathecal morphine in cardiac surgery did not reveal a correlation between the efficacy of pain control and better respiratory outcome,3,11 likely because these studies did not employ lung function as an end-point11. Some of these studies described no improvement of pain control10 or modest reduction of pain scores,11 with higher doses correlating with better postoperative analgesia17 as well as longer time to extubation.18 In line with the beginning of the fast-track era and its goal to decrease the intubation time after surgery, changes in anesthesia techniques and a reduction in the dose of intrathecal morphine were previously associated with a decreased time to extubation,20 with some cases of prolonged respiratory depression.21 A previous study compared intrathecal morphine doses of 250 and 500 μg and placebo, and showed the superiority of intrathecal analgesia, without alterations in gas exchange19 or an increase in time to extubation.22 In the present study, the selected dose of 400 μg of morphine had no effect on time to tracheal extubation in the postoperative period. According to our results, use of low doses of intrathecal morphine can decrease the intensity of postoperative pain at different times, such as at rest, upon profound inspiration, and upon cough. This suggests that not only does intrathecal morphine provide better analgesia at rest, but also during activities that support walking, respiratory maneuvers, and the use of incentive spirometry. This is important to ensure the quality of patient recovery, since it has been demonstrated that respiratory physiotherapy may reduce rates of pulmonary complications.23–25 Improved analgesia may allow for a decrease in the solicitation and consumption of postoperative analgesics, resulting in lower plasma analgesic levels, as observed in this study. 
A previous cardiac surgery study that also used intrathecal morphine revealed a venous consumption of 13.5 mg at a dose of 250 μg, and a consumption of 11.7 mg of morphine with an intrathecal dose of 500 μg compared to 21.7 mg in the placebo group 24 hours after the operation.22 In a meta-analysis, 668 patients undergoing cardiac surgery were reported to consume 11 mg of intravenous morphine in intrathecal group .9 The importance of studying the plasma concentration of morphine as a marker for progress in the of treatment of pain was considered by Bonica in 1985, but only a few studies have investigated this idea26 Only one study has been published regarding plasma morphine concentrations during patient control analgesia techniques,27 and no studies reported intrathecal morphine use in the postoperative period or its correlation with pain scores. In patients undergoing hysterectomy with postoperative PCA, an increase in plasma morphine levels resulted in clinical effects 6 to 12 hours after surgery. This lag could be explained by the delay in morphine passing through the hematoencephalic barrier. After this period, a significant reduction in morphine consumption and pain scores was observed, and this can be explained by maintenance of a steady-state level of morphine sufficient to maintain an effective level of analgesia.27 Mutations in the opioid μ receptor were described to support the interindividual variability theory in the population and its possible influences on the use of morphine.28–30 Although the number of patients in our study was insufficient to fully evaluate the adverse effects of intrathecal morphine, no cases of spinal hematoma or treatable pruritus were observed in the present study. Previous studies utilizing doses between 5–24 μg kg−1 for cardiac surgery described a 30% incidence of pruritus,21 which is the most common symptom observed after morphine treatment. Fortunately, this common side effect becomes severe in only 1% of cases.31 The absence of an intrathecal puncture in the control group may have limited our ability to observe adverse effects. Despite the risk of spinal hematoma, the use of intrathecal analgesia in cardiac surgery32 has increased. This increase may be explained by the demand for better analgesia with chest tubes, sternotomy, deambulation, and physiotherapy maneuvers in the absence of a trained pain management team and patient-control pumps. Thus, our study has direct clinical applications. Future studies are being conducted to study the effects of pain control on quality of life, satisfaction, and postoperative recovery, since no additional benefits of intrathecal morphine beyond pain control have been shown until now. None of these quality of life or recovery variables have been correlated with better pain scores, but a modest reduction in satisfaction was shown in the presence of adverse effects.33 Considering the risks of nerve blockade and required anticoagulation in cardiac surgery, further efforts are required to identify better pain control to ensure better patient outcomes.\n\n**Question:** Compared to general anesthesia what was the result of 400 ?g of intrathecal morphine followed by general anesthesia on pain intensity?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n" } ]
[ { "role": "assistant", "content": "**Answer:** significantly decreased\n" } ]
significantly decreased