Publication
Journal: European Journal of Cancer
February/2/2009
Abstract
BACKGROUND
Assessment of the change in tumour burden is an important feature of the clinical evaluation of cancer therapeutics: both tumour shrinkage (objective response) and disease progression are useful endpoints in clinical trials. Since RECIST was published in 2000, many investigators, cooperative groups, industry and government authorities have adopted these criteria in the assessment of treatment outcomes. However, a number of questions and issues have arisen which have led to the development of a revised RECIST guideline (version 1.1). Evidence for changes, summarised in separate papers in this special issue, has come from assessment of a large data warehouse (>6500 patients), simulation studies and literature reviews.
HIGHLIGHTS OF REVISED RECIST 1.1
Major changes include: Number of lesions to be assessed: based on evidence from numerous trial databases merged into a data warehouse for analysis purposes, the number of lesions required to assess tumour burden for response determination has been reduced from a maximum of 10 to a maximum of five in total (and from a maximum of five to two per organ). Assessment of pathological lymph nodes is now incorporated: nodes with a short axis of ≥15 mm are considered measurable and assessable as target lesions, and the short axis measurement should be included in the sum of lesions when calculating tumour response; nodes that shrink to <10 mm short axis are considered normal. Confirmation of response is required for trials with response as the primary endpoint, but is no longer required in randomised studies, since the control arm serves as an appropriate means of interpreting the data. Disease progression is clarified in several respects: in addition to the previous definition of progression in target disease (a 20% increase in sum), a 5 mm absolute increase is now also required, to guard against over-calling PD when the total sum is very small. Furthermore, guidance is offered on what constitutes 'unequivocal progression' of non-measurable/non-target disease, a source of confusion in the original RECIST guideline, and a section on the detection of new lesions, including the interpretation of FDG-PET scan assessment, is included. Imaging guidance: the revised RECIST includes a new imaging appendix with updated recommendations on the optimal anatomical assessment of lesions.
CONCLUSIONS
A key question considered by the RECIST Working Group in developing RECIST 1.1 was whether it was appropriate to move from anatomic unidimensional assessment of tumour burden to either volumetric anatomical assessment or to functional assessment with PET or MRI. It was concluded that, at present, there is not sufficient standardisation or evidence to abandon anatomical assessment of tumour burden. The only exception to this is in the use of FDG-PET imaging as an adjunct to determination of progression. As is detailed in the final paper in this special issue, the use of these promising newer approaches requires appropriate clinical validation studies.
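The target-lesion rules summarised above translate directly into code. A minimal sketch (my own illustration, not from the guideline; the function and example values are invented) of classifying a follow-up assessment from sums of lesion diameters:

```python
# Hypothetical sketch of RECIST 1.1 target-lesion response classification.
def recist_target_response(baseline_sum_mm, nadir_sum_mm, current_sum_mm):
    """Classify target-lesion response from sums of lesion diameters (mm)."""
    # Progressive disease: >=20% increase over the nadir AND an absolute
    # increase of >=5 mm (the new safeguard against over-calling PD).
    if (current_sum_mm >= 1.2 * nadir_sum_mm
            and current_sum_mm - nadir_sum_mm >= 5):
        return "PD"
    # Complete response: disappearance of all target lesions (RECIST also
    # requires nodal short axes <10 mm, which this simplification omits).
    if current_sum_mm == 0:
        return "CR"
    # Partial response: >=30% decrease from the baseline sum.
    if current_sum_mm <= 0.7 * baseline_sum_mm:
        return "PR"
    return "SD"  # stable disease otherwise

print(recist_target_response(100, 40, 48))  # 20% over nadir and +8 mm -> "PD"
```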
Publication
Journal: Systematic Biology
December/23/2003
Abstract
The increase in the number of large data sets and the complexity of current probabilistic sequence evolution models necessitates fast and reliable phylogeny reconstruction methods. We describe a new approach, based on the maximum-likelihood principle, which clearly satisfies these requirements. The core of this method is a simple hill-climbing algorithm that adjusts tree topology and branch lengths simultaneously. This algorithm starts from an initial tree built by a fast distance-based method and modifies this tree to improve its likelihood at each iteration. Due to this simultaneous adjustment of the topology and branch lengths, only a few iterations are sufficient to reach an optimum. We used extensive and realistic computer simulations to show that the topological accuracy of this new method is at least as high as that of the existing maximum-likelihood programs and much higher than the performance of distance-based and parsimony approaches. The reduction of computing time is dramatic in comparison with other maximum-likelihood packages, while the likelihood maximization ability tends to be higher. For example, only 12 min were required on a standard personal computer to analyze a data set consisting of 500 rbcL sequences with 1,428 base pairs from plant plastids, thus reaching a speed of the same order as some popular distance-based and parsimony algorithms. This new method is implemented in the PHYML program, which is freely available on our web page: http://www.lirmm.fr/w3ifa/MAAS/.
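To make the hill-climbing idea concrete on the smallest possible case, here is a toy sketch (mine, not PHYML's algorithm): maximum-likelihood estimation of a single branch length between two sequences under the Jukes-Cantor model, where the climb converges to the known closed-form distance.

```python
import math

def jc69_loglik(t, n_same, n_diff):
    """Log-likelihood of a pairwise alignment under Jukes-Cantor,
    with t = expected substitutions per site."""
    p_diff = 0.75 * (1.0 - math.exp(-4.0 * t / 3.0))  # P(site differs)
    return n_same * math.log(1.0 - p_diff) + n_diff * math.log(p_diff)

def hill_climb(n_same, n_diff, t=0.5, step=0.1, tol=1e-8):
    """Crude 1-D hill climbing: halve the step whenever no move improves."""
    best = jc69_loglik(t, n_same, n_diff)
    while step > tol:
        moved = False
        for cand in (t + step, t - step):
            if cand > 0:
                ll = jc69_loglik(cand, n_same, n_diff)
                if ll > best:
                    t, best, moved = cand, ll, True
        if not moved:
            step /= 2.0
    return t

# 1000 sites, 150 observed differences; compare with the closed-form
# JC distance -3/4 * ln(1 - 4p/3) for p = 0.15 (both ~0.1674).
print(hill_climb(850, 150))
print(-0.75 * math.log(1 - 4 * 0.15 / 3))
```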
Publication
Journal: Science
April/21/1999
Abstract
Human mesenchymal stem cells are thought to be multipotent cells, which are present in adult marrow, that can replicate as undifferentiated cells and that have the potential to differentiate to lineages of mesenchymal tissues, including bone, cartilage, fat, tendon, muscle, and marrow stroma. Cells that have the characteristics of human mesenchymal stem cells were isolated from marrow aspirates of volunteer donors. These cells displayed a stable phenotype and remained as a monolayer in vitro. These adult stem cells could be induced to differentiate exclusively into the adipocytic, chondrocytic, or osteocytic lineages. Individual stem cells were identified that, when expanded to colonies, retained their multilineage potential.
Publication
Journal: Genetics
September/16/1974
Abstract
Methods are described for the isolation, complementation and mapping of mutants of Caenorhabditis elegans, a small free-living nematode worm. About 300 EMS-induced mutants affecting behavior and morphology have been characterized and about one hundred genes have been defined. Mutations in 77 of these alter the movement of the animal. Estimates of the induced mutation frequency of both the visible mutants and X chromosome lethals suggest that, just as in Drosophila, the genetic units in C. elegans are large.
Publication
Journal: CA - A Cancer Journal for Clinicians
March/10/2014
Abstract
Each year, the American Cancer Society estimates the numbers of new cancer cases and deaths that will occur in the United States in the current year and compiles the most recent data on cancer incidence, mortality, and survival. Incidence data were collected by the National Cancer Institute, the Centers for Disease Control and Prevention, and the North American Association of Central Cancer Registries, and mortality data were collected by the National Center for Health Statistics. A total of 1,665,540 new cancer cases and 585,720 cancer deaths are projected to occur in the United States in 2014. During the most recent 5 years for which there are data (2006-2010), delay-adjusted cancer incidence rates declined slightly in men (by 0.6% per year) and were stable in women, while cancer death rates decreased by 1.8% per year in men and by 1.4% per year in women. The combined cancer death rate (deaths per 100,000 population) has been continuously declining for 2 decades, from a peak of 215.1 in 1991 to 171.8 in 2010. This 20% decline translates to the avoidance of approximately 1,340,400 cancer deaths (952,700 among men and 387,700 among women) during this time period. The magnitude of the decline in cancer death rates from 1991 to 2010 varies substantially by age, race, and sex, ranging from no decline among white women aged 80 years and older to a 55% decline among black men aged 40 years to 49 years. Notably, black men experienced the largest drop within every 10-year age group. Further progress can be accelerated by applying existing cancer control knowledge across all segments of the population.
Publication
Journal: Proceedings of the National Academy of Sciences of the United States of America
May/20/2001
Abstract
Microarrays can measure the expression of thousands of genes to identify changes in expression between different biological states. Methods are needed to determine the significance of these changes while accounting for the enormous number of genes. We describe a method, Significance Analysis of Microarrays (SAM), that assigns a score to each gene on the basis of change in gene expression relative to the standard deviation of repeated measurements. For genes with scores greater than an adjustable threshold, SAM uses permutations of the repeated measurements to estimate the percentage of genes identified by chance, the false discovery rate (FDR). When the transcriptional response of human cells to ionizing radiation was measured by microarrays, SAM identified 34 genes that changed at least 1.5-fold with an estimated FDR of 12%, compared with FDRs of 60 and 84% by using conventional methods of analysis. Of the 34 genes, 19 were involved in cell cycle regulation and 3 in apoptosis. Surprisingly, four nucleotide excision repair genes were induced, suggesting that this repair pathway for UV-damaged DNA might play a previously unrecognized role in repairing DNA damaged by ionizing radiation.
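A minimal sketch of the SAM idea, assuming numpy (a simplification of the published procedure; s0, the threshold and the simulated data are illustrative): score each gene with a moderated t-like statistic, then estimate the FDR from permuted group labels.

```python
import numpy as np

rng = np.random.default_rng(0)

def sam_scores(x, groups, s0=0.1):
    """Moderated t-like score per gene: difference of group means divided
    by the standard error plus a small fudge constant s0."""
    a, b = x[:, groups == 0], x[:, groups == 1]
    diff = a.mean(axis=1) - b.mean(axis=1)
    se = np.sqrt(a.var(axis=1, ddof=1) / a.shape[1] +
                 b.var(axis=1, ddof=1) / b.shape[1])
    return diff / (se + s0)

def permutation_fdr(x, groups, threshold, n_perm=200):
    """Estimate the FDR as (median count of genes passing the threshold under
    label permutations) / (observed count).  The published method also
    estimates the fraction of truly unchanged genes, omitted here."""
    observed = np.sum(np.abs(sam_scores(x, groups)) >= threshold)
    null = [np.sum(np.abs(sam_scores(x, rng.permutation(groups))) >= threshold)
            for _ in range(n_perm)]
    return np.median(null) / max(observed, 1), observed

# 1000 genes x 8 samples; the first 50 genes are shifted in group 1.
x = rng.normal(size=(1000, 8))
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
x[:50, groups == 1] += 3.0
fdr, n_called = permutation_fdr(x, groups, threshold=3.0)
print(f"{n_called} genes called at |score| >= 3, estimated FDR ~ {fdr:.2f}")
```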
Publication
Journal: Journal of the National Cancer Institute
February/27/2000
Abstract
Anticancer cytotoxic agents go through a process by which their antitumor activity, assessed on the basis of the amount of tumor shrinkage they can generate, is investigated. In the late 1970s, the International Union Against Cancer and the World Health Organization introduced specific criteria for the codification of tumor response evaluation. In 1994, several organizations involved in clinical research combined forces to tackle the review of these criteria on the basis of the experience and knowledge acquired since then. After several years of intensive discussions, a new set of guidelines is ready that will supersede the former criteria. In parallel to this initiative, one of the participating groups developed a model by which response rates could be derived from unidimensional measurement of tumor lesions instead of the usual bidimensional approach. This new concept has been largely validated by the Response Evaluation Criteria in Solid Tumors Group and integrated into the present guidelines. This special article also provides some philosophic background to clarify the various purposes of response evaluation. It proposes a model by which a combined assessment of all existing lesions, characterized by target lesions (to be measured) and nontarget lesions, is used to extrapolate an overall response to treatment. Methods of assessing tumor lesions are better codified, briefly within the guidelines and in more detail in Appendix I. All other aspects of response evaluation have been discussed, reviewed, and amended whenever appropriate.
Publication
Journal: Cell
February/9/2005
Abstract
We predict regulatory targets of vertebrate microRNAs (miRNAs) by identifying mRNAs with conserved complementarity to the seed (nucleotides 2-7) of the miRNA. An overrepresentation of conserved adenosines flanking the seed complementary sites in mRNAs indicates that primary sequence determinants can supplement base pairing to specify miRNA target recognition. In a four-genome analysis of 3' UTRs, approximately 13,000 regulatory relationships were detected above the estimate of false-positive predictions, thereby implicating as miRNA targets more than 5300 human genes, which represented 30% of our gene set. Targeting was also detected in open reading frames. In sum, well over one third of human genes appear to be conserved miRNA targets.
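The seed rule is simple enough to state in a few lines of code. The sketch below (illustrative, not the authors' pipeline; the sequences are a contrived example) scans a 3' UTR for perfect matches to the reverse complement of miRNA nucleotides 2-7 and flags the conserved flanking adenosine the abstract highlights:

```python
COMPLEMENT = str.maketrans("ACGU", "UGCA")

def seed_match_sites(mirna, utr):
    """Return (position, has_flanking_A) for each UTR match to the reverse
    complement of miRNA nucleotides 2-7 (the 6-mer seed)."""
    seed = mirna[1:7]                                # nucleotides 2-7
    site = seed.translate(COMPLEMENT)[::-1]          # target-side sequence
    hits = []
    for i in range(len(utr) - len(site) + 1):
        if utr[i:i + len(site)] == site:
            # The A opposite miRNA position 1 sits just 3' of the site.
            has_a1 = i + len(site) < len(utr) and utr[i + len(site)] == "A"
            hits.append((i, has_a1))
    return hits

# First 8 nt of let-7a; the UTR contains one seed match plus a flanking A.
print(seed_match_sites("UGAGGUAG", "AAACUACCUCAA"))  # [(4, True)]
```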
Publication
Journal: Nature Biotechnology
August/29/2010
Abstract
High-throughput mRNA sequencing (RNA-Seq) promises simultaneous transcript discovery and abundance estimation. However, this would require algorithms that are not restricted by prior gene annotations and that account for alternative transcription and splicing. Here we introduce such algorithms in an open-source software program called Cufflinks. To test Cufflinks, we sequenced and analyzed >430 million paired 75-bp RNA-Seq reads from a mouse myoblast cell line over a differentiation time series. We detected 13,692 known transcripts and 3,724 previously unannotated ones, 62% of which are supported by independent expression data or by homologous genes in other species. Over the time series, 330 genes showed complete switches in the dominant transcription start site (TSS) or splice isoform, and we observed more subtle shifts in 1,304 other genes. These results suggest that Cufflinks can illuminate the substantial regulatory flexibility and complexity in even this well-studied model of muscle development and that it can improve transcriptome-based genome annotation.
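Cufflinks reports abundances in FPKM (fragments per kilobase of transcript per million mapped fragments). As a back-of-the-envelope sketch of the unit (the real estimator apportions fragments among isoforms with a likelihood model, which this ignores; the numbers are invented):

```python
def fpkm(fragment_count, transcript_length_bp, total_mapped_fragments):
    """Fragments Per Kilobase of transcript per Million mapped fragments."""
    kb = transcript_length_bp / 1_000
    millions = total_mapped_fragments / 1_000_000
    return fragment_count / (kb * millions)

# 500 fragments on a 2 kb transcript in a 40M-fragment library:
print(fpkm(500, 2_000, 40_000_000))  # 6.25
```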
Publication
Journal: Bioinformatics
June/17/2009
Abstract
BACKGROUND
A new protocol for sequencing the messenger RNA in a cell, known as RNA-Seq, generates millions of short sequence fragments in a single run. These fragments, or 'reads', can be used to measure levels of gene expression and to identify novel splice variants of genes. However, current software for aligning RNA-Seq data to a genome relies on known splice junctions and cannot identify novel ones. TopHat is an efficient read-mapping algorithm designed to align reads from an RNA-Seq experiment to a reference genome without relying on known splice sites.
RESULTS
We mapped the RNA-Seq reads from a recent mammalian RNA-Seq experiment and recovered more than 72% of the splice junctions reported by the annotation-based software from that study, along with nearly 20,000 previously unreported junctions. The TopHat pipeline is much faster than previous systems, mapping nearly 2.2 million reads per CPU hour, which is sufficient to process an entire RNA-Seq experiment in less than a day on a standard desktop computer. We describe several challenges unique to ab initio splice site discovery from RNA-Seq reads that will require further algorithm development.
AVAILABILITY
TopHat is free, open-source software available from http://tophat.cbcb.umd.edu.
SUPPLEMENTARY INFORMATION
Supplementary data are available at Bioinformatics online.
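The core problem TopHat solves, aligning reads across unknown splice junctions, can be illustrated with a deliberately naive sketch (not TopHat's algorithm; it uses exact string matching, invented parameters and a toy genome): map the two halves of a read independently and report a plausible gap as a putative junction.

```python
def naive_junction_map(read, genome, min_intron=20, max_intron=10_000):
    """Map each half of a read exactly; a gap between the halves suggests
    a splice junction.  Returns (left_pos, right_pos, intron_len) or None."""
    half = len(read) // 2
    left, right = read[:half], read[half:]
    for i in range(len(genome) - half + 1):
        if genome[i:i + half] != left:
            continue
        start = i + half                 # search downstream of the left half
        j = genome.find(right, start)
        while j != -1:
            intron = j - start
            if min_intron <= intron <= max_intron:
                return i, j, intron
            j = genome.find(right, j + 1)
    return None

genome = "TTTACGTACGT" + "G" * 50 + "AATTCCGGAA" + "C" * 5
read = "ACGTACGT" + "AATTCCGG"           # spans the 50-bp 'intron'
print(naive_junction_map(read, genome))  # (3, 61, 50)
```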
Publication
Journal: Biostatistics
October/22/2003
Abstract
In this paper we report exploratory analyses of high-density oligonucleotide array data from the Affymetrix GeneChip system with the objective of improving upon currently used measures of gene expression. Our analyses make use of three data sets: a small experimental study consisting of five MGU74A mouse GeneChip arrays; part of the data from an extensive spike-in study conducted by Gene Logic and Wyeth's Genetics Institute involving 95 HG-U95A human GeneChip arrays; and part of a dilution study conducted by Gene Logic involving 75 HG-U95A GeneChip arrays. We display some familiar features of the perfect match and mismatch probe (PM and MM) values of these data, and examine the variance-mean relationship with probe-level data from probes believed to be defective, and so delivering noise only. We explain why we need to normalize the arrays to one another using probe level intensities. We then examine the behavior of the PM and MM using spike-in data and assess three commonly used summary measures: Affymetrix's (i) average difference (AvDiff) and (ii) MAS 5.0 signal, and (iii) the Li and Wong multiplicative model-based expression index (MBEI). The exploratory data analyses of the probe level data motivate a new summary measure that is a robust multi-array average (RMA) of background-adjusted, normalized, and log-transformed PM values. We evaluate the four expression summary measures using the dilution study data, assessing their behavior in terms of bias, variance and (for MBEI and RMA) model fit. Finally, we evaluate the algorithms in terms of their ability to detect known levels of differential expression using the spike-in data. We conclude that there is no obvious downside to using RMA and attaching a standard error (SE) to this quantity using a linear model which removes probe-specific affinities.
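One ingredient of RMA, normalizing arrays to one another at the probe level, is commonly done by quantile normalization. A minimal sketch assuming numpy (ties and the background-adjustment and median-polish steps are ignored; the toy matrix is invented):

```python
import numpy as np

def quantile_normalize(x):
    """Force every array (column) to share the same distribution: replace
    each value by the mean of the values having the same within-column rank."""
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)  # rank per column
    mean_quantiles = np.sort(x, axis=0).mean(axis=1)   # target distribution
    return mean_quantiles[ranks]

# Three probes x two arrays with a strong array effect:
x = np.array([[2.0, 4.0],
              [5.0, 7.0],
              [4.0, 8.0]])
print(quantile_normalize(x))
# [[3.  3. ]
#  [6.5 5.5]
#  [5.5 6.5]]
```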
Publication
Journal: Bioinformatics
July/28/2013
Abstract
BACKGROUND
Accurate alignment of high-throughput RNA-seq data is a challenging and yet unsolved problem because of the non-contiguous transcript structure, relatively short read lengths and constantly increasing throughput of the sequencing technologies. Currently available RNA-seq aligners suffer from high mapping error rates, low mapping speed, read length limitation and mapping biases.
RESULTS
To align our large (>80 billion reads) ENCODE Transcriptome RNA-seq dataset, we developed the Spliced Transcripts Alignment to a Reference (STAR) software based on a previously undescribed RNA-seq alignment algorithm that uses sequential maximum mappable seed search in uncompressed suffix arrays followed by a seed clustering and stitching procedure. STAR outperforms other aligners by a factor of >50 in mapping speed, aligning to the human genome 550 million 2 × 76 bp paired-end reads per hour on a modest 12-core server, while at the same time improving alignment sensitivity and precision. In addition to unbiased de novo detection of canonical junctions, STAR can discover non-canonical splices and chimeric (fusion) transcripts, and is also capable of mapping full-length RNA sequences. Using Roche 454 sequencing of reverse transcription polymerase chain reaction amplicons, we experimentally validated 1960 novel intergenic splice junctions with an 80-90% success rate, corroborating the high precision of the STAR mapping strategy.
AVAILABILITY
STAR is implemented as a standalone C++ code. STAR is free open source software distributed under GPLv3 license and can be downloaded from http://code.google.com/p/rna-star/.
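The sequential maximum mappable prefix (MMP) search at the heart of STAR can be imitated naively with string search instead of an uncompressed suffix array. A toy sketch (mine; real STAR is vastly faster and also handles mismatches):

```python
def mmp_search(read, genome):
    """Sequential maximum mappable prefix search (the idea behind STAR's
    seed search, done here with str.find instead of a suffix array):
    repeatedly take the longest read prefix that occurs in the genome,
    emit it as a seed, and restart from the first unmapped base."""
    seeds, start = [], 0
    while start < len(read):
        best_len, best_pos = 0, -1
        k = 1
        while start + k <= len(read):
            pos = genome.find(read[start:start + k])
            if pos == -1:
                break
            best_len, best_pos = k, pos
            k += 1
        if best_len == 0:            # base absent from genome: skip it
            start += 1
            continue
        seeds.append((start, best_pos, best_len))
        start += best_len
    return seeds

# A read spanning a splice junction maps as two seeds on either side:
genome = "AAACCCGGGTTT" + "T" * 30 + "ACGTACGTAAAC"
read = "CCCGGG" + "ACGTACGT"
print(mmp_search(read, genome))  # [(0, 3, 6), (6, 42, 8)]
```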
Publication
Journal: Nucleic Acids Research
February/2/2009
Abstract
Functional analysis of large gene lists, derived in most cases from emerging high-throughput genomic, proteomic and bioinformatics scanning approaches, is still a challenging and daunting task. Gene-annotation enrichment analysis is a promising high-throughput strategy that increases the likelihood for investigators to identify the biological processes most pertinent to their study. Approximately 68 bioinformatics enrichment tools that are currently available in the community are collected in this survey. Tools are uniquely categorized into three major classes, according to their underlying enrichment algorithms. The comprehensive collections, unique tool classifications and associated questions/issues will provide a more comprehensive and up-to-date view regarding the advantages, pitfalls and recent trends at the level of tool classes rather than tool by tool. Thus, the survey will help tool designers/developers and experienced end users understand the underlying algorithms and pertinent details of particular tool categories/tools, enabling them to make the best choices for their particular research interests.
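Most of the surveyed tools share a statistical core: an over-representation test on a gene list. A minimal sketch assuming scipy (the counts are invented), using the hypergeometric distribution:

```python
from scipy.stats import hypergeom

def enrichment_p(study_hits, study_size, category_size, population_size):
    """P(X >= study_hits) when drawing study_size genes from a population
    containing category_size genes annotated to the term."""
    return hypergeom.sf(study_hits - 1, population_size,
                        category_size, study_size)

# 12 of our 100 interesting genes carry a GO term that annotates
# 300 of the 20,000 genes in the genome (~1.5 hits expected by chance):
p = enrichment_p(12, 100, 300, 20_000)
print(f"p = {p:.2e}")   # very small, so the term is strongly enriched
```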
Publication
Journal: Molecular Biology and Evolution
July/11/2017
Abstract
We present the latest version of the Molecular Evolutionary Genetics Analysis (MEGA) software, which contains many sophisticated methods and tools for phylogenomics and phylomedicine. In this major upgrade, MEGA has been optimized for use on 64-bit computing systems for analyzing larger datasets. Researchers can now explore and analyze tens of thousands of sequences in MEGA. The new version also provides an advanced wizard for building timetrees and includes a new functionality to automatically predict gene duplication events in gene family trees. The 64-bit MEGA is made available in two interfaces: graphical and command line. The graphical user interface (GUI) is a native Microsoft Windows application that can also be used on Mac OS X. The command line MEGA is available as native applications for Windows, Linux, and Mac OS X; these are intended for use in high-throughput and scripted analysis. Both versions are available from www.megasoftware.net free of charge.
Publication
Journal: CA - A Cancer Journal for Clinicians
March/12/2015
Abstract
Each year the American Cancer Society estimates the numbers of new cancer cases and deaths that will occur in the United States in the current year and compiles the most recent data on cancer incidence, mortality, and survival. Incidence data were collected by the National Cancer Institute (Surveillance, Epidemiology, and End Results [SEER] Program), the Centers for Disease Control and Prevention (National Program of Cancer Registries), and the North American Association of Central Cancer Registries. Mortality data were collected by the National Center for Health Statistics. A total of 1,658,370 new cancer cases and 589,430 cancer deaths are projected to occur in the United States in 2015. During the most recent 5 years for which there are data (2007-2011), delay-adjusted cancer incidence rates (13 oldest SEER registries) declined by 1.8% per year in men and were stable in women, while cancer death rates nationwide decreased by 1.8% per year in men and by 1.4% per year in women. The overall cancer death rate decreased from 215.1 (per 100,000 population) in 1991 to 168.7 in 2011, a total relative decline of 22%. However, the magnitude of the decline varied by state, and was generally lowest in the South (∼15%) and highest in the Northeast (≥20%). For example, there were declines of 25% to 30% in Maryland, New Jersey, Massachusetts, New York, and Delaware, which collectively averted 29,000 cancer deaths in 2011 as a result of this progress. Further gains can be accelerated by applying existing cancer control knowledge across all segments of the population.
Publication
Journal: Acta crystallographica. Section D, Biological crystallography
February/28/2010
Abstract
The usage and control of recent modifications of the program package XDS for the processing of rotation images are described in the context of previous versions. New features include automatic determination of spot size and reflecting range and recognition and assignment of crystal symmetry. Moreover, the limitations of earlier package versions on the number of correction/scaling factors and the representation of pixel contents have been removed. Large program parts have been restructured for parallel processing so that the quality and completeness of collected data can be assessed soon after measurement.
Publication
Journal: Genetics
July/16/1989
Abstract
A series of yeast shuttle vectors and host strains has been created to allow more efficient manipulation of DNA in Saccharomyces cerevisiae. Transplacement vectors were constructed and used to derive yeast strains containing nonreverting his3, trp1, leu2 and ura3 mutations. A set of YCp and YIp vectors (pRS series) was then made based on the backbone of the multipurpose plasmid pBLUESCRIPT. These pRS vectors are all uniform in structure and differ only in the yeast selectable marker gene used (HIS3, TRP1, LEU2 and URA3). They possess all of the attributes of pBLUESCRIPT and several yeast-specific features as well. Using a pRS vector, one can perform most standard DNA manipulations in the same plasmid that is introduced into yeast.
Publication
Journal: JAMA - Journal of the American Medical Association
June/12/2003
Abstract
"The Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure" provides a new guideline for hypertension prevention and management. The following are the key messages(1) In persons older than 50 years, systolic blood pressure (BP) of more than 140 mm Hg is a much more important cardiovascular disease (CVD) risk factor than diastolic BP; (2) The risk of CVD, beginning at 115/75 mm Hg, doubles with each increment of 20/10 mm Hg; individuals who are normotensive at 55 years of age have a 90% lifetime risk for developing hypertension; (3) Individuals with a systolic BP of 120 to 139 mm Hg or a diastolic BP of 80 to 89 mm Hg should be considered as prehypertensive and require health-promoting lifestyle modifications to prevent CVD; (4) Thiazide-type diuretics should be used in drug treatment for most patients with uncomplicated hypertension, either alone or combined with drugs from other classes. Certain high-risk conditions are compelling indications for the initial use of other antihypertensive drug classes (angiotensin-converting enzyme inhibitors, angiotensin-receptor blockers, beta-blockers, calcium channel blockers); (5) Most patients with hypertension will require 2 or more antihypertensive medications to achieve goal BP (<140/90 mm Hg, or <130/80 mm Hg for patients with diabetes or chronic kidney disease); (6) If BP is more than 20/10 mm Hg above goal BP, consideration should be given to initiating therapy with 2 agents, 1 of which usually should be a thiazide-type diuretic; and (7) The most effective therapy prescribed by the most careful clinician will control hypertension only if patients are motivated. Motivation improves when patients have positive experiences with and trust in the clinician. Empathy builds trust and is a potent motivator. Finally, in presenting these guidelines, the committee recognizes that the responsible physician's judgment remains paramount.
Publication
Journal: Applied and Environmental Microbiology
March/29/2010
Abstract
mothur aims to be a comprehensive software package that allows users to use a single piece of software to analyze community sequence data. It builds upon previous tools to provide a flexible and powerful software package for analyzing sequencing data. As a case study, we used mothur to trim, screen, and align sequences; calculate distances; assign sequences to operational taxonomic units; and describe the alpha and beta diversity of eight marine samples previously characterized by pyrosequencing of 16S rRNA gene fragments. This analysis of more than 222,000 sequences was completed in less than 2 h with a laptop computer.
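As a flavor of the alpha-diversity calculations mentioned, here is the Shannon index over OTU counts (a standard formula; the counts are invented, and mothur offers many other calculators):

```python
import math

def shannon(otu_counts):
    """Shannon diversity index H' = -sum(p_i * ln p_i) over OTU proportions,
    one common measure of alpha diversity."""
    total = sum(otu_counts)
    return -sum((c / total) * math.log(c / total) for c in otu_counts if c)

print(round(shannon([50, 30, 15, 5]), 3))   # 1.142 - uneven community
print(round(shannon([25, 25, 25, 25]), 3))  # 1.386 = ln(4), maximally even
```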
Publication
Journal: Bioinformatics
June/20/2010
Abstract
BACKGROUND
Testing for correlations between different sets of genomic features is a fundamental task in genomics research. However, searching for overlaps between features with existing web-based methods is complicated by the massive datasets that are routinely produced with current sequencing technologies. Fast and flexible tools are therefore required to ask complex questions of these data in an efficient manner.
RESULTS
This article introduces a new software suite for the comparison, manipulation and annotation of genomic features in Browser Extensible Data (BED) and General Feature Format (GFF) format. BEDTools also supports the comparison of sequence alignments in BAM format to both BED and GFF features. The tools are extremely efficient and allow the user to compare large datasets (e.g. next-generation sequencing data) with both public and custom genome annotation tracks. BEDTools can be combined with one another as well as with standard UNIX commands, thus facilitating routine genomics tasks as well as pipelines that can quickly answer intricate questions of large genomic datasets.
AVAILABILITY
BEDTools was written in C++. Source code and a comprehensive user manual are freely available at http://code.google.com/p/bedtools
CONTACT
aaronquinlan@gmail.com; imh4y@virginia.edu
SUPPLEMENTARY INFORMATION
Supplementary data are available at Bioinformatics online.
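The central BEDTools operation, feature intersection, reduces to interval overlap tests. A minimal sketch (mine, quadratic for clarity; BEDTools itself uses much more efficient algorithms) over (chrom, start, end) triples in BED's 0-based, half-open convention:

```python
def intersect(a_features, b_features):
    """Report overlapping portions between two lists of (chrom, start, end)
    features, in the spirit of `bedtools intersect`."""
    hits = []
    for chrom_a, start_a, end_a in a_features:
        for chrom_b, start_b, end_b in b_features:
            # Half-open intervals overlap iff each starts before the other ends.
            if chrom_a == chrom_b and start_a < end_b and start_b < end_a:
                hits.append((chrom_a, max(start_a, start_b), min(end_a, end_b)))
    return hits

a = [("chr1", 100, 200), ("chr2", 500, 800)]
b = [("chr1", 150, 250), ("chr2", 900, 950)]
print(intersect(a, b))  # [('chr1', 150, 200)]
```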
Publication
Journal: Science
March/1/1988
Abstract
A thermostable DNA polymerase was used in an in vitro DNA amplification procedure, the polymerase chain reaction. The enzyme, isolated from Thermus aquaticus, greatly simplifies the procedure and, by enabling the amplification reaction to be performed at higher temperatures, significantly improves the specificity, yield, sensitivity, and length of products that can be amplified. Single-copy genomic sequences were amplified by a factor of more than 10 million with very high specificity, and DNA segments up to 2000 base pairs were readily amplified. In addition, the method was used to amplify and detect a target DNA molecule present only once in a sample of 10^5 cells.
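The quoted amplification factor follows from the exponential doubling of PCR; a quick check of how many perfectly efficient cycles a greater-than-10-million-fold amplification implies (idealized arithmetic, ignoring real-world efficiency losses):

```python
import math

# Ideal PCR doubles the target each cycle: factor = 2 ** cycles.
cycles_needed = math.ceil(math.log2(10_000_000))
print(cycles_needed)        # 24 cycles suffice for a 10^7-fold amplification
print(2 ** cycles_needed)   # 16,777,216
```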
Publication
Journal: New England Journal of Medicine
October/5/1993
Abstract
Long-term microvascular and neurologic complications cause major morbidity and mortality in patients with insulin-dependent diabetes mellitus (IDDM). We examined whether intensive treatment with the goal of maintaining blood glucose concentrations close to the normal range could decrease the frequency and severity of these complications.
A total of 1441 patients with IDDM, 726 with no retinopathy at baseline (the primary-prevention cohort) and 715 with mild retinopathy (the secondary-intervention cohort), were randomly assigned to intensive therapy administered either with an external insulin pump or by three or more daily insulin injections and guided by frequent blood glucose monitoring, or to conventional therapy with one or two daily insulin injections. The patients were followed for a mean of 6.5 years, and the appearance and progression of retinopathy and other complications were assessed regularly.
In the primary-prevention cohort, intensive therapy reduced the adjusted mean risk for the development of retinopathy by 76 percent (95 percent confidence interval, 62 to 85 percent), as compared with conventional therapy. In the secondary-intervention cohort, intensive therapy slowed the progression of retinopathy by 54 percent (95 percent confidence interval, 39 to 66 percent) and reduced the development of proliferative or severe nonproliferative retinopathy by 47 percent (95 percent confidence interval, 14 to 67 percent). In the two cohorts combined, intensive therapy reduced the occurrence of microalbuminuria (urinary albumin excretion of ≥40 mg per 24 hours) by 39 percent (95 percent confidence interval, 21 to 52 percent), that of albuminuria (urinary albumin excretion of ≥300 mg per 24 hours) by 54 percent (95 percent confidence interval, 19 to 74 percent), and that of clinical neuropathy by 60 percent (95 percent confidence interval, 38 to 74 percent). The chief adverse event associated with intensive therapy was a two-to-threefold increase in severe hypoglycemia.
Intensive therapy effectively delays the onset and slows the progression of diabetic retinopathy, nephropathy, and neuropathy in patients with IDDM.
Publication
Journal: Nature Biotechnology
March/1/2012
Abstract
Massively parallel sequencing of cDNA has enabled deep and efficient probing of transcriptomes. Current approaches for transcript reconstruction from such data often rely on aligning reads to a reference genome, and are thus unsuitable for samples with a partial or missing reference genome. Here we present the Trinity method for de novo assembly of full-length transcripts and evaluate it on samples from fission yeast, mouse and whitefly, whose reference genome is not yet available. By efficiently constructing and analyzing sets of de Bruijn graphs, Trinity fully reconstructs a large fraction of transcripts, including alternatively spliced isoforms and transcripts from recently duplicated genes. Compared with other de novo transcriptome assemblers, Trinity recovers more full-length transcripts across a broad range of expression levels, with a sensitivity similar to methods that rely on genome alignments. Our approach provides a unified solution for transcriptome reconstruction in any sample, especially in the absence of a reference genome.
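The de Bruijn graph construction at the heart of Trinity is compact to sketch (a bare-bones illustration, ignoring sequencing errors, read orientation and the graph compaction and traversal Trinity performs):

```python
from collections import defaultdict

def de_bruijn(reads, k):
    """Build a de Bruijn graph of the kind used by assemblers such as
    Trinity: nodes are (k-1)-mers, edges the k-mers observed in reads."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

# Two overlapping reads from one transcript reconstruct a single path:
reads = ["ATGGCGT", "GCGTGCA"]
for node, succs in de_bruijn(reads, 4).items():
    print(node, "->", succs)
# ATG -> ['TGG'], TGG -> ['GGC'], GGC -> ['GCG'], GCG -> ['CGT', 'CGT'], ...
```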
Publication
Journal: Journal of Molecular Evolution
April/12/1981
Abstract
Some simple formulae were obtained which enable us to estimate evolutionary distances in terms of the number of nucleotide substitutions (and, also, the evolutionary rates when the divergence times are known). In comparing a pair of nucleotide sequences, we distinguish two types of differences: if homologous sites are occupied by different nucleotide bases but both are purines or both pyrimidines, the difference is called type I (or "transition" type), while, if one of the two is a purine and the other is a pyrimidine, the difference is called type II (or "transversion" type). Letting P and Q be respectively the fractions of nucleotide sites showing type I and type II differences between the two sequences compared, the evolutionary distance per site is K = -(1/2) ln[(1 - 2P - Q) √(1 - 2Q)]. The evolutionary rate per year is then given by k = K/(2T), where T is the time since the divergence of the two sequences. If only the third codon positions are compared, the synonymous component of the evolutionary base substitutions per site is estimated by K'_S = -(1/2) ln(1 - 2P - Q). Also, formulae for standard errors were obtained. Some examples were worked out using reported globin sequences to show that synonymous substitutions occur at much higher rates than amino acid-altering substitutions in evolution.
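The two formulae translate directly into code (a straightforward transcription of the abstract's expressions; the example values of P, Q and the divergence time are invented):

```python
import math

def k2p_distance(p, q):
    """Kimura two-parameter distance: p = fraction of transition-type (type I)
    site differences, q = fraction of transversion-type (type II) differences."""
    return -0.5 * math.log((1 - 2 * p - q) * math.sqrt(1 - 2 * q))

def rate_per_year(k, divergence_time_years):
    """k = K / (2T): substitutions per site per year on each lineage."""
    return k / (2 * divergence_time_years)

K = k2p_distance(p=0.10, q=0.04)
print(round(K, 4))             # 0.1581 substitutions per site
print(rate_per_year(K, 80e6))  # ~9.9e-10 for an 80-Myr divergence
```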