Citations
Publication
Journal: CA - A Cancer Journal for Clinicians
February/27/2012
Abstract
Each year, the American Cancer Society estimates the numbers of new cancer cases and deaths expected in the United States in the current year and compiles the most recent data on cancer incidence, mortality, and survival based on incidence data from the National Cancer Institute, the Centers for Disease Control and Prevention, and the North American Association of Central Cancer Registries and mortality data from the National Center for Health Statistics. A total of 1,638,910 new cancer cases and 577,190 deaths from cancer are projected to occur in the United States in 2012. During the most recent 5 years for which there are data (2004-2008), overall cancer incidence rates declined slightly in men (by 0.6% per year) and were stable in women, while cancer death rates decreased by 1.8% per year in men and by 1.6% per year in women. Over the past 10 years of available data (1999-2008), cancer death rates have declined by more than 1% per year in men and women of every racial/ethnic group with the exception of American Indians/Alaska Natives, among whom rates have remained stable. The most rapid declines in death rates occurred among African American and Hispanic men (2.4% and 2.3% per year, respectively). Death rates continue to decline for all 4 major cancer sites (lung, colorectum, breast, and prostate), with lung cancer accounting for almost 40% of the total decline in men and breast cancer accounting for 34% of the total decline in women. The reduction in overall cancer death rates since 1990 in men and 1991 in women translates to the avoidance of about 1,024,400 deaths from cancer. Further progress can be accelerated by applying existing cancer control knowledge across all segments of the population, with an emphasis on those groups in the lowest socioeconomic bracket.
Publication
Journal: CA - A Cancer Journal for Clinicians
March/10/2014
Abstract
Each year, the American Cancer Society estimates the numbers of new cancer cases and deaths that will occur in the United States in the current year and compiles the most recent data on cancer incidence, mortality, and survival. Incidence data were collected by the National Cancer Institute, the Centers for Disease Control and Prevention, and the North American Association of Central Cancer Registries and mortality data were collected by the National Center for Health Statistics. A total of 1,665,540 new cancer cases and 585,720 cancer deaths are projected to occur in the United States in 2014. During the most recent 5 years for which there are data (2006-2010), delay-adjusted cancer incidence rates declined slightly in men (by 0.6% per year) and were stable in women, while cancer death rates decreased by 1.8% per year in men and by 1.4% per year in women. The combined cancer death rate (deaths per 100,000 population) has been continuously declining for 2 decades, from a peak of 215.1 in 1991 to 171.8 in 2010. This 20% decline translates to the avoidance of approximately 1,340,400 cancer deaths (952,700 among men and 387,700 among women) during this time period. The magnitude of the decline in cancer death rates from 1991 to 2010 varies substantially by age, race, and sex, ranging from no decline among white women aged 80 years and older to a 55% decline among black men aged 40 years to 49 years. Notably, black men experienced the largest drop within every 10-year age group. Further progress can be accelerated by applying existing cancer control knowledge across all segments of the population.
Publication
Journal: Journal of the National Cancer Institute
February/27/2000
Abstract
Anticancer cytotoxic agents undergo a process of evaluation in which their antitumor activity, judged by the amount of tumor shrinkage they can generate, is investigated. In the late 1970s, the International Union Against Cancer and the World Health Organization introduced specific criteria for the codification of tumor response evaluation. In 1994, several organizations involved in clinical research combined forces to tackle the review of these criteria on the basis of the experience and knowledge acquired since then. After several years of intensive discussions, a new set of guidelines is ready that will supersede the former criteria. In parallel to this initiative, one of the participating groups developed a model by which response rates could be derived from unidimensional measurement of tumor lesions instead of the usual bidimensional approach. This new concept has been largely validated by the Response Evaluation Criteria in Solid Tumors Group and integrated into the present guidelines. This special article also provides some philosophic background to clarify the various purposes of response evaluation. It proposes a model by which a combined assessment of all existing lesions, characterized by target lesions (to be measured) and nontarget lesions, is used to extrapolate an overall response to treatment. Methods of assessing tumor lesions are better codified, briefly within the guidelines and in more detail in Appendix I. All other aspects of response evaluation have been discussed, reviewed, and amended whenever appropriate.
Publication
Journal: Cell
February/9/2005
Abstract
We predict regulatory targets of vertebrate microRNAs (miRNAs) by identifying mRNAs with conserved complementarity to the seed (nucleotides 2-7) of the miRNA. An overrepresentation of conserved adenosines flanking the seed complementary sites in mRNAs indicates that primary sequence determinants can supplement base pairing to specify miRNA target recognition. In a four-genome analysis of 3' UTRs, approximately 13,000 regulatory relationships were detected above the estimate of false-positive predictions, thereby implicating as miRNA targets more than 5300 human genes, which represented 30% of our gene set. Targeting was also detected in open reading frames. In sum, well over one third of human genes appear to be conserved miRNA targets.
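The seed-matching rule described above is simple enough to sketch directly. The toy function below (hypothetical names; an illustration, not the actual TargetScan pipeline) scans a 3' UTR for the reverse complement of a miRNA's seed (nucleotides 2-7) and flags whether an adenosine sits immediately 3' of the site, as the abstract describes:

```python
def seed_match_sites(mirna, utr):
    """Find matches in a 3' UTR to the reverse complement of a miRNA seed.

    The seed is taken as miRNA nucleotides 2-7 (1-based), per the abstract.
    Sequence names and inputs are illustrative, not real annotations.
    """
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    seed = mirna[1:7]                                   # nucleotides 2-7
    site = "".join(comp[b] for b in reversed(seed))     # reverse complement
    hits = []
    start = utr.find(site)
    while start != -1:
        # An adenosine directly 3' of the site (opposite miRNA position 1)
        # strengthens recognition, per the abstract.
        flanked_by_a = utr[start + len(site): start + len(site) + 1] == "A"
        hits.append((start, flanked_by_a))
        start = utr.find(site, start + 1)
    return hits
```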
Publication
Journal: Bioinformatics
June/17/2009
Abstract
BACKGROUND
A new protocol for sequencing the messenger RNA in a cell, known as RNA-Seq, generates millions of short sequence fragments in a single run. These fragments, or 'reads', can be used to measure levels of gene expression and to identify novel splice variants of genes. However, current software for aligning RNA-Seq data to a genome relies on known splice junctions and cannot identify novel ones. TopHat is an efficient read-mapping algorithm designed to align reads from an RNA-Seq experiment to a reference genome without relying on known splice sites.
RESULTS
We mapped the RNA-Seq reads from a recent mammalian RNA-Seq experiment and recovered more than 72% of the splice junctions reported by the annotation-based software from that study, along with nearly 20,000 previously unreported junctions. The TopHat pipeline is much faster than previous systems, mapping nearly 2.2 million reads per CPU hour, which is sufficient to process an entire RNA-Seq experiment in less than a day on a standard desktop computer. We describe several challenges unique to ab initio splice site discovery from RNA-Seq reads that will require further algorithm development.
AVAILABILITY
TopHat is free, open-source software available from http://tophat.cbcb.umd.edu.
SUPPLEMENTARY INFORMATION
Supplementary data are available at Bioinformatics online.
Publication
Journal: Biostatistics
October/22/2003
Abstract
In this paper we report exploratory analyses of high-density oligonucleotide array data from the Affymetrix GeneChip system with the objective of improving upon currently used measures of gene expression. Our analyses make use of three data sets: a small experimental study consisting of five MGU74A mouse GeneChip arrays; part of the data from an extensive spike-in study conducted by Gene Logic and Wyeth's Genetics Institute involving 95 HG-U95A human GeneChip arrays; and part of a dilution study conducted by Gene Logic involving 75 HG-U95A GeneChip arrays. We display some familiar features of the perfect match and mismatch probe (PM and MM) values of these data, and examine the variance-mean relationship with probe-level data from probes believed to be defective, and so delivering noise only. We explain why we need to normalize the arrays to one another using probe level intensities. We then examine the behavior of the PM and MM using spike-in data and assess three commonly used summary measures: Affymetrix's (i) average difference (AvDiff) and (ii) MAS 5.0 signal, and (iii) the Li and Wong multiplicative model-based expression index (MBEI). The exploratory data analyses of the probe level data motivate a new summary measure that is a robust multi-array average (RMA) of background-adjusted, normalized, and log-transformed PM values. We evaluate the four expression summary measures using the dilution study data, assessing their behavior in terms of bias, variance and (for MBEI and RMA) model fit. Finally, we evaluate the algorithms in terms of their ability to detect known levels of differential expression using the spike-in data. We conclude that there is no obvious downside to using RMA and attaching a standard error (SE) to this quantity using a linear model which removes probe-specific affinities.
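The normalization step the abstract motivates ("normalize the arrays to one another using probe level intensities") is commonly done by quantile normalization. The sketch below is a minimal illustration (function name and toy matrix are assumptions); it omits RMA's background adjustment, log2 transform, and robust median-polish probe summarization:

```python
import numpy as np

def quantile_normalize(x):
    """Quantile-normalize arrays (columns of x) to a common distribution.

    Each probe value is replaced by the mean of the values at the same
    rank across all arrays, forcing every column onto one distribution.
    """
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)   # rank of each probe per array
    mean_quantiles = np.sort(x, axis=0).mean(axis=1)    # averaged reference distribution
    return mean_quantiles[ranks]
```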
Publication
Journal: Behavior Research Methods
September/17/2007
Abstract
G*Power (Erdfelder, Faul, & Buchner, 1996) was designed as a general stand-alone power analysis program for statistical tests commonly used in social and behavioral research. G*Power 3 is a major extension of, and improvement over, the previous versions. It runs on widely used computer platforms (i.e., Windows XP, Windows Vista, and Mac OS X 10.4) and covers many different statistical tests of the t, F, and χ² test families. In addition, it includes power analyses for z tests and some exact tests. G*Power 3 provides improved effect size calculators and graphic options, supports both distribution-based and design-based input modes, and offers all types of power analyses in which users might be interested. Like its predecessors, G*Power 3 is free.
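As an illustration of the kind of a priori analysis such a program performs, the sketch below approximates the power of a two-sided, two-sample t test at α = 0.05 using a normal approximation (hypothetical function names). G*Power itself uses the exact noncentral t distribution, so its results differ slightly:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def power_two_sample(d, n_per_group):
    """Approximate power of a two-sided, two-sample t test at alpha = 0.05.

    d is Cohen's effect size; the noncentrality is d * sqrt(n/2) for equal
    groups. This normal approximation slightly overstates the exact
    noncentral-t power that dedicated software computes.
    """
    z_crit = 1.959963984540054      # Phi^-1(0.975), two-sided alpha = 0.05
    ncp = d * sqrt(n_per_group / 2)
    return phi(ncp - z_crit)
```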
Publication
Journal: Bioinformatics
July/28/2013
Abstract
BACKGROUND
Accurate alignment of high-throughput RNA-seq data is a challenging and yet unsolved problem because of the non-contiguous transcript structure, relatively short read lengths and constantly increasing throughput of the sequencing technologies. Currently available RNA-seq aligners suffer from high mapping error rates, low mapping speed, read length limitation and mapping biases.
RESULTS
To align our large (>80 billion reads) ENCODE Transcriptome RNA-seq dataset, we developed the Spliced Transcripts Alignment to a Reference (STAR) software based on a previously undescribed RNA-seq alignment algorithm that uses sequential maximum mappable seed search in uncompressed suffix arrays followed by seed clustering and stitching procedure. STAR outperforms other aligners by a factor of >50 in mapping speed, aligning to the human genome 550 million 2 × 76 bp paired-end reads per hour on a modest 12-core server, while at the same time improving alignment sensitivity and precision. In addition to unbiased de novo detection of canonical junctions, STAR can discover non-canonical splices and chimeric (fusion) transcripts, and is also capable of mapping full-length RNA sequences. Using Roche 454 sequencing of reverse transcription polymerase chain reaction amplicons, we experimentally validated 1960 novel intergenic splice junctions with an 80-90% success rate, corroborating the high precision of the STAR mapping strategy.
AVAILABILITY AND IMPLEMENTATION
STAR is implemented as a standalone C++ code. STAR is free open source software distributed under GPLv3 license and can be downloaded from http://code.google.com/p/rna-star/.
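The "sequential maximum mappable seed search in uncompressed suffix arrays" can be illustrated with a toy version: binary-search a suffix array for progressively longer read prefixes until a prefix no longer occurs in the genome. Names here are hypothetical, and the real algorithm adds seed clustering, stitching, and splice handling on top of this core idea:

```python
def build_suffix_array(genome):
    """Naive suffix array: suffix start positions in lexicographic order."""
    return sorted(range(len(genome)), key=lambda i: genome[i:])

def max_mappable_prefix(read, genome, sa):
    """Length of the longest read prefix occurring anywhere in the genome.

    For each prefix length k, a lower-bound binary search over the
    k-truncated suffixes (which are sorted) tests whether the prefix
    is present; the search stops at the first absent prefix.
    """
    best = 0
    for k in range(1, len(read) + 1):
        prefix = read[:k]
        lo, hi = 0, len(sa)
        while lo < hi:                              # lower bound
            mid = (lo + hi) // 2
            if genome[sa[mid]:sa[mid] + k] < prefix:
                lo = mid + 1
            else:
                hi = mid
        if lo == len(sa) or genome[sa[lo]:sa[lo] + k] != prefix:
            return best
        best = k
    return best
```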
Publication
Journal: Molecular Biology and Evolution
July/11/2017
Abstract
We present the latest version of the Molecular Evolutionary Genetics Analysis (Mega) software, which contains many sophisticated methods and tools for phylogenomics and phylomedicine. In this major upgrade, Mega has been optimized for use on 64-bit computing systems for analyzing larger datasets. Researchers can now explore and analyze tens of thousands of sequences in Mega. The new version also provides an advanced wizard for building timetrees and includes a new functionality to automatically predict gene duplication events in gene family trees. The 64-bit Mega is made available in two interfaces: graphical and command line. The graphical user interface (GUI) is a native Microsoft Windows application that can also be used on Mac OS X. The command line Mega is available as native applications for Windows, Linux, and Mac OS X. They are intended for use in high-throughput and scripted analysis. Both versions are available from www.megasoftware.net free of charge.
Publication
Journal: CA - A Cancer Journal for Clinicians
March/12/2015
Abstract
Each year the American Cancer Society estimates the numbers of new cancer cases and deaths that will occur in the United States in the current year and compiles the most recent data on cancer incidence, mortality, and survival. Incidence data were collected by the National Cancer Institute (Surveillance, Epidemiology, and End Results [SEER] Program), the Centers for Disease Control and Prevention (National Program of Cancer Registries), and the North American Association of Central Cancer Registries. Mortality data were collected by the National Center for Health Statistics. A total of 1,658,370 new cancer cases and 589,430 cancer deaths are projected to occur in the United States in 2015. During the most recent 5 years for which there are data (2007-2011), delay-adjusted cancer incidence rates (13 oldest SEER registries) declined by 1.8% per year in men and were stable in women, while cancer death rates nationwide decreased by 1.8% per year in men and by 1.4% per year in women. The overall cancer death rate decreased from 215.1 (per 100,000 population) in 1991 to 168.7 in 2011, a total relative decline of 22%. However, the magnitude of the decline varied by state, and was generally lowest in the South (∼15%) and highest in the Northeast (≥20%). For example, there were declines of 25% to 30% in Maryland, New Jersey, Massachusetts, New York, and Delaware, which collectively averted 29,000 cancer deaths in 2011 as a result of this progress. Further gains can be accelerated by applying existing cancer control knowledge across all segments of the population.
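The "total relative decline of 22%" quoted above follows directly from the two rates reported in the abstract:

```python
# Overall cancer death rates from the abstract (deaths per 100,000):
peak_1991, rate_2011 = 215.1, 168.7
relative_decline = (peak_1991 - rate_2011) / peak_1991
print(f"{relative_decline:.1%}")   # → 21.6%, i.e. the ~22% quoted
```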
Publication
Journal: Acta crystallographica. Section D, Biological crystallography
February/28/2010
Abstract
The usage and control of recent modifications of the program package XDS for the processing of rotation images are described in the context of previous versions. New features include automatic determination of spot size and reflecting range and recognition and assignment of crystal symmetry. Moreover, the limitations of earlier package versions on the number of correction/scaling factors and the representation of pixel contents have been removed. Large program parts have been restructured for parallel processing so that the quality and completeness of collected data can be assessed soon after measurement.
Publication
Journal: Annals of Surgery
August/23/2004
Abstract
OBJECTIVE
Although quality assessment is gaining increasing attention, there is still no consensus on how to define and grade postoperative complications. This shortcoming hampers comparison of outcome data among different centers and therapies and over time.
METHODS
A classification of complications published by one of the authors in 1992 was critically re-evaluated and modified to increase its accuracy and its acceptability in the surgical community. Modifications mainly focused on the manner of reporting life-threatening and permanently disabling complications. The new grading system still mostly relies on the therapy used to treat the complication. The classification was tested in a cohort of 6336 patients who underwent elective general surgery at our institution. The reproducibility and personal judgment of the classification were evaluated through an international survey with 2 questionnaires sent to 10 surgical centers worldwide.
RESULTS
The new ranking system significantly correlated with complexity of surgery (P < 0.0001) as well as with the length of the hospital stay (P < 0.0001). A total of 144 surgeons from 10 different centers around the world and at different levels of training returned the survey. Ninety percent of the case presentations were correctly graded. The classification was considered to be simple (92% of the respondents), reproducible (91%), logical (92%), useful (90%), and comprehensive (89%). The answers of both questionnaires were not dependent on the origin of the reply and the level of training of the surgeons.
CONCLUSIONS
The new complication classification appears reliable and may represent a compelling tool for quality assessment in surgery in all parts of the world.
Publication
Journal: JAMA - Journal of the American Medical Association
June/12/2003
Abstract
"The Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure" provides a new guideline for hypertension prevention and management. The following are the key messages: (1) In persons older than 50 years, systolic blood pressure (BP) of more than 140 mm Hg is a much more important cardiovascular disease (CVD) risk factor than diastolic BP; (2) The risk of CVD, beginning at 115/75 mm Hg, doubles with each increment of 20/10 mm Hg; individuals who are normotensive at 55 years of age have a 90% lifetime risk for developing hypertension; (3) Individuals with a systolic BP of 120 to 139 mm Hg or a diastolic BP of 80 to 89 mm Hg should be considered as prehypertensive and require health-promoting lifestyle modifications to prevent CVD; (4) Thiazide-type diuretics should be used in drug treatment for most patients with uncomplicated hypertension, either alone or combined with drugs from other classes. Certain high-risk conditions are compelling indications for the initial use of other antihypertensive drug classes (angiotensin-converting enzyme inhibitors, angiotensin-receptor blockers, beta-blockers, calcium channel blockers); (5) Most patients with hypertension will require 2 or more antihypertensive medications to achieve goal BP (<140/90 mm Hg, or <130/80 mm Hg for patients with diabetes or chronic kidney disease); (6) If BP is more than 20/10 mm Hg above goal BP, consideration should be given to initiating therapy with 2 agents, 1 of which usually should be a thiazide-type diuretic; and (7) The most effective therapy prescribed by the most careful clinician will control hypertension only if patients are motivated. Motivation improves when patients have positive experiences with and trust in the clinician. Empathy builds trust and is a potent motivator. Finally, in presenting these guidelines, the committee recognizes that the responsible physician's judgment remains paramount.
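The thresholds in key messages (3) and (5) amount to a small decision rule. A sketch (hypothetical function name; classifies a single reading, taking the higher category implied by either number, and ignoring the diabetes/kidney-disease goal):

```python
def classify_bp(systolic, diastolic):
    """Classify one blood-pressure reading per the JNC 7 categories above.

    Prehypertension: systolic 120-139 or diastolic 80-89 mm Hg;
    hypertension: at or above the general goal of 140/90 mm Hg.
    """
    if systolic >= 140 or diastolic >= 90:
        return "hypertension"
    if systolic >= 120 or diastolic >= 80:
        return "prehypertension"
    return "normal"
```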
Publication
Journal: Nature
September/13/2000
Abstract
Human breast tumours are diverse in their natural history and in their responsiveness to treatments. Variation in transcriptional programs accounts for much of the biological diversity of human cells and tumours. In each cell, signal transduction and regulatory systems transduce information from the cell's identity to its environmental status, thereby controlling the level of expression of every gene in the genome. Here we have characterized variation in gene expression patterns in a set of 65 surgical specimens of human breast tumours from 42 different individuals, using complementary DNA microarrays representing 8,102 human genes. These patterns provided a distinctive molecular portrait of each tumour. Twenty of the tumours were sampled twice, before and after a 16-week course of doxorubicin chemotherapy, and two tumours were paired with a lymph node metastasis from the same patient. Gene expression patterns in two tumour samples from the same individual were almost always more similar to each other than either was to any other sample. Sets of co-expressed genes were identified for which variation in messenger RNA levels could be related to specific features of physiological variation. The tumours could be classified into subtypes distinguished by pervasive differences in their gene expression patterns.
Publication
Journal: Nucleic Acids Research
August/17/2003
Abstract
The abbreviated name, 'mfold web server', describes a number of closely related software applications available on the World Wide Web (WWW) for the prediction of the secondary structure of single stranded nucleic acids. The objective of this web server is to provide easy access to RNA and DNA folding and hybridization software to the scientific community at large. By making use of universally available web GUIs (Graphical User Interfaces), the server circumvents the problem of portability of this software. Detailed output, in the form of structure plots with or without reliability information, single strand frequency plots and 'energy dot plots', are available for the folding of single sequences. A variety of 'bulk' servers give less information, but in a shorter time and for up to hundreds of sequences at once. The portal for the mfold web server is http://www.bioinfo.rpi.edu/applications/mfold. This URL will be referred to as 'MFOLDROOT'.
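As a hint at what folding software computes, the classic Nussinov algorithm below maximizes the number of allowed base pairs by dynamic programming. It is a didactic stand-in only: mfold minimizes thermodynamic free energy with a far richer model, not raw pair count.

```python
def nussinov(seq, min_loop=3):
    """Maximum number of base pairs in an RNA sequence (Nussinov DP).

    Allows Watson-Crick and G-U wobble pairs, with at least `min_loop`
    unpaired bases inside a hairpin loop.
    """
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                   # base j left unpaired
            for k in range(i, j - min_loop):      # base j paired with base k
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]
```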
Publication
Journal: Applied and Environmental Microbiology
March/29/2010
Abstract
mothur aims to be a comprehensive software package that allows users to use a single piece of software to analyze community sequence data. It builds upon previous tools to provide a flexible and powerful software package for analyzing sequencing data. As a case study, we used mothur to trim, screen, and align sequences; calculate distances; assign sequences to operational taxonomic units; and describe the alpha and beta diversity of eight marine samples previously characterized by pyrosequencing of 16S rRNA gene fragments. This analysis of more than 222,000 sequences was completed in less than 2 h with a laptop computer.
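One of the simplest alpha-diversity measures such a package reports is the Shannon index. A minimal sketch (hypothetical function) from counts of reads per operational taxonomic unit:

```python
from math import log

def shannon_index(counts):
    """Shannon alpha-diversity index H' = -sum(p_i * ln p_i).

    counts: reads per operational taxonomic unit in one sample;
    zero-count OTUs contribute nothing and are skipped.
    """
    total = sum(counts)
    return -sum((c / total) * log(c / total) for c in counts if c > 0)
```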
Publication
Journal: Journal of Clinical Psychiatry
January/10/1999
Abstract
The Mini-International Neuropsychiatric Interview (M.I.N.I.) is a short structured diagnostic interview, developed jointly by psychiatrists and clinicians in the United States and Europe, for DSM-IV and ICD-10 psychiatric disorders. With an administration time of approximately 15 minutes, it was designed to meet the need for a short but accurate structured psychiatric interview for multicenter clinical trials and epidemiology studies and to be used as a first step in outcome tracking in nonresearch clinical settings. The authors describe the development of the M.I.N.I. and its family of interviews: the M.I.N.I.-Screen, the M.I.N.I.-Plus, and the M.I.N.I.-Kid. They report on validation of the M.I.N.I. in relation to the Structured Clinical Interview for DSM-III-R, Patient Version, the Composite International Diagnostic Interview, and expert professional opinion, and they comment on potential applications for this interview.
Publication
Journal: Bioinformatics
June/20/2010
Abstract
BACKGROUND
Testing for correlations between different sets of genomic features is a fundamental task in genomics research. However, searching for overlaps between features with existing web-based methods is complicated by the massive datasets that are routinely produced with current sequencing technologies. Fast and flexible tools are therefore required to ask complex questions of these data in an efficient manner.
RESULTS
This article introduces a new software suite for the comparison, manipulation and annotation of genomic features in Browser Extensible Data (BED) and General Feature Format (GFF) format. BEDTools also supports the comparison of sequence alignments in BAM format to both BED and GFF features. The tools are extremely efficient and allow the user to compare large datasets (e.g. next-generation sequencing data) with both public and custom genome annotation tracks. BEDTools can be combined with one another as well as with standard UNIX commands, thus facilitating routine genomics tasks as well as pipelines that can quickly answer intricate questions of large genomic datasets.
AVAILABILITY
BEDTools was written in C++. Source code and a comprehensive user manual are freely available at http://code.google.com/p/bedtools
CONTACT
aaronquinlan@gmail.com; imh4y@virginia.edu
SUPPLEMENTARY INFORMATION
Supplementary data are available at Bioinformatics online.
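The core comparison such tools perform is interval overlap. Below is a single-chromosome sketch over BED-style half-open (start, end) intervals, assuming both lists are sorted by start (hypothetical function; the real tool also handles chromosomes, strands, and BAM input):

```python
def intersect(a_features, b_features):
    """Report overlapping pairs between two start-sorted interval lists.

    Coordinates are half-open [start, end), as in BED. A sweep pointer
    permanently discards b intervals ending at or before the current
    a start, since they cannot overlap any later (larger-start) a.
    """
    out = []
    j = 0
    for a_start, a_end in a_features:
        while j < len(b_features) and b_features[j][1] <= a_start:
            j += 1
        k = j
        while k < len(b_features) and b_features[k][0] < a_end:
            if b_features[k][1] > a_start:          # genuine overlap
                out.append(((a_start, a_end), b_features[k]))
            k += 1
    return out
```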
Publication
Journal: Science
March/1/1988
Abstract
A thermostable DNA polymerase was used in an in vitro DNA amplification procedure, the polymerase chain reaction. The enzyme, isolated from Thermus aquaticus, greatly simplifies the procedure and, by enabling the amplification reaction to be performed at higher temperatures, significantly improves the specificity, yield, sensitivity, and length of products that can be amplified. Single-copy genomic sequences were amplified by a factor of more than 10 million with very high specificity, and DNA segments up to 2000 base pairs were readily amplified. In addition, the method was used to amplify and detect a target DNA molecule present only once in a sample of 10(5) cells.
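The more-than-10-million-fold amplification reported above is the expected outcome of repeated doubling. A one-line model (per-cycle efficiency of 1.0 means perfect doubling; real reactions fall somewhat short):

```python
def amplification_factor(cycles, efficiency=1.0):
    """Fold-amplification of a template after `cycles` PCR cycles.

    Each cycle multiplies the target by (1 + efficiency); about 24
    fully efficient cycles already exceed a 10-million-fold gain.
    """
    return (1 + efficiency) ** cycles
```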
Publication
Journal: Journal of Biological Chemistry
July/23/1975
Abstract
A technique has been developed for the separation of proteins by two-dimensional polyacrylamide gel electrophoresis. Due to its resolution and sensitivity, this technique is a powerful tool for the analysis and detection of proteins from complex biological sources. Proteins are separated according to isoelectric point by isoelectric focusing in the first dimension, and according to molecular weight by sodium dodecyl sulfate electrophoresis in the second dimension. Since these two parameters are unrelated, it is possible to obtain an almost uniform distribution of protein spots across a two-dimensional gel. This technique has resolved 1100 different components from Escherichia coli and should be capable of resolving a maximum of 5000 proteins. A protein containing as little as one disintegration per min of either ¹⁴C or ³⁵S can be detected by autoradiography. A protein which constitutes 10⁻⁴ to 10⁻⁵% of the total protein can be detected and quantified by autoradiography. The reproducibility of the separation is sufficient to permit each spot on one separation to be matched with a spot on a different separation. This technique provides a method for estimation (at the described sensitivities) of the number of proteins made by any biological system. This system can resolve proteins differing in a single charge and consequently can be used in the analysis of in vivo modifications resulting in a change in charge. Proteins whose charge is changed by missense mutations can be identified. A detailed description of the methods as well as the characteristics of this system are presented.
Publication
Journal: Journal of Molecular Evolution
April/12/1981
Abstract
Some simple formulae were obtained which enable us to estimate evolutionary distances in terms of the number of nucleotide substitutions (and, also, the evolutionary rates when the divergence times are known). In comparing a pair of nucleotide sequences, we distinguish two types of differences; if homologous sites are occupied by different nucleotide bases but both are purines or both pyrimidines, the difference is called type I (or "transition" type), while, if one of the two is a purine and the other is a pyrimidine, the difference is called type II (or "transversion" type). Letting P and Q be respectively the fractions of nucleotide sites showing type I and type II differences between two sequences compared, then the evolutionary distance per site is K = -(1/2) ln[(1-2P-Q)√(1-2Q)]. The evolutionary rate per year is then given by k = K/(2T), where T is the time since the divergence of the two sequences. If only the third codon positions are compared, the synonymous component of the evolutionary base substitutions per site is estimated by K'_S = -(1/2) ln(1-2P-Q). Also, formulae for standard errors were obtained. Some examples were worked out using reported globin sequences to show that synonymous substitutions occur at much higher rates than amino acid-altering substitutions in evolution.
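The distance formula above translates directly into code. A sketch (hypothetical function name) that counts type I and type II differences between two aligned, equal-length sequences and applies K = -(1/2) ln[(1-2P-Q)√(1-2Q)]:

```python
from math import log, sqrt

def k2p_distance(seq1, seq2):
    """Kimura two-parameter distance between two aligned DNA sequences.

    P is the fraction of sites with a transition (type I) difference,
    Q the fraction with a transversion (type II) difference, exactly
    as defined in the abstract.
    """
    purines = {"A", "G"}
    n = len(seq1)
    transitions = transversions = 0
    for a, b in zip(seq1, seq2):
        if a == b:
            continue
        if (a in purines) == (b in purines):
            transitions += 1       # type I: purine<->purine or pyr<->pyr
        else:
            transversions += 1     # type II: purine<->pyrimidine
    p, q = transitions / n, transversions / n
    return -0.5 * log((1 - 2 * p - q) * sqrt(1 - 2 * q))
```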
Publication
Journal: BMC Genomics
March/31/2008
Abstract
BACKGROUND
The number of prokaryotic genome sequences becoming available is growing steadily and is growing faster than our ability to accurately annotate them.
DESCRIPTION
We describe a fully automated service for annotating bacterial and archaeal genomes. The service identifies protein-encoding, rRNA and tRNA genes, assigns functions to the genes, predicts which subsystems are represented in the genome, uses this information to reconstruct the metabolic network and makes the output easily downloadable for the user. In addition, the annotated genome can be browsed in an environment that supports comparative analysis with the annotated genomes maintained in the SEED environment. The service normally makes the annotated genome available within 12-24 hours of submission, but ultimately the quality of such a service will be judged in terms of accuracy, consistency, and completeness of the produced annotations. We summarize our attempts to address these issues and discuss plans for incrementally enhancing the service.
CONCLUSIONS
By providing accurate, rapid annotation freely to the community we have created an important community resource. The service has now been utilized by over 120 external users annotating over 350 distinct genomes.
Publication
Journal: CA - A Cancer Journal for Clinicians
August/3/2009
Abstract
Each year, the American Cancer Society estimates the number of new cancer cases and deaths expected in the United States in the current year and compiles the most recent data on cancer incidence, mortality, and survival based on incidence data from the National Cancer Institute, Centers for Disease Control and Prevention, and the North American Association of Central Cancer Registries and mortality data from the National Center for Health Statistics. Incidence and death rates are standardized by age to the 2000 United States standard million population. A total of 1,479,350 new cancer cases and 562,340 deaths from cancer are projected to occur in the United States in 2009. Overall cancer incidence rates decreased in the most recent time period in both men (1.8% per year from 2001 to 2005) and women (0.6% per year from 1998 to 2005), largely because of decreases in the three major cancer sites in men (lung, prostate, and colon and rectum [colorectum]) and in two major cancer sites in women (breast and colorectum). Overall cancer death rates decreased in men by 19.2% between 1990 and 2005, with decreases in lung (37%), prostate (24%), and colorectal (17%) cancer rates accounting for nearly 80% of the total decrease. Among women, overall cancer death rates between 1991 and 2005 decreased by 11.4%, with decreases in breast (37%) and colorectal (24%) cancer rates accounting for 60% of the total decrease. The reduction in the overall cancer death rates has resulted in the avoidance of about 650,000 deaths from cancer over the 15-year period. This report also examines cancer incidence, mortality, and survival by site, sex, race/ethnicity, education, geographic area, and calendar year. Although progress has been made in reducing incidence and mortality rates and improving survival, cancer still accounts for more deaths than heart disease in persons younger than 85 years of age. Further progress can be accelerated by applying existing cancer control knowledge across all segments of the population and by supporting new discoveries in cancer prevention, early detection, and treatment.
Publication
Journal: Journal of Computational Biology
August/26/2012
Abstract
The lion's share of bacteria in various environments cannot be cloned in the laboratory and thus cannot be sequenced using existing technologies. A major goal of single-cell genomics is to complement gene-centric metagenomic data with whole-genome assemblies of uncultivated organisms. Assembly of single-cell data is challenging because of highly non-uniform read coverage as well as elevated levels of sequencing errors and chimeric reads. We describe SPAdes, a new assembler for both single-cell and standard (multicell) assembly, and demonstrate that it improves on the recently released E+V-SC assembler (specialized for single-cell data) and on popular assemblers Velvet and SoapDeNovo (for multicell data). SPAdes generates single-cell assemblies, providing information about genomes of uncultivatable bacteria that vastly exceeds what may be obtained via traditional metagenomics studies. SPAdes is available online ( http://bioinf.spbau.ru/spades ). It is distributed as open source software.
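Assemblers of this family are built on a de Bruijn graph of k-mers. A minimal sketch of edge construction (hypothetical function; the real assembler adds error correction, paired reads, and handling of the highly non-uniform single-cell coverage the abstract describes):

```python
def de_bruijn_edges(reads, k):
    """Build de Bruijn graph edges, (k-1)-mer -> set of (k-1)-mers.

    Each k-mer observed in a read contributes one edge from its
    (k-1)-prefix to its (k-1)-suffix; contigs correspond to
    unambiguous paths through this graph.
    """
    edges = {}
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            edges.setdefault(kmer[:-1], set()).add(kmer[1:])
    return edges
```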