Publication
Journal: NeuroImage
August/8/2006
Abstract
There has been much recent interest in using magnetic resonance diffusion imaging to provide information about anatomical connectivity in the brain, by measuring the anisotropic diffusion of water in white matter tracts. One of the measures most commonly derived from diffusion data is fractional anisotropy (FA), which quantifies how strongly directional the local tract structure is. Many imaging studies are starting to use FA images in voxelwise statistical analyses, in order to localise brain changes related to development, degeneration and disease. However, optimal analysis is compromised by the use of standard registration algorithms; there has not to date been a satisfactory solution to the question of how to align FA images from multiple subjects in a way that allows for valid conclusions to be drawn from the subsequent voxelwise analysis. Furthermore, the arbitrariness of the choice of spatial smoothing extent has not yet been resolved. In this paper, we present a new method that aims to solve these issues via (a) carefully tuned non-linear registration, followed by (b) projection onto an alignment-invariant tract representation (the "mean FA skeleton"). We refer to this new approach as Tract-Based Spatial Statistics (TBSS). TBSS aims to improve the sensitivity, objectivity and interpretability of analysis of multi-subject diffusion imaging studies. We describe TBSS in detail and present example TBSS results from several diffusion imaging studies.
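A rough sketch of the projection step (b): for each voxel on the mean-FA skeleton, the subject's registered FA image is searched perpendicular to the local tract and the largest FA value found is assigned to that skeleton voxel. The function, array layout and nearest-neighbour sampling below are illustrative assumptions, not the FSL implementation.

```python
import numpy as np

def project_fa_onto_skeleton(fa, skeleton_voxels, perp_dirs, max_search=8):
    """Toy version of a skeleton-projection step.

    fa              : 3-D numpy array, one subject's registered FA image
    skeleton_voxels : (N, 3) integer voxel coordinates on the mean-FA skeleton
    perp_dirs       : (N, 3) unit vectors roughly perpendicular to the local tract
    max_search      : voxels to search on each side of the skeleton

    Returns an (N,) vector of skeletonised FA values (the maximum FA found
    along the perpendicular search line), a simplified stand-in for the
    distance-weighted search used by the published method.
    """
    values = np.zeros(len(skeleton_voxels))
    for i, (vox, d) in enumerate(zip(skeleton_voxels, perp_dirs)):
        best = fa[tuple(vox)]
        for step in range(1, max_search + 1):
            for sign in (+1, -1):
                p = np.round(vox + sign * step * d).astype(int)  # nearest-neighbour sample
                if np.all(p >= 0) and np.all(p < fa.shape):
                    best = max(best, fa[tuple(p)])
        values[i] = best
    return values
```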
Publication
Journal: Annual Review of Neuroscience
September/7/2000
Abstract
The field of neuroscience has, after a long period of looking the other way, again embraced emotion as an important research area. Much of the progress has come from studies of fear, and especially fear conditioning. This work has pinpointed the amygdala as an important component of the system involved in the acquisition, storage, and expression of fear memory and has elucidated in detail how stimuli enter, travel through, and exit the amygdala. Some progress has also been made in understanding the cellular and molecular mechanisms that underlie fear conditioning, and recent studies have also shown that the findings from experimental animals apply to the human brain. It is important to remember why this work on emotion succeeded where past efforts failed. It focused on a psychologically well-defined aspect of emotion, avoided vague and poorly defined concepts such as "affect," "hedonic tone," or "emotional feelings," and used a simple and straightforward experimental approach. With so much research being done in this area today, it is important that the mistakes of the past not be made again. It is also time to expand from this foundation into broader aspects of mind and behavior.
Publication
Journal: Journal of Computational Chemistry
July/28/2009
Abstract
CHARMM (Chemistry at HARvard Molecular Mechanics) is a highly versatile and widely used molecular simulation program. It has been developed over the last three decades with a primary focus on molecules of biological interest, including proteins, peptides, lipids, nucleic acids, carbohydrates, and small molecule ligands, as they occur in solution, crystals, and membrane environments. For the study of such systems, the program provides a large suite of computational tools that include numerous conformational and path sampling methods, free energy estimators, molecular minimization, dynamics, and analysis techniques, and model-building capabilities. The CHARMM program is applicable to problems involving a much broader class of many-particle systems. Calculations with CHARMM can be performed using a number of different energy functions and models, from mixed quantum mechanical-molecular mechanical force fields, to all-atom classical potential energy functions with explicit solvent and various boundary conditions, to implicit solvent and membrane models. The program has been ported to numerous platforms in both serial and parallel architectures. This article provides an overview of the program as it exists today with an emphasis on developments since the publication of the original CHARMM article in 1983.
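As a reminder of what an additive all-atom potential of this kind looks like, a schematic form of the class of energy functions the program evaluates is shown below; the exact terms (e.g. later corrections such as CMAP) and all parameter values are defined by the specific force field and its parameter files, not by this sketch.

```latex
U(\mathbf{r}) =
  \sum_{\text{bonds}} k_b\,(b-b_0)^2
+ \sum_{\text{angles}} k_\theta\,(\theta-\theta_0)^2
+ \sum_{\text{dihedrals}} k_\phi\,\bigl[1+\cos(n\phi-\delta)\bigr]
+ \sum_{\text{impropers}} k_\omega\,(\omega-\omega_0)^2
+ \sum_{\text{Urey--Bradley}} k_u\,(u-u_0)^2
+ \sum_{i<j}\left\{
    \varepsilon_{ij}\!\left[\left(\frac{R_{\min,ij}}{r_{ij}}\right)^{12}
      - 2\left(\frac{R_{\min,ij}}{r_{ij}}\right)^{6}\right]
    + \frac{q_i q_j}{4\pi\varepsilon_0\,\varepsilon\, r_{ij}}
  \right\}
```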
Publication
Journal: Biostatistics
February/26/2007
Abstract
Non-biological experimental variation or "batch effects" are commonly observed across multiple batches of microarray experiments, often rendering the task of combining data from these batches difficult. The ability to combine microarray data sets is advantageous to researchers to increase statistical power to detect biological phenomena from studies where logistical considerations restrict sample size or in studies that require the sequential hybridization of arrays. In general, it is inappropriate to combine data sets without adjusting for batch effects. Methods have been proposed to filter batch effects from data, but these are often complicated and require large batch sizes (> 25) to implement. Because the majority of microarray studies are conducted using much smaller sample sizes, existing methods are not sufficient. We propose parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples. We illustrate our methods using two example data sets and show that our methods are justifiable, easy to apply, and useful in practice. Software for our method is freely available at: http://biosun1.harvard.edu/complab/batch/.
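A minimal sketch of the underlying idea, assuming a genes-by-samples matrix and using only a per-batch location/scale standardisation; the empirical Bayes shrinkage of the batch parameters, which is what makes the published method robust for small batches, is deliberately omitted here.

```python
import numpy as np

def naive_batch_adjust(expr, batches):
    """Plain location/scale batch adjustment, gene by gene.

    expr    : (genes, samples) expression matrix
    batches : length-`samples` sequence of batch labels

    Each batch's values are re-centred and re-scaled to the gene's overall
    mean and standard deviation.  This is only the standardisation step,
    without empirical Bayes shrinkage of the batch parameters.
    """
    expr = np.asarray(expr, dtype=float)
    batches = np.asarray(batches)
    adjusted = expr.copy()
    grand_mean = expr.mean(axis=1, keepdims=True)
    grand_sd = expr.std(axis=1, keepdims=True)
    for b in np.unique(batches):
        cols = batches == b
        batch_mean = expr[:, cols].mean(axis=1, keepdims=True)
        batch_sd = expr[:, cols].std(axis=1, keepdims=True)
        batch_sd[batch_sd == 0] = 1.0                 # avoid dividing by zero
        adjusted[:, cols] = (expr[:, cols] - batch_mean) / batch_sd * grand_sd + grand_mean
    return adjusted
```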
Publication
Journal: The Lancet
January/22/2013
Abstract
BACKGROUND
Non-fatal health outcomes from diseases and injuries are a crucial consideration in the promotion and monitoring of individual and population health. The Global Burden of Disease (GBD) studies done in 1990 and 2000 have been the only studies to quantify non-fatal health outcomes across an exhaustive set of disorders at the global and regional level. Neither effort quantified uncertainty in prevalence or years lived with disability (YLDs).
METHODS
Of the 291 diseases and injuries in the GBD cause list, 289 cause disability. For 1160 sequelae of the 289 diseases and injuries, we undertook a systematic analysis of prevalence, incidence, remission, duration, and excess mortality. Sources included published studies, case notification, population-based cancer registries, other disease registries, antenatal clinic serosurveillance, hospital discharge data, ambulatory care data, household surveys, other surveys, and cohort studies. For most sequelae, we used a Bayesian meta-regression method, DisMod-MR, designed to address key limitations in descriptive epidemiological data, including missing data, inconsistency, and large methodological variation between data sources. For some disorders, we used natural history models, geospatial models, back-calculation models (models calculating incidence from population mortality rates and case fatality), or registration completeness models (models adjusting for incomplete registration with health-system access and other covariates). Disability weights for 220 unique health states were used to capture the severity of health loss. YLDs by cause at age, sex, country, and year levels were adjusted for comorbidity with simulation methods. We included uncertainty estimates at all stages of the analysis.
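The core YLD arithmetic is prevalence multiplied by a disability weight, summed over sequelae; the toy calculation below uses invented numbers and omits the comorbidity simulation and uncertainty propagation described above.

```python
# Core of a YLD calculation: prevalence x disability weight, summed over sequelae.
# The sequelae, prevalences (proportion of the population) and disability weights
# below are illustrative placeholders, not GBD estimates.
sequelae = {
    # name: (prevalence, disability_weight)
    "low back pain, mild":      (0.050, 0.02),
    "major depressive episode": (0.040, 0.40),
    "iron-deficiency anaemia":  (0.100, 0.005),
}

population = 1_000_000
ylds = sum(prev * dw for prev, dw in sequelae.values()) * population
print(f"Years lived with disability: {ylds:,.0f} per {population:,} people")
```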
RESULTS
Global prevalence for all ages combined in 2010 across the 1160 sequelae ranged from fewer than one case per 1 million people to 350,000 cases per 1 million people. Prevalence and severity of health loss were weakly correlated (correlation coefficient -0·37). In 2010, there were 777 million YLDs from all causes, up from 583 million in 1990. The main contributors to global YLDs were mental and behavioural disorders, musculoskeletal disorders, and diabetes or endocrine diseases. The leading specific causes of YLDs were much the same in 2010 as they were in 1990: low back pain, major depressive disorder, iron-deficiency anaemia, neck pain, chronic obstructive pulmonary disease, anxiety disorders, migraine, diabetes, and falls. Age-specific prevalence of YLDs increased with age in all regions and has decreased slightly from 1990 to 2010. Regional patterns of the leading causes of YLDs were more similar compared with years of life lost due to premature mortality. Neglected tropical diseases, HIV/AIDS, tuberculosis, malaria, and anaemia were important causes of YLDs in sub-Saharan Africa.
CONCLUSIONS
Rates of YLDs per 100,000 people have remained largely constant over time but rise steadily with age. Population growth and ageing have increased YLD numbers and crude rates over the past two decades. Prevalences of the most common causes of YLDs, such as mental and behavioural disorders and musculoskeletal disorders, have not decreased. Health systems will need to address the needs of the rising numbers of individuals with a range of disorders that largely cause disability but not mortality. Quantification of the burden of non-fatal health outcomes will be crucial to understand how well health systems are responding to these challenges. Effective and affordable strategies to deal with this rising burden are an urgent priority for health systems in most parts of the world.
FUNDING
Bill & Melinda Gates Foundation.
Publication
Journal: Bioinformatics
March/12/2012
Abstract
BACKGROUND
Next-generation sequencing technologies generate very large numbers of short reads. Even with very deep genome coverage, short read lengths cause problems in de novo assemblies. The use of paired-end libraries with a fragment size shorter than twice the read length provides an opportunity to generate much longer reads by overlapping and merging read pairs before assembling a genome.
RESULTS
We present FLASH, a fast computational tool to extend the length of short reads by overlapping paired-end reads from fragment libraries that are sufficiently short. We tested the correctness of the tool on one million simulated read pairs, and we then applied it as a pre-processor for genome assemblies of Illumina reads from the bacterium Staphylococcus aureus and human chromosome 14. FLASH correctly extended and merged reads >99% of the time on simulated reads with an error rate of <1%. With adequately set parameters, FLASH correctly merged reads over 90% of the time even when the reads contained up to 5% errors. When FLASH was used to extend reads prior to assembly, the resulting assemblies had substantially greater N50 lengths for both contigs and scaffolds.
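The overlap-and-merge operation can be sketched as a scan over candidate overlap lengths that keeps the overlap with the lowest mismatch rate; the function name, minimum overlap and mismatch threshold below are illustrative assumptions rather than FLASH's actual defaults.

```python
def merge_pair(read1, read2_rc, min_overlap=8, max_mismatch_rate=0.25):
    """Toy overlap-and-merge for one paired-end read.

    read1    : forward read (5'->3')
    read2_rc : reverse-complemented reverse read, so a true overlap appears
               as a suffix of read1 matching a prefix of read2_rc
    Returns the merged fragment, or None if no acceptable overlap is found.
    """
    best = None
    best_rate = max_mismatch_rate
    for ov in range(min_overlap, min(len(read1), len(read2_rc)) + 1):
        tail, head = read1[-ov:], read2_rc[:ov]
        rate = sum(a != b for a, b in zip(tail, head)) / ov
        if rate <= best_rate:            # prefer lower mismatch rate; ties go to longer overlaps
            best_rate, best = rate, read1 + read2_rc[ov:]
    return best

# Invented 16-bp reads sharing an 8-bp overlap.
print(merge_pair("ACGTACGTACGTAAAC", "ACGTAAACGGTTGGTT"))
```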
AVAILABILITY
The FLASH system is implemented in C and is freely available as open-source code at http://www.cbcb.umd.edu/software/flash.
CONTACT
t.magoc@gmail.com.
Publication
Journal: Proceedings of the National Academy of Sciences of the United States of America
December/28/1979
Abstract
We describe a technique for transferring electrophoretically separated bands of double-stranded DNA from agarose gels to diazobenzyloxymethyl-paper. Controlled cleavage of the DNA in situ by sequential treatment with dilute acid, which causes partial depurination, and dilute alkali, which causes cleavage and separation of the strands, allows the DNA to leave the gel rapidly and completely, with an efficiency independent of its size. Covalent attachment of DNA to paper prevents losses during subsequent hybridization and washing steps and allows a single paper to be reused many times. Ten percent dextran sulfate, originally found to accelerate DNA hybridization in solution by about 10-fold [J.G. Wetmur (1975) Biopolymers 14, 2517-2524], accelerates the rate of hybridization of randomly cleaved double-stranded DNA probes to immobilized nucleic acids by as much as 100-fold, without increasing the background significantly.
Publication
Journal: Molecular Biology of the Cell
July/28/2003
Abstract
Much of the work conducted on adult stem cells has focused on mesenchymal stem cells (MSCs) found within the bone marrow stroma. Adipose tissue, like bone marrow, is derived from the embryonic mesenchyme and contains a stroma that is easily isolated. Preliminary studies have recently identified a putative stem cell population within the adipose stromal compartment. This cell population, termed processed lipoaspirate (PLA) cells, can be isolated from human lipoaspirates and, like MSCs, differentiate toward the osteogenic, adipogenic, myogenic, and chondrogenic lineages. To confirm whether adipose tissue contains stem cells, the PLA population and multiple clonal isolates were analyzed using several molecular and biochemical approaches. PLA cells expressed multiple CD marker antigens similar to those observed on MSCs. Mesodermal lineage induction of PLA cells and clones resulted in the expression of multiple lineage-specific genes and proteins. Furthermore, biochemical analysis also confirmed lineage-specific activity. In addition to mesodermal capacity, PLA cells and clones differentiated into putative neurogenic cells, exhibiting a neuronal-like morphology and expressing several proteins consistent with the neuronal phenotype. Finally, PLA cells exhibited unique characteristics distinct from those seen in MSCs, including differences in CD marker profile and gene expression.
Publication
Journal: Proceedings of the Royal Society B: Biological Sciences
April/3/2003
Abstract
Although much biological research depends upon species diagnoses, taxonomic expertise is collapsing. We are convinced that the sole prospect for a sustainable identification capability lies in the construction of systems that employ DNA sequences as taxon 'barcodes'. We establish that the mitochondrial gene cytochrome c oxidase I (COI) can serve as the core of a global bioidentification system for animals. First, we demonstrate that COI profiles, derived from the low-density sampling of higher taxonomic categories, ordinarily assign newly analysed taxa to the appropriate phylum or order. Second, we demonstrate that species-level assignments can be obtained by creating comprehensive COI profiles. A model COI profile, based upon the analysis of a single individual from each of 200 closely allied species of lepidopterans, was 100% successful in correctly identifying subsequent specimens. When fully developed, a COI identification system will provide a reliable, cost-effective and accessible solution to the current problem of species identification. Its assembly will also generate important new insights into the diversification of life and the rules of molecular evolution.
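As a hedged illustration of how a barcode profile might be queried, the sketch below assigns an unknown sequence to the reference species at the smallest uncorrected p-distance; the reference "barcodes" are invented and far shorter than real COI sequences, and the paper's analyses use broader sampling and distance corrections.

```python
def p_distance(a, b):
    """Uncorrected p-distance between two aligned sequences of equal length."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def assign_species(query, reference_profiles):
    """Return the reference species whose barcode is closest to the query."""
    return min(reference_profiles, key=lambda sp: p_distance(query, reference_profiles[sp]))

# Fabricated 12-bp 'barcodes', purely for illustration.
references = {
    "Species A": "ACGTTGCAATGC",
    "Species B": "ACGATGCTATGA",
}
print(assign_species("ACGTTGCAATGA", references))   # closest to Species A
```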
Publication
Journal: Nature
October/20/2008
Abstract
Animal microRNAs (miRNAs) regulate gene expression by inhibiting translation and/or by inducing degradation of target messenger RNAs. It is unknown how much translational control is exerted by miRNAs on a genome-wide scale. We used a new proteomic approach to measure changes in synthesis of several thousand proteins in response to miRNA transfection or endogenous miRNA knockdown. In parallel, we quantified mRNA levels using microarrays. Here we show that a single miRNA can repress the production of hundreds of proteins, but that this repression is typically relatively mild. A number of known features of the miRNA-binding site such as the seed sequence also govern repression of human protein synthesis, and we report additional target sequence characteristics. We demonstrate that, in addition to downregulating mRNA levels, miRNAs also directly repress translation of hundreds of genes. Finally, our data suggest that a miRNA can, by direct or indirect effects, tune protein synthesis from thousands of genes.
Publication
Journal: Nature Reviews Neuroscience
January/7/2008
Abstract
In response to a peripheral infection, innate immune cells produce pro-inflammatory cytokines that act on the brain to cause sickness behaviour. When activation of the peripheral immune system continues unabated, such as during systemic infections, cancer or autoimmune diseases, the ensuing immune signalling to the brain can lead to an exacerbation of sickness and the development of symptoms of depression in vulnerable individuals. These phenomena might account for the increased prevalence of clinical depression in physically ill people. Inflammation is therefore an important biological event that might increase the risk of major depressive episodes, much like the more traditional psychosocial factors.
Publication
Journal: Critical Care
November/2/2005
Abstract
BACKGROUND
There is no consensus definition of acute renal failure (ARF) in critically ill patients. More than 30 different definitions have been used in the literature, creating much confusion and making comparisons difficult. Similarly, strong debate exists on the validity and clinical relevance of animal models of ARF; on choices of fluid management and of end-points for trials of new interventions in this field; and on how information technology can be used to assist this process. Accordingly, we sought to review the available evidence, make recommendations and delineate key questions for future studies.
METHODS
We undertook a systematic review of the literature using Medline and PubMed searches. We determined a list of key questions and convened a 2-day consensus conference to develop summary statements via a series of alternating breakout and plenary sessions. In these sessions, we identified supporting evidence and generated recommendations and/or directions for future research.
RESULTS
We found sufficient consensus on 47 questions to allow the development of recommendations. Importantly, we were able to develop a consensus definition for ARF. In some cases it was also possible to issue useful consensus recommendations for future investigations. We present a summary of the findings. (Full versions of the six workgroups' findings are available on the internet at http://www.ADQI.net)
CONCLUSIONS
Despite limited data, broad areas of consensus exist for the physiological and clinical principles needed to guide the development of consensus recommendations for defining ARF, selection of animal models, methods of monitoring fluid therapy, choice of physiological and clinical end-points for trials, and the possible role of information technology.
Publication
Journal: Science
April/10/1972
Abstract
A fluid mosaic model is presented for the gross organization and structure of the proteins and lipids of biological membranes. The model is consistent with the restrictions imposed by thermodynamics. In this model, the proteins that are integral to the membrane are a heterogeneous set of globular molecules, each arranged in an amphipathic structure, that is, with the ionic and highly polar groups protruding from the membrane into the aqueous phase, and the nonpolar groups largely buried in the hydrophobic interior of the membrane. These globular molecules are partially embedded in a matrix of phospholipid. The bulk of the phospholipid is organized as a discontinuous, fluid bilayer, although a small fraction of the lipid may interact specifically with the membrane proteins. The fluid mosaic structure is therefore formally analogous to a two-dimensional oriented solution of integral proteins (or lipoproteins) in the viscous phospholipid bilayer solvent. Recent experiments with a wide variety of techniques and several different membrane systems are described, all of which are consistent with, and add much detail to, the fluid mosaic model. It therefore seems appropriate to suggest possible mechanisms for various membrane functions and membrane-mediated phenomena in the light of the model. As examples, experimentally testable mechanisms are suggested for cell surface changes in malignant transformation, and for cooperative effects exhibited in the interactions of membranes with some specific ligands. Note added in proof: Since this article was written, we have obtained electron microscopic evidence (69) that the concanavalin A binding sites on the membranes of SV40 virus-transformed mouse fibroblasts (3T3 cells) are more clustered than the sites on the membranes of normal cells, as predicted by the hypothesis represented in Fig. 7B. There has also appeared a study by Taylor et al. (70) showing the remarkable effects produced on lymphocytes by the addition of antibodies directed to their surface immunoglobulin molecules. The antibodies induce a redistribution and pinocytosis of these surface immunoglobulins, so that within about 30 minutes at 37 degrees C the surface immunoglobulins are completely swept out of the membrane. These effects do not occur, however, if the bivalent antibodies are replaced by their univalent Fab fragments or if the antibody experiments are carried out at 0 degrees C instead of 37 degrees C. These and related results strongly indicate that the bivalent antibodies produce an aggregation of the surface immunoglobulin molecules in the plane of the membrane, which can occur only if the immunoglobulin molecules are free to diffuse in the membrane. This aggregation then appears to trigger off the pinocytosis of the membrane components by some unknown mechanism. Such membrane transformations may be of crucial importance in the induction of an antibody response to an antigen, as well as in other processes of cell differentiation.
Publication
Journal: Science
December/5/2007
Abstract
Human cancer is caused by the accumulation of mutations in oncogenes and tumor suppressor genes. To catalog the genetic changes that occur during tumorigenesis, we isolated DNA from 11 breast and 11 colorectal tumors and determined the sequences of the genes in the Reference Sequence database in these samples. Based on analysis of exons representing 20,857 transcripts from 18,191 genes, we conclude that the genomic landscapes of breast and colorectal cancers are composed of a handful of commonly mutated gene "mountains" and a much larger number of gene "hills" that are mutated at low frequency. We describe statistical and bioinformatic tools that may help identify mutations with a role in tumorigenesis. These results have implications for understanding the nature and heterogeneity of human cancers and for using personal genomics for tumor diagnosis and therapy.
Publication
Journal: PLoS ONE
August/31/2010
Abstract
BACKGROUND
Multiple genome alignment remains a challenging problem. Effects of recombination including rearrangement, segmental duplication, gain, and loss can create a mosaic pattern of homology even among closely related organisms.
RESULTS
We describe a new method to align two or more genomes that have undergone rearrangements due to recombination and substantial amounts of segmental gain and loss (flux). We demonstrate that the new method can accurately align regions conserved in some, but not all, of the genomes, an important case not handled by our previous work. The method uses a novel alignment objective score called a sum-of-pairs breakpoint score, which facilitates accurate detection of rearrangement breakpoints when genomes have unequal gene content. We also apply a probabilistic alignment filtering method to remove erroneous alignments of unrelated sequences, which are commonly observed in other genome alignment methods. We describe new metrics for quantifying genome alignment accuracy which measure the quality of rearrangement breakpoint predictions and indel predictions. The new genome alignment algorithm demonstrates high accuracy in situations where genomes have undergone biologically feasible amounts of genome rearrangement, segmental gain and loss. We apply the new algorithm to a set of 23 genomes from the genera Escherichia, Shigella, and Salmonella. Analysis of whole-genome multiple alignments allows us to extend the previously defined concepts of core- and pan-genomes to include not only annotated genes, but also non-coding regions with potential regulatory roles. The 23 enterobacteria have an estimated core-genome of 2.46 Mbp conserved among all taxa and a pan-genome of 15.2 Mbp. We document substantial population-level variability among these organisms driven by segmental gain and loss. Interestingly, much variability lies in intergenic regions, suggesting that the Enterobacteriaceae may exhibit regulatory divergence.
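One ingredient of a breakpoint-based objective is simply counting rearrangement breakpoints between two genomes' block orders; a minimal sketch over signed block permutations is shown below. The block labels are invented, and the sum-of-pairs score described above combines such information across all pairs of genomes with additional weighting.

```python
def breakpoints(order_a, order_b):
    """Count breakpoints between two genomes given as signed block orders.

    A pair of neighbouring blocks in genome A counts as conserved if the same
    two blocks are adjacent in genome B in the same relative orientation
    (possibly read on the opposite strand).
    """
    adjacencies_b = set()
    for x, y in zip(order_b, order_b[1:]):
        adjacencies_b.add((x, y))
        adjacencies_b.add((-y, -x))        # the same adjacency on the other strand
    return sum((x, y) not in adjacencies_b for x, y in zip(order_a, order_a[1:]))

# Blocks 1..4; the second genome has block 3 inverted and relocated.
print(breakpoints([1, 2, 3, 4], [1, -3, 2, 4]))   # -> 3
```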
CONCLUSIONS
The multiple genome alignments generated by our software provide a platform for comparative genomic and population genomic studies. Free, open-source software implementing the described genome alignment approach is available from http://gel.ahabs.wisc.edu/mauve.
Publication
Journal: British Medical Journal
July/11/2004
Abstract
Users of clinical practice guidelines and other recommendations need to know how much confidence they can place in the recommendations. Systematic and explicit methods of making judgments can reduce errors and improve communication. We have developed a system for grading the quality of evidence and the strength of recommendations that can be applied across a wide range of interventions and contexts. In this article we present a summary of our approach from the perspective of a guideline user. Judgments about the strength of a recommendation require consideration of the balance between benefits and harms, the quality of the evidence, translation of the evidence into specific circumstances, and the certainty of the baseline risk. It is also important to consider costs (resource utilisation) before making a recommendation. Inconsistencies among systems for grading the quality of evidence and the strength of recommendations reduce their potential to facilitate critical appraisal and improve communication of these judgments. Our system for guiding these complex judgments balances the need for simplicity with the need for full and transparent consideration of all important issues.
Publication
Journal: The Lancet
July/31/2012
Abstract
BACKGROUND
Strong evidence shows that physical inactivity increases the risk of many adverse health conditions, including major non-communicable diseases such as coronary heart disease, type 2 diabetes, and breast and colon cancers, and shortens life expectancy. Because much of the world's population is inactive, this link presents a major public health issue. We aimed to quantify the effect of physical inactivity on these major non-communicable diseases by estimating how much disease could be averted if inactive people were to become active and to estimate gain in life expectancy at the population level.
METHODS
For our analysis of burden of disease, we calculated population attributable fractions (PAFs) associated with physical inactivity using conservative assumptions for each of the major non-communicable diseases, by country, to estimate how much disease could be averted if physical inactivity were eliminated. We used life-table analysis to estimate gains in life expectancy of the population.
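For a dichotomous exposure, the population attributable fraction can be written as PAF = p_e(RR − 1) / [p_e(RR − 1) + 1], with p_e the exposure prevalence (here, inactivity) and RR the relative risk; the snippet below evaluates this standard formula with placeholder numbers, not values from the study.

```python
def paf(prevalence, relative_risk):
    """Population attributable fraction for a single dichotomous exposure."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (excess + 1.0)

# Placeholder values, not estimates from the study:
# 30% of a population inactive, relative risk 1.2 for the outcome.
print(f"PAF = {paf(0.30, 1.2):.1%}")   # about 5.7%
```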
RESULTS
Worldwide, we estimate that physical inactivity causes 6% (ranging from 3·2% in southeast Asia to 7·8% in the eastern Mediterranean region) of the burden of disease from coronary heart disease, 7% (3·9-9·6) of type 2 diabetes, 10% (5·6-14·1) of breast cancer, and 10% (5·7-13·8) of colon cancer. Inactivity causes 9% (range 5·1-12·5) of premature mortality, or more than 5·3 million of the 57 million deaths that occurred worldwide in 2008. If inactivity were not eliminated, but decreased instead by 10% or 25%, more than 533 000 and more than 1·3 million deaths, respectively, could be averted every year. We estimated that elimination of physical inactivity would increase the life expectancy of the world's population by 0·68 (range 0·41-0·95) years.
CONCLUSIONS
Physical inactivity has a major health effect worldwide. Decrease in or removal of this unhealthy behaviour could improve health substantially.
FUNDING
None.
Publication
Journal: Biochemical Journal
April/25/2001
Abstract
The specificities of 28 commercially available compounds reported to be relatively selective inhibitors of particular serine/threonine-specific protein kinases have been examined against a large panel of protein kinases. The compounds KT 5720, Rottlerin and quercetin were found to inhibit many protein kinases, sometimes much more potently than their presumed targets, and conclusions drawn from their use in cell-based experiments are likely to be erroneous. Ro 318220 and related bisindoylmaleimides, as well as H89, HA1077 and Y 27632, were more selective inhibitors, but still inhibited two or more protein kinases with similar potency. LY 294002 was found to inhibit casein kinase-2 with similar potency to phosphoinositide (phosphatidylinositol) 3-kinase. The compounds with the most impressive selectivity profiles were KN62, PD 98059, U0126, PD 184352, rapamycin, wortmannin, SB 203580 and SB 202190. U0126 and PD 184352, like PD 98059, were found to block the mitogen-activated protein kinase (MAPK) cascade in cell-based assays by preventing the activation of MAPK kinase (MKK1), and not by inhibiting MKK1 activity directly. Apart from rapamycin and PD 184352, even the most selective inhibitors affected at least one additional protein kinase. Our results demonstrate that the specificities of protein kinase inhibitors cannot be assessed simply by studying their effect on kinases that are closely related in primary structure. We propose guidelines for the use of protein kinase inhibitors in cell-based assays.
Publication
Journal: The Lancet
December/9/2007
Abstract
Much biomedical research is observational. The reporting of such research is often inadequate, which hampers the assessment of its strengths and weaknesses and of a study's generalisability. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) initiative developed recommendations on what should be included in an accurate and complete report of an observational study. We defined the scope of the recommendations to cover three main study designs: cohort, case-control, and cross-sectional studies. We convened a 2-day workshop in September, 2004, with methodologists, researchers, and journal editors to draft a checklist of items. This list was subsequently revised during several meetings of the coordinating group and in e-mail discussions with the larger group of STROBE contributors, taking into account empirical evidence and methodological considerations. The workshop and the subsequent iterative process of consultation and revision resulted in a checklist of 22 items (the STROBE statement) that relate to the title, abstract, introduction, methods, results, and discussion sections of articles. 18 items are common to all three study designs and four are specific for cohort, case-control, or cross-sectional studies. A detailed explanation and elaboration document is published separately and is freely available on the websites of PLoS Medicine, Annals of Internal Medicine, and Epidemiology. We hope that the STROBE statement will contribute to improving the quality of reporting of observational studies.
Publication
Journal: Science
June/17/2007
Abstract
Cellular responses to DNA damage are mediated by a number of protein kinases, including ATM (ataxia telangiectasia mutated) and ATR (ATM and Rad3-related). The outlines of the signal transduction portion of this pathway are known, but little is known about the physiological scope of the DNA damage response (DDR). We performed a large-scale proteomic analysis of proteins phosphorylated in response to DNA damage on consensus sites recognized by ATM and ATR and identified more than 900 regulated phosphorylation sites encompassing over 700 proteins. Functional analysis of a subset of this data set indicated that this list is highly enriched for proteins involved in the DDR. This set of proteins is highly interconnected, and we identified a large number of protein modules and networks not previously linked to the DDR. This database paints a much broader landscape for the DDR than was previously appreciated and opens new avenues of investigation into the responses to DNA damage in mammals.
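The ATM/ATR consensus referred to here is a serine or threonine followed by glutamine (an S/T-Q site); a short illustrative scan for such sites in a protein sequence, with an invented example sequence, is shown below.

```python
import re

def stq_sites(protein_seq):
    """Return 1-based positions of S/T-Q motifs (candidate ATM/ATR sites)."""
    return [m.start() + 1 for m in re.finditer(r"[ST]Q", protein_seq)]

# Invented example sequence.
print(stq_sites("MASQVLDTQERKSQAG"))   # -> [3, 8, 13]
```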
Publication
Journal: Genetics
May/11/1999
Abstract
The origin of organismal complexity is generally thought to be tightly coupled to the evolution of new gene functions arising subsequent to gene duplication. Under the classical model for the evolution of duplicate genes, one member of the duplicated pair usually degenerates within a few million years by accumulating deleterious mutations, while the other duplicate retains the original function. This model further predicts that on rare occasions, one duplicate may acquire a new adaptive function, resulting in the preservation of both members of the pair, one with the new function and the other retaining the old. However, empirical data suggest that a much greater proportion of gene duplicates is preserved than predicted by the classical model. Here we present a new conceptual framework for understanding the evolution of duplicate genes that may help explain this conundrum. Focusing on the regulatory complexity of eukaryotic genes, we show how complementary degenerative mutations in different regulatory elements of duplicated genes can facilitate the preservation of both duplicates, thereby increasing long-term opportunities for the evolution of new gene functions. The duplication-degeneration-complementation (DDC) model predicts that (1) degenerative mutations in regulatory elements can increase rather than reduce the probability of duplicate gene preservation and (2) the usual mechanism of duplicate gene preservation is the partitioning of ancestral functions rather than the evolution of new functions. We present several examples (including analysis of a new engrailed gene in zebrafish) that appear to be consistent with the DDC model, and we suggest several analytical and experimental approaches for determining whether the complementary loss of gene subfunctions or the acquisition of novel functions are likely to be the primary mechanisms for the preservation of gene duplicates.
Epigraphs quoted in the article:
"For a newly duplicated paralog, survival depends on the outcome of the race between entropic decay and chance acquisition of an advantageous regulatory mutation." (Sidow 1996, p. 717)
"On one hand, it may fix an advantageous allele giving it a slightly different, and selectable, function from its original copy. This initial fixation provides substantial protection against future fixation of null mutations, allowing additional mutations to accumulate that refine functional differentiation. Alternatively, a duplicate locus can instead first fix a null allele, becoming a pseudogene." (Walsh 1995, p. 426)
"Duplicated genes persist only if mutations create new and essential protein functions, an event that is predicted to occur rarely." (Nadeau and Sankoff 1997, p. 1259)
"Thus overall, with complex metazoans, the major mechanism for retention of ancient gene duplicates would appear to have been the acquisition of novel expression sites for developmental genes, with its accompanying opportunity for new gene roles underlying the progressive extension of development itself." (Cooke et al. 1997, p. 362)
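To make the complementation logic concrete, the toy Monte Carlo below tracks two duplicates that lose regulatory subfunctions at random, with a degenerative mutation allowed to fix only if every subfunction remains covered by at least one copy. It is an illustration of the DDC idea only, not the population-genetic model analysed in the paper.

```python
import random

def duplicate_fate(z=4, rng=random):
    """Toy simulation of the duplication-degeneration-complementation idea.

    Two duplicates each start with all z regulatory subfunctions.  Degenerative
    mutations knock out one subfunction at a time in a random copy, and a loss
    can fix only if that subfunction is still covered by the other copy.  The
    pair is resolved when one copy has lost everything (nonfunctionalization)
    or when each copy carries a subfunction the other lacks, so that both are
    jointly required (subfunctionalization).
    """
    copies = [set(range(z)), set(range(z))]
    while True:
        if not copies[0] or not copies[1]:
            return "one copy lost"
        if (copies[0] - copies[1]) and (copies[1] - copies[0]):
            return "both copies preserved"
        c = rng.randrange(2)                      # pick a copy to mutate
        s = rng.choice(sorted(copies[c]))         # pick one of its remaining subfunctions
        if s in copies[1 - c]:                    # still covered elsewhere: the loss can fix
            copies[c].discard(s)

fates = [duplicate_fate(z=4) for _ in range(10_000)]
print(fates.count("both copies preserved") / len(fates))
```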
Publication
Journal: Genome Research
December/1/2009
Abstract
Population stratification has long been recognized as a confounding factor in genetic association studies. Estimated ancestries, derived from multi-locus genotype data, can be used to perform a statistical correction for population stratification. One popular technique for estimation of ancestry is the model-based approach embodied by the widely applied program structure. Another approach, implemented in the program EIGENSTRAT, relies on Principal Component Analysis rather than model-based estimation and does not directly deliver admixture fractions. EIGENSTRAT has gained in popularity in part owing to its remarkable speed in comparison to structure. We present a new algorithm and a program, ADMIXTURE, for model-based estimation of ancestry in unrelated individuals. ADMIXTURE adopts the likelihood model embedded in structure. However, ADMIXTURE runs considerably faster, solving problems in minutes that take structure hours. In many of our experiments, we have found that ADMIXTURE is almost as fast as EIGENSTRAT. The runtime improvements of ADMIXTURE rely on a fast block relaxation scheme using sequential quadratic programming for block updates, coupled with a novel quasi-Newton acceleration of convergence. Our algorithm also runs faster and with greater accuracy than the implementation of an Expectation-Maximization (EM) algorithm incorporated in the program FRAPPE. Our simulations show that ADMIXTURE's maximum likelihood estimates of the underlying admixture coefficients and ancestral allele frequencies are as accurate as structure's Bayesian estimates. On real-world data sets, ADMIXTURE's estimates are directly comparable to those from structure and EIGENSTRAT. Taken together, our results show that ADMIXTURE's computational speed opens up the possibility of using a much larger set of markers in model-based ancestry estimation and that its estimates are suitable for use in correcting for population stratification in association studies.
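The model-based likelihood in question treats each genotype as two allele draws whose success probability mixes ancestral allele frequencies by the individual's ancestry fractions; a minimal sketch of evaluating that log-likelihood is shown below, with invented matrices, and it omits the block-relaxation and quasi-Newton optimisation that is the paper's contribution.

```python
import numpy as np

def admixture_loglik(G, Q, F, eps=1e-12):
    """Log-likelihood of the admixture model (constant terms dropped).

    G : (individuals, snps) genotypes coded 0, 1 or 2 (reference-allele count)
    Q : (individuals, K) ancestry fractions, rows summing to 1
    F : (K, snps) reference-allele frequencies in each ancestral population

    Each genotype is modelled as Binomial(2, p_ij) with
    p_ij = sum_k Q[i, k] * F[k, j].
    """
    P = Q @ F                          # (individuals, snps) mixed allele frequencies
    P = np.clip(P, eps, 1 - eps)       # guard the logs
    return np.sum(G * np.log(P) + (2 - G) * np.log(1 - P))

# Tiny fabricated example: 2 individuals, 3 SNPs, K = 2 ancestral populations.
G = np.array([[0, 1, 2], [2, 1, 0]])
Q = np.array([[0.8, 0.2], [0.1, 0.9]])
F = np.array([[0.1, 0.5, 0.9], [0.9, 0.5, 0.1]])
print(admixture_loglik(G, Q, F))
```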
Publication
Journal: Science
April/5/2011
Abstract
Metastasis causes most cancer deaths, yet this process remains one of the most enigmatic aspects of the disease. Building on new mechanistic insights emerging from recent research, we offer our perspective on the metastatic process and reflect on possible paths of future exploration. We suggest that metastasis can be portrayed as a two-phase process: The first phase involves the physical translocation of a cancer cell to a distant organ, whereas the second encompasses the ability of the cancer cell to develop into a metastatic lesion at that distant site. Although much remains to be learned about the second phase, we feel that an understanding of the first phase is now within sight, due in part to a better understanding of how cancer cell behavior can be modified by a cell-biological program called the epithelial-to-mesenchymal transition.
Publication
Journal: Magnetic Resonance in Medicine
August/2/1995
Abstract
The typical functional magnetic resonance (fMRI) study presents a formidable problem of multiple statistical comparisons (i.e., > 10,000 in a 128 x 128 image). To protect against false positives, investigators have typically relied on decreasing the per pixel false positive probability. This approach incurs an inevitable loss of power to detect statistically significant activity. An alternative approach, which relies on the assumption that areas of true neural activity will tend to stimulate signal changes over contiguous pixels, is presented. If one knows the probability distribution of such cluster sizes as a function of per pixel false positive probability, one can use cluster-size thresholds independently to reject false positives. Both Monte Carlo simulations and fMRI studies of human subjects have been used to verify that this approach can improve statistical power by as much as fivefold over techniques that rely solely on adjusting per pixel false positive probabilities.
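A minimal Monte Carlo sketch of the idea: threshold null images at the per-pixel false positive level, label contiguous suprathreshold pixels into clusters, and take a high quantile of the largest null cluster size as the cluster-size threshold. The sketch uses spatially independent Gaussian noise, whereas the published simulations model smooth fMRI noise; the scipy-based helper and its parameters are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage, stats

def cluster_size_threshold(shape=(64, 64), per_pixel_p=0.01,
                           n_sim=1000, alpha=0.05, seed=0):
    """Monte Carlo estimate of a cluster-size threshold under a simple null.

    Null images of independent Gaussian noise are thresholded at the z-value
    matching the per-pixel false positive probability, suprathreshold pixels
    are grouped into connected clusters, and the (1 - alpha) quantile of the
    largest null cluster size is returned.
    """
    rng = np.random.default_rng(seed)
    z_thresh = stats.norm.isf(per_pixel_p)
    max_sizes = []
    for _ in range(n_sim):
        noise = rng.standard_normal(shape)
        labels, n = ndimage.label(noise > z_thresh)
        sizes = np.bincount(labels.ravel())[1:] if n else np.array([0])
        max_sizes.append(sizes.max())
    return np.quantile(max_sizes, 1 - alpha)

print(cluster_size_threshold())
```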