Systems Biology


0 Q&A 919 Views Aug 5, 2025

Brain endothelial cells, which constitute the cerebrovasculature, form the first interface between the blood and brain and play essential roles in maintaining central nervous system (CNS) homeostasis. These cells exhibit strong apicobasal polarity, with distinct luminal and abluminal membrane compositions that crucially mediate compartmentalized functions of the vasculature. Existing transcriptomic and proteomic profiling techniques often lack the spatial resolution to discriminate between these membrane compartments, limiting insights into their distinct molecular compositions and functions. To overcome these limitations, we developed an in vivo proteomic strategy to selectively label and enrich luminal cerebrovascular proteins. In this approach, we perfuse a membrane-impermeable biotinylation reagent into the vasculature to covalently tag cell surface proteins exposed on the luminal side. This is followed by microvessel isolation and streptavidin-based enrichment of biotinylated proteins for downstream mass spectrometry analysis. Using this method, we robustly identified over 1,000 luminally localized proteins via standard liquid chromatography–tandem mass spectrometry (LC–MS/MS) techniques, achieving substantially improved enrichment of canonical luminal markers compared with conventional vascular proteomic approaches. Our method enables the generation of a high-confidence, compartment-resolved atlas of the luminal cerebrovascular proteome and offers a scalable platform for investigating endothelial surface biology in both healthy and disease contexts.
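
The enrichment comparison at the end of this workflow reduces to a ratio of protein intensities between the streptavidin-enriched luminal preparation and a conventional whole-vessel proteome. Below is a minimal sketch of that calculation, assuming a hypothetical tab-separated table with columns `protein`, `intensity_luminal`, and `intensity_whole_vessel`, and an illustrative (not exhaustive) list of canonical luminal markers; it is not the authors' pipeline.

```python
# Minimal sketch: compare enrichment of canonical luminal markers between the
# biotinylation workflow and a conventional whole-vessel proteome.
# File name, column names, and the marker list are hypothetical.
import numpy as np
import pandas as pd

df = pd.read_csv("protein_intensities.tsv", sep="\t")

# Example canonical luminal endothelial surface markers (illustrative only).
luminal_markers = {"Slc2a1", "Tfrc", "Pecam1", "Cldn5"}

# Log2 ratio of luminal-enriched vs. whole-vessel intensity per protein.
df["log2_ratio"] = np.log2(
    (df["intensity_luminal"] + 1) / (df["intensity_whole_vessel"] + 1)
)

is_marker = df["protein"].isin(luminal_markers)
print("median log2 enrichment, canonical luminal markers:",
      df.loc[is_marker, "log2_ratio"].median())
print("median log2 enrichment, all other proteins:",
      df.loc[~is_marker, "log2_ratio"].median())
```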

0 Q&A 1352 Views Jul 20, 2025

This manuscript details protocols for ZnCl2 precipitation-assisted sample preparation (ZASP) for proteomic analysis. By inducing protein precipitation with ZASP precipitation buffer (ZPB; final concentrations of 100 mM ZnCl2 and 50% methanol), ZASP removes harsh detergents and impurities present at high concentrations, such as sodium dodecyl sulfate (SDS), Triton X-100, and urea, from protein solutions prior to trypsin digestion. It is a practical, robust, and cost-effective approach for proteomic sample preparation. It was observed that 90.2% of proteins can be recovered from lysates by incubation with an equal volume of ZPB at room temperature for 10 min. In a 1 h data-dependent acquisition (DDA) analysis on an Exploris 480, 4,037 proteins and 25,626 peptides were quantified from 1 μg of mouse small intestine protein, reaching a peak of 4,500 proteins and up to 30,000 peptides with 5 μg of input. Additionally, ZASP outperformed other common sample preparation methods such as sodium deoxycholate (SDC)-based in-solution digestion, acetone precipitation, filter-aided sample preparation (FASP), and single-pot, solid-phase-enhanced sample preparation (SP3), demonstrating superior protein (4,456 proteins) and peptide (29,871 peptides) identification, a lower missed-cleavage rate (16.3%), and high reproducibility (Pearson correlation coefficient of 0.96 between replicates), with similar protein distributions and cellular localization patterns. Notably, the per-sample cost of ZASP with 100 μg of protein input is below 30 RMB, including the cost of trypsin.
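
The quality metrics reported above (missed-cleavage rate and between-replicate Pearson correlation) can be recomputed from a peptide-level results table. The sketch below is illustrative rather than the published ZASP scripts; the file name and the `sequence`, `rep1`, and `rep2` columns are hypothetical.

```python
# Minimal sketch: recompute the missed-cleavage rate and the replicate Pearson
# correlation from a peptide table. File and column names are hypothetical.
import numpy as np
import pandas as pd

peptides = pd.read_csv("peptides.tsv", sep="\t")

def missed_cleavages(seq: str) -> int:
    # Tryptic missed cleavages: internal K/R not followed by proline.
    return sum(1 for i, aa in enumerate(seq[:-1])
               if aa in "KR" and seq[i + 1] != "P")

peptides["missed"] = peptides["sequence"].apply(missed_cleavages)
rate = (peptides["missed"] > 0).mean()
print(f"missed-cleavage rate: {rate:.1%}")

# Pearson correlation of log-transformed intensities between two replicates.
x = np.log2(peptides["rep1"] + 1)
y = np.log2(peptides["rep2"] + 1)
print("replicate Pearson r:", np.corrcoef(x, y)[0, 1])
```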

0 Q&A 860 Views Feb 5, 2025

Glioblastoma (GBM) is the most aggressive brain tumor, and considerable effort has been devoted to the search for new drugs and therapeutic protocols for GBM. A label-free, mass spectrometry–based quantitative proteomics approach has been developed to identify and characterize proteins that are differentially expressed in GBM, with a focus on the extracellular matrix (ECM), to better understand the interactions and functions that lead to the pathological state. The main challenge in GBM research has been to identify novel molecular therapeutic targets and accurate diagnostic/prognostic biomarkers. To investigate the GBM secretome upon in vitro treatment with a histone deacetylase inhibitor (iHDAC), we employed a high-throughput, label-free, mass spectrometry–based methodology for protein identification and quantification, followed by in silico studies. Our analysis revealed significant changes in the ECM protein profile, particularly in proteins associated with the angiogenic matrisome. Proteins such as decorin, ADAM10, ADAM12, and ADAM15 were found to be differentially regulated in the in silico analysis. In contrast, key angiogenesis markers such as VEGF, and ECM proteins such as fibronectin and integrins, did not display significant changes. These results suggest that iHDACs may modulate or suppress tumor growth by targeting the secretion of ECM proteins rather than by directly inhibiting angiogenesis.
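
The label-free comparison of iHDAC-treated versus vehicle-treated secretomes amounts to a per-protein fold change and significance test on quantified intensities. The following is a minimal sketch of that step, not the authors' exact workflow; the file name, replicate column names, and cutoffs are hypothetical.

```python
# Minimal sketch of a label-free differential-abundance test (illustrative).
# File name and column names are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats

lfq = pd.read_csv("secretome_lfq.tsv", sep="\t", index_col="protein")

treated = np.log2(lfq[["ihdac_1", "ihdac_2", "ihdac_3"]] + 1)
control = np.log2(lfq[["vehicle_1", "vehicle_2", "vehicle_3"]] + 1)

log2_fc = treated.mean(axis=1) - control.mean(axis=1)
t_stat, p_val = stats.ttest_ind(treated, control, axis=1)

results = pd.DataFrame({"log2_fc": log2_fc, "p_value": p_val})
hits = results[(results["log2_fc"].abs() > 1) & (results["p_value"] < 0.05)]
print(hits.sort_values("p_value"))
```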

0 Q&A 591 Views Dec 20, 2024

Proteomics analysis is crucial for understanding the molecular mechanisms underlying muscle adaptations to different types of exercise, such as concentric and eccentric training. Traditional methods like two-dimensional gel electrophoresis and standard mass spectrometry have been used to analyze muscle protein content and modifications. This protocol details the preparation of muscle samples for proteomics analysis using ultra-high-performance liquid chromatography (UHPLC). It includes steps for muscle biopsy collection, protein extraction, digestion, and UHPLC-based analysis. The UHPLC method offers high-resolution separation of complex protein mixtures, providing more detailed and accurate proteomic profiles compared to conventional techniques. This protocol significantly enhances sensitivity, reproducibility, and efficiency, making it ideal for comprehensive muscle proteomics studies.

0 Q&A 704 Views Dec 5, 2024

The extracellular matrix (ECM) is a complex network of proteins that provides structural support and biochemical cues to cells within tissues. Characterizing ECM composition is critical for understanding this tissue component’s roles in development, homeostasis, and disease processes. This protocol describes an integrated pipeline for profiling both cellular and ECM proteins across varied tissue types using mass spectrometry–based proteomics. The workflow covers stepwise extraction of cellular and extracellular proteins, enzymatic digestion into peptides, peptide cleanup, mass spectrometry analysis, and bioinformatic data processing. The key advantages include unbiased coverage of cellular, ECM-associated, and core-ECM proteins, including the fraction of ECM that cannot be solubilized using strong chaotropic agents such as urea or guanidine hydrochloride. Additionally, the method has been optimized for reproducible ECM enrichment and quantification across diverse tissue samples. This protocol enables systematic mapping of the ECM at a proteome-wide scale.
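
The bioinformatic step of assigning identified proteins to cellular, ECM-associated, or core-ECM categories is essentially a lookup against a matrisome annotation. A minimal sketch is given below; the file names, column names, and the two-column annotation table (for example, one derived from published matrisome gene lists) are assumptions, not the protocol's own scripts.

```python
# Minimal sketch: annotate identified proteins with matrisome categories and
# summarize coverage per extraction fraction. File/column names are hypothetical.
import pandas as pd

proteins = pd.read_csv("identified_proteins.tsv", sep="\t")    # columns: gene, fraction
matrisome = pd.read_csv("matrisome_annotation.tsv", sep="\t")  # columns: gene, category

annotated = proteins.merge(matrisome, on="gene", how="left")
annotated["category"] = annotated["category"].fillna("Cellular / other")

# Count core-matrisome, matrisome-associated, and other proteins in each
# extraction fraction (e.g., soluble vs. insoluble ECM).
print(annotated.groupby(["fraction", "category"])["gene"].nunique())
```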

0 Q&A 1457 Views Aug 20, 2024

Bottom-up proteomics utilizes sample preparation techniques to enzymatically digest proteins, thereby generating identifiable and quantifiable peptides. Proteomics integrates with other omics methodologies, such as genomics and transcriptomics, to elucidate biomarkers associated with diseases and responses to treatment with drugs or biologics. The methodologies employed for preparing proteomic samples for mass spectrometry analysis vary across several factors, including the composition of lysis buffer detergents, homogenization techniques, protein extraction and precipitation methods, alkylation strategies, and the selection of digestion enzymes. The general workflow for bottom-up proteomics consists of sample preparation, mass spectrometric data acquisition (LC-MS/MS analysis), and subsequent downstream data analysis, including protein quantification and differential expression analysis. Sample preparation poses a persistent challenge due to issues such as low reproducibility and inherent procedural complexity. Herein, we have developed a validated chloroform/methanol sample preparation protocol to obtain reproducible peptide mixtures from both rodent tissue and human cell line samples for bottom-up proteomics analysis. The protocol we established may facilitate the standardization of bottom-up proteomics workflows, thereby enhancing the acquisition of reliable biologically and/or clinically relevant proteomic data.
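
For the protein quantification step of the downstream analysis, peptide intensities are commonly rolled up to protein level and reproducibility is checked across replicates. The sketch below illustrates one simple version of this, assuming a hypothetical peptide intensity table with a `protein` column and per-replicate columns; it is not part of the validated protocol itself.

```python
# Minimal sketch: roll peptide intensities up to protein level and assess
# reproducibility across replicates via coefficient of variation.
# File and column names are hypothetical.
import pandas as pd

peptides = pd.read_csv("peptide_intensities.tsv", sep="\t")
replicates = ["rep1", "rep2", "rep3"]

# Sum peptide intensities per protein in each replicate.
proteins = peptides.groupby("protein")[replicates].sum()

# Per-protein coefficient of variation (%) across replicates.
cv = proteins.std(axis=1) / proteins.mean(axis=1) * 100
print(f"median protein-level CV: {cv.median():.1f}%")
```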

0 Q&A 2934 Views Nov 20, 2022

Chemical proteomics focuses on the drug–target–phenotype relationship for target deconvolution and elucidation of the mechanism of action, a key step and a bottleneck in drug development and repurposing. Largely because affinity-based methods require chemically modified ligands, new, unbiased, proteome-wide, MS-based chemical proteomics approaches have been developed to perform drug target deconvolution using full proteome profiling and no chemical modification of the studied ligand. Notable among them, thermal proteome profiling (TPP) aims to identify the target(s) by measuring the difference in melting temperature of each identified protein between drug-treated and vehicle-treated samples, relying on the thermodynamic interpretation of “protein melting” and on curve fitting of all quantified proteins, at all temperatures, in each biological replicate. TPP and the other chemical proteomics approaches often fail to provide target deconvolution with sufficient proteome depth, statistical power, throughput, and sustainability to fulfill the ultimate needs of drug development. The proteome integral solubility alteration (PISA) assay forgoes the thermodynamic interpretation but offers a 10–100-fold higher throughput than the other proteomics methods, high sustainability, much shorter analysis time, much lower sample amount requirements, high confidence in results, maximal proteome coverage (~10,000 protein IDs), and up to five drugs/test molecules in one assay, with at least biological triplicates of each treatment. Each drug-treated or vehicle-treated sample is split into many fractions and exposed to a gradient of heat as the solubility-perturbing agent before being recombined into one sample; the soluble fraction of each is isolated, and deep, quantitative proteomics is then applied across all samples. Proteins interacting with the tested molecules (targets and off-targets), activated mechanistic factors, and proteins modified during the treatment show reproducible changes in their soluble amount compared with vehicle-treated controls. At present, the maximal multiplexing capability is 18 biological samples per PISA assay, which enables statistical robustness and flexible experimental designs for fuller target deconvolution, including integration of orthogonal chemical proteomics methods in a single PISA assay. Either living cells, for studying target engagement in vivo, or protein extracts, for identifying ligand-interacting proteins in vitro, can be used, and the minimal sample amount required unlocks target deconvolution in primary cells and their derived cultures.
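
For the TPP arm of such experiments, “protein melting” is typically modeled by fitting a sigmoid to the soluble (non-denatured) fraction of each protein as a function of temperature and comparing the fitted melting temperatures between drug- and vehicle-treated samples. The sketch below shows that curve fit on hypothetical data with a standard three-parameter sigmoid; it is illustrative only and not the published TPP or PISA analysis code.

```python
# Minimal sketch of a thermal melting-curve fit (illustrative only).
import numpy as np
from scipy.optimize import curve_fit

def melt_curve(temp, tm, slope, plateau):
    """Fraction of protein remaining soluble at a given temperature."""
    return (1.0 - plateau) / (1.0 + np.exp((temp - tm) / slope)) + plateau

temperatures = np.array([37, 41, 44, 47, 50, 53, 56, 59, 63, 67], dtype=float)
# Hypothetical relative soluble fractions for one protein (vehicle-treated).
soluble = np.array([1.00, 0.98, 0.95, 0.85, 0.60, 0.35, 0.18, 0.08, 0.04, 0.02])

params, _ = curve_fit(melt_curve, temperatures, soluble, p0=[50.0, 2.0, 0.05])
print(f"fitted melting temperature Tm = {params[0]:.1f} °C")
```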


0 Q&A 2532 Views Feb 5, 2022

Cells sense and respond to mitogens by activating a cascade of signaling events, primarily mediated by tyrosine phosphorylation (pY). Because this signaling plays key roles in cellular homeostasis, its deregulation is often linked to oncogenesis. To understand the mechanisms underlying these signaling pathway aberrations, it is necessary to quantify tyrosine phosphorylation on a global scale in cancer cell models. However, the majority of protein phosphorylation events occur on serine (86%) and threonine (12%) residues, whereas only 2% of phosphorylation events occur on tyrosine residues (Olsen et al., 2006). The low stoichiometry of tyrosine phosphorylation makes it difficult to quantify cellular pY events comprehensively with high mass accuracy and reproducibility. Here, we describe a detailed protocol for isolating and quantifying tyrosine-phosphorylated peptides from drug-perturbed, growth factor-stimulated cancer cells, using immunoaffinity purification and tandem mass tags (TMT) coupled with mass spectrometry.
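
Quantifying the immuno-enriched pY peptides from TMT reporter ions generally involves normalizing reporter intensities across channels and computing per-peptide ratios of treated to control channels. Below is a minimal sketch of that step; the input file, channel-to-condition assignments, and normalization choice are hypothetical rather than taken from the protocol.

```python
# Minimal sketch: normalize TMT reporter-ion intensities for pY peptides and
# compute treated/control log2 ratios. Channel assignments are hypothetical.
import numpy as np
import pandas as pd

psm = pd.read_csv("py_psm_reporters.tsv", sep="\t", index_col="peptide")
control_channels = ["tmt_126", "tmt_127n", "tmt_127c"]
treated_channels = ["tmt_128n", "tmt_128c", "tmt_129n"]
channels = control_channels + treated_channels

# Correct for uneven channel loading by equalizing total reporter signal.
totals = psm[channels].sum(axis=0)
norm = psm[channels] / totals * totals.mean()

log2_ratio = (np.log2(norm[treated_channels] + 1).mean(axis=1)
              - np.log2(norm[control_channels] + 1).mean(axis=1))
print(log2_ratio.sort_values().head(10))  # most down-regulated pY peptides
```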


0 Q&A 3001 Views Jun 20, 2021

Protein N-glycosylation plays a vital role in diverse cellular processes, and dysregulated N-glycosylation is implicated in a variety of human diseases including neurodegenerative disorders and cancer. With recent advances in high-resolution mass spectrometry-based glycoproteomics technologies enabling large-scale N-glycoproteome profiling of disease and control samples, analysis of the large datasets has become a challenge. Here, we provide a protocol for the systems-level analysis of in vivo N-glycosylation sites on N-glycosylated proteins and their changes in human disease, such as Alzheimer's disease. The protocol includes quantitation and differential analysis of N-glycopeptide abundance, in addition to integrative N-glycoproteome and proteome data analyses, to determine disease-associated changes in N-glycosylation site occupancy and identify differentially N-glycosylated proteins in human disease versus control samples. This protocol can be modified and applied to study proteome-wide N-glycosylation alterations in response to different cellular stresses or pathophysiological states in other organisms or model systems.
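
The integrative step, which compares N-glycopeptide abundance against the abundance of the parent protein to estimate changes in site occupancy, can be expressed as a simple normalization. The sketch below illustrates it; the input tables, column names, and the ratio-of-ratios formulation are assumptions rather than the published analysis scripts.

```python
# Minimal sketch: estimate disease vs. control changes in N-glycosylation site
# occupancy by normalizing glycopeptide abundance to parent protein abundance.
# File and column names are hypothetical.
import numpy as np
import pandas as pd

glyco = pd.read_csv("glycopeptides.tsv", sep="\t")  # protein, site, disease, control
prot = pd.read_csv("proteins.tsv", sep="\t")        # protein, disease, control

merged = glyco.merge(prot, on="protein", suffixes=("_glyco", "_prot"))

# Occupancy-adjusted change: glycopeptide-level change minus protein-level change.
merged["delta_log2_occupancy"] = (
    np.log2(merged["disease_glyco"] / merged["control_glyco"])
    - np.log2(merged["disease_prot"] / merged["control_prot"])
)
print(merged[["protein", "site", "delta_log2_occupancy"]]
      .sort_values("delta_log2_occupancy"))
```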

3 Q&A 7589 Views Sep 5, 2020

Protein-ligand binding prediction is central to the drug-discovery process. This often follows an analysis of genomics data for protein targets and then protein structure discovery. However, performing reproducible protein conformational analysis and ligand-binding calculations with vetted methods and protocols can be a challenge. Here we show how the Biomolecular Reaction and Interaction Dynamics Global Environment (BRIDGE), an open-source, web-based compute and analytics platform for computational chemistry built on the Galaxy bioinformatics platform, makes protocol sharing seamless, as it is in genomics and proteomics. BRIDGE makes available tools and workflows to carry out protein molecular dynamics simulations and accurate free energy computations of protein-ligand binding. We illustrate the dynamics and simulation protocols for predicting protein-ligand binding affinities in silico on the T4 lysozyme system. This protocol is suitable for both novice and experienced practitioners. We show that with BRIDGE, protocols can be shared with collaborators or made publicly available, thus making simulation results and computations independently verifiable and reproducible.
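
The binding free energies that such workflows compute ultimately rest on alchemical free-energy estimators. As a compact illustration of one underlying estimator, the sketch below evaluates the Zwanzig (exponential averaging) relation, ΔA = -kT ln⟨exp(-ΔU/kT)⟩, on hypothetical per-frame energy differences; BRIDGE itself relies on vetted molecular dynamics and free-energy tools rather than this toy code.

```python
# Minimal sketch of the Zwanzig (exponential averaging) free-energy estimator.
# The per-frame energy differences below are hypothetical.
import numpy as np

kT = 0.593  # kcal/mol at ~298 K

# Hypothetical per-frame potential-energy differences U_B - U_A (kcal/mol),
# sampled from state A of an alchemical transformation.
rng = np.random.default_rng(0)
delta_u = rng.normal(loc=1.5, scale=0.8, size=5000)

# Zwanzig relation: dA = -kT * ln < exp(-dU / kT) >_A
delta_a = -kT * np.log(np.mean(np.exp(-delta_u / kT)))
print(f"estimated free-energy difference: {delta_a:.2f} kcal/mol")
```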