Given the significant requirements for transforming and promoting the process industry, we present the major limitations of current petrochemical enterprises, including limitations in decision-making, production operation, efficiency and security, information integration, and so forth. To promote a vision of the process industry with efficient, green, and smart production, modern information technology should be utilized throughout the entire optimization process for production, management, and marketing. Focusing on smart equipment in manufacturing processes, as well as on the adaptive intelligent optimization of the manufacturing process, operating mode, and supply chain management, we put forward several key scientific problems in engineering in a demand-driven and application-oriented manner, namely: ① intelligent sensing and integration of all process information, including production and management information; ② collaborative decision-making in the supply chain, industry chain, and value chain, driven by knowledge; ③ cooperative control and optimization of plant-wide production processes via human-cyber-physical interaction; and ④ life-cycle assessments for safety and environmental footprint monitoring, in addition to tracing analysis and risk control. To address these limitations and core scientific problems, we further present fundamental theories and key technologies for smart and optimal manufacturing in the process industry. Although this paper discusses the process industry in China, the conclusions can be extended to the process industry around the world.
The challenges posed by smart manufacturing for the process industries and for process systems engineering (PSE) researchers are discussed in this article. Much progress has been made in achieving plant- and site-wide optimization, but benchmarking would give greater confidence. Technical challenges confronting process systems engineers in developing enabling tools and techniques are discussed regarding flexibility and uncertainty, responsiveness and agility, robustness and security, the prediction of mixture properties and function, and new modeling and mathematics paradigms. Exploiting intelligence from big data to drive agility will require tackling new challenges, such as how to ensure the consistency and confidentiality of data through long and complex supply chains. Modeling challenges also exist, and involve ensuring that all key aspects are properly modeled, particularly where health, safety, and environmental concerns require accurate predictions of small but critical amounts at specific locations. Environmental concerns will require us to keep a closer track on all molecular species so that they are optimally used to create sustainable solutions. Disruptive business models may result, particularly from new personalized products, but that is difficult to predict.
This work uses a mathematical optimization approach to analyze and compare facilities that either capture carbon dioxide (CO2) artificially or use naturally captured CO2 in the form of lignocellulosic biomass toward the production of the same product, dimethyl ether (DME). In nature, plants capture CO2 via photosynthesis in order to grow. The design of the first process discussed here is based on a superstructure optimization approach in order to select technologies that transform lignocellulosic biomass into DME. Biomass is gasified; next, the raw syngas must be purified using reforming, scrubbing, and carbon capture technologies before it can be used to directly produce DME. Alternatively, CO2 can be captured and used to produce DME via hydrogenation. Hydrogen (H2) is produced by splitting water using solar energy. Facilities based on both photovoltaic (PV) and concentrated solar power (CSP) technologies have been designed; their monthly operation, which is based on solar availability, is determined using a multi-period approach. The current level of technological development gives biomass an advantage as a carbon capture technology, since both water consumption and economic parameters are in its favor. However, due to the area required for growing biomass and the total amount of water consumed (if plant growing is also accounted for), the decision to use biomass is not a straightforward one.
Most olefins (e.g., ethylene and propylene) will continue to be produced through steam cracking (SC) of hydrocarbons in the coming decade. In an uncertain commodity market, the chemical industry is investing very little in alternative technologies and feedstocks because of their current lack of economic viability, despite decreasing crude oil reserves and the recognition of global warming. In this perspective, some of the most promising alternatives are compared with the conventional SC process, and the major bottlenecks of each of the competing processes are highlighted. These technologies emerge especially from the abundance of cheap propane, ethane, and methane from shale gas and stranded gas. From an economic point of view, methane is an interesting starting material, if chemicals can be produced from it. The huge availability of crude oil and the expected substantial decline in the demand for fuels imply that the future for proven technologies such as Fischer-Tropsch synthesis (FTS) or methanol to gasoline is not bright. The abundance of cheap ethane and the large availability of crude oil, on the other hand, have caused the SC industry to shift to these two extremes, making room for the on-purpose production of light olefins, such as by the catalytic dehydrogenation of propane.
Smart manufacturing will transform the oil refining and petrochemical sector into a connected, information-driven environment. Using real-time and high-value support systems, smart manufacturing enables a coordinated and performance-oriented manufacturing enterprise that responds quickly to customer demands and minimizes energy and material usage, while radically improving sustainability, productivity, innovation, and economic competitiveness. In this paper, several examples of the application of so-called “smart manufacturing” for the petrochemical sector are demonstrated, such as the fault detection of a catalytic cracking unit driven by big data, advanced optimization for the planning and scheduling of oil refinery sites, and more. Key scientific factors and challenges for the further smart manufacturing of chemical and petrochemical processes are identified.
In the globalized market environment, increasingly significant economic and environmental factors within complex industrial plants impose importance on the optimization of global production indices; such optimization includes improvements in production efficiency, product quality, and yield, along with reductions of energy and resource usage. This paper briefly overviews recent progress in data-driven hybrid intelligence optimization methods and technologies in improving the performance of global production indices in mineral processing. First, we provide the problem description. Next, we summarize recent progress in data-based optimization for mineral processing plants. This optimization consists of four layers: optimization of the target values for monthly global production indices, optimization of the target values for daily global production indices, optimization of the target values for operational indices, and automation systems for unit processes. We briefly overview recent progress in each of the different layers. Finally, we point out opportunities for future works in data-based optimization for mineral processing plants.
The scheduling of gasoline-blending operations is an important problem in the oil refining industry. This problem not only exhibits the combinatorial nature that is intrinsic to scheduling problems, but also non-convex nonlinear behavior, due to the blending of various materials with different quality properties. In this work, a global optimization algorithm is proposed to solve a previously published continuous-time mixed-integer nonlinear scheduling model for gasoline blending. The model includes blend recipe optimization, the distribution problem, and several important operational features and constraints. The algorithm employs piecewise McCormick relaxation (PMCR) and normalized multiparametric disaggregation technique (NMDT) to compute estimates of the global optimum. These techniques partition the domain of one of the variables in a bilinear term and generate convex relaxations for each partition. By increasing the number of partitions and reducing the domain of the variables, the algorithm is able to refine the estimates of the global solution. The algorithm is compared to two commercial global solvers and two heuristic methods by solving four examples from the literature. Results show that the proposed global optimization algorithm performs on par with commercial solvers but is not as fast as heuristic approaches.
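The core relaxation can be sketched in a few lines. The toy code below illustrates only the envelope idea behind piecewise McCormick relaxation (PMCR), not the published algorithm, which embeds these constraints in the MINLP and also applies NMDT and domain reduction; all numbers are illustrative.

```python
# Minimal sketch of piecewise McCormick relaxation (PMCR): the bilinear
# term w = x*y is bounded below by its McCormick envelope, and
# partitioning the domain of x tightens the envelope partition by partition.
def mccormick_lower(x, y, xL, xU, yL, yU):
    """Largest of the two McCormick underestimators of w = x*y at (x, y)."""
    return max(xL * y + x * yL - xL * yL,
               xU * y + x * yU - xU * yU)

def piecewise_mccormick_lower(x, y, xL, xU, yL, yU, n):
    """Underestimate w = x*y using the partition of [xL, xU] containing x."""
    width = (xU - xL) / n
    k = min(int((x - xL) / width), n - 1)      # index of the active partition
    return mccormick_lower(x, y, xL + k * width, xL + (k + 1) * width, yL, yU)

# Increasing the number of partitions shrinks the relaxation gap:
true_w = 1.4 * 2.5
for n in (1, 4, 16):
    gap = true_w - piecewise_mccormick_lower(1.4, 2.5, 0.0, 3.0, 1.0, 4.0, n)
    print(n, gap)
```

Refining the partitions (and shrinking variable domains) is exactly how the algorithm tightens its estimates of the global optimum between iterations.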
In the present work, two new, (multi-)parametric programming (mp-P)-inspired algorithms for the solution of mixed-integer nonlinear programming (MINLP) problems are developed, with their main focus being on process synthesis problems. The algorithms are developed for the special case in which the nonlinearities arise because of logarithmic terms, with the first one being developed for the deterministic case, and the second for the parametric case (p-MINLP). The key idea is to formulate and solve the square system of the first-order Karush-Kuhn-Tucker (KKT) conditions in an analytical way, by treating the binary variables and/or uncertain parameters as symbolic parameters. To this effect, symbolic manipulation and solution techniques are employed. In order to demonstrate the applicability and validity of the proposed algorithms, two process synthesis case studies are examined. The corresponding solutions are then validated using state-of-the-art numerical MINLP solvers. For p-MINLP, the solution is given by an optimal solution as an explicit function of the uncertain parameters.
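The key idea of solving the square KKT system symbolically can be illustrated on a toy problem; the sketch below (a simple equality-constrained quadratic program, assuming SymPy is available) is not the paper's algorithm, which handles logarithmic nonlinearities and binary variables, but it shows how treating a parameter as a symbol yields the optimizer as an explicit function of that parameter.

```python
# Toy illustration of the mp-P idea: solve the square system of
# first-order KKT conditions symbolically, treating the uncertain
# parameter theta as a symbol, so the optimal solution is obtained as an
# explicit function of theta.
import sympy as sp

x1, x2, lam, theta = sp.symbols("x1 x2 lam theta", real=True)

objective = x1**2 + x2**2           # convex toy objective
constraint = x1 + x2 - theta        # equality constraint g(x) = 0
L = objective + lam * constraint    # Lagrangian

# Square KKT system: stationarity in x1, x2 plus primal feasibility.
kkt = [sp.diff(L, v) for v in (x1, x2)] + [constraint]
sol = sp.solve(kkt, [x1, x2, lam], dict=True)[0]

print(sol[x1], sol[x2], sol[lam])   # x* = theta/2 in each coordinate
```

Here the explicit solution x* = θ/2 holds for every value of the parameter, which is the defining feature of a parametric solution.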
Over time, the performance of processes may deviate from the initial design due to process variations and uncertainties, making it necessary to develop systematic methods for online optimality assessment based on routine operating process data. Some processes have multiple operating modes, caused by set point changes of critical process variables to achieve different product specifications. On the other hand, the operating region in each operating mode can alter due to uncertainties. In this paper, we establish an optimality assessment framework for processes that typically have multi-mode, multi-region operations, as well as transitions between different modes. A kernel density approach is adopted and improved for operating mode detection. For online mode detection, the model-based clustering discriminant analysis (MclustDA) approach is incorporated with some a priori knowledge of the system. In addition, the multi-modal behavior of steady-state modes is tackled using the mixture probabilistic principal component regression (MPPCR) method, and dynamic principal component regression (DPCR) is used to investigate transitions between different modes. Moreover, a probabilistic causality detection method based on the sequential forward floating search (SFFS) method is introduced for diagnosing poor or non-optimal behavior. Finally, the proposed method is tested on the Tennessee Eastman (TE) benchmark simulation process in order to evaluate its performance.
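The intuition behind kernel-density-based mode detection can be sketched with synthetic data: each operating mode shows up as a peak (local maximum) of the estimated density of a process variable. The code below is an illustrative NumPy-only sketch with made-up set points, not the improved method of the paper.

```python
# Sketch of kernel-density-based operating mode detection: operating
# modes appear as local maxima of a Gaussian kernel density estimate
# built from routine operating data.
import numpy as np

def kde(grid, samples, bandwidth):
    """Gaussian kernel density estimate evaluated on `grid`."""
    z = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(samples) * bandwidth * np.sqrt(2 * np.pi))

def find_modes(grid, density):
    """Grid points that are strict local maxima of the density."""
    inner = (density[1:-1] > density[:-2]) & (density[1:-1] > density[2:])
    return grid[1:-1][inner]

rng = np.random.default_rng(0)
# Synthetic data from two operating modes around set points 2.0 and 7.0
samples = np.concatenate([rng.normal(2.0, 0.3, 500), rng.normal(7.0, 0.4, 500)])
grid = np.linspace(0, 10, 1000)
modes = find_modes(grid, kde(grid, samples, bandwidth=0.3))
print(modes)  # two modes, near 2.0 and 7.0
```

A new observation can then be assigned to the nearest detected mode, which is the starting point for the per-mode assessment described above.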
This paper deals with a first-principle mathematical model that describes the electrostatic coalescer units devoted to the separation of water from oil in water-in-oil emulsions, which are typical of upstream operations in oil fields. The main phenomena governing the behavior of the electrostatic coalescer are described, starting from fundamental laws. In addition, the gradual coalescence of the emulsion droplets is considered in the mathematical model in a dynamic fashion, as this phenomenon is identified as a key step in the overall yield of the unit operation. The resulting differential system with boundary conditions is then integrated using numerical libraries, and the simulation results are consistent with the available literature and industrial data. A sensitivity analysis is provided with respect to the main parameters. The resulting mathematical model is a flexible tool that is useful for design, unit behavior prediction, performance monitoring, and optimization.
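To make the "differential system with boundary conditions" step concrete, the generic sketch below solves a toy boundary value problem (y'' = -y, with y(0) = 0 and y(π/2) = 1, whose exact solution is sin x) by central finite differences; it is not the coalescer model itself, only an illustration of the numerical approach.

```python
# Generic sketch: integrate a differential system with boundary
# conditions by discretizing it and solving the resulting linear system.
# Toy problem: y'' = -y, y(0) = 0, y(pi/2) = 1; exact solution y = sin(x).
import numpy as np

n = 200
x = np.linspace(0, np.pi / 2, n + 1)
h = x[1] - x[0]
# Interior equations: (y_{i-1} - 2 y_i + y_{i+1}) / h^2 + y_i = 0
A = (np.diag(np.full(n - 1, -2.0 + h**2))
     + np.diag(np.ones(n - 2), 1)
     + np.diag(np.ones(n - 2), -1))
b = np.zeros(n - 1)
b[-1] -= 1.0                        # boundary value y(pi/2) = 1
y = np.concatenate([[0.0], np.linalg.solve(A, b), [1.0]])
err = np.max(np.abs(y - np.sin(x)))
print(err)  # small discretization error, O(h^2)
```

Real models of this kind couple several such equations (droplet population, field, settling), but the discretize-and-solve pattern is the same.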
Carbon capture and storage (CCS) technology will play a critical role in reducing anthropogenic carbon dioxide (CO2) emissions from fossil-fired power plants and other energy-intensive processes. However, the increase in energy cost caused by equipping a plant with a carbon capture process is the main barrier to its commercial deployment. To reduce the capital and operating costs of carbon capture, great efforts have been made to achieve optimal design and operation through process modeling, simulation, and optimization. Accurate models form an essential foundation for this purpose. This paper presents a study on developing a more accurate rate-based model in Aspen Plus® for the monoethanolamine (MEA)-based carbon capture process through multistage model validation. The modeling framework for this process was established first. The steady-state process model was then developed and validated at three stages, covering the thermodynamic model, physical property calculations, and a process model at the pilot plant scale, over a wide range of pressures, temperatures, and CO2 loadings. The correlations for liquid density and interfacial area were updated by coding Fortran subroutines in Aspen Plus®. The validation results show that the correlation combination used for the thermodynamic model in this study has higher accuracy than those of three other key publications, and that the predictions of the process model agree well with the pilot plant experimental data. A case study was carried out for carbon capture from a 250 MWe combined cycle gas turbine (CCGT) power plant. A shorter packing height and a lower specific duty were achieved using this accurate model.
As the demand for energy continues to increase, shale gas, as an unconventional source of methane (CH4), shows great potential for commercialization. However, due to the ultra-low permeability of shale gas reservoirs, special procedures such as horizontal drilling, hydraulic fracturing, periodic well shut-in, and carbon dioxide (CO2) injection may be required in order to boost gas production, maximize economic benefits, and ensure safe and environmentally sound operation. Although intensive research is devoted to this emerging technology, many researchers have studied shale gas design and operational decisions only in isolation. In fact, these decisions are highly interactive and should be considered simultaneously; therefore, this study addresses the interactions between design and operational decisions. In this paper, we first establish a full-physics model for a shale gas reservoir. Next, we conduct a sensitivity analysis of important design and operational decisions such as well length, well arrangement, number of fractures, fracture distance, CO2 injection rate, and shut-in scheduling in order to gain in-depth insights into the complex behavior of shale gas networks. The results suggest that the case with the highest shale gas production may not necessarily be the most profitable design, and that drilling, fracturing, and CO2 injection have great impacts on the economic viability of this technology. In particular, due to the high costs, enhanced gas recovery (EGR) using CO2 does not appear to be commercially competitive unless tax abatements or subsidies are available for CO2 sequestration. It was also found that the interactions between design and operational decisions are significant and that these decisions should be optimized simultaneously.
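The logic of a one-at-a-time sensitivity analysis over design and operational decisions can be sketched as follows. All numbers and the production/cost curves below are hypothetical toy values, not the paper's full-physics reservoir model; the sketch only illustrates why the highest-production design need not be the most profitable one.

```python
# Toy one-at-a-time sensitivity analysis: perturb each decision variable
# around a base case and record the swing in a simple profit estimate.
# All coefficients are hypothetical illustrative values.
def profit(n_fractures, well_length_m, gas_price):
    production = 1.5e6 * n_fractures**0.5 * (well_length_m / 1500) ** 0.7  # toy curve, Mcf
    cost = 2.0e6 + 0.25e6 * n_fractures + 1200 * well_length_m             # toy costs, USD
    return production * gas_price - cost

base = dict(n_fractures=16, well_length_m=1500, gas_price=3.0)
for name in base:
    lo, hi = 0.8 * base[name], 1.2 * base[name]
    swing = profit(**{**base, name: hi}) - profit(**{**base, name: lo})
    print(name, round(swing))   # +/-20% swing in profit per decision
```

Because each decision also appears in the cost term, maximizing production alone (e.g., by adding fractures) can reduce profit, which is the interaction effect the paper quantifies with a rigorous reservoir model.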
In this paper, a reinforcement learning (RL)-based Sarsa temporal-difference (TD) algorithm is applied to search for a unified bidding and operation strategy for a coal-fired power plant with monoethanolamine (MEA)-based post-combustion carbon capture under different carbon dioxide (CO2) allowance market conditions. The objective of the decision maker for the power plant is to maximize the discounted cumulative profit during the power plant lifetime. Two constraints are considered for the objective formulation. Firstly, the tradeoff between the energy-intensive carbon capture and the electricity generation should be made under presumed fixed fuel consumption. Secondly, the CO2 allowances purchased from the CO2 allowance market should be approximately equal to the quantity of CO2 emission from power generation. Three case studies are demonstrated thereafter. In the first case, we show the convergence of the Sarsa TD algorithm and find a deterministic optimal bidding and operation strategy. In the second case, compared with the independently designed operation and bidding strategies discussed in most of the relevant literature, the Sarsa TD-based unified bidding and operation strategy with time-varying flexible market-oriented CO2 capture levels is demonstrated to help the power plant decision maker gain a higher discounted cumulative profit. In the third case, a competitor operating another power plant identical to the preceding plant is considered under the same CO2 allowance market. The competitor also has carbon capture facilities but applies a different strategy to earn profits. The discounted cumulative profits of the two power plants are then compared, thus exhibiting the competitiveness of the power plant that is using the unified bidding and operation strategy explored by the Sarsa TD algorithm.
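The Sarsa temporal-difference update at the heart of this approach can be shown in a self-contained sketch. The toy chain MDP below is illustrative only (it is not the plant/bidding model): the update nudges Q(s, a) toward r + γ·Q(s', a'), where a' is the action the current policy actually takes in s'.

```python
# Minimal Sarsa TD sketch on a toy chain MDP: action 1 moves right
# toward a terminal reward; action 0 waits at a small cost.
import random

random.seed(0)
n_states, actions = 5, (0, 1)
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def policy(s):
    """Epsilon-greedy action selection."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])

for _ in range(2000):                    # training episodes
    s, a = 0, policy(0)
    for _ in range(50):                  # step cap per episode
        if a == 1:
            s2, r = s + 1, (1.0 if s + 1 == n_states - 1 else 0.0)
        else:
            s2, r = s, -0.01             # waiting carries a small cost
        done = s2 == n_states - 1
        a2 = policy(s2)
        target = r + (0.0 if done else gamma * Q[(s2, a2)])
        Q[(s, a)] += alpha * (target - Q[(s, a)])   # Sarsa TD update
        if done:
            break
        s, a = s2, a2

greedy = [max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)]
print(greedy)  # the learned greedy policy moves right in every state
```

In the paper, the states and actions instead encode market conditions, bidding decisions, and time-varying CO2 capture levels, but the update rule is the same.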
This paper presents a concise summary of recent studies on the long-term variations of haze in North China and on the environmental and dynamic conditions for severe persistent haze events. Results indicate that haze days have shown a clear rising trend over the past 50 years in North China. The occurrence frequency of persistent haze events shows a similar rising trend, due to the continuous rise of winter temperatures, the decrease of surface wind speeds, and the aggravation of atmospheric stability. In North China, when severe persistent haze events occur, anomalous southwesterly winds prevail in the lower troposphere, providing sufficient moisture for the formation of haze. Moreover, North China is mainly controlled by a deep downdraft in the mid-lower troposphere, which reduces the thickness of the planetary boundary layer and thus markedly lowers the atmospheric capacity for pollutants. This atmospheric circulation and sinking motion provide favorable conditions for the formation and maintenance of haze in North China.
The Paris Agreement proposed to keep the increase in global average temperature to well below 2 °C above pre-industrial levels and to pursue efforts to limit the temperature increase to 1.5 °C above pre-industrial levels. It was thus the first international treaty to endow the 2 °C global temperature target with legal effect. The qualitative expression of the ultimate objective in Article 2 of the United Nations Framework Convention on Climate Change (UNFCCC) has now evolved into the numerical temperature rise target in Article 2 of the Paris Agreement. Starting with the Second Assessment Report (SAR) of the Intergovernmental Panel on Climate Change (IPCC), an important task for subsequent assessments has been to provide scientific information to help determine the quantified long-term goal for UNFCCC negotiation. However, because doing so involves value judgments beyond the scope of scientific assessment, the IPCC has never scientifically affirmed an unacceptable extent of global temperature rise. The setting of the long-term goal for addressing climate change has been a long process, and the 2 °C global temperature target is a political consensus built on the basis of scientific assessment. This article analyzes the evolution of the long-term global goal for addressing climate change and its impact on scientific assessment, negotiation processes, and global low-carbon development, from the aspects of the origin of the target, the series of assessments carried out by the IPCC focusing on Article 2 of the UNFCCC, and the promotion of the global temperature goal at the political level.
Tissue engineering is a relatively new but rapidly developing field in the medical sciences. Noncoding RNAs (ncRNAs) are functional RNA molecules without a protein-coding function; they can regulate cellular behavior and change the biological milieu of the tissue. The application of ncRNAs in tissue engineering is starting to attract increasing attention as a means of resolving a large number of unmet healthcare needs, although ncRNA-based approaches have not yet entered clinical practice. In-depth research on the regulation and delivery of ncRNAs may improve their application in tissue engineering. The aim of this review is: to outline essential ncRNAs that are related to tissue engineering for the repair and regeneration of nerve, skin, liver, vascular system, and muscle tissue; to discuss their regulation and delivery; and to anticipate their potential therapeutic applications.
Knee osteoarthritis (OA) is the most common form of arthritis worldwide. The incidence of this disease is rising and its treatment poses an economic burden. Two early targets of knee OA treatment include the predominant symptom of pain, and cartilage damage in the knee joint. Current treatments have been beneficial in treating the disease but none is as effective as total knee arthroplasty (TKA). However, while TKA is an end-stage solution of the disease, it is an invasive and expensive procedure. Therefore, innovative regenerative engineering strategies should be established as these could defer or annul the need for a TKA. Several biomaterial and cell-based therapies are currently in development and have shown early promise in both preclinical and clinical studies. The use of advanced biomaterials and stem cells independently or in conjunction to treat knee OA could potentially reduce pain and regenerate focal articular cartilage damage. In this review, we discuss the pathogenesis of pain and cartilage damage in knee OA and explore novel treatment options currently being studied, along with some of their limitations.
Given the limited spontaneous repair that follows cartilage injury, demand is growing for tissue engineering approaches for cartilage regeneration. There are two major applications for tissue-engineered cartilage. One is in orthopedic surgery, in which the engineered cartilage is usually used to repair cartilage defects or loss in an articular joint or meniscus in order to restore the joint function. The other is for head and neck reconstruction, in which the engineered cartilage is usually applied to repair cartilage defects or loss in an auricle, trachea, nose, larynx, or eyelid. The challenges faced by the engineered cartilage for one application are quite different from those faced by the engineered cartilage for the other application. As a result, the emphases of the engineering strategies to generate cartilage are usually quite different for each application. The statuses of preclinical animal investigations and of the clinical translation of engineered cartilage are also at different levels for each application. The aim of this review is to provide an opinion piece on the challenges, current developments, and future directions for cartilage engineering for both applications.
The stiffness and nanotopographical characteristics of the extracellular matrix (ECM) influence numerous developmental, physiological, and pathological processes in vivo. These biophysical cues have therefore been applied to modulate almost all aspects of cell behavior, from cell adhesion and spreading to proliferation and differentiation. Delineation of the biophysical modulation of cell behavior is critical to the rational design of new biomaterials, implants, and medical devices. The effects of stiffness and topographical cues on cell behavior have previously been reviewed, respectively; however, the interwoven effects of stiffness and nanotopographical cues on cell behavior have not been well described, despite similarities in phenotypic manifestations. Herein, we first review the effects of substrate stiffness and nanotopography on cell behavior, and then focus on intracellular transmission of the biophysical signals from integrins to nucleus. Attempts are made to connect extracellular regulation of cell behavior with the biophysical cues. We then discuss the challenges in dissecting the biophysical regulation of cell behavior and in translating the mechanistic understanding of these cues to tissue engineering and regenerative medicine.
Stem cell homing, namely the recruitment of mesenchymal stem cells (MSCs) to injured tissues, is highly effective for bone regeneration in vivo. In order to explore whether the incorporation of mimetic peptide sequences on magnesium-doped (Mg-doped) hydroxyapatite (HA) may regulate the homing of MSCs, and thus induce cell migration to a specific site, we covalently functionalized MgHA disks with two chemotactic/haptotactic factors: either the fibronectin fragment III1-C human (FF III1-C), or the peptide sequence Gly-Arg-Gly-Asp-Ser-Pro-Lys, a fibronectin analog that is able to bind to integrin transmembrane receptors. Preliminary biological evaluation of MSC viability, analyzed by 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) test, suggested that stem cells migrate to the MgHA disks in response to the grafted haptotaxis stimuli.
Host-microbe interactions at the gastrointestinal interface have emerged as a key component in the governance of human health and disease. Advances in micro-physiological systems are providing researchers with unprecedented access and insights into this complex relationship. These systems combine the benefits of microengineering, microfluidics, and cell culture in a bid to recreate the environmental conditions prevalent in the human gut. Here we present the human-microbial cross talk (HuMiX) platform, one such system that leverages this multidisciplinary approach to provide a representative in vitro model of the human gastrointestinal interface. HuMiX presents a novel and robust means to study the molecular interactions at the host-microbe interface. We summarize our proof-of-concept results obtained using the platform and highlight its potential to greatly enhance our understanding of host-microbe interactions with a potential to greatly impact the pharmaceutical, food, nutrition, and healthcare industries in the future. A number of key questions and challenges facing these technologies are also discussed.
Method development has always been and will continue to be a core driving force of microbiome science. In this perspective, we argue that in the next decade, method development in microbiome analysis will be driven by three key changes in both ways of thinking and technological platforms: ① a shift from dissecting microbiota structure by sequencing to tracking microbiota state, function, and intercellular interaction via imaging; ② a shift from interrogating a consortium or population of cells to probing individual cells; and ③ a shift from microbiome data analysis to microbiome data science. Some of the recent method-development efforts by Chinese microbiome scientists and their international collaborators that underlie these technological trends are highlighted here. It is our belief that the China Microbiome Initiative has the opportunity to deliver outstanding “Made-in-China” tools to the international research community, by building an ambitious, competitive, and collaborative program at the forefront of method development for microbiome science.
Trillions of microbes have evolved with and continue to live on and within human beings. A variety of environmental factors can perturb the intestinal microbial balance, which has a close relationship with human health and disease. Here, we focus on the interactions between the human microbiota and the host in order to provide an overview of the microbial role in basic biological processes and in the development and progression of major human diseases such as infectious diseases, liver diseases, gastrointestinal cancers, metabolic diseases, respiratory diseases, mental or psychological diseases, and autoimmune diseases. We also review important advances in techniques associated with microbial research, such as DNA sequencing, metabonomics, and proteomics combined with computation-based bioinformatics. Current research on the human microbiota has become much more sophisticated and more comprehensive. Therefore, we propose that research should focus on the host-microbe interaction and on cause-effect mechanisms, which could pave the way to an understanding of the role of gut microbiota in health and disease, and provide new therapeutic targets and treatment approaches in clinical practice.
The human microbiota is an aggregate of microorganisms residing in the human body, mostly in the gastrointestinal tract (GIT). Our gut microbiota evolves with us and plays a pivotal role in human health and disease. In recent years, the microbiota has gained increasing attention due to its impact on host metabolism, physiology, and immune system development, but also because the perturbation of the microbiota may result in a number of diseases. The gut microbiota may be linked to malignancies such as gastric cancer and colorectal cancer. It may also be linked to disorders such as nonalcoholic fatty liver disease (NAFLD); obesity and diabetes, which are characterized as “lifestyle diseases” of the industrialized world; coronary heart disease; and neurological disorders. Although the revolution in molecular technologies has provided us with the necessary tools to study the gut microbiota more accurately, we need to elucidate the relationships between the gut microbiota and several human pathologies more precisely, as understanding the impact that the microbiota plays in various diseases is fundamental for the development of novel therapeutic strategies. Therefore, the aim of this review is to provide the reader with an updated overview of the importance of the gut microbiota for human health and the potential to manipulate gut microbial composition for purposes such as the treatment of antibiotic-resistant Clostridium difficile (C. difficile) infections. The concept of altering the gut community by microbial intervention in an effort to improve health is currently in its infancy. However, the therapeutic implications appear to be very great. Thus, the removal of harmful organisms and the enrichment of beneficial microbes may protect our health, and such efforts will pave the way for the development of more rational treatment options in the future.
Colorectal cancer (CRC) is a multistage disease resulting from complex factors, including genetic mutations, epigenetic changes, chronic inflammation, diet, and lifestyle. Recent accumulating evidence suggests that the gut microbiota is a new and important player in the development of CRC. Imbalance of the gut microbiota, especially dysregulated gut bacteria, contributes to colon cancer through mechanisms of inflammation, host defense modulations, oxidative stress, and alterations in bacterial-derived metabolism. Gut commensal bacteria are anatomically defined as four populations: luminal commensal bacteria, mucus-resident bacteria, epithelium-resident bacteria, and lymphoid tissue-resident commensal bacteria. The bacterial flora that are harbored in the gastrointestinal (GI) tract vary both longitudinally and cross-sectionally by anatomical localization. It is notable that the translocation of colonic commensal bacteria is closely related to CRC progression. CRC-associated bacteria can serve as a non-invasive and accurate biomarker for CRC diagnosis. In this review, we summarize recent findings on the oncogenic roles of gut bacteria with different anatomical localization in CRC progression.
The steady increase of IgE-dependent allergic diseases after the Second World War is a unique phenomenon in the history of humankind. Numerous cross-sectional studies, comprehensive longitudinal cohort studies of children living in various types of environment, and mechanistic experimental studies have pointed to the disappearance of “protective factors” related to major changes in lifestyle and environment. A common unifying concept is that of the immunoregulatory role of the gut microbiota. This review focuses on the protection against allergic disorders that is provided by the farming environment and by exposure to microbial diversity. It also questions whether and how microbial bioengineering will be able in the future to restore an interplay that was beneficial to the proper immunological development of children in the past and that was irreversibly disrupted by changes in lifestyle. The protective “farming environment” includes independent and additional influences: contact with animals, stay in barns/stables, and consumption of unprocessed milk and milk products, by mothers during pregnancy and by children in early life. More than the overall quantity of microbes, the biodiversity of the farm microbial environment appears to be crucial for this protection, as does the biodiversity of the gut microbiota that it may provide. Use of conventional probiotics, especially various species or strains of Lactobacillus and Bifidobacterium, has not fulfilled the expectations of allergists and pediatricians to prevent allergy. Among the specific organisms present in cowsheds that could be used for prevention, Acinetobacter (A.) lwoffii F78, Lactococcus (L.) lactis G121, and Staphylococcus (S.) sciuri W620 seem to be the most promising, based on experimental studies in mouse models of allergic respiratory diseases. 
However, the development of a new generation of probiotics based on very productive research on the farming environment faces several obstacles that cannot be overcome without a close collaboration between microbiologists, immunologists, and bioengineers, as well as pediatricians, allergists, specialists of clinical trials, and ethical committees.
In recent decades, diseases associated with the gut microbiota have presented some of the most serious public health problems worldwide. The human host’s physiological status is influenced by the intestinal microbiome, which thus integrates external factors, such as diet, with genetic and immune signals. The notion that chronic inflammation drives carcinogenesis has been widely established for various tissues. It is surprising that the role of the microbiota in tumorigenesis has only recently been recognized, given that the presence of bacteria at tumor sites was first described more than a century ago. Extensive epidemiological studies have revealed a strong link between the gut microbiota and some common cancers. However, the exact molecular mechanisms linking the gut microbiota and cancer are not yet fully understood. Changes to the gut microbiota are instrumental in determining the occurrence and progression of hepatocellular carcinoma, chronic liver diseases related to alcohol, nonalcoholic fatty liver disease (NAFLD), and cirrhosis. To be specific, the gut milieu may play an important role in systemic inflammation, endotoxemia, and vasodilation, which lead to complications such as spontaneous bacterial peritonitis and hepatic encephalopathy. Relevant animal studies involving gut microbiota manipulations, combined with observational studies on patients with NAFLD, have provided ample evidence pointing to the contribution of dysbiosis to the pathogenesis of NAFLD. Given the poor prognosis of these clinical events, their prevention and early management are essential. Studies of the composition and function of the gut microbiota could shed some light on prognosis, because the microbiota serves as an essential component of the gut milieu that can impact the aforementioned clinical events.
As far as disease management is concerned, probiotics may provide a novel direction for therapeutics for hepatocellular carcinoma (HCC) and NAFLD, given that probiotics function as a type of medicine that can improve human health by regulating the immune system. Here, we provide an overview of the relationships among the gut microbiota, tumors, and liver diseases. In addition, considering the significance of bacterial homeostasis, we discuss probiotics in this article in order to guide treatments for related diseases.
Gut and oral microflora are important factors in the pathogenesis and development of rheumatoid arthritis (RA). Recent studies have shown that probiotic supplements have beneficial consequences on experimental arthritis in rats. However, results from randomized clinical trials on the effects of probiotics have not been consistent. The aim of this study was to systematically review the existing evidence for the effects of probiotic intervention in RA. We included randomized controlled trials (RCTs) of RA patients receiving stable treatment with disease-modifying anti-rheumatic drugs (DMARDs) that: ① were combined with additional probiotic supplements or ② were combined with either no additional supplements or only a placebo treatment. Statistical analysis was performed using Review Manager 5.3.3. Six randomized clinical trials were eligible for inclusion in the meta-analysis, with 249 participants in total. The results showed that the probiotic intervention treatment has not yet achieved significant improvement in the American College of Rheumatology 20% improvement criteria (ACR20) score and the disease activity score in 28 joints (DAS28). The laboratory index C-reactive protein (CRP) (mg·L⁻¹) was significantly reduced in the intervention group. The expression of the inflammatory cytokines tumor necrosis factor (TNF)-α and interleukin (IL)-1β was also significantly reduced, while IL-10 expression increased in the probiotic intervention groups. This article is the first systematic review and meta-analysis providing a comprehensive assessment of the benefits of treating RA with probiotics. We found that probiotic supplementation shows only a limited improvement in RA therapy in existing reports, possibly owing to a lack of sufficiently high-quality clinical trials. More multi-centered, large-sample RCTs are needed in order to evaluate the benefits of probiotics in RA treatment.
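As context for how tools such as Review Manager pool continuous outcomes (e.g., the CRP mean differences reported above), the following is a minimal sketch of fixed-effect inverse-variance pooling. The study values are illustrative placeholders, not data from the cited trials.

```python
# Fixed-effect inverse-variance meta-analysis (sketch).
# Each study contributes an effect estimate (here a mean difference)
# weighted by the reciprocal of its variance.

def pool_fixed_effect(effects, variances):
    """Return the pooled estimate and its 95% confidence interval."""
    weights = [1.0 / v for v in variances]
    total_w = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, effects)) / total_w
    se = (1.0 / total_w) ** 0.5  # standard error of the pooled estimate
    # Normal-approximation 95% CI
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical CRP mean differences (mg/L) and variances from three trials
effects = [-1.2, -0.8, -1.5]
variances = [0.20, 0.35, 0.25]
pooled, ci = pool_fixed_effect(effects, variances)
print(f"pooled MD = {pooled:.2f} mg/L, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```

A negative pooled mean difference with a confidence interval excluding zero would correspond to the significant CRP reduction described in the abstract; random-effects models add a between-study variance term to these weights.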
This paper introduces the high-speed electrical multiple unit (EMU) life cycle, including the design, manufacturing, testing, and maintenance stages. It also presents the train control and monitoring system (TCMS) software development platform, the TCMS testing and verification bench, the EMU driving simulation platform, and the EMU remote data transmission and maintenance platform. All these platforms and benches combined make up the EMU life cycle cost (LCC) system. Each platform facilitates EMU LCC management and is an important part of the system.
Our previous studies have shown that zein has good biocompatibility and good mechanical properties. The first product from a porous scaffold of zein, a resorbable bone substitute, has passed the biological evaluation of medical devices (ISO 10993) by the China Food and Drug Administration. However, Class III medical devices need quality monitoring before being placed on the market, and such monitoring includes quality control of raw materials, choice of sterilization method, and evaluation of biocompatibility. In this paper, we investigated four sources of zein through amino acid analysis (AAA) and sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) in order to monitor the composition and purity, and to control the quality of raw materials. We studied the effect of three sterilization methods on a porous zein scaffold by SDS-PAGE. We also compared the changes in SDS-PAGE patterns when irradiated with different doses of gamma radiation. We found that polymerization or breakage did not occur in the peptide chains of zein during gamma-ray (γ-ray) sterilization in the range of 20–30 kGy, which suggests that γ-ray sterilization is suitable for porous zein scaffolds. Regarding cell compatibility, we found a difference between using a 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide (MTT) assay and a cell-counting kit-8 (CCK-8) assay to assess cell proliferation on zein film, and concluded that the CCK-8 assay is more suitable, due to its low background optical density.