Assessing environmental flows (e-flows) for urban rivers is important for water resources planning and river protection. Many e-flow assessment methods have been established based on species’ habitat provision requirements and pollutant dilution requirements. To avoid flood risk, however, many urban rivers have been transformed into straight, trapezoidal-profiled concrete channels, leading to the disappearance of valuable species. With the construction of water pollution-control projects, pollutant inputs have been effectively controlled in some urban rivers. For these rivers, the e-flows determined by traditional methods will be very small, and will consequently lead to a low priority being given to river protection in future water resources allocation and management. To more effectively assess the e-flows of channelized urban rivers, we propose three e-flow degrees, according to longitudinal hydrological connectivity (high, medium, and low), in addition to the pollutant dilution water requirement determined by the mass-balance equation. In the high connectivity scenario, the e-flows are intended to maintain flow velocity, which can ensure the self-purification of rivers and reduce algal blooms; in the medium connectivity scenario, the e-flows are intended to permanently maintain the longitudinal hydrological connectivity of rivers that are isolated into several ponds by weirs, in order to ensure the exchange of material, energy, and information in rivers; and in the low connectivity scenario, the e-flows are intended to intermittently connect isolated ponds every few days, which further reduces the e-flows. The proposed methods have been applied to the Shiwuli River, China, to demonstrate their effectiveness. The new methods offer more precise and realistic e-flow results and can effectively direct the construction and management of e-flow supply projects.
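As an illustration, the pollutant dilution water requirement mentioned above can be sketched with a minimal steady-state mass balance (the variable names and example numbers below are hypothetical, not taken from the paper):

```python
def dilution_flow(q_waste, c_waste, c_standard, c_upstream):
    """Environmental dilution flow Q_e (m3/s) from a steady-state mass
    balance: (Q_e * C_up + q_w * C_w) / (Q_e + q_w) <= C_std.
    Illustrative sketch; names and numbers are hypothetical."""
    if c_standard <= c_upstream:
        raise ValueError("water-quality standard must exceed the upstream concentration")
    return q_waste * (c_waste - c_standard) / (c_standard - c_upstream)

# Hypothetical example: 0.5 m3/s of effluent at 20 mg/L, a 6 mg/L standard,
# and an upstream background concentration of 2 mg/L
q = dilution_flow(0.5, 20.0, 6.0, 2.0)  # -> 1.75 m3/s of dilution flow needed
```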
The goal of this project was to design, build, and test a pilot-scale floating modular treatment system for total phosphorus (TP) removal from nutrient-impaired lakes in central Florida, USA. The treatment system consisted of biological and physical–chemical treatment modules. First, investigations of prospective biological and physical–chemical treatment processes were conducted in mesocosms and in bench-scale experiments. Thirteen different mesocosms were constructed with a variety of substrates and combinations of macrophytes and tested for TP and orthophosphate (PO43−) removal efficiencies and potential areal removal rates. Bench-scale jar tests and column tests of seven types of absorptive media, in addition to three commercial resins, were conducted in order to test absorptive capacity. Once isolated process testing was complete, a floating island treatment system (FITS) was designed and deployed for eight months in a lake in central Florida. Phosphorus removal efficiencies of the mesocosm systems averaged about 40%–50%, providing an average uptake of 5.0 g·m–2·a–1 across all mesocosms. The best-performing mesocosms were a submerged aquatic vegetation (SAV) mesocosm and an algae scrubber (AGS), which removed 20 and 50 mg·m–2·d–1, respectively, for an average removal of 5.5 and 12.0 g·m–2·a–1 for the SAV and AGS systems. Of the absorptive media, the best performer was alum residual (AR), which reduced PO43− concentrations by about 75% after 5 min of contact time. Of the commercial resins tested, the PhosX resin was superior to the others, removing about 40% of phosphorus after 30 min and 60% after 60 min.
Under baseline operating conditions during deployment, the FITS exhibited mean PO43− removal efficiencies of 53%; using the 50th and 90th percentiles of PO43− removal during deployment, together with the footprint of the FITS system, yielded efficiencies for the combined FITS system of 56% and 86%, respectively, and areal phosphorus removal rates between 8.9 and 16.5 g·m–2·a–1.
Adaptive vegetation management is time-consuming and requires long-term colony monitoring to obtain reliable results. Although vegetation management has been widely adopted, the only method existing at present for evaluating the habitat conditions under management involves observations over a long period of time. The presence of reactive oxygen species (ROS) has long been used as an indicator of environmental stress in plants, and has recently been intensely studied. Among such ROS, hydrogen peroxide (H2O2) is relatively stable, and can be conveniently and accurately quantified. Thus, the quantification of plant H2O2 could be applied as a stress indicator for riparian and aquatic vegetation management approaches while evaluating the conditions of a plant species within a habitat. This study presents an approach for elucidating the applicability of H2O2 as a quantitative indicator of environmental stresses on plants, particularly for vegetation management. Submerged macrophytes and riparian species were studied under laboratory and field conditions (Lake Shinji, Saba River, Eno River, and Hii River in Japan) for H2O2 formation under various stress conditions. The results suggest that H2O2 can be conveniently applied as a stress indicator in environmental management.
This study develops a multivariate eco-hydrological risk-assessment framework based on the multivariate copula method in order to evaluate the occurrence of extreme eco-hydrological events for the Xiangxi River within the Three Gorges Reservoir (TGR) area in China. Parameter uncertainties in the marginal distributions and dependence structure are quantified by a Markov chain Monte Carlo (MCMC) algorithm. Uncertainties in the joint return periods are evaluated based on the posterior distributions. The probabilistic features of bivariate and multivariate hydrological risk are also characterized. The results show that the obtained predictive intervals bracketed the observations well, especially for flood duration. The uncertainty in the joint return period in the “AND” case increases with an increase in the return period for univariate flood variables. Furthermore, a low design discharge and a long service time may lead to high bivariate hydrological risk with great uncertainty.
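To make the “AND”-case joint return period concrete, here is a minimal sketch using a Gumbel–Hougaard copula, a common choice in such studies; the paper's fitted copula family, parameters, and marginals are not reproduced here, so the numbers are purely illustrative:

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel-Hougaard copula C(u, v); theta >= 1 controls dependence
    (theta = 1 reduces to independence, C = u * v)."""
    return math.exp(-((-math.log(u)) ** theta + (-math.log(v)) ** theta) ** (1.0 / theta))

def joint_return_period_and(u, v, theta, mu=1.0):
    """'AND'-case joint return period (both variables exceed their design
    quantiles): T = mu / (1 - u - v + C(u, v)), mu = mean interarrival time."""
    return mu / (1.0 - u - v + gumbel_copula(u, v, theta))

# Hypothetical 50-year marginal quantiles (u = v = 0.98) with moderate dependence
T = joint_return_period_and(0.98, 0.98, theta=2.0)  # roughly 85 years
```

Note how dependence matters: with independence (theta = 1) the same marginals give a 2500-year joint return period, so ignoring the dependence structure badly overstates the rarity of joint extremes.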
Constructing and operating a multi-reservoir system changes the natural flow regime of rivers, and thus imposes adverse impacts on riverine ecosystems. To balance human needs with ecosystem needs, this study proposes an ecologically oriented operation strategy for a multi-reservoir system that integrates environmental flow requirements into the joint operation of the system in order to maintain different ecological functions throughout the river. This strategy is a combination of a regular optimal operation scheme and a series of real-time ecological operation schemes. During time periods when the incompatibilities between human water needs and ecosystem needs for environmental flows are relatively small, the regular optimal operation scheme is implemented in order to maximize multiple human water-use benefits under the constraints of a minimum water-release policy. During time periods when reservoir-induced hydrological alteration imposes significant negative impacts on the river’s key ecological functions, real-time ecological operation schemes are implemented in order to modify the outflow from reservoirs to meet the environmental flow requirements of these functions. The practical use of this strategy is demonstrated through the simulated operation of a large-scale multi-reservoir system located in the middle and lower Han River Basin in China. The results indicate that the real-time ecological operation schemes ensure the environmental flow requirements of the river’s key ecological functions, and that adverse impacts on human water-use benefits can be compensated for by the regular optimal operation scheme. The ecologically oriented operation strategy proposed in this study enriches the theory and application of multi-reservoir joint operation that takes environmental flow requirements into account.
Climate conditions play a crucial role in the survival of mountain communities, which already depends critically on socioeconomic factors. In the case of montane areas that are prone to natural hazards, such as alpine slope failure and debris flows, climatic factors exert a major influence that should be considered when creating appropriate sustainable scenarios. In fact, it has been shown that climate change alters the availability of ecosystem services (ES), thus increasing the risks of declining soil fertility and reduced water availability, as well as the loss of grassland, potential shifts in regulatory services (e.g., protection from natural hazards), and cultural services. This study offers a preliminary discussion of a case study of a region in the Italian Alps that is experiencing increased extreme precipitation and erosion, and where an isolated and historically resilient community directly depends on a natural resource economy. Preliminary results show that economic factors have influenced past population trends of the Novalesa community in the Piemonte Region in northwest Italy. However, the increasing number of rock fall and debris flow events, which are triggered by meteo-climatic factors, may further influence the livelihood and wellbeing of this community, and of other similar communities around the world. Therefore, environmental monitoring and data analysis will be important means of detecting trends in landscape and climate change and of choosing appropriate planning options. Such analysis, in turn, would help ensure the survival of the roughly 10% of the global population living in mountain areas, and would also represent a possibility for future economic development in critical areas prone to poverty conditions.
The Ganga River, the longest river in India, is stressed by extreme anthropogenic activity and climate change, particularly in the Varanasi region. Anticipated climate changes and an expanding populace are expected to further impede the efficient use of water. In this study, hydrological modeling was performed by applying the Soil and Water Assessment Tool (SWAT) to the Ganga catchment, over a region of 15 621.612 km2 in the southern part of Uttar Pradesh. The primary goals of this study are: ① to test the execution and applicability of the SWAT model in anticipating runoff and sediment yield; and ② to compare and determine the best calibration algorithm among three popular algorithms: sequential uncertainty fitting version 2 (SUFI-2), generalized likelihood uncertainty estimation (GLUE), and parallel solution (ParaSol). The input data used in the SWAT were the Shuttle Radar Topography Mission (SRTM) digital elevation model (DEM), Landsat-8 satellite imagery, soil data, and daily meteorological data. The watershed of the study area was delineated into 46 sub-watersheds, and a land use/land cover (LULC) map and soil map were used to create hydrological response units (HRUs). Models utilizing the SUFI-2, GLUE, and ParaSol methods were constructed, and these algorithms were compared in five categories: their objective functions, the concepts used, their performances, the values of P-factors, and the values of R-factors. As a result, it was observed that SUFI-2 performs better than the other two algorithms for calibrating Indian watersheds, as it requires fewer computational model runs and yields the best results among the three algorithms; ParaSol is the worst performer of the three.
After calibration using SUFI-2, five parameters, namely the effective channel hydraulic conductivity (CH_K2), the universal soil-loss equation (USLE) support parameter (USLE_P), Manning’s n value for the main channel (CH_N2), the surface runoff lag time (SURLAG), and the available water capacity of the soil layer (SOL_AWC), were observed to be the most sensitive parameters for modeling the present watershed. It was also found that the maximum runoff occurred in sub-watershed number 40 (SW#40), while the maximum sediment yield was 50 t·a−1 in SW#36, which comprises barren land. The average evapotranspiration for the basin was 411.55 mm·a−1. The calibrated model can be utilized in the future to facilitate investigation of the impacts of LULC, climate change, and soil erosion.
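The P-factor and R-factor used to compare the calibration algorithms can be sketched as follows: the P-factor is the fraction of observations falling inside the 95% prediction uncertainty (95PPU) band, and the R-factor is the mean band width normalized by the standard deviation of the observations. This is a generic illustration with made-up numbers, not the SWAT-CUP implementation:

```python
from statistics import mean, pstdev

def p_and_r_factors(obs, lower95, upper95):
    """P-factor: fraction of observations inside the 95PPU band.
    R-factor: mean band width divided by the standard deviation of the
    observations. Generic sketch; inputs are assumed to be aligned series."""
    inside = [lo <= o <= hi for o, lo, hi in zip(obs, lower95, upper95)]
    p_factor = sum(inside) / len(obs)
    r_factor = mean(h - l for l, h in zip(lower95, upper95)) / pstdev(obs)
    return p_factor, r_factor

obs = [10.0, 12.0, 9.0, 15.0, 11.0]   # observed discharges (hypothetical)
lo  = [8.0, 11.0, 9.0, 12.0, 10.0]    # lower 95PPU bound
hi  = [12.0, 14.0, 12.0, 14.0, 13.0]  # upper 95PPU bound
p, r = p_and_r_factors(obs, lo, hi)   # p = 0.8 (the value 15 falls outside)
```

A P-factor near 1 with an R-factor near (or below) 1 indicates a calibration that brackets the observations without an excessively wide uncertainty band, which is the basis on which SUFI-2 was judged the best performer.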
Microseismic source location is the essential factor in microseismic monitoring technology, and its location precision has a large impact on the performance of the technique. Here, we discuss the problem of the low location precision obtained for microseismic events in mines when using conventional location methods based on arrival time. In this paper, microseismic location characteristics in mining are analyzed according to the characteristics of the mine’s microseismic wavefield. We review research progress in mine-related microseismic source location methods in recent years, including the combination of the Geiger method with the linear method, the combined microseismic event location method, the optimization of the relative location method, location methods without a pre-measured velocity, and location methods without arrival time picking. The advantages and disadvantages of these methods are discussed, along with the conditions under which they are feasible. The influences of geophone distribution, first arrival time picking, and the velocity model on microseismic source location are analyzed, and measures are proposed to address these factors. Approaches to solve the problem under study include adopting information fusion, combining and optimizing existing methods, and creating new methods to realize high-precision microseismic source location. Optimization of the velocity structure, along with applications of the time-reversal imaging technique, passive time-reversal mirror, and relative interferometric imaging, are expected to greatly improve microseismic location precision in mines. This paper also discusses the potential application of information fusion and deep learning methods in microseismic source location in mines. These new and innovative methods for microseismic source location have extensive prospects for development.
In ground-penetrating radar (GPR) imaging, it is common for the depth of investigation to be on the same order as the variability in surface topography. In such cases, migration fails when it is carried out from a datum after the application of elevation statics. We introduce a reverse-time migration (RTM) algorithm based on the second-order decoupled form of Maxwell’s equations, which requires computation of only the electric field. The wavefield extrapolation is computed directly from the acquisition surface without the need for datuming. In a synthetic case study, the algorithm significantly improves image accuracy over a processing sequence in which migration is performed after elevation statics. In addition, we acquired a field dataset at the Coral Pink Sand Dunes (CPSD) in Utah, USA. The data were acquired over rugged topography and have the complex internal stratigraphy of multiply eroded, modern, and ancient eolian deposits. The RTM algorithm significantly improves radar depth images in this challenging environment.
The Anjialing No. 1 Coal Mine in Shanxi Province, China, contains a complicated old goaf and an unknown water distribution that hold high potential for serious water hazards. Due to poor detection resolution, previous attempts failed to determine the scope of the old goaf and the water distribution in the mine through the separate use of various exploration methods, such as the seismic method, direct-current resistivity, audio magnetotellurics, controlled-source audio-frequency magnetotellurics, and transient electromagnetics. To solve this difficult problem, a combination of the wide-field electromagnetic method and the flow field fitting method with three-dimensional resistivity data inversion was applied to determine the precise scope of the goaf and the locations where water is present, and to identify the hydraulic connection between the water layers, so as to provide reliable technical support for safe coal production. Reasonable results were achieved, with all these goals being met. As a result, a mining area of nearly 4 km2 has been released for operation.
Depositional units preserved on coastal plains worldwide control lithologic distribution in the shallow subsurface that is critical to infrastructure design and construction, and are also an important repository of information about the large-scale climate change that has occurred during many Quaternary glacial–interglacial cycles. The lateral and vertical lithologic and stratigraphic complexity of these depositional units and their response to climatic and sea-level change are poorly understood, making it difficult to predict lithologic distribution and to place historical and future climate and sea-level change within a natural geologic context. Mapping Quaternary siliciclastic depositional units on low-relief coastal plains traditionally has been based on their expression in aerial photographs and low-resolution topographic maps. Accuracy and detail have been hindered by low relief and lack of exposure. High-resolution airborne lidar surveys, along with surface and borehole geophysical measurements, are being used to identify subtle lateral and vertical boundaries of lithologic units on the Texas Coastal Plain within Quaternary strata. Ground and borehole conductivity measurements discriminate sandy barrier island and fluvial and deltaic channel deposits from muddy floodplain, delta-plain, and estuarine deposits. Borehole conductivity and natural gamma logs similarly distinguish distinct lithologic units in the subsurface and identify erosional unconformities that likely separate units deposited during different glacial–interglacial stages. High-resolution digital elevation models obtained from airborne lidar surveys reveal previously unrecognized topographic detail that aids identification of surface features such as sandy channels, clay-rich interchannel deposits, and accretionary features on Pleistocene barrier islands.
An optimal approach to identify lithologic and stratigraphic distribution in low-relief coastal-plain environments employs ① an initial lidar survey to produce a detailed elevation model; ② selective surface sampling and geophysical measurements based on preliminary mapping derived from lidar data and aerial imagery; and ③ borehole sampling, logging, and analysis at key sites selected after lidar and surface measurements are complete.
Passive surface-wave utilization has been intensively studied as a means of compensating for the shortage of low-frequency information in active surface-wave measurement. In general, passive surface-wave methods cannot provide phase velocities up to several tens of hertz; thus, active surface-wave methods are often required in order to increase the frequency range. To reduce the amount of field work, we propose a strategy for a high-frequency passive surface-wave survey that imposes active sources during continuous passive surface-wave observation; we call our strategy “mixed-source surface-wave (MSW) measurement.” Short-duration (within 10 min) passive surface waves and mixed-source surface waves were recorded at three sites with different noise levels: namely, inside a school, along a road, and along a railway. Spectral analysis indicates that the high-frequency energy is improved by imposing active sources during continuous passive surface-wave observation. The spatial autocorrelation (SPAC) method and the multichannel analysis of passive surface waves (MAPS) method based on cross-correlations were performed on the recorded time sequences. The results demonstrate the flexibility and applicability of the proposed method for high-frequency phase velocity analysis. We suggest that it will be constructive to perform MSW measurement in a seismic investigation, rather than exclusively performing either active or passive surface-wave measurement.
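For readers unfamiliar with the SPAC method, it relates the azimuthally averaged coherency at frequency f and receiver spacing r to J0(2*pi*f*r/c), where J0 is the zeroth-order Bessel function and c is the phase velocity. A toy grid-search inversion under this assumption might look like the following (the frequencies, spacing, and velocity grid are hypothetical):

```python
import math

def bessel_j0(x, terms=25):
    """J0(x) via its power series; adequate for the small arguments used here."""
    s, term = 0.0, 1.0
    for k in range(terms):
        s += term
        term *= -(x * x) / (4.0 * (k + 1) ** 2)
    return s

def spac_phase_velocity(freq, radius, rho_obs, c_grid):
    """Grid-search the phase velocity c whose theoretical SPAC coefficient
    J0(2*pi*freq*radius/c) best matches the observed coefficient rho_obs."""
    return min(c_grid, key=lambda c: abs(bessel_j0(2 * math.pi * freq * radius / c) - rho_obs))

# Synthetic check: at f = 10 Hz and r = 5 m, generate the coefficient for a
# true velocity of 400 m/s, then recover it from a 200-600 m/s search grid
rho = bessel_j0(2 * math.pi * 10 * 5 / 400.0)
c = spac_phase_velocity(10.0, 5.0, rho, [200 + 10 * i for i in range(41)])  # -> 400
```

In practice one repeats this fit frequency by frequency to build a dispersion curve; the series-expansion Bessel function above is only a stand-in for a library routine.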
Assessing subsurface characteristics and imaging geologic features (e.g., faults, cavities, and low-velocity layers) are typical problems in near-surface geophysics. These questions often have adverse geotechnical engineering implications, and can be especially acute when associated with high-hazard structures such as large earthen flood-control dams. Dam-related issues are becoming more frequent in the United States, because a large part of this major infrastructure was designed and constructed in the early to mid-twentieth century; these dams are thus passing into the latter stages of their design life, where minute flaws that were overlooked or thought to be insignificant during design and construction are now proving problematic. The high hydraulic heads associated with these structures can quicken the degradation of weak areas and compromise long-term integrity. Addressing dam-related problems solely with traditional invasive drilling techniques is often inadequate (i.e., lacking lateral resolution) and/or economically exorbitant at this scale. However, strategic geotechnical drilling, integrated with the broad utility of near-surface geophysics, particularly the horizontally polarized shear-wave (SH-mode) seismic-reflection technique for imaging the internal structural detail and geological foundation conditions of earthfill embankment dams, can cost-effectively improve the overall subsurface definition needed for remedial engineering. Demonstrative evidence for this supposition is provided in the form of SH-wave seismic-reflection imaging of in situ and engineered as-built components of flood-control embankment dams at two example sites in the central United States.
Breast cancer is the most commonly diagnosed cancer in women. A strong treatment candidate is high-intensity focused ultrasound (HIFU), a non-invasive therapeutic method that has already demonstrated its promise. To improve the precision and lower the cost of HIFU treatment, our group has developed an ultrasound (US)-guided, five-degree-of-freedom (DOF), robot-assisted HIFU system. We constructed a fully functional prototype enabling easy three-dimensional (3D) US image reconstruction, target segmentation, treatment path generation, and automatic HIFU irradiation. The position was calibrated using a wire phantom, and the coagulated area was assessed on heterogeneous tissue phantoms. Under US guidance, the centroids of the HIFU-ablated area deviated by less than 2 mm from the planned treatment region. The overshoot around the planned region was well below the tolerance for clinical usage. Our system is considered to be sufficiently accurate for breast cancer treatment.
Ethylene production by the thermal cracking of naphtha is an energy-intensive process (up to 40 GJ of heat per tonne of ethylene), leading to significant formation of coke and nitrogen oxides (NOx), along with 1.8–2 kg of carbon dioxide (CO2) emitted per kilogram of ethylene produced. We propose an alternative process for the redox oxy-cracking (ROC) of naphtha. In this two-step process, hydrogen (H2) from naphtha cracking is first selectively combusted by a redox catalyst using its lattice oxygen. The redox catalyst is subsequently re-oxidized by air and releases heat, which is used to satisfy the heat requirement for the cracking reactions. This intensified process reduces parasitic energy consumption and CO2 and NOx emissions. Moreover, the formation of ethylene and propylene can be enhanced due to the selective combustion of H2. In this study, the ROC process is simulated with ASPEN Plus based on experimental data from recently developed redox catalysts. Compared with traditional naphtha cracking, the ROC process can provide up to a 52% reduction in energy consumption and CO2 emissions. The upstream section of the process consumes approximately 67% less energy while producing 28% more ethylene and propylene for every kilogram of naphtha feedstock.
Many articles have been published on intelligent manufacturing, most of which focus on hardware, software, additive manufacturing, robotics, the Internet of Things, and Industry 4.0. This paper provides a different perspective by examining relevant challenges and providing examples of some less-talked-about yet essential topics, such as hybrid systems, redefining advanced manufacturing, basic building blocks of new manufacturing, ecosystem readiness, and technology scalability. The first major challenge is to (re-)define what the manufacturing of the future will be, if we wish to: ① raise public awareness of new manufacturing’s economic and societal impacts, and ② garner the unequivocal support of policymakers. The second major challenge is to recognize that manufacturing in the future will consist of systems of hybrid systems of human and robotic operators; additive and subtractive processes; metal and composite materials; and cyber and physical systems. Therefore, studying the interfaces between constituencies and standards becomes important and essential. The third challenge is to develop a common framework in which the technology, manufacturing business case, and ecosystem readiness can be evaluated concurrently in order to shorten the time it takes for products to reach customers. Integral to this is having accepted measures of “scalability” of non-information technologies. The last, but not least, challenge is to examine successful modalities of industry–academia–government collaborations through public–private partnerships. This article discusses these challenges in detail.
Donor shortages for organ transplantations are a major clinical challenge worldwide. Potential risks that are inevitably encountered with traditional methods include complications, secondary injuries, and limited source donors. Three-dimensional (3D) printing technology holds the potential to solve these limitations; it can be used to rapidly manufacture personalized tissue engineering scaffolds, repair tissue defects in situ with cells, and even directly print tissue and organs. Such printed implants and organs not only perfectly match the patient’s damaged tissue, but can also have engineered material microstructures and cell arrangements to promote cell growth and differentiation. Thus, such implants allow the desired tissue repair to be achieved, and could eventually solve the donor-shortage problem. This review summarizes relevant studies and recent progress on four levels, introduces different types of biomedical materials, and discusses existing problems and development issues with 3D printing that are related to materials and to the construction of the extracellular matrix in vitro for medical applications.
Bio-syncretic robots consisting of both living biological materials and non-living systems possess desirable attributes such as high energy efficiency, intrinsic safety, high sensitivity, and self-repairing capabilities. Compared with living biological materials or non-living traditional robots based on electromechanical systems, the combined system of a bio-syncretic robot holds many advantages. Therefore, developing bio-syncretic robots has been a topic of great interest, and significant progress has been achieved in this area over the past decade. This review systematically summarizes the development of bio-syncretic robots. First, potential trends in the development of bio-syncretic robots are discussed. Next, the current performance of bio-syncretic robots, including simple movement and controllability of velocity and direction, is reviewed. The living biological materials and non-living materials that are used in bio-syncretic robots, and the corresponding fabrication methods, are then discussed. In addition, recently developed control methods for bio-syncretic robots, including physical and chemical control methods, are described. Finally, challenges in the development of bio-syncretic robots are discussed from multiple viewpoints, including sensing and intelligence, living and non-living materials, control approaches, and information technology.
The type, model, quantity, and location of the sensors installed on intelligent vehicle test platforms differ, resulting in different sensor information processing modules. The driving maps used in intelligent vehicle test platforms have no uniform standard, which leads to different granularities of driving map information. The sensor information processing module is directly associated with the driving map information and the decision-making module, so the interfaces of the intelligent driving system’s software modules also lack a uniform standard. Based on the software and hardware architecture of an intelligent vehicle, the sensor information and driving map information are processed using a formal language of driving cognition to form a driving situation graph cluster, which is output to a decision-making module; the output of the decision-making module is expressed as a cognitive arrow cluster, completing the whole intelligent driving process from perception to decision-making. The formalization of driving cognition reduces the influence of sensor type, model, quantity, and location on the overall software architecture, making the software architecture portable across different intelligent driving hardware platforms.
Cycling is an eco-friendly method of transport and recreation. With the intent of reducing the energy cost of cycling without providing an additional energy source, we have proposed the use of a torsion spring for knee-extension support. We developed an exoskeleton prototype using a crossing four-bar mechanism as a knee joint with an embedded torsion spring. This study evaluates the passive knee exoskeleton using constant-power cycling tests performed by eight healthy male participants. We recorded the surface electromyography over the rectus femoris muscles of both legs while the participants cycled at 200 and 225 W on a trainer with the developed wheel-accelerating system. We then analyzed these data in the time–frequency domain via a continuous wavelet transform. At the same cycling speed and leg cadence, the median power spectral frequency of the electromyography increases with cycling load. At the same cycling load, the median power spectral frequency decreases when cycling with the exoskeleton. Quadriceps activity can thus be relieved even though the exoskeleton consumes no electrical energy and delivers no net-positive mechanical work. This principle can be applied to the further development of wearable devices for cycling assistance.
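For reference, the median power spectral frequency used above is the frequency below which half of the total spectral power lies. A minimal bin-wise sketch (not the authors' wavelet-based pipeline, and with a made-up toy spectrum) is:

```python
def median_frequency(freqs, power):
    """Median power spectral frequency: first bin at which the cumulative
    power reaches half of the total. A coarse bin-wise sketch."""
    half = sum(power) / 2.0
    cum = 0.0
    for f, p in zip(freqs, power):
        cum += p
        if cum >= half:
            return f
    return freqs[-1]

# Toy spectrum (Hz vs. arbitrary power units), concentrated around 60-80 Hz
freqs = [20, 40, 60, 80, 100, 120]
power = [1.0, 2.0, 5.0, 5.0, 2.0, 1.0]
mdf = median_frequency(freqs, power)  # -> 60
```

A downward shift of this statistic under constant load, as reported for cycling with the exoskeleton, is a standard surface-EMG marker of reduced muscular effort or fatigue-related change.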
The randomness and complexity of urban traffic scenes make it difficult for self-driving cars to detect drivable areas. Inspired by human driving behaviors, we propose a novel method of drivable area detection for self-driving cars based on fusing pixel information from a monocular camera with spatial information from a light detection and ranging (LIDAR) scanner. Similar to the bijection of collineation, a new concept called co-point mapping, which is a bijection that maps points from the LIDAR scanner to points on the edge of the image segmentation, is introduced in the proposed method. Our method positions candidate drivable areas through self-learning models based on the initial drivable areas that are obtained by fusing obstacle information with superpixels. In addition, a fusion of four features is applied in order to achieve a more robust performance. In particular, a feature called drivable degree (DD) is proposed to characterize the drivable degree of the LIDAR points. After the initial drivable area is characterized by the features obtained through self-learning, a Bayesian framework is utilized to calculate the final probability map of the drivable area. Our approach introduces no common hypothesis and requires no training steps, yet it yields state-of-the-art performance when tested on the ROAD-KITTI benchmark. Experimental results demonstrate that the proposed method is a general and efficient approach for detecting drivable areas.
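The Bayesian fusion step can be illustrated with a generic per-pixel naive-Bayes sketch, assuming the four feature likelihoods are conditionally independent; the paper's actual feature models and priors are not reproduced here:

```python
import math

def fuse_probabilities(likelihoods, prior=0.5):
    """Per-pixel naive-Bayes fusion of feature likelihoods P(f_k | drivable),
    assuming conditional independence between features. Returns the posterior
    P(drivable | all features). Generic sketch; the feature values are
    hypothetical, not the paper's models."""
    log_pos = math.log(prior) + sum(math.log(l) for l in likelihoods)
    log_neg = math.log(1.0 - prior) + sum(math.log(1.0 - l) for l in likelihoods)
    return 1.0 / (1.0 + math.exp(log_neg - log_pos))

# Hypothetical likelihoods for one pixel from four features
# (e.g., color, texture, obstacle information, drivable degree)
p = fuse_probabilities([0.9, 0.8, 0.7, 0.6])  # strongly drivable, p > 0.99
```

Working in log space, as above, avoids numerical underflow when many weak likelihoods are multiplied together, which matters when fusion is run over an entire probability map.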
Finding an optimal trajectory from an initial point to a final point through closely packed obstacles, and controlling a Hilare robot through this trajectory, are challenging tasks. To serve this purpose, path planners and trajectory-tracking controllers are usually included in a control loop. This paper highlights the implementation of a trajectory-tracking controller on a stepper motor-driven Hilare robot, with a trajectory that is described as a set of waypoints. The controller was designed to handle discrete waypoints with directional discontinuity and to consider different constraints on the actuator velocity. The control parameters were tuned with the help of multi-objective particle swarm optimization to minimize the average cross-track error and average linear velocity error of the mobile robot when tracking a predefined trajectory. Experiments were conducted to control the mobile robot from a start position to a destination position along a trajectory described by the waypoints. Experimental results for tracking the trajectory generated by a path planner and the trajectory specified by a user are also demonstrated. Experiments conducted on the mobile robot validate the effectiveness of the proposed strategy for tracking different types of trajectories.
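The average cross-track error minimized during the controller tuning described above can be illustrated with a short sketch. The signed perpendicular-distance formulation and all names below are assumptions for illustration, not the authors' exact implementation:

```python
import math

def cross_track_error(pose, wp_a, wp_b):
    """Signed perpendicular distance from the robot position to the
    line through waypoints A and B (positive on the left of A->B)."""
    ax, ay = wp_a
    bx, by = wp_b
    px, py = pose
    dx, dy = bx - ax, by - ay
    # 2D cross product of AB with AP, normalized by |AB|
    return (dx * (py - ay) - dy * (px - ax)) / math.hypot(dx, dy)

# Robot 1 m to the left of a straight east-bound segment
e = cross_track_error((5.0, 1.0), (0.0, 0.0), (10.0, 0.0))  # -> 1.0
```

Averaging this quantity (together with the linear-velocity error) over a run gives the kind of scalar cost that a multi-objective particle swarm optimizer can minimize when tuning control gains.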
Powdery mildew, which is caused by Blumeria graminis f. sp. tritici (Bgt), is an important leaf disease that affects wheat yield. The powdery mildew-resistance (Pm) gene Pm21 was first transferred into wheat in the 1980s, by translocating the Haynaldia villosa chromosome arm 6VS to the wheat chromosome arm 6AL (6VS·6AL). Recently, new Bgt isolates that are virulent to Pm21 have been identified in some wheat fields, indicating that wheat breeders should be aware of the risk of deploying Pm21, although pathological details regarding these virulent isolates remain to be elucidated. Pm40 was identified and mapped on the wheat chromosome arm 7BS from several wheat lines developed from the progenies of a wide cross between wheat and Thinopyrum intermedium. Pm40 offers a broad spectrum of resistance to Bgt, which suggests that it is likely to provide potentially durable resistance. Cytological methods did not detect any large alien chromosomal segment in the wheat lines carrying Pm40. Lines with Pm40 and promising agronomical traits have been released by several wheat-breeding programs in the past several years. Therefore, we believe that Pm40 will play a role in powdery mildew-resistant wheat breeding after Pm21 resistance is overcome by Bgt isolates. In addition, both Pm21 and Pm40 were derived from alien species, suggesting that resistance genes derived from alien species are potentially more durable or effective than those identified from wheat.
Wheatgrasses (Thinopyrum spp.), which are relatives of wheat (Triticum aestivum L.), have a perennial growth habit and offer resistance to a diversity of biotic and abiotic stresses, making them useful in wheat improvement. Many of these desirable traits from Thinopyrum spp. have been used to develop wheat cultivars by introgression breeding. The perennial growth habit of wheatgrasses is inherited as a complex quantitative trait controlled by many as-yet-unknown genes. Previous studies have indicated that Thinopyrum spp. are able to hybridize with wheat and produce viable, stable amphiploids or partial amphiploids. Meanwhile, efforts have been made to develop perennial wheat by domestication of Thinopyrum spp. The most promising perennial wheat–Thinopyrum lines, which combine the desirable traits of both parents, can be used as grain and/or forage crops and can adapt to diverse agricultural systems. This paper summarizes the development of perennial wheat based on Thinopyrum, along with the genetic aspects, breeding methods, and perspectives of wheat–Thinopyrum hybrids.
Wheat grown under rain-fed conditions is often affected by drought worldwide. Future projections from a climate simulation model predict that the combined effects of increasing temperature and changing rainfall patterns will aggravate this drought scenario and may significantly reduce wheat yields unless appropriate varieties are adopted. Wheat is adapted to a wide range of environments due to the diversity in its phenology genes. Wheat phenology offers the opportunity to combat drought by modifying crop developmental phases according to water availability in target environments. This review summarizes recent advances in wheat phenology research, including vernalization (Vrn), photoperiod (Ppd), and dwarfing (Rht) genes. The alleles, haplotypes, and copy number variations identified for Vrn and Ppd genes respond differently in different climatic conditions, and thus can alter not only the developmental phases but also the yield. Compared with the model plant Arabidopsis, many phenology genes have yet to be identified in wheat; quantifying their effects in target environments would benefit the breeding of wheat for improved drought tolerance. Hence, there is scope to maximize yields in water-limited environments by deploying appropriate phenology gene combinations along with Rht genes and other important physiological traits that are associated with drought resistance.
Global demand for vegetable oil is anticipated to double by 2030. The current vegetable oil production platforms, including oil palm and temperate oilseeds, are unlikely to support such an expansion. Therefore, the exploration of novel vegetable oil sources has become increasingly important in order to make up this future vegetable oil shortfall. Triacylglycerol (TAG), the dominant form of vegetable oil, has recently attracted immense interest as a target for production in plant vegetative tissues via genetic engineering technologies. Multidisciplinary "-omics" studies are increasingly enhancing our understanding of plant lipid biochemistry and metabolism. As a result, the identification of biochemical pathways and the annotation of key genes contributing to fatty acid biosynthesis and to lipid assembly and turnover have been effectively updated. In recent years, there has been rapid development in the genetic enhancement of TAG accumulation in high-biomass plant vegetative tissues and oilseeds through the manipulation of the key genes and regulators involved in TAG biosynthesis. In this review, current genetic engineering strategies ranging from single-gene manipulation to multigene stacking aimed at increasing plant biomass TAG accumulation are summarized. New directions and suggestions for plant oil production that may help to further alleviate the potential shortage of edible oil and biodiesel are discussed.
Heterodera glycines (i.e., soybean cyst nematode, SCN) is the most damaging nematode pest affecting soybean crops worldwide. This nematode is managed by means of crop rotation with selected resistant sources. With increasing reports of virulent SCN populations that are able to break the resistance within commonly used sources, there is an increasing need to find new sources of resistance or to broaden the resistance background. This review summarizes recent findings about the genes controlling SCN resistance in soybean, and about how these genes interact to confer resistance against SCN. It also provides an update on molecular mapping and molecular markers that can be used for the mass selection and differentiation of resistant lines and cultivars in order to expedite conventional breeding programs. In-depth knowledge of SCN parasitism proteins and soybean resistance responses to the pathogen is critical for the diversification of resistant sources through gene modification, gene stacking, or the incorporation of novel sources of resistance through backcrossing or genetic engineering.
Field pea (Pisum sativum var. arvense L.) is an important legume crop around the world. It produces grains with high protein content and can improve the amount of available nitrogen in the soil. Aphanomyces root rot (ARR), caused by the soil-borne oomycete Aphanomyces euteiches Drechs. (A. euteiches), is a major threat to pea production in many pea-growing regions, including Canada; it can cause severe root damage, wilting, and considerable yield losses under wet soil conditions. Traditional disease management strategies, such as crop rotations and seed treatments, cannot fully prevent ARR under conditions conducive to the disease, due to the longevity of the pathogen oospores, which can infect field pea plants at any growth stage. Analyzing the variability and physiologic specialization of A. euteiches in field pea, together with developing pea cultivars with partial resistance or tolerance to ARR, may be a promising approach to improving the management of this disease. As such, the detection of quantitative trait loci (QTL) for resistance is essential to field pea-breeding programs. In this paper, the pathogenic characteristics of A. euteiches are reviewed along with various ARR management strategies and the QTL associated with partial resistance to ARR.
In recent years, wheat yield per hectare appears to have reached a plateau, leading to concerns for future food security with an increasing world population. Since it was first developed, synthetic hexaploid wheat (SHW) has been shown to be an effective genetic resource for transferring agronomically important genes from wild relatives to common wheat. It provides new sources of yield potential, drought tolerance, disease resistance, and nutrient-use efficiency when bred conventionally with modern wheat varieties, and is becoming increasingly important for modern wheat breeding. Here, we review the current status of SHW generation, study, and application, with a particular focus on its contribution to wheat breeding. We also briefly introduce the most recent progress in our understanding of the molecular mechanisms underlying growth vigor in SHW. Advances in new technologies have made the complete wheat reference genome available, which offers a promising future for the study and application of SHW in wheat improvement, which is essential to meet global food demand.
Assessing the adsorption properties of nanoporous materials and determining their structural characteristics is critical for progressing the use of such materials in many applications, including gas storage. Gas adsorption can be used for this characterization because it probes a broad range of pore sizes, from micropores to mesopores. In the past 20 years, key developments have been achieved both in the knowledge of the adsorption and phase behavior of fluids in ordered nanoporous materials and in the creation and advancement of state-of-the-art approaches based on statistical mechanics, such as molecular simulation and density functional theory. Together with high-resolution experimental procedures for the adsorption of subcritical and supercritical fluids, this has led to significant advances in physical adsorption textural characterization. In this short, selective review paper, we discuss a few important and central features of the underlying adsorption mechanisms of fluids in a variety of nanoporous materials with well-defined pore structures. The significance of these features for advancing physical adsorption characterization and gas storage applications is also discussed.
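Textural characterization of the kind discussed above commonly rests on the BET method, which extracts a monolayer capacity from a nitrogen isotherm and converts it to a specific surface area. The following is a hedged sketch of that standard calculation on synthetic data (the function names are hypothetical; the 0.162 nm² N₂ cross-sectional area and the 0.05–0.30 relative-pressure fitting range are conventional choices, not values from this paper):

```python
import numpy as np

def bet_monolayer_capacity(p_rel, v_ads):
    """Fit the linearized BET equation
        (p/p0) / (v * (1 - p/p0)) = 1/(vm*C) + ((C - 1)/(vm*C)) * (p/p0)
    and return the monolayer capacity vm (same units as v_ads)
    and the BET constant C."""
    x = np.asarray(p_rel)
    y = x / (np.asarray(v_ads) * (1.0 - x))
    slope, intercept = np.polyfit(x, y, 1)
    vm = 1.0 / (slope + intercept)
    c = slope / intercept + 1.0
    return vm, c

# Synthetic isotherm generated from the BET model with vm = 100 cm3/g (STP), C = 80
vm_true, c_true = 100.0, 80.0
p = np.linspace(0.05, 0.30, 6)
v = vm_true * c_true * p / ((1 - p) * (1 + (c_true - 1) * p))
vm_fit, c_fit = bet_monolayer_capacity(p, v)
# Specific surface area (m2/g) for N2: molecules per cm3 STP times 0.162 nm2
area = vm_fit * (6.022e23 / 22414.0) * 0.162e-18
```

For real materials, the more advanced density-functional and molecular-simulation approaches mentioned in the abstract replace this classical analysis where BET assumptions break down (e.g., in narrow micropores).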