Intelligent manufacturing is a general concept that is under continuous development. It can be categorized into three basic paradigms: digital manufacturing, digital-networked manufacturing, and new-generation intelligent manufacturing. New-generation intelligent manufacturing represents an in-depth integration of new-generation artificial intelligence (AI) technology and advanced manufacturing technology. It runs through every link in the full life cycle of design, production, product, and service. The concept also relates to the optimization and integration of the corresponding systems; the continuous improvement of enterprises' product quality, performance, and service levels; and the reduction of resource consumption. New-generation intelligent manufacturing acts as the core driving force of the new industrial revolution and will continue to be the main pathway for the transformation and upgrading of the manufacturing industry in the decades to come. Human-cyber-physical systems (HCPSs) reveal the technological mechanisms of new-generation intelligent manufacturing and can effectively guide related theoretical research and engineering practice. Given the sequential development, cross interaction, and iterative upgrading characteristics of the three basic paradigms of intelligent manufacturing, a technology roadmap of "parallel promotion and integrated development" should be developed in order to drive forward the intelligent transformation of the manufacturing industry in China.
With the popularization of the Internet, the permeation of sensor networks, the emergence of big data, the growth of the information community, and the interlinking and fusion of data and information throughout human society, physical space, and cyberspace, the information environment surrounding the current development of artificial intelligence (AI) has profoundly changed. AI thus faces important adjustments, and its scientific foundations are confronted with new breakthroughs, as AI enters a new stage: AI 2.0. This paper briefly reviews the 60-year developmental history of AI, analyzes the external environment promoting the formation of AI 2.0 along with the changes in its goals, and describes both the beginnings of the technology and the core ideas behind AI 2.0 development. Furthermore, based on China's social demands and information environment, suggestions on the development of AI 2.0 are given.
HPR1000 is an advanced nuclear power plant (NPP) with the significant feature of an active and passive safety design philosophy, developed by the China National Nuclear Corporation. On one hand, it is an evolutionary design based on proven technology of the existing pressurized water reactor NPP; on the other hand, it incorporates advanced design features including a 177-fuel-assembly core loaded with CF3 fuel assemblies, active and passive safety systems, comprehensive severe accident prevention and mitigation measures, enhanced protection against external events, and improved emergency response capability. Extensive verification experiments and tests have been performed for critical innovative improvements on passive systems, the reactor core, and the main equipment. The design of HPR1000 fulfills the international utility requirements for advanced light water reactors and the latest nuclear safety requirements, and addresses the safety issues relevant to the Fukushima accident. Along with its outstanding safety and economy, HPR1000 provides an excellent and practicable solution for both domestic and international nuclear power markets.
This paper presents findings from an investigation of the large-scale construction solid waste (CSW) landslide that occurred at a landfill in Shenzhen, Guangdong, China, on December 20, 2015, killing 77 people and destroying 33 houses. The landslide involved 2.73 × 10⁶ m³ of CSW and affected an area about 1100 m in length and 630 m in maximum width, making it the largest landfill landslide in the world. The investigation of this disaster used a combination of unmanned aerial vehicle surveillance and multistage remote-sensing images to reveal the increasing volume of waste in the landfill and the shifting shape of the landfill slope for nearly two years before the landslide took place, beginning with the creation of the CSW landfill in March 2014. This history resulted in uncertain landfill boundary conditions and an unstable hydrologic state, so applying the conventional stability analysis methods used for natural landslides to this case would be difficult. In order to analyze this disaster, we adopted a multistage modeling technique to analyze the varied characteristics of the landfill slope's structure at various stages of CSW dumping, and used non-steady flow theory to treat the groundwater seepage problem. The investigation showed that the landfill could be divided into two units based on moisture content: ① a front unit, consisting of the landfill slope, which had a low water content; and ② a rear unit, consisting of fresh waste, which had a high water content. This structure caused two effects, surface-water infiltration and consolidation seepage, that together triggered the landslide. Surface-water infiltration induced a gradual increase in the pore water pressure head, or piezometric head, in the front slope because the infiltrating position rose as the volume of waste placement increased. Consolidation seepage led to higher excess pore water pressures as the loading of waste increased.
We also investigated the post-failure soil dynamics parameters of the landslide deposit using cone penetration, triaxial, and ring-shear tests in order to simulate the characteristics of a flowing slide with a long run-out due to the liquefaction effect. Finally, we conclude the paper with lessons drawn from dozens of catastrophic municipal solid waste landslides around the world and discuss how to better manage the geotechnical risks of urbanization.
Our next generation of industry—Industry 4.0—holds the promise of increased flexibility in manufacturing, along with mass customization, better quality, and improved productivity. It thus enables companies to cope with the challenges of producing increasingly individualized products with a short lead-time to market and higher quality. Intelligent manufacturing plays an important role in Industry 4.0. Typical resources are converted into intelligent objects so that they are able to sense, act, and behave within a smart environment. In order to fully understand intelligent manufacturing in the context of Industry 4.0, this paper provides a comprehensive review of associated topics such as intelligent manufacturing, Internet of Things (IoT)-enabled manufacturing, and cloud manufacturing. Similarities and differences in these topics are highlighted based on our analysis. We also review key technologies such as the IoT, cyber-physical systems (CPSs), cloud computing, big data analytics (BDA), and information and communications technology (ICT) that are used to enable intelligent manufacturing. Next, we describe worldwide movements in intelligent manufacturing, including governmental strategic plans from different countries and strategic plans from major international companies in the European Union, United States, Japan, and China. Finally, we present current challenges and future research directions. The concepts discussed in this paper will spark new ideas in the effort to realize the much-anticipated Fourth Industrial Revolution.
A future smart grid must fulfill the vision of the Energy Internet in which millions of people produce their own energy from renewables in their homes, offices, and factories and share it with each other. Electric vehicles and local energy storage will be widely deployed. Internet technology will be utilized to transform the power grid into an energy-sharing inter-grid. To prepare for the future, a smart grid with intelligent periphery, or smart GRIP, is proposed. The building blocks of GRIP architecture are called clusters and include an energy-management system (EMS)-controlled transmission grid in the core and distribution grids, micro-grids, and smart buildings and homes on the periphery; all of which are hierarchically structured. The layered architecture of GRIP allows a seamless transition from the present to the future and plug-and-play interoperability. The basic functions of a cluster consist of ① dispatch, ② smoothing, and ③ mitigation. A risk-limiting dispatch methodology is presented; a new device, called the electric spring, is developed for smoothing out fluctuations in periphery clusters; and means to mitigate failures are discussed.
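The "smoothing" function of a periphery cluster can be illustrated conceptually with a toy low-pass filter applied to a fluctuating renewable output. This is only a sketch of the idea of smoothing: the electric spring described above is a power-electronic device, not a software filter, and the power profile and filter constant here are invented for illustration.

```python
def smooth(power_kw, alpha=0.2):
    """Exponential smoothing of a fluctuating power profile.

    Illustrates the 'smoothing' function of a periphery cluster
    conceptually; alpha and the data below are illustrative only.
    """
    out = [power_kw[0]]
    for p in power_kw[1:]:
        # New value is a blend of the raw sample and the previous output.
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out

# A spiky (hypothetical) renewable output in kW, before and after smoothing
raw = [100, 180, 90, 160, 95, 170]
print([round(p) for p in smooth(raw)])
```

The smoothed profile has a much smaller peak-to-trough swing than the raw one, which is the effect the cluster must achieve in hardware.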
In 2011, the Chinese Academy of Sciences launched an engineering project to develop an accelerator-driven subcritical system (ADS) for nuclear waste transmutation. The China Lead-based Reactor (CLEAR), proposed by the Institute of Nuclear Energy Safety Technology, was selected as the reference reactor for ADS development, as well as for the technology development of the Generation IV lead-cooled fast reactor. The conceptual design of CLEAR-I with 10 MW thermal power has been completed. KYLIN series lead-bismuth eutectic experimental loops have been constructed to investigate the technologies of the coolant, key components, structural materials, fuel assembly, operation, and control. In order to validate and test the key components and integrated operating technology of the lead-based reactor, the lead alloy-cooled non-nuclear reactor CLEAR-S, the lead-based zero-power nuclear reactor CLEAR-0, and the lead-based virtual reactor CLEAR-V are under construction.
A historical review of in-vessel melt retention (IVR) is given, which is a severe accident mitigation measure extensively applied in Generation III pressurized water reactors (PWRs). The idea of IVR actually originated from the back-fitting of the Generation II reactor Loviisa VVER-440 in order to cope with the core-melt risk. It was then employed in new designs such as the Westinghouse AP1000, the Korean APR1400, and the Chinese advanced PWR designs HPR1000 and CAP1400. The phenomena most influential on the IVR strategy are in-vessel core melt evolution, the heat fluxes imposed on the vessel by the molten core, and the external cooling of the reactor pressure vessel (RPV). For in-vessel melt evolution, past focus has only been placed on the melt pool convection in the lower plenum of the RPV; however, through our review and analysis, we believe that other in-vessel phenomena, including core degradation and relocation, debris formation and coolability, and melt pool formation, may all contribute to the final state of the melt pool and its thermal loads on the lower head. By looking into previous research on relevant topics, we aim to identify the missing pieces in the picture. Based on the state of the art, we conclude by proposing future research needs.
This paper summarizes the development of hydro-projects in China, blended with an international perspective. It expounds major technical progress toward ensuring the safe construction of high dams and river harnessing, and covers the theorization of uneven non-equilibrium sediment transport, inter-basin water diversion, giant hydro-generator units, pumped storage power stations, underground caverns, ecological protection, and so on.
After the first concrete was poured on December 9, 2012 at the Shidao Bay site in Rongcheng, Shandong Province, China, the construction of the reactor building for the world's first high-temperature gas-cooled reactor pebble-bed module (HTR-PM) demonstration power plant was completed in June 2015. Installation of the main equipment then began, and the power plant is currently progressing well toward connecting to the grid at the end of 2017. The thermal power of a single HTR-PM reactor module is 250 MWth, the helium temperatures at the reactor core inlet/outlet are 250/750 °C, and steam at 13.25 MPa/567 °C is produced at the steam generator outlet. Two HTR-PM reactor modules are connected to a steam turbine to form a 210 MWe nuclear power plant. Thanks to China's industrial capability, we were able to overcome great difficulties, manufacture first-of-a-kind equipment, and realize a series of major technological innovations. We have achieved successful results in many aspects, including planning and implementing R&D, establishing an industrial partnership, manufacturing equipment, fuel production, licensing, site preparation, and balancing safety and economics; the experience obtained may also serve as a reference for the global nuclear community.
Energy production based on fossil fuel reserves is largely responsible for carbon emissions, and hence global warming. The planet needs concerted action to reduce fossil fuel usage and to implement carbon mitigation measures. Ocean energy has huge potential, but there are major interdisciplinary problems to be overcome regarding technology, cost reduction, investment, environmental impact, governance, and so forth. This article briefly reviews ocean energy production from offshore wind, tidal stream, ocean current, tidal range, wave, thermal, salinity gradients, and biomass sources. Future areas of research and development are outlined that could make exploitation of the marine renewable energy (MRE) seascape a viable proposition; these areas include energy storage, advanced materials, robotics, and informatics. The article concludes with a sustainability perspective on the MRE seascape encompassing ethics, legislation, the regulatory environment, governance and consenting, economic, social, and environmental constraints. A new generation of engineers is needed with the ingenuity and spirit of adventure to meet the global challenge posed by MRE.
The pressurized water reactor CAP1400 is one of the sixteen National Science and Technology Major Projects. Developed from China’s nuclear R&D system and manufacturing capability, as well as AP1000 technology introduction and assimilation, CAP1400 is an advanced large passive nuclear power plant with independent intellectual property rights. By discussing the top design principle, main performance objectives, general parameters, safety design, and important improvements in safety, economy, and other advanced features, this paper reveals the technology innovation and competitiveness of CAP1400 as an internationally promising Gen-III PWR model. Moreover, the R&D of CAP1400 has greatly promoted China’s domestic nuclear power industry from the Gen-II to the Gen-III level.
The installation of vast quantities of additional new sensing and communication equipment, in conjunction with the computing infrastructure built to store and manage the data gathered by this equipment, has been the first step in the creation of what is generically referred to as the "smart grid" for the electric transmission system. With this enormous capital investment in equipment having been made, attention is now focused on developing methods to analyze and visualize the resulting large data set, and the most direct use of this new data will be in visualization. This paper presents a survey of visualization techniques that the electric power industry has deployed over the past several years. These techniques include pie charts, animation, contouring, time-varying graphs, geographic-based displays, image blending, and data aggregation. The paper then emphasizes a newer concept: word-sized graphics called sparklines, an extremely effective method of showing large amounts of time-varying data.
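A sparkline compresses a whole time series into a single word-sized glyph. The following is a minimal sketch of the idea, assuming Unicode block characters as the rendering target; the industry tools surveyed in the paper render sparklines graphically, and the load data below is invented for illustration.

```python
def sparkline(values):
    """Render a numeric time series as a word-sized Unicode sparkline.

    Each sample is mapped linearly onto one of eight block characters,
    from the series minimum to its maximum.
    """
    blocks = "▁▂▃▄▅▆▇█"
    lo, hi = min(values), max(values)
    span = hi - lo or 1  # avoid division by zero for a flat series
    return "".join(blocks[int((v - lo) / span * 7 + 0.5)] for v in values)

# Example: a (hypothetical) daily load curve compressed into one "word"
load_mw = [310, 295, 290, 340, 420, 510, 560, 540, 470, 380]
print(sparkline(load_mw))
```

Because each series occupies no more space than a word of text, hundreds of such glyphs can be embedded directly in a tabular display, which is what makes the technique effective for large time-varying data sets.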
Method development has always been and will continue to be a core driving force of microbiome science. In this perspective, we argue that in the next decade, method development in microbiome analysis will be driven by three key changes in both ways of thinking and technological platforms: ① a shift from dissecting microbiota structure by sequencing to tracking microbiota state, function, and intercellular interaction via imaging; ② a shift from interrogating a consortium or population of cells to probing individual cells; and ③ a shift from microbiome data analysis to microbiome data science. Some of the recent method-development efforts by Chinese microbiome scientists and their international collaborators that underlie these technological trends are highlighted here. It is our belief that the China Microbiome Initiative has the opportunity to deliver outstanding "Made-in-China" tools to the international research community, by building an ambitious, competitive, and collaborative program at the forefront of method development for microbiome science.
The stiffness and nanotopographical characteristics of the extracellular matrix (ECM) influence numerous developmental, physiological, and pathological processes in vivo. These biophysical cues have therefore been applied to modulate almost all aspects of cell behavior, from cell adhesion and spreading to proliferation and differentiation. Delineation of the biophysical modulation of cell behavior is critical to the rational design of new biomaterials, implants, and medical devices. The effects of stiffness and topographical cues on cell behavior have each been reviewed previously; however, the interwoven effects of stiffness and nanotopographical cues on cell behavior have not been well described, despite similarities in their phenotypic manifestations. Herein, we first review the effects of substrate stiffness and nanotopography on cell behavior, and then focus on the intracellular transmission of biophysical signals from integrins to the nucleus. Attempts are made to connect the extracellular regulation of cell behavior with these biophysical cues. We then discuss the challenges in dissecting the biophysical regulation of cell behavior and in translating the mechanistic understanding of these cues to tissue engineering and regenerative medicine.
Knee osteoarthritis (OA) is the most common form of arthritis worldwide. The incidence of this disease is rising and its treatment poses an economic burden. Two early targets of knee OA treatment include the predominant symptom of pain, and cartilage damage in the knee joint. Current treatments have been beneficial in treating the disease but none is as effective as total knee arthroplasty (TKA). However, while TKA is an end-stage solution of the disease, it is an invasive and expensive procedure. Therefore, innovative regenerative engineering strategies should be established as these could defer or annul the need for a TKA. Several biomaterial and cell-based therapies are currently in development and have shown early promise in both preclinical and clinical studies. The use of advanced biomaterials and stem cells independently or in conjunction to treat knee OA could potentially reduce pain and regenerate focal articular cartilage damage. In this review, we discuss the pathogenesis of pain and cartilage damage in knee OA and explore novel treatment options currently being studied, along with some of their limitations.
This study provides a definition of urban big data while exploring its features and its applications in China's city intelligence. The differences between city intelligence in China and the "smart city" concept in other countries are compared in order to highlight the unique definition and model of China's city intelligence presented in this paper. Furthermore, this paper examines the role of urban big data in city intelligence by showing that it not only serves as the cornerstone of this trend but also plays a core role in the diffusion of city intelligence technology and serves as an inexhaustible resource for the sustained development of city intelligence. This study also points out the challenges in shaping and developing China's urban big data. Considering the supporting and core role that urban big data plays in city intelligence, the study then expounds on its key application areas, including infrastructure support, urban governance, public services, and economic and industrial development. Finally, this study points to the utility of city intelligence as an ideal policy tool for advancing the goals of China's urban development. In conclusion, it is imperative that China make full use of its unique advantages, including its current state of development and resources, geographical advantages, and good human relations, in both subjective and objective conditions, to promote the development of city intelligence through the proper application of urban big data.
Bionics (the imitation or abstraction of the "inventions of nature") and, to an even greater extent, synthetic biology, will be as relevant to engineering development and industry as the silicon chip was over the last 50 years. Chemical industries already use so-called "white biotechnology" for new processes, new raw materials, and more sustainable use of resources. Synthetic biology is also used for the development of second-generation biofuels and for harvesting the sun's energy with the help of tailor-made microorganisms or biomimetically designed catalysts. The market potential for bionics in medicine, engineering processes, and DNA storage is huge. "Moonshot" projects are already aggressively focusing on diseases and new materials, and a US-led competition is currently underway with the aim of creating a thousand new molecules. This article describes a timeline that starts with current projects and then moves on to code engineering projects and their implications, artificial DNA, signaling molecules, and biological circuitry. Beyond these projects, one of the next frontiers in bionics is the design of synthetic metabolisms that include artificial food chains and foods, and the bioengineering of raw materials; all of which will lead to new insights into biological principles. Bioengineering will be an innovation motor just as digitalization is today. This article discusses pertinent examples of bioengineering, particularly the use of alternative carbon-based biofuels and the techniques and perils of cell modification. Big data, analytics, and massive storage are important factors in this next frontier. Although synthetic biology will be as pervasive and transformative in the next 50 years as digitization and the Internet are today, its applications and impacts are still in nascent stages.
This article provides a general taxonomy in which the development of bioengineering is classified in five stages (DNA analysis, bio-circuits, minimal genomes, protocells, xenobiology) from the familiar to the unknown, with implications for safety and security, industrial development, and the development of bioengineering and biotechnology as an interdisciplinary field. Ethical issues and the importance of a public debate about the consequences of bionics and synthetic biology are discussed.
It has long been a dream in the electronics industry to be able to write out electronics directly, as simply as printing a picture onto paper with an office printer. The first-ever prototype of a liquid-metal printer has been invented and demonstrated by our lab, bringing this goal a key step closer. As part of a continuous endeavor, this work is dedicated to significantly extending such technology to the consumer level by making a very practical desktop liquid-metal printer for society in the near future. Through the industrial design and technical optimization of a series of key technical issues such as working reliability, printing resolution, automatic control, human-machine interface design, software, hardware, and integration between software and hardware, a high-quality personal desktop liquid-metal printer that is ready for mass production in industry was fabricated. Its basic features and important technical mechanisms are explained in this paper, along with demonstrations of several possible consumer end-uses for making functional devices such as light-emitting diode (LED) displays. This liquid-metal printer is an automatic, easy-to-use, and low-cost personal electronics manufacturing tool with many possible applications. This paper discusses important roles that the new machine may play for a group of emerging needs. The prospective future of this cutting-edge technology is outlined, along with a comparative interpretation of several historical printing methods. This desktop liquid-metal printer is expected to become a basic electronics manufacturing tool for a wide variety of emerging practices in the academic realm, in industry, and in education as well as for individual end-users in the near future.
This paper presents an overview of the current status of the development of the smart grid in Great Britain (GB). The definition, policy and technical drivers, incentive mechanisms, technological focus, and the industry's progress in developing the smart grid are described. In particular, the Low Carbon Networks Fund and Electricity Network Innovation Competition projects, together with the rollout of smart metering, are detailed. A more observable, controllable, automated, and integrated electricity network will be supported by these investments in conjunction with smart meter installation. It is found that the focus has mainly been on distribution networks as well as on real-time flows of information and interaction between suppliers and consumers facilitated by improved information and communications technology, active power flow management, demand management, and energy storage. The learning from the GB smart grid initiatives will provide valuable guidelines for future smart grid development in GB and other countries.
The emerging prototype for a Smart City is one of an urban environment with a new generation of innovative services for transportation, energy distribution, healthcare, environmental monitoring, business, commerce, emergency response, and social activities. Enabling the technology for such a setting requires a viewpoint of Smart Cities as cyber-physical systems (CPSs) that include new software platforms and strict requirements for mobility, security, safety, privacy, and the processing of massive amounts of information. This paper identifies some key defining characteristics of a Smart City, discusses some lessons learned from viewing them as CPSs, and outlines some fundamental research issues that remain largely open.
Given the significant requirements for transforming and promoting the process industry, we present the major limitations of current petrochemical enterprises, including limitations in decision-making, production operation, efficiency and security, information integration, and so forth. To promote a vision of the process industry with efficient, green, and smart production, modern information technology should be utilized throughout the entire optimization process for production, management, and marketing. To focus on smart equipment in manufacturing processes, as well as on the adaptive intelligent optimization of the manufacturing process, operating mode, and supply chain management, we put forward several key scientific problems in engineering in a demand-driven and application-oriented manner, namely: ① intelligent sensing and integration of all process information, including production and management information; ② collaborative decision-making in the supply chain, industry chain, and value chain, driven by knowledge; ③ cooperative control and optimization of plant-wide production processes via human-cyber-physical interaction; and ④ life-cycle assessments for safety and environmental footprint monitoring, in addition to tracing analysis and risk control. In order to solve these limitations and core scientific problems, we further present fundamental theories and key technologies for smart and optimal manufacturing in the process industry. Although this paper discusses the process industry in China, the conclusions in this paper can be extended to the process industry around the world.
Additive manufacturing (AM) permits the fabrication of functionally optimized components with high geometrical complexity. The opportunity of using porous infill as an integrated part of the manufacturing process is an example of a unique AM feature. Automated design methods are still incapable of fully exploiting this design freedom. In this work, we show how the so-called coating approach to topology optimization provides a means for designing infill-based components that possess a strongly improved buckling load and, as a result, improved structural stability. The suggested approach thereby addresses an important inadequacy of the standard minimum compliance topology optimization approach, in which buckling is rarely accounted for; rather, a satisfactory buckling load is usually assured through a post-processing step that may lead to sub-optimal components. The present work compares the standard and coating approaches to topology optimization for the MBB beam benchmark case. The optimized structures are additively manufactured using a filamentary technique. This experimental study validates the numerical model used in the coating approach. Depending on the properties of the infill material, the buckling load may be more than four times higher than that of solid structures optimized under the same conditions.
The rise of big data has led to new demands for machine learning (ML) systems to learn complex models, with millions to billions of parameters, that promise adequate capacity to digest massive datasets and offer powerful predictive analytics (such as high-dimensional latent features, intermediate representations, and decision functions) thereupon. In order to run ML algorithms at such scales, on a distributed cluster with tens to thousands of machines, it is often the case that significant engineering efforts are required—and one might fairly ask whether such engineering truly falls within the domain of ML research. Taking the view that “big” ML systems can benefit greatly from ML-rooted statistical and algorithmic insights—and that ML researchers should therefore not shy away from such systems design—we discuss a series of principles and strategies distilled from our recent efforts on industrial-scale ML solutions. These principles and strategies span a continuum from application, to engineering, and to theoretical research and development of big ML systems and architectures, with the goal of understanding how to make them efficient, generally applicable, and supported with convergence and scaling guarantees. They concern four key questions that traditionally receive little attention in ML research: How can an ML program be distributed over a cluster? How can ML computation be bridged with inter-machine communication? How can such communication be performed? What should be communicated between machines? 
By exposing underlying statistical and algorithmic characteristics unique to ML programs but not typically seen in traditional computer programs, and by dissecting successful cases to reveal how we have harnessed these principles to design and develop both high-performance distributed ML software as well as general-purpose ML frameworks, we present opportunities for ML researchers and practitioners to further shape and enlarge the area that lies between ML and systems.
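The first of the four questions above, how an ML program can be distributed over a cluster, can be sketched as a toy synchronous data-parallel SGD loop: simulated workers each compute a gradient on their own data shard, and a central step averages the gradients and updates the shared model. This is a generic illustration of the data-parallel pattern under invented data, not the authors' systems or any specific framework's API.

```python
import random

def local_gradient(w, shard):
    """Mean-squared-error gradient for the model y ≈ w*x on one worker's shard."""
    g = 0.0
    for x, y in shard:
        g += 2 * (w * x - y) * x
    return g / len(shard)

def distributed_sgd(shards, lr=0.1, steps=300):
    """Synchronous data-parallel SGD: each 'worker' computes a gradient on
    its shard; a central server averages them and applies the update."""
    w = 0.0
    for _ in range(steps):
        grads = [local_gradient(w, s) for s in shards]  # conceptually in parallel
        w -= lr * sum(grads) / len(grads)               # server: average and apply
    return w

random.seed(0)
# Synthetic data with true slope 2.0, partitioned across three simulated workers
data = [(x, 2.0 * x) for x in [random.uniform(-1, 1) for _ in range(300)]]
shards = [data[i::3] for i in range(3)]
print(round(distributed_sgd(shards), 3))  # recovers a slope close to 2.0
```

The remaining three questions (how computation is bridged with communication, how communication is performed, and what is communicated) concern exactly the averaging line above: in a real system that single statement becomes asynchronous message passing of parameters or gradients across machines.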
This article provides in-depth insights into the technologies necessary for automated driving in future cities. The state of the science is reflected from different perspectives, such as in-car computing and data management, roadside infrastructure, and cloud solutions. In particular, this article depicts the challenges of applying high-definition (HD) maps as a core technology for automated driving.
The traveling wave reactor (TWR) is a once-through reactor that uses in situ breeding to greatly reduce the need for enrichment and reprocessing. Breeding converts incoming subcritical reload fuel into new critical fuel, allowing a breed-burn wave to propagate. The concept works on the basis that breed-burn waves and the fuel move relative to one another; thus, either the fuel or the waves may move relative to a stationary observer. The most practical embodiments of the TWR involve moving the fuel while keeping the nuclear reactions in one place, sometimes referred to as the standing wave reactor (SWR). TWRs can operate with uranium reload fuels including totally depleted uranium, natural uranium, and low-enriched fuel (e.g., 5.5% ²³⁵U and below), which ordinarily would not be critical in a fast spectrum. Spent light water reactor (LWR) fuel may also serve as TWR reload fuel. In each of these cases, very efficient fuel usage and significant reduction of waste volumes are achieved without the need for reprocessing. The ultimate advantages of the TWR are realized when the reload fuel is depleted uranium; after the startup period, no enrichment facilities are needed to sustain the first reactor and a chain of successor reactors. TerraPower's conceptual and engineering design and associated technology development activities have been underway since late 2006, with over 50 institutions working in a highly coordinated effort to place the first unit in operation by 2026. This paper summarizes the TWR technology: its development program, its progress, and an analysis of its social and economic benefits.
This paper describes the combined enhancement of surface kinetics and passivation of surface states achieved by a nickel-borate (Ni-Bi) co-catalyst on a hematite (Fe2O3) photoanode. The Ni-Bi-modified Fe2O3 photoanode exhibits a 230 mV cathodic shift in the onset potential and a 2.3-fold enhancement of the photocurrent at 1.23 V versus the reversible hydrogen electrode (RHE). The borate (Bi) in the Ni-Bi film promotes the release of protons for the oxygen evolution reaction (OER).
Coal is the dominant primary energy source in China and the major source of greenhouse gases and air pollutants. To facilitate the use of coal in an environmentally satisfactory and economically viable way, clean coal technologies (CCTs) are necessary. This paper presents a review of recent research and development of four kinds of CCTs: coal power generation; coal conversion; pollution control; and carbon capture, utilization, and storage. It also outlines future perspectives on directions for technology research and development (R&D). This review shows that China has made remarkable progress in the R&D of CCTs, and that a number of CCTs have now entered into the commercialization stage.
Global water security is a severe issue that threatens human health and well-being. Finding sustainable alternative water resources has become a matter of great urgency. For coastal urban areas, desalinated seawater could serve as a freshwater supply. However, since 20%–30% of the water supply is used for flushing waste from the city, seawater with simple treatment could also partly replace the use of freshwater. In this work, the freshwater saving potential and environmental impacts of urban water systems (water-wastewater closed loops) adopting seawater desalination, seawater for toilet flushing (SWTF), or reclaimed water for toilet flushing (RWTF) are compared with those of a conventional freshwater system, through a life-cycle assessment and sensitivity analysis. The potential applications of these processes are also assessed. The results support the environmental sustainability of the SWTF approach, but its potential application depends on the coastal distance and effective population density of a city. Developed coastal cities with an effective population density exceeding 3000 persons·km⁻² and located less than 30 km from the seashore (for the main pipe supplying seawater to the city) would benefit from applying SWTF, regardless of other impact parameters. By further applying the sulfate reduction, autotrophic denitrification, and nitrification integrated (SANI) process for wastewater treatment, the maximum distance from the seashore can be extended to 60 km. Considering that most modern urbanized cities fulfill these criteria, the next generation of water supply systems could consist of a freshwater supply coupled with a seawater supply for sustainable urban development.
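The applicability criteria reported above reduce to two thresholds: an effective population density above 3000 persons·km⁻² and a seashore distance of at most 30 km (extended to 60 km when the SANI process treats the wastewater). The following is a minimal illustrative sketch of that decision rule; the thresholds are taken from the abstract, while the function name and parameters are assumptions introduced here for illustration, not part of the study.

```python
def swtf_applicable(density_persons_per_km2: float,
                    coast_distance_km: float,
                    uses_sani: bool = False) -> bool:
    """Check whether a city meets the SWTF applicability criteria
    reported in the abstract (illustrative sketch, not the authors' code).

    Criteria: effective population density > 3000 persons/km^2, and the
    main seawater supply pipe within 30 km of the seashore, or within
    60 km if the SANI process is used for wastewater treatment.
    """
    max_distance_km = 60.0 if uses_sani else 30.0
    return (density_persons_per_km2 > 3000.0
            and coast_distance_km <= max_distance_km)


# Example: a dense coastal city 45 km inland qualifies only with SANI.
print(swtf_applicable(5000, 45))                  # conventional treatment
print(swtf_applicable(5000, 45, uses_sani=True))  # with SANI
```

Under these assumptions, the second call returns True because SANI extends the permissible seashore distance from 30 km to 60 km.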
This paper collects and synthesizes the technical requirements, implementation approaches, and validation methods for quasi-steady agent-based simulations of interconnection-scale models, with particular attention to the integration of renewable generation and controllable loads. Approaches for modeling aggregated controllable loads are presented and placed in the same control and economic modeling framework as generation resources for interconnection planning studies. Model performance is examined using system parameters typical of an interconnection approximately the size of the Western Electricity Coordinating Council (WECC) and of a control area about 1/100 the size of that system. These results are used to demonstrate and validate the methods presented.