Intelligent manufacturing is a general concept that is under continuous development. It can be categorized into three basic paradigms: digital manufacturing, digital-networked manufacturing, and new-generation intelligent manufacturing. New-generation intelligent manufacturing represents an in-depth integration of new-generation artificial intelligence (AI) technology and advanced manufacturing technology. It runs through every link in the full life cycle of design, production, product, and service. The concept also relates to the optimization and integration of corresponding systems; the continuous improvement of enterprises’ product quality, performance, and service levels; and the reduction of resource consumption. New-generation intelligent manufacturing acts as the core driving force of the new industrial revolution and will continue to be the main pathway for the transformation and upgrading of the manufacturing industry in the decades to come. Human-cyber-physical systems (HCPSs) reveal the technological mechanisms of new-generation intelligent manufacturing and can effectively guide related theoretical research and engineering practice. Given the sequential development, cross interaction, and iterative upgrading characteristics of the three basic paradigms of intelligent manufacturing, a technology roadmap of “parallel promotion and integrated development” should be developed in order to drive forward the intelligent transformation of the manufacturing industry in China.
Energy production based on fossil fuel reserves is largely responsible for carbon emissions, and hence global warming. The planet needs concerted action to reduce fossil fuel usage and to implement carbon mitigation measures. Ocean energy has huge potential, but there are major interdisciplinary problems to be overcome regarding technology, cost reduction, investment, environmental impact, governance, and so forth. This article briefly reviews ocean energy production from offshore wind, tidal stream, ocean current, tidal range, wave, thermal, salinity gradients, and biomass sources. Future areas of research and development are outlined that could make exploitation of the marine renewable energy (MRE) seascape a viable proposition; these areas include energy storage, advanced materials, robotics, and informatics. The article concludes with a sustainability perspective on the MRE seascape encompassing ethics, legislation, the regulatory environment, governance and consenting, economic, social, and environmental constraints. A new generation of engineers is needed with the ingenuity and spirit of adventure to meet the global challenge posed by MRE.
With the popularization of the Internet, permeation of sensor networks, emergence of big data, increase in size of the information community, and interlinking and fusion of data and information throughout human society, physical space, and cyberspace, the information environment related to the current development of artificial intelligence (AI) has profoundly changed. AI faces important adjustments, and its scientific foundations are confronted with new breakthroughs, as AI enters a new stage: AI 2.0. This paper briefly reviews the 60-year developmental history of AI, analyzes the external environment promoting the formation of AI 2.0 along with changes in goals, and describes both the beginning of the technology and the core idea behind AI 2.0 development. Furthermore, based on combined social demands and the information environment that exists in relation to Chinese development, suggestions on the development of AI 2.0 are given.
In 2005, the US passed the Energy Policy Act of 2005 mandating the construction and operation of a high-temperature gas reactor (HTGR) by 2021. This law was passed after a multiyear study by national experts on what future nuclear technologies should be developed. As a result of the Act, the US Congress chose to develop the so-called Next-Generation Nuclear Plant, which was to be an HTGR designed to produce process heat for hydrogen production. Despite high hopes and expectations, the current status is that high-temperature reactors have been relegated to completing research programs on advanced fuels, graphite, and materials, with no plans to build a demonstration plant as required by the US Congress in 2005. There are many reasons behind this diminution of HTGR development, including but not limited to insufficient government funding for research, unrealistically high temperature requirements for the reactor, the delay in the need for a “hydrogen” economy, competition from small modular light-water reactors, little utility interest in new technologies, very low natural gas prices in the US, and a challenging licensing process in the US for non-water reactors.
Cutting-edge technologies in optical molecular imaging have ushered in new frontiers in cancer research, clinical translation, and medical practice, as evidenced by recent advances in optical multimodality imaging, Cerenkov luminescence imaging (CLI), and optical image-guided surgeries. New abilities allow in vivo cancer imaging with sensitivity and accuracy that are unprecedented in conventional imaging approaches. The visualization of cellular and molecular behaviors and events within tumors in living subjects is deepening our understanding of tumors at a systems level. These advances are being rapidly used to capture tumor-to-tumor molecular heterogeneity, both dynamically and quantitatively, as well as to achieve more effective therapeutic interventions with the assistance of real-time imaging. In the era of molecular imaging, optical technologies hold great promise to facilitate the development of highly sensitive cancer diagnoses as well as personalized patient treatment, one of the ultimate goals of precision medicine.
In this paper, we review the current state-of-the-art techniques used for understanding the inner workings of the brain at a systems level. The neural activity that governs our everyday lives involves an intricate coordination of many processes that can be attributed to a variety of brain regions. On the surface, many of these functions can appear to be controlled by specific anatomical structures; however, in reality, numerous dynamic networks within the brain contribute to its function through an interconnected web of neuronal and synaptic pathways. The brain, in its healthy or pathological state, can therefore be best understood by taking a systems-level approach. While numerous neuroengineering technologies exist, we focus here on three major thrusts in the field of systems neuroengineering: neuroimaging, neural interfacing, and neuromodulation. Neuroimaging enables us to delineate the structural and functional organization of the brain, which is key in understanding how the neural system functions in both normal and disease states. Based on such knowledge, devices can be used either to communicate with the neural system, as in neural interface systems, or to modulate brain activity, as in neuromodulation systems. The consideration of these three fields is key to the development and application of neuro-devices. Feedback-based neuro-devices require the ability to sense neural activity (via a neuroimaging modality) through a neural interface (invasive or noninvasive) and ultimately to select a set of stimulation parameters in order to alter neural function via a neuromodulation modality. Systems neuroengineering refers to the use of engineering tools and technologies to image, decode, and modulate the brain in order to comprehend its functions and to repair its dysfunction. 
Interactions between these fields will help to shape the future of systems neuroengineering—to develop neurotechniques for enhancing the understanding of whole-brain function and dysfunction, and the management of neurological and mental disorders.
This article focuses on the potential impact of big data analysis to improve health, prevent and detect disease at an earlier stage, and personalize interventions. The role that big data analytics may have in interrogating the patient electronic health record toward improved clinical decision support is discussed. We examine developments in pharmacogenetics that have increased our appreciation of the reasons why patients respond differently to chemotherapy. We also assess the expansion of online health communications and the way in which this data may be capitalized on in order to detect public health threats and control or contain epidemics. Finally, we describe how a new generation of wearable and implantable body sensors may improve wellbeing, streamline management of chronic diseases, and improve the quality of surgical implants.
A high-throughput multi-plume pulsed-laser deposition (MPPLD) system has been demonstrated and compared to previous techniques. Whereas most combinatorial pulsed-laser deposition (PLD) systems have focused on achieving thickness uniformity using sequential multilayer deposition and masking followed by post-deposition annealing, MPPLD directly deposits a compositionally varied library of compounds using the directionality of PLD plumes and the resulting spatial variations of deposition rate. This system is more suitable for high-throughput compound thin-film fabrication.
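The composition spread that MPPLD exploits follows directly from plume geometry: each target's plume deposits fastest on-axis and falls off with angle, so points on the substrate closer to one source receive proportionally more of that species. The sketch below is a minimal geometric illustration, assuming the commonly used cosⁿ angular model for PLD plumes and entirely hypothetical source positions, height, and exponent; it is not a model of the authors' instrument.

```python
import numpy as np

def plume_rate(x, source_x, height, n=8.0):
    """Relative deposition rate at lateral substrate position x from one plume,
    using the cos^n angular model (assumed exponent n) with 1/r^2 falloff."""
    r2 = (x - source_x) ** 2 + height ** 2
    cos_theta = height / np.sqrt(r2)
    return cos_theta ** n / r2

# Two targets ablated at laterally offset positions (hypothetical geometry, cm)
x = np.linspace(-2.0, 2.0, 81)                   # sample points across the substrate
rate_A = plume_rate(x, source_x=-1.0, height=5.0)
rate_B = plume_rate(x, source_x=+1.0, height=5.0)

# Local composition: fraction of species A varies continuously across the library
frac_A = rate_A / (rate_A + rate_B)
```

By symmetry the composition is 50/50 midway between the sources and varies monotonically across the substrate, which is the compositionally graded library the abstract describes.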
This paper summarizes the development of hydro-projects in China, blended with an international perspective. It expounds major technical progress toward ensuring the safe construction of high dams and river harnessing, and covers the theorization of uneven non-equilibrium sediment transport, inter-basin water diversion, giant hydro-generator units, pumped storage power stations, underground caverns, ecological protection, and so on.
The research roots of fluorine-19 (19F) magnetic resonance imaging (MRI) date back over 35 years. Over that time span, 1H imaging flourished and was adopted worldwide with an endless array of applications and imaging approaches, making magnetic resonance an indispensable pillar of biomedical diagnostic imaging. For many years during this timeframe, 19F imaging research continued at a slow pace as the various attributes of the technique were explored. However, over the last decade and particularly the last several years, the pace and clinical relevance of 19F imaging have exploded. In part, this is due to advances in MRI instrumentation, 19F/1H coil designs, and ultrafast pulse sequence development for both preclinical and clinical scanners. These achievements, coupled with interest in the molecular imaging of anatomy and physiology, and combined with a cadre of innovative agents, have brought the concept of 19F into early clinical evaluation. In this review, we attempt to provide a slice of this rich history of research and development, with a particular focus on liquid perfluorocarbon compound-based agents.
A future smart grid must fulfill the vision of the Energy Internet in which millions of people produce their own energy from renewables in their homes, offices, and factories and share it with each other. Electric vehicles and local energy storage will be widely deployed. Internet technology will be utilized to transform the power grid into an energy-sharing inter-grid. To prepare for the future, a smart grid with intelligent periphery, or smart GRIP, is proposed. The building blocks of the GRIP architecture are called clusters and include an energy-management system (EMS)-controlled transmission grid in the core and distribution grids, micro-grids, and smart buildings and homes on the periphery, all of which are hierarchically structured. The layered architecture of GRIP allows a seamless transition from the present to the future and plug-and-play interoperability. The basic functions of a cluster consist of ① dispatch, ② smoothing, and ③ mitigation. A risk-limiting dispatch methodology is presented; a new device, called the electric spring, is developed for smoothing out fluctuations in periphery clusters; and means to mitigate failures are discussed.
The rise of big data has led to new demands for machine learning (ML) systems to learn complex models, with millions to billions of parameters, that promise adequate capacity to digest massive datasets and offer powerful predictive analytics (such as high-dimensional latent features, intermediate representations, and decision functions) thereupon. In order to run ML algorithms at such scales, on a distributed cluster with tens to thousands of machines, it is often the case that significant engineering efforts are required—and one might fairly ask whether such engineering truly falls within the domain of ML research. Taking the view that “big” ML systems can benefit greatly from ML-rooted statistical and algorithmic insights—and that ML researchers should therefore not shy away from such systems design—we discuss a series of principles and strategies distilled from our recent efforts on industrial-scale ML solutions. These principles and strategies span a continuum from application, to engineering, and to theoretical research and development of big ML systems and architectures, with the goal of understanding how to make them efficient, generally applicable, and supported with convergence and scaling guarantees. They concern four key questions that traditionally receive little attention in ML research: How can an ML program be distributed over a cluster? How can ML computation be bridged with inter-machine communication? How can such communication be performed? What should be communicated between machines? 
By exposing underlying statistical and algorithmic characteristics unique to ML programs but not typically seen in traditional computer programs, and by dissecting successful cases to reveal how we have harnessed these principles to design and develop both high-performance distributed ML software as well as general-purpose ML frameworks, we present opportunities for ML researchers and practitioners to further shape and enlarge the area that lies between ML and systems.
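The first of the four questions above, how an ML program can be distributed over a cluster, is most commonly answered with data parallelism: shard the training data across workers, compute gradients locally, and aggregate them at a central node. The toy sketch below simulates that pattern in a single process (the "workers" and "server" are plain loops, not real machines), assuming synchronous aggregation and a least-squares objective; real systems such as parameter servers relax this synchrony to trade consistency for throughput.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression problem, with the data sharded across four "workers"
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(400, 2))
y = X @ w_true
shards = np.array_split(np.arange(400), 4)    # each worker owns one shard

w = np.zeros(2)                               # parameters held by the "server"
lr = 0.1
for step in range(200):
    grads = []
    for idx in shards:                        # worker side: local gradient only
        Xi, yi = X[idx], y[idx]
        grads.append(Xi.T @ (Xi @ w - yi) / len(idx))
    w -= lr * np.mean(grads, axis=0)          # server side: aggregate and update
```

What crosses the (simulated) network here is only the gradients and the updated parameter vector, never the raw data, which is exactly the "what should be communicated" trade-off the questions above raise.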
This paper presents findings from an investigation of the large-scale construction solid waste (CSW) landslide that occurred at a landfill at Shenzhen, Guangdong, China, on December 20, 2015, and which killed 77 people and destroyed 33 houses. The landslide involved 2.73 × 10⁶ m³ of CSW and affected an area about 1100 m in length and 630 m in maximum width, making it the largest landfill landslide in the world. The investigation of this disaster used a combination of unmanned aerial vehicle surveillance and multistage remote-sensing images to reveal the increasing volume of waste in the landfill and the shifting shape of the landfill slope for nearly two years before the landslide took place, beginning with the creation of the CSW landfill in March 2014; this history resulted in uncertain landfill boundary conditions and an unstable hydrologic state. As a result, applying conventional stability analysis methods used for natural landslides to this case would be difficult. In order to analyze this disaster, we adopted a multistage modeling technique to analyze the varied characteristics of the landfill slope’s structure at various stages of CSW dumping, and used non-steady flow theory to address the groundwater seepage problem. The investigation showed that the landfill could be divided into two units based on moisture content: ① a front unit, consisting of the landfill slope, which had a low water content; and ② a rear unit, consisting of fresh waste, which had a high water content. This structure caused two effects, surface-water infiltration and consolidation seepage, that triggered the landslide in the landfill. Surface-water infiltration induced a gradual increase in the pore water pressure head, or piezometric head, in the front slope, because the infiltrating position rose as the volume of waste placement increased. Consolidation seepage led to higher excess pore water pressures as the loading of waste increased.
We also investigated the post-failure soil dynamics parameters of the landslide deposit using cone penetration, triaxial, and ring-shear tests in order to simulate the characteristics of a flowing slide with a long run-out due to the liquefaction effect. Finally, we conclude the paper with lessons from tens of catastrophic municipal solid waste landslides around the world and discuss how to better manage the geotechnical risks of urbanization.
The ultrasonic backscatter technique has shown promise as a noninvasive cancellous bone assessment tool. A novel ultrasonic backscatter bone diagnostic (UBBD) instrument and an in vivo application for neonatal bone evaluation are introduced in this study. The UBBD provides several advantages, including noninvasiveness, non-ionizing radiation, portability, and simplicity. In this study, the backscatter signal could be measured within 5 s using the UBBD. Ultrasonic backscatter measurements were performed on 467 neonates (268 males and 199 females) at the left calcaneus. The backscatter signal was measured at a central frequency of 3.5 MHz. The delay (T1) and duration (T2) of the backscatter signal of interest (SOI) were varied, and the apparent integrated backscatter (AIB), frequency slope of apparent backscatter (FSAB), zero frequency intercept of apparent backscatter (FIAB), and spectral centroid shift (SCS) were calculated. The results showed that the SOI selection had a direct influence on cancellous bone evaluation. The AIB and FIAB were positively correlated with gestational age (|R| up to 0.45, P<0.001) when T1 was short (<8 µs), while negative correlations (|R| up to 0.56, P<0.001) were commonly observed for T1>10 µs. Moderate positive correlations (|R| up to 0.45, P<0.001) were observed for FSAB and SCS with gestational age when T1 was long (>10 µs). T2 mainly introduced fluctuations in the observed correlation coefficients. The moderate correlations observed with the UBBD demonstrate the feasibility of using the backscatter signal to evaluate neonatal bone status. This study also proposes an explicit standard for in vivo SOI selection and neonatal cancellous bone assessment.
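Apparent integrated backscatter is, in essence, the mean spectral power of the gated signal of interest relative to a reference echo, expressed in dB. The helper below is a minimal sketch of that computation, with assumed gating, Hann windowing, and analysis-band limits; it is an illustration of the general AIB definition, not the authors' exact processing chain.

```python
import numpy as np

def apparent_integrated_backscatter(signal, ref, fs, t1, t2, band):
    """Apparent integrated backscatter (AIB) in dB: mean spectral power of the
    gated signal of interest relative to a reference echo.
    t1 = gate delay (s), t2 = gate duration (s), band = (f_lo, f_hi) in Hz.
    Hypothetical helper: window choice and band limits are assumptions."""
    i0 = int(round(t1 * fs))
    i1 = int(round((t1 + t2) * fs))
    win = np.hanning(i1 - i0)
    gated = signal[i0:i1] * win                       # gate + window the SOI
    f = np.fft.rfftfreq(gated.size, d=1 / fs)
    p_sig = np.abs(np.fft.rfft(gated)) ** 2
    p_ref = np.abs(np.fft.rfft(ref[:gated.size] * win)) ** 2
    mask = (f >= band[0]) & (f <= band[1])            # restrict to analysis band
    return float(np.mean(10 * np.log10(p_sig[mask] / p_ref[mask])))
```

As a sanity check, gating a segment that is a scaled copy of the reference (10× in amplitude, i.e., 100× in power) should yield an AIB of 20 dB regardless of the band chosen around the transmit frequency.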
Starting with the Ertan arch dam (240 m high, 3300 MW) in 2000, China successfully built a total of seven ultra-high arch dams over 200 m tall by the end of 2014. Among these, the Jinping I (305 m), Xiaowan (294.5 m), and Xiluodu (285.5 m) arch dams have reached the 300 m height level (i.e., near or over 300 m), making them the tallest arch dams in the world. The design and construction of these 300 m ultra-high arch dams posed significant challenges, due to high water pressures, high seismic design criteria, and complex geological conditions. The engineering team successfully tackled these challenges and made critical breakthroughs, especially in the area of safety control. In this paper, the author summarizes various key technological aspects involved in the design and construction of 300 m ultra-high arch dams, including the strength and stability of foundation rock, excavation of the dam base and surface treatment, dam shape optimization, safety design guidelines, seismic analysis and design, treatment of a complex foundation, concrete temperature control, and crack prevention. The experience gained from these projects should be valuable for future practitioners.
Since its inception, endoscopy has aimed to establish an immediate diagnosis that is virtually consistent with a histologic diagnosis. In the past decade, confocal laser scanning microscopy has been brought into endoscopy, thus enabling in vivo microscopic tissue visualization with a magnification and resolution comparable to that obtained with the ex vivo microscopy of histological specimens. The major challenge in the development of instrumentation lies in the miniaturization of a fiber-optic probe for microscopic imaging with micron-scale resolution. Here, we present the design and construction of a confocal endoscope based on a fiber bundle, with a lateral resolution of 1.4 μm and an imaging speed of 8 frames per second (fps). The fiber-optic probe has a diameter of 2.6 mm that is compatible with the biopsy channel of a conventional endoscope. The prototype of the confocal endoscope has been used to observe epithelial cells of the gastrointestinal tracts of mice and will be further demonstrated in clinical trials. In addition, the confocal endoscope can be used for translational studies of epithelial function in order to monitor how molecules work and how cells interact in their natural environment.
A historical review of in-vessel melt retention (IVR) is given, which is a severe accident mitigation measure extensively applied in Generation III pressurized water reactors (PWRs). The idea of IVR actually originated from the back-fitting of the Generation II reactor Loviisa VVER-440 in order to cope with the core-melt risk. It was then employed in new designs such as the Westinghouse AP1000, the Korean APR1400, and the Chinese advanced PWR designs HPR1000 and CAP1400. The phenomena that most influence the IVR strategy are in-vessel core melt evolution, the heat fluxes imposed on the vessel by the molten core, and the external cooling of the reactor pressure vessel (RPV). For in-vessel melt evolution, past focus has only been placed on the melt pool convection in the lower plenum of the RPV; however, through our review and analysis, we believe that other in-vessel phenomena, including core degradation and relocation, debris bed formation and coolability, and melt pool formation, may all contribute to the final state of the melt pool and its thermal loads on the lower head. By looking into previous research on relevant topics, we aim to identify the missing pieces in the picture. Based on the state of the art, we conclude by proposing future research needs.
This study provides a definition for urban big data while exploring its features and its applications in China’s city intelligence. The differences between city intelligence in China and the “smart city” concept in other countries are compared in order to highlight the unique definition and model of China’s city intelligence presented in this paper. Furthermore, this paper examines the role of urban big data in city intelligence by showing that it not only serves as the cornerstone of this trend, but also plays a core role in the diffusion of city intelligence technology and serves as an inexhaustible resource for the sustained development of city intelligence. This study also points out the challenges in shaping and developing China’s urban big data. Considering the supporting and core role that urban big data plays in city intelligence, the study then expounds on the key points of urban big data, including infrastructure support, urban governance, public services, and economic and industrial development. Finally, this study points out the utility of city intelligence as an ideal policy tool for advancing the goals of China’s urban development. In conclusion, it is imperative that China make full use of its unique subjective and objective advantages, including the nation’s current state of development and resources, geographical advantages, and good human relations, to promote the development of city intelligence through the proper application of urban big data.
After the first concrete was poured on December 9, 2012 at the Shidao Bay site in Rongcheng, Shandong Province, China, the construction of the reactor building for the world’s first high-temperature gas-cooled reactor pebble-bed module (HTR-PM) demonstration power plant was completed in June 2015. Installation of the main equipment then began, and the power plant is currently progressing well toward connecting to the grid at the end of 2017. The thermal power of a single HTR-PM reactor module is 250 MWth, the helium temperatures at the reactor core inlet/outlet are 250/750 °C, and steam at 13.25 MPa/567 °C is produced at the steam generator outlet. Two HTR-PM reactor modules are connected to a steam turbine to form a 210 MWe nuclear power plant. Thanks to China’s industrial capability, we were able to overcome great difficulties, manufacture first-of-a-kind equipment, and realize a series of major technological innovations. We have achieved successful results in many aspects, including planning and implementing R&D, establishing an industrial partnership, manufacturing equipment, fuel production, licensing, site preparation, and balancing safety and economics; the experience obtained may also serve as a reference for the global nuclear community.
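The quoted plant parameters can be cross-checked with simple arithmetic: two 250 MWth modules feeding one turbine that delivers 210 MWe imply a thermal efficiency of 42%, a figure consistent with the high steam conditions (13.25 MPa/567 °C) reported. A minimal check:

```python
# Plant-level figures as stated in the abstract
thermal_per_module_mw = 250          # MWth per HTR-PM reactor module
modules_per_turbine = 2              # two modules share one steam turbine
electric_output_mw = 210             # MWe delivered by that turbine

thermal_total_mw = thermal_per_module_mw * modules_per_turbine   # 500 MWth
efficiency = electric_output_mw / thermal_total_mw               # 0.42
```

For comparison, this is noticeably above the roughly one-third efficiency typical of saturated-steam light-water plants, which is one of the economic arguments for high-temperature reactors.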
Based on the construction of an 8-inch fabrication line and advanced 8-inch wafer process technology, the fourth-generation high-voltage double-diffused metal-oxide semiconductor (DMOS+) insulated-gate bipolar transistor (IGBT) technology and the fifth-generation trench-gate IGBT technology have been developed, realizing a great leap forward in the manufacturing of high-voltage IGBTs from 6-inch to 8-inch wafers. The 1600 A/1.7 kV and 1500 A/3.3 kV IGBT modules have been successfully fabricated, qualified, and applied in rail transportation traction systems.
The pressurized water reactor CAP1400 is one of the sixteen National Science and Technology Major Projects. Developed from China’s nuclear R&D system and manufacturing capability, as well as from the introduction and assimilation of AP1000 technology, CAP1400 is an advanced large passive nuclear power plant with independent intellectual property rights. By discussing the top-level design principles, main performance objectives, general parameters, safety design, and important improvements in safety, economy, and other advanced features, this paper reveals the technological innovation and competitiveness of CAP1400 as an internationally promising Gen-III PWR model. Moreover, the R&D of CAP1400 has greatly promoted the advancement of China’s domestic nuclear power industry from the Gen-II to the Gen-III level.
Our next generation of industry—Industry 4.0—holds the promise of increased flexibility in manufacturing, along with mass customization, better quality, and improved productivity. It thus enables companies to cope with the challenges of producing increasingly individualized products with a short lead-time to market and higher quality. Intelligent manufacturing plays an important role in Industry 4.0. Typical resources are converted into intelligent objects so that they are able to sense, act, and behave within a smart environment. In order to fully understand intelligent manufacturing in the context of Industry 4.0, this paper provides a comprehensive review of associated topics such as intelligent manufacturing, Internet of Things (IoT)-enabled manufacturing, and cloud manufacturing. Similarities and differences in these topics are highlighted based on our analysis. We also review key technologies such as the IoT, cyber-physical systems (CPSs), cloud computing, big data analytics (BDA), and information and communications technology (ICT) that are used to enable intelligent manufacturing. Next, we describe worldwide movements in intelligent manufacturing, including governmental strategic plans from different countries and strategic plans from major international companies in the European Union, United States, Japan, and China. Finally, we present current challenges and future research directions. The concepts discussed in this paper will spark new ideas in the effort to realize the much-anticipated Fourth Industrial Revolution.
In 2011, the Chinese Academy of Sciences launched an engineering project to develop an accelerator-driven subcritical system (ADS) for nuclear waste transmutation. The China Lead-based Reactor (CLEAR), proposed by the Institute of Nuclear Energy Safety Technology, was selected as the reference reactor for ADS development, as well as for the technology development of the Generation IV lead-cooled fast reactor. The conceptual design of CLEAR-I, with 10 MW thermal power, has been completed. The KYLIN series of lead-bismuth eutectic experimental loops has been constructed to investigate the technologies of the coolant, key components, structural materials, fuel assembly, operation, and control. In order to validate and test the key components and integrated operating technology of the lead-based reactor, the lead alloy-cooled non-nuclear reactor CLEAR-S, the lead-based zero-power nuclear reactor CLEAR-0, and the lead-based virtual reactor CLEAR-V are under development.