Intelligent manufacturing is a general concept that is under continuous development. It can be categorized into three basic paradigms: digital manufacturing, digital-networked manufacturing, and new-generation intelligent manufacturing. New-generation intelligent manufacturing represents an in-depth integration of new-generation artificial intelligence (AI) technology and advanced manufacturing technology. It runs through every link in the full life-cycle of design, production, product, and service. The concept also relates to the optimization and integration of corresponding systems; the continuous improvement of enterprises’ product quality, performance, and service levels; and the reduction of resource consumption. New-generation intelligent manufacturing acts as the core driving force of the new industrial revolution and will continue to be the main pathway for the transformation and upgrading of the manufacturing industry in the decades to come. Human-cyber-physical systems (HCPSs) reveal the technological mechanisms of new-generation intelligent manufacturing and can effectively guide related theoretical research and engineering practice. Given the sequential development, cross interaction, and iterative upgrading characteristics of the three basic paradigms of intelligent manufacturing, a technology roadmap of “parallel promotion and integrated development” should be developed in order to drive forward the intelligent transformation of the manufacturing industry in China.
In 2005, the US passed the Energy Policy Act of 2005, mandating the construction and operation of a high-temperature gas reactor (HTGR) by 2021. This law was passed after a multiyear study by national experts on which future nuclear technologies should be developed. As a result of the Act, the US Congress chose to develop the so-called Next-Generation Nuclear Plant, which was to be an HTGR designed to produce process heat for hydrogen production. Despite high hopes and expectations, the current status is that high-temperature reactors have been relegated to completing research programs on advanced fuels, graphite, and materials, with no plans to build a demonstration plant as required by the US Congress in 2005. There are many reasons behind this diminution of HTGR development, including but not limited to insufficient government funding for research, unrealistically high temperature requirements for the reactor, the delay in the need for a “hydrogen” economy, competition from small modular light water reactors, little utility interest in new technologies, very low natural gas prices in the US, and a challenging licensing process in the US for non-water reactors.
With the popularization of the Internet, permeation of sensor networks, emergence of big data, increase in size of the information community, and interlinking and fusion of data and information throughout human society, physical space, and cyberspace, the information environment related to the current development of artificial intelligence (AI) has profoundly changed. AI faces important adjustments, and scientific foundations are confronted with new breakthroughs, as AI enters a new stage: AI 2.0. This paper briefly reviews the 60-year developmental history of AI, analyzes the external environment promoting the formation of AI 2.0 along with changes in goals, and describes both the beginning of the technology and the core idea behind AI 2.0 development. Furthermore, based on combined social demands and the information environment that exists in relation to Chinese development, suggestions on the development of AI 2.0 are given.
Our next generation of industry—Industry 4.0—holds the promise of increased flexibility in manufacturing, along with mass customization, better quality, and improved productivity. It thus enables companies to cope with the challenges of producing increasingly individualized products with a short lead-time to market and higher quality. Intelligent manufacturing plays an important role in Industry 4.0. Typical resources are converted into intelligent objects so that they are able to sense, act, and behave within a smart environment. In order to fully understand intelligent manufacturing in the context of Industry 4.0, this paper provides a comprehensive review of associated topics such as intelligent manufacturing, Internet of Things (IoT)-enabled manufacturing, and cloud manufacturing. Similarities and differences in these topics are highlighted based on our analysis. We also review key technologies such as the IoT, cyber-physical systems (CPSs), cloud computing, big data analytics (BDA), and information and communications technology (ICT) that are used to enable intelligent manufacturing. Next, we describe worldwide movements in intelligent manufacturing, including governmental strategic plans from different countries and strategic plans from major international companies in the European Union, United States, Japan, and China. Finally, we present current challenges and future research directions. The concepts discussed in this paper will spark new ideas in the effort to realize the much-anticipated Fourth Industrial Revolution.
Oxyfuel combustion with carbon capture and sequestration (CCS) is a carbon-reduction technology for use in large-scale coal-fired power plants. Significant progress has been achieved in the research and development of this technology during its scaling up from 0.4 MWth to 3 MWth and 35 MWth by the combined efforts of universities and industries in China. A prefeasibility study on a 200 MWe large-scale demonstration has progressed well, and is ready for implementation. The overall research development and demonstration (RD&D) roadmap for oxyfuel combustion in China has become a critical component of the global RD&D roadmap for oxyfuel combustion. An air combustion/oxyfuel combustion compatible design philosophy was developed during the RD&D process. In this paper, we briefly address fundamental research and technology innovation efforts regarding several technical challenges, including combustion stability, heat transfer, system operation, mineral impurities, and corrosion. To further reduce the cost of carbon capture, in addition to the large-scale deployment of oxyfuel technology, increasing interest is anticipated in the novel and next-generation oxyfuel combustion technologies that are briefly introduced here, including a new oxygen-production concept and flameless oxyfuel combustion.
With ever-increasing market competition and advances in technology, more and more countries are making advanced manufacturing technology a top priority for economic growth. Germany announced the Industry 4.0 strategy in 2013. The US government launched the Advanced Manufacturing Partnership (AMP) in 2011 and the National Network for Manufacturing Innovation (NNMI) in 2014. Most recently, the Manufacturing USA initiative was officially rolled out to further “leverage existing resources… to nurture manufacturing innovation and accelerate commercialization” by fostering close collaboration between industry, academia, and government partners. In 2015, the Chinese government officially published a 10-year plan and roadmap toward manufacturing: Made in China 2025. In all these national initiatives, the core technology development and implementation is in the area of advanced manufacturing systems. A new manufacturing paradigm is emerging, which can be characterized by two unique features: integrated manufacturing and intelligent manufacturing. This trend is in line with the progress of industrial revolutions, in which higher efficiency in production systems is being continuously pursued. To this end, 10 major technologies can be identified for the new manufacturing paradigm. This paper describes the rationales and needs for integrated and intelligent manufacturing (i2M) systems. Related technologies from different fields are also described. In particular, key technological enablers, such as the Internet of Things and Services (IoTS), cyber-physical systems (CPSs), and cloud computing are discussed. Challenges are addressed with applications that are based on commercially available platforms such as General Electric (GE)’s Predix and PTC’s ThingWorx.
A high-throughput multi-plume pulsed-laser deposition (MPPLD) system has been demonstrated and compared to previous techniques. Whereas most combinatorial pulsed-laser deposition (PLD) systems have focused on achieving thickness uniformity using sequential multilayer deposition and masking followed by post-deposition annealing, MPPLD directly deposits a compositionally varied library of compounds using the directionality of PLD plumes and the resulting spatial variations of deposition rate. This system is more suitable for high-throughput compound thin-film fabrication.
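The composition-spread principle described above, in which directional plumes deposit at rates that vary across the substrate, can be sketched numerically. The cos**p angular flux profile, the geometry, and every number below are illustrative assumptions, not parameters of the MPPLD system:

```python
import math

# Hedged sketch: how two overlapping, directional PLD plumes produce a
# compositional spread across a substrate. The cos**p angular flux profile
# and all numbers here are illustrative assumptions, not MPPLD parameters.

def flux(x, source_x, height, p=8.0):
    """Relative deposition rate at substrate position x (mm) from one plume."""
    theta = math.atan2(abs(x - source_x), height)
    return math.cos(theta) ** p

def composition_A(x, xa=-5.0, xb=5.0, height=40.0):
    """Fraction of species A at x, for plume A centered at xa and B at xb."""
    fa, fb = flux(x, xa, height), flux(x, xb, height)
    return fa / (fa + fb)

# the library varies continuously from A-rich to B-rich across the substrate
profile = [round(composition_A(x), 2) for x in (-20.0, 0.0, 20.0)]
```

At the midpoint the two fluxes balance (50% A), while positions toward either target become enriched in that target's material; this continuous gradient is the compositionally varied library the technique exploits.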
The rise of big data has led to new demands for machine learning (ML) systems to learn complex models, with millions to billions of parameters, that promise adequate capacity to digest massive datasets and offer powerful predictive analytics (such as high-dimensional latent features, intermediate representations, and decision functions) thereupon. In order to run ML algorithms at such scales, on a distributed cluster with tens to thousands of machines, it is often the case that significant engineering efforts are required—and one might fairly ask whether such engineering truly falls within the domain of ML research. Taking the view that “big” ML systems can benefit greatly from ML-rooted statistical and algorithmic insights—and that ML researchers should therefore not shy away from such systems design—we discuss a series of principles and strategies distilled from our recent efforts on industrial-scale ML solutions. These principles and strategies span a continuum from application, to engineering, and to theoretical research and development of big ML systems and architectures, with the goal of understanding how to make them efficient, generally applicable, and supported with convergence and scaling guarantees. They concern four key questions that traditionally receive little attention in ML research: How can an ML program be distributed over a cluster? How can ML computation be bridged with inter-machine communication? How can such communication be performed? What should be communicated between machines? 
By exposing underlying statistical and algorithmic characteristics unique to ML programs but not typically seen in traditional computer programs, and by dissecting successful cases to reveal how we have harnessed these principles to design and develop both high-performance distributed ML software as well as general-purpose ML frameworks, we present opportunities for ML researchers and practitioners to further shape and enlarge the area that lies between ML and systems.
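The four questions above are commonly framed around parameter-server-style architectures. The following is a minimal, single-process sketch of synchronous data-parallel SGD in that style, using hypothetical toy data; it illustrates the division of labor (workers compute gradients on data shards, a server aggregates and updates), not the authors' actual software:

```python
# Minimal single-process sketch of a synchronous parameter-server pattern for
# data-parallel SGD on a toy linear-regression problem. Hypothetical example
# for illustration only; it is not the authors' system or architecture.

def shard_gradient(w, b, shard):
    """One 'worker': gradient of mean squared error on its local data shard."""
    gw = gb = 0.0
    for x, y in shard:
        err = (w * x + b) - y
        gw += 2.0 * err * x
        gb += 2.0 * err
    n = len(shard)
    return gw / n, gb / n

def train(shards, steps=500, lr=0.05):
    w, b = 0.0, 0.0  # the 'server' holds the global parameters
    for _ in range(steps):
        # workers compute gradients in parallel (simulated sequentially here)
        grads = [shard_gradient(w, b, s) for s in shards]
        # server averages worker gradients and applies the update
        gw = sum(g[0] for g in grads) / len(grads)
        gb = sum(g[1] for g in grads) / len(grads)
        w -= lr * gw
        b -= lr * gb
    return w, b

# toy data from y = 3x + 1, split evenly across two simulated workers
data = [(k / 10.0, 3.0 * (k / 10.0) + 1.0) for k in range(20)]
w, b = train([data[:10], data[10:]])
```

Replacing the synchronous barrier with bounded-staleness updates, and the in-process loop with real inter-machine communication, is precisely where the four questions above arise.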
In order to build a ceramic component by inkjet printing, the object must be fabricated through the interaction and solidification of drops, typically in the range of 10–100 pL. To achieve this goal, stable ceramic inks must be developed. These inks should satisfy specific rheological conditions that can be illustrated within a parameter space defined by the Reynolds and Weber numbers. Printed drops initially deform on impact with a surface by dynamic dissipative processes, but then spread to an equilibrium shape defined by capillarity. We can identify the processes by which these drops interact to form linear features during printing, but there is a poorer level of understanding as to how 2D and 3D structures form. The stability of 2D sheets of ink appears to be possible over a more limited range of process conditions than is seen with the formation of lines. In most cases, the ink solidifies through evaporation, and there is a need to control the drying process to eliminate the “coffee-ring” defect. Despite these uncertainties, there have been a large number of reports on the successful use of inkjet printing for the manufacture of small ceramic components from a number of different ceramics. The technique offers good prospects for future manufacturing. This review identifies potential areas for future research to improve our understanding of this manufacturing method.
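The Reynolds–Weber parameter space mentioned above is often summarized by the inverse Ohnesorge number Z = Re/√We, with a commonly quoted printability window of roughly 1 < Z < 10. The sketch below evaluates these dimensionless numbers for a hypothetical ink; the fluid properties and the window itself are illustrative assumptions, not values taken from this review:

```python
import math

# Illustrative evaluation of the drop-ejection parameter space noted above.
# The printability index Z = Re / sqrt(We) (the inverse Ohnesorge number) and
# the rule-of-thumb window 1 < Z < 10 are common in the inkjet literature;
# the fluid properties below are assumed values for a hypothetical ink.

def jetting_numbers(density, viscosity, surface_tension, velocity, diameter):
    """Return (Re, We, Z) for a drop of the given fluid and nozzle (SI units)."""
    re = density * velocity * diameter / viscosity
    we = density * velocity ** 2 * diameter / surface_tension
    return re, we, re / math.sqrt(we)

re, we, z = jetting_numbers(density=1100.0, viscosity=0.01,
                            surface_tension=0.035, velocity=6.0,
                            diameter=50e-6)
```

For these assumed properties Z ≈ 4.4, inside the window: viscous enough to suppress satellite drops, yet not so viscous that a drop cannot be ejected.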
The ultrasonic backscatter technique has shown promise as a noninvasive cancellous bone assessment tool. A novel ultrasonic backscatter bone diagnostic (UBBD) instrument and an in vivo application for neonatal bone evaluation are introduced in this study. The UBBD provides several advantages, including noninvasiveness, non-ionizing radiation, portability, and simplicity. In this study, the backscatter signal could be measured within 5 s using the UBBD. Ultrasonic backscatter measurements were performed on 467 neonates (268 males and 199 females) at the left calcaneus. The backscatter signal was measured at a central frequency of 3.5 MHz. The delay (T1) and duration (T2) of the backscatter signal of interest (SOI) were varied, and the apparent integrated backscatter (AIB), frequency slope of apparent backscatter (FSAB), zero frequency intercept of apparent backscatter (FIAB), and spectral centroid shift (SCS) were calculated. The results showed that SOI selection had a direct influence on cancellous bone evaluation. The AIB and FIAB were positively correlated with gestational age (|R| up to 0.45, P<0.001) when T1 was short (<8 µs), while negative correlations (|R| up to 0.56, P<0.001) were commonly observed for T1>10 µs. Moderate positive correlations (|R| up to 0.45, P<0.001) were observed for FSAB and SCS with gestational age when T1 was long (>10 µs). T2 mainly introduced fluctuations in the observed correlation coefficients. The moderate correlations observed with the UBBD demonstrate the feasibility of using the backscatter signal to evaluate neonatal bone status. This study also proposes an explicit standard for in vivo SOI selection and neonatal cancellous bone assessment.
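As a hedged sketch of how the SOI parameters T1 and T2 enter the analysis, the following computes an AIB-style spectral parameter from a synthetic radio-frequency (RF) line. T1 and T2 follow the abstract's notation, but the Hanning window, the flat reference spectrum, and all signal parameters are assumptions for illustration, not the UBBD's actual processing chain:

```python
import numpy as np

# Hedged sketch of SOI selection and an AIB-style spectral parameter. T1/T2
# follow the abstract's notation; the window, the flat reference spectrum,
# and the synthetic echo are assumptions, not the UBBD's pipeline.

def apparent_integrated_backscatter(rf, fs, t1, t2, ref_spectrum, band):
    """Mean reference-compensated power (dB) over `band`, for the SOI
    starting t1 seconds into the record and lasting t2 seconds."""
    i0 = int(t1 * fs)
    i1 = i0 + int(t2 * fs)
    soi = rf[i0:i1] * np.hanning(i1 - i0)         # windowed signal of interest
    freqs = np.fft.rfftfreq(soi.size, d=1.0 / fs)
    power_db = 20.0 * np.log10(np.abs(np.fft.rfft(soi)) + 1e-12)
    ref_db = 20.0 * np.log10(ref_spectrum(freqs) + 1e-12)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(np.mean(power_db[mask] - ref_db[mask]))

# a synthetic 3.5 MHz echo sampled at 50 MHz stands in for a measured RF line
fs = 50e6
t = np.arange(0.0, 40e-6, 1.0 / fs)
rf = np.sin(2 * np.pi * 3.5e6 * t) * np.exp(-(((t - 15e-6) / 5e-6) ** 2))
aib = apparent_integrated_backscatter(rf, fs, t1=8e-6, t2=10e-6,
                                      ref_spectrum=lambda f: np.ones_like(f),
                                      band=(2e6, 5e6))
```

Shifting T1 or T2 changes which echoes fall inside the SOI, which is why the correlations reported above depend so strongly on SOI selection.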
An increased global supply of minerals is essential to meet the needs and expectations of a rapidly rising world population. This implies extraction from greater depths. Autonomous mining systems, developed through sustained R&D by equipment suppliers, reduce miner exposure to hostile work environments and increase safety. This places increased focus on “ground control” and on rock mechanics to define the depth to which minerals may be extracted economically. Although significant efforts have been made since the end of World War II to apply mechanics to mine design, there have been both technological and organizational obstacles. Rock in situ is a more complex engineering material than is typically encountered in most other engineering disciplines. Mining engineering has relied heavily on empirical procedures in design for thousands of years. These are no longer adequate to address the challenges of the 21st century, as mines venture to increasingly greater depths. The development of the synthetic rock mass (SRM) in 2008 provides researchers with the ability to analyze the deformational behavior of rock masses that are anisotropic and discontinuous—attributes that were described as the defining characteristics of in situ rock by Leopold Müller, the president and founder of the International Society for Rock Mechanics (ISRM), in 1966. Recent developments in the numerical modeling of large-scale mining operations (e.g., caving) using the SRM reveal unanticipated deformational behavior of the rock. The application of massive parallelization and cloud computational techniques offers major opportunities: for example, to assess uncertainties in numerical predictions; to establish the mechanics basis for the empirical rules now used in rock engineering and their validity for the prediction of rock mass behavior beyond current experience; and to use the discrete element method (DEM) in the optimization of deep mine design. 
For the first time, mining—and rock engineering—will have its own mechanics-based “laboratory.” This promises to be a major tool in future planning for effective mining at depth. The paper concludes with a discussion of an opportunity to demonstrate the application of DEM and SRM procedures as a laboratory, by back-analysis of mining methods used over the 80-year history of the Mount Lyell Copper Mine in Tasmania.
In this paper, we review the current state-of-the-art techniques used for understanding the inner workings of the brain at a systems level. The neural activity that governs our everyday lives involves an intricate coordination of many processes that can be attributed to a variety of brain regions. On the surface, many of these functions can appear to be controlled by specific anatomical structures; however, in reality, numerous dynamic networks within the brain contribute to its function through an interconnected web of neuronal and synaptic pathways. The brain, in its healthy or pathological state, can therefore be best understood by taking a systems-level approach. While numerous neuroengineering technologies exist, we focus here on three major thrusts in the field of systems neuroengineering: neuroimaging, neural interfacing, and neuromodulation. Neuroimaging enables us to delineate the structural and functional organization of the brain, which is key in understanding how the neural system functions in both normal and disease states. Based on such knowledge, devices can be used either to communicate with the neural system, as in neural interface systems, or to modulate brain activity, as in neuromodulation systems. The consideration of these three fields is key to the development and application of neuro-devices. Feedback-based neuro-devices require the ability to sense neural activity (via a neuroimaging modality) through a neural interface (invasive or noninvasive) and ultimately to select a set of stimulation parameters in order to alter neural function via a neuromodulation modality. Systems neuroengineering refers to the use of engineering tools and technologies to image, decode, and modulate the brain in order to comprehend its functions and to repair its dysfunction. 
Interactions between these fields will help to shape the future of systems neuroengineering—to develop neurotechniques for enhancing the understanding of whole-brain function and dysfunction, and the management of neurological and mental disorders.
Materials-development projects for advanced ultra-supercritical (A-USC) power plants with steam temperatures of 700 °C and above have been performed in order to achieve high efficiency and low CO2 emissions in Europe, the US, Japan, and recently in China and India as well. These projects involve the replacement of martensitic 9%–12% Cr steels with nickel (Ni)-base alloys for the highest temperature boiler and turbine components in order to provide sufficient creep strength at 700 °C and above. To minimize the requirement for expensive Ni-base alloys, martensitic 9%–12% Cr steels can be applied to the next highest temperature components of an A-USC power plant, up to a maximum of 650 °C. This paper comprehensively describes the research and development of Ni-base alloys and martensitic 9%–12% Cr steels for thick-section boiler and turbine components of A-USC power plants, mainly focusing on the long-term creep-rupture strength of base metal and welded joints, strength loss in welded joints, creep-fatigue properties, and microstructure evolution during exposure at elevated temperatures.
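Long-term creep-rupture strength of the kind discussed above is routinely extrapolated with the Larson-Miller parameter (LMP), a textbook time-temperature trade-off. It is a standard tool rather than a result of the projects reviewed here, and the constant C = 20 and the sample conditions below are illustrative assumptions:

```python
import math

# Hedged sketch: the Larson-Miller parameter (LMP), a textbook tool for
# trading test temperature against rupture life when extrapolating long-term
# creep-rupture strength. Standard practice, not a result of the projects
# above; C = 20 and the sample conditions are assumed values.

def larson_miller(temp_k, rupture_hours, c=20.0):
    """LMP = T * (C + log10(t_r)), with T in kelvin and t_r in hours."""
    return temp_k * (c + math.log10(rupture_hours))

# a 650 degC, 100 000 h design point ...
lmp_ref = larson_miller(923.15, 1e5)
# ... probes the same LMP as a much shorter test at 700 degC
t_test = 10.0 ** (lmp_ref / 973.15 - 20.0)
```

Under these assumptions, a roughly 5200 h test at 700 °C interrogates the same LMP as 10^5 h at 650 °C, which is how accelerated creep data are mapped onto design lives.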
The rapid development of additive manufacturing and advances in shape memory materials have fueled the progress of four-dimensional (4D) printing. With the right external stimulus, the need for human interaction, sensors, and batteries will be eliminated, and by using additive manufacturing, more complex devices and parts can be produced. With the current understanding of shape memory mechanisms and with improved design for additive manufacturing, reversibility in 4D printing has recently been proven to be feasible. Conventional one-way 4D printing requires human interaction in the programming (or shape-setting) phase, but reversible 4D printing, or two-way 4D printing, will fully eliminate the need for human interference, as the programming stage is replaced with another stimulus. This allows reversible 4D printed parts to be fully dependent on external stimuli; parts can also be potentially reused after every recovery, or even used in continuous cycles—an aspect that carries industrial appeal. This paper presents a review on the mechanisms of shape memory materials that have led to 4D printing, current findings regarding 4D printing in alloys and polymers, and their respective limitations. The reversibility of shape memory materials and their feasibility to be fabricated using three-dimensional (3D) printing are summarized and critically analyzed. For reversible 4D printing, the methods of 3D printing, mechanisms used for actuation, and strategies to achieve reversibility are also highlighted. Finally, prospective future research directions in reversible 4D printing are suggested.
Systems for ambient assisted living (AAL) that integrate service robots with sensor networks and user monitoring can help elderly people with their daily activities, allowing them to stay in their homes and live active lives for as long as possible. In this paper, we outline the AAL system currently developed in the European project Robot-Era, and describe the engineering aspects and the service-oriented software architecture of the domestic robot, a service robot with advanced manipulation capabilities. Based on the robot operating system (ROS) middleware, our software integrates a large set of advanced algorithms for navigation, perception, and manipulation. In tests with real end users, the performance and acceptability of the platform are evaluated.
Electron beam selective melting (EBSM) is an additive manufacturing technique that directly fabricates three-dimensional parts in a layerwise fashion by using an electron beam to scan and melt metal powder. In recent years, EBSM has been successfully used in the additive manufacturing of a variety of materials. Previous research focused on the EBSM process of a single material. In this study, a novel EBSM process capable of building a gradient structure with dual metal materials was developed, and a powder-supplying method based on vibration was put forward. Two different powders can be supplied individually and then mixed. Two materials were used in this study: Ti6Al4V powder and Ti47Al2Cr2Nb powder. Ti6Al4V has excellent strength and plasticity at room temperature, while Ti47Al2Cr2Nb has excellent performance at high temperature, but is very brittle. A Ti6Al4V/Ti47Al2Cr2Nb gradient material was successfully fabricated by the developed system. The microstructures and chemical compositions were characterized by optical microscopy, scanning electron microscopy, and electron microprobe analysis. Results showed that the interface thickness was about 300 μm. The interface was free of cracks, and the chemical compositions exhibited a staircase-like change within the interface.
This paper presents findings from an investigation of the large-scale construction solid waste (CSW) landslide that occurred at a landfill in Shenzhen, Guangdong, China, on December 20, 2015, killing 77 people and destroying 33 houses. The landslide involved 2.73 × 10⁶ m³ of CSW and affected an area about 1100 m in length and 630 m in maximum width, making it the largest landfill landslide in the world. The investigation of this disaster used a combination of unmanned aerial vehicle surveillance and multistage remote-sensing images to reveal the increasing volume of waste in the landfill and the shifting shape of the landfill slope for nearly two years before the landslide took place, beginning with the creation of the CSW landfill in March 2014. This history left the landfill’s boundary conditions uncertain and its hydrologic behavior unstable, so applying the conventional stability analysis methods used for natural landslides to this case would be difficult. In order to analyze this disaster, we used a multistage modeling technique to analyze the varied characteristics of the landfill slope’s structure at various stages of CSW dumping, and applied non-steady flow theory to treat the groundwater seepage problem. The investigation showed that the landfill could be divided into two units based on moisture content: ① a front unit, consisting of the landfill slope, which had a low water content; and ② a rear unit, consisting of fresh waste, which had a high water content. This structure gave rise to two effects, surface-water infiltration and consolidation seepage, which together triggered the landslide in the landfill. Surface-water infiltration induced a gradual increase in the pore water pressure head (piezometric head) in the front slope, because the infiltrating position rose as the volume of waste placement increased. Consolidation seepage led to higher excess pore water pressures as the loading of waste increased.
We also investigated the post-failure soil dynamics parameters of the landslide deposit using cone penetration, triaxial, and ring-shear tests in order to simulate the characteristics of a flow slide with a long run-out due to the liquefaction effect. Finally, we conclude the paper with lessons from dozens of catastrophic municipal solid waste landslides around the world and discuss how to better manage the geotechnical risks of urbanization.
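The consolidation-seepage mechanism described above, in which excess pore pressure rises with waste loading and dissipates only slowly through low-permeability material, is classically modeled with Terzaghi's one-dimensional consolidation equation. Citing it here is our assumption of a standard governing equation, not something stated in the paper:

```latex
\frac{\partial u_e}{\partial t} = c_v \, \frac{\partial^2 u_e}{\partial z^2},
\qquad
c_v = \frac{k}{m_v \, \gamma_w}
```

where u_e is the excess pore water pressure, c_v the coefficient of consolidation, k the permeability, m_v the coefficient of volume compressibility, and γ_w the unit weight of water. Rapid waste placement raises u_e faster than the front unit can drain it, consistent with the two triggering effects identified above.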
The Made in China 2025 initiative will require full automation in all sectors, from customers to production, which will pose great challenges to manufacturing systems. In the future of manufacturing, all devices and systems should have sensing and basic intelligence capabilities for control and adaptation. In this study, after discussing the multiscale dynamics of the modern manufacturing system, a five-layer functional structure is proposed for the processing of uncertainties. Multiscale dynamics include multi-time-scale, space-time-scale, and multi-level dynamics. Control action will differ at different scales, with more design being required at both fast and slow time scales. More quantitative action is required in low-level operations, while more qualitative action is needed for high-level supervision. Intelligent manufacturing systems should have the capabilities of flexibility, adaptability, and intelligence. These capabilities will require control action to be distributed and integrated with different approaches, including smart sensing, optimal design, and intelligent learning. Finally, a typical jet dispensing system is taken as a real-world example of multiscale modeling and control.
The possible mitigation of floods by dams and the risk that floods pose to dams are key problems. The People’s Republic of China now leads the world in dam construction, with great success and efficiency. This paper is devoted to relevant experiences from other countries, with a particular focus on lessons from accidents over the past two centuries and on new solutions. Accidents caused by floods are analyzed according to the dam’s height, storage, dam material, and spillway data. Most of the major accidents that have been reported involved embankment dams storing over 10 hm³. New solutions appear promising for both dam safety and flood mitigation.