Intelligent manufacturing is a general concept that is under continuous development. It can be categorized into three basic paradigms: digital manufacturing, digital-networked manufacturing, and new-generation intelligent manufacturing. New-generation intelligent manufacturing represents an in-depth integration of new-generation artificial intelligence (AI) technology and advanced manufacturing technology. It runs through every link in the full life-cycle of design, production, product, and service. The concept also relates to the optimization and integration of corresponding systems; the continuous improvement of enterprises' product quality, performance, and service levels; and the reduction of resource consumption. New-generation intelligent manufacturing acts as the core driving force of the new industrial revolution and will continue to be the main pathway for the transformation and upgrading of the manufacturing industry in the decades to come. Human-cyber-physical systems (HCPSs) reveal the technological mechanisms of new-generation intelligent manufacturing and can effectively guide related theoretical research and engineering practice. Given the sequential development, cross interaction, and iterative upgrading characteristics of the three basic paradigms of intelligent manufacturing, a technology roadmap of "parallel promotion and integrated development" should be developed in order to drive forward the intelligent transformation of the manufacturing industry in China.
Bio-syncretic robots, consisting of both living biological materials and non-living systems, possess desirable attributes such as high energy efficiency, intrinsic safety, high sensitivity, and self-repairing capabilities. Compared with living biological materials alone or traditional non-living robots based on electromechanical systems, the combined system of a bio-syncretic robot holds many advantages. Therefore, developing bio-syncretic robots has been a topic of great interest, and significant progress has been achieved in this area over the past decade. This review systematically summarizes the development of bio-syncretic robots. First, potential trends in the development of bio-syncretic robots are discussed. Next, the current performance of bio-syncretic robots, including simple movement and controllability of velocity and direction, is reviewed. The living biological materials and non-living materials that are used in bio-syncretic robots, and the corresponding fabrication methods, are then discussed. In addition, recently developed control methods for bio-syncretic robots, including physical and chemical control methods, are described. Finally, challenges in the development of bio-syncretic robots are discussed from multiple viewpoints, including sensing and intelligence, living and non-living materials, control approaches, and information technology.
Our next generation of industry—Industry 4.0—holds the promise of increased flexibility in manufacturing, along with mass customization, better quality, and improved productivity. It thus enables companies to cope with the challenges of producing increasingly individualized products with a short lead-time to market and higher quality. Intelligent manufacturing plays an important role in Industry 4.0. Typical resources are converted into intelligent objects so that they are able to sense, act, and behave within a smart environment. In order to fully understand intelligent manufacturing in the context of Industry 4.0, this paper provides a comprehensive review of associated topics such as intelligent manufacturing, Internet of Things (IoT)-enabled manufacturing, and cloud manufacturing. Similarities and differences in these topics are highlighted based on our analysis. We also review key technologies such as the IoT, cyber-physical systems (CPSs), cloud computing, big data analytics (BDA), and information and communications technology (ICT) that are used to enable intelligent manufacturing. Next, we describe worldwide movements in intelligent manufacturing, including governmental strategic plans from different countries and strategic plans from major international companies in the European Union, United States, Japan, and China. Finally, we present current challenges and future research directions. The concepts discussed in this paper will spark new ideas in the effort to realize the much-anticipated Fourth Industrial Revolution.
Given the challenges facing the cyberspace of the nation, this paper presents the tripartite theory of cyberspace, based on the status quo of cyberspace. Corresponding strategies and a research architecture are proposed for common public networks (C space), secure classified networks (S space), and key infrastructure networks (K space), based on their individual characteristics. The features and security requirements of these networks are then discussed. Taking C space as an example, we introduce the SMCRC (which stands for “situation awareness, monitoring and management, cooperative defense, response and recovery, and countermeasures and traceback”) loop for constructing a cyberspace security ecosystem. Following a discussion on its characteristics and information exchange, our analysis focuses on the critical technologies of the SMCRC loop. To obtain more insight into national cyberspace security, special attention should be paid to global sensing and precise mapping, continuous detection and active management, cross-domain cooperation and systematic defense, autonomous response and rapid processing, and accurate traceback and countermeasure deterrence.
Approximately one quarter of the global edible food supply is wasted. The drivers of food waste can occur at any stage from production, harvest, distribution, and processing to the consumer. While the drivers vary globally, the industrialized regions of North America, Europe, and Asia share similar situations; in each of these regions, the largest share of food waste, approximately 51% of the total generated, occurs with the consumer. As a consequence, handling this waste falls on municipal solid waste operations. In the United States, food waste constitutes 15% of the solid waste stream by weight, contributes 3.4 × 10⁷ t of carbon dioxide (CO₂)-equivalent emissions, and costs 1.9 billion USD in disposal fees. The levels of carbon, nutrients, and moisture in food waste make bioprocessing into higher-value products an attractive method for mitigation. Opportunities include the extraction of nutraceuticals and bioactive compounds, or conversion to a variety of volatile acids—including lactic, acetic, and propionic acids—that can be recovered and sold at a profit. The conversion of waste into volatile acids can be paired with bioenergy production, including hydrogen or biogas. This review compares the potential for upgrading industrial food waste to either specialty products or methane. Higher-value uses of industrial food waste could alleviate approximately 1.9 × 10⁸ t of CO₂-equivalent emissions. As an example, potato peel could be upgraded to lactic acid via fermentation to recover 5600 million USD per year, or could be converted to methane via anaerobic digestion, yielding a revenue of 900 million USD per year. The potential value to be recovered is significant, and food-waste valorization will help to close the loop for various food industries.
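The revenue comparison above can be made explicit in a short sketch. The two annual revenue figures (5600 and 900 million USD per year) are taken directly from the review; the yield and price assumptions behind them are not reproduced here, and the route names are only labels.

```python
# Hedged sketch: compare the review's two valorization routes for potato peel.
# Figures are the abstract's stated revenues, in million USD per year.
ROUTES = {
    "lactic acid (fermentation)": 5600,
    "methane (anaerobic digestion)": 900,
}

def best_route(routes):
    """Return the route with the highest annual revenue and its advantage
    over the next-best option, in million USD per year."""
    ranked = sorted(routes.items(), key=lambda kv: kv[1], reverse=True)
    (best, best_rev), (_, runner_up) = ranked[0], ranked[1]
    return best, best_rev - runner_up

route, advantage = best_route(ROUTES)
print(f"{route}: +{advantage} million USD per year over the alternative")
```

On these figures, fermentation to lactic acid recovers 4700 million USD per year more than anaerobic digestion, which is why the review frames "specialty products vs. methane" as the central trade-off.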
Systems for ambient assisted living (AAL) that integrate service robots with sensor networks and user monitoring can help elderly people with their daily activities, allowing them to stay in their homes and live active lives for as long as possible. In this paper, we outline the AAL system currently developed in the European project Robot-Era, and describe the engineering aspects and the service-oriented software architecture of the domestic robot, a service robot with advanced manipulation capabilities. Based on the robot operating system (ROS) middleware, our software integrates a large set of advanced algorithms for navigation, perception, and manipulation. In tests with real end users, the performance and acceptability of the platform are evaluated.
The rise of big data has led to new demands for machine learning (ML) systems to learn complex models, with millions to billions of parameters, that promise adequate capacity to digest massive datasets and offer powerful predictive analytics (such as high-dimensional latent features, intermediate representations, and decision functions) thereupon. In order to run ML algorithms at such scales, on a distributed cluster with tens to thousands of machines, it is often the case that significant engineering efforts are required—and one might fairly ask whether such engineering truly falls within the domain of ML research. Taking the view that “big” ML systems can benefit greatly from ML-rooted statistical and algorithmic insights—and that ML researchers should therefore not shy away from such systems design—we discuss a series of principles and strategies distilled from our recent efforts on industrial-scale ML solutions. These principles and strategies span a continuum from application, to engineering, and to theoretical research and development of big ML systems and architectures, with the goal of understanding how to make them efficient, generally applicable, and supported with convergence and scaling guarantees. They concern four key questions that traditionally receive little attention in ML research: How can an ML program be distributed over a cluster? How can ML computation be bridged with inter-machine communication? How can such communication be performed? What should be communicated between machines? 
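The first two questions—how a program is distributed and how computation is bridged with communication—can be illustrated with a minimal single-process simulation of the parameter-server pattern: workers compute gradients on data partitions, and a server aggregates the communicated updates. The least-squares objective and all names here are illustrative assumptions, not the authors' system.

```python
import numpy as np

rng = np.random.default_rng(0)
n_workers, n_params, lr = 4, 8, 0.1
X = rng.normal(size=(400, n_params))
w_true = rng.normal(size=n_params)
y = X @ w_true

# "Server" state: the global parameter vector (sharded across machines at scale).
w = np.zeros(n_params)

# Data-parallel partitioning: each worker owns a slice of the dataset.
data_splits = np.array_split(np.arange(len(X)), n_workers)

for step in range(200):
    # Each worker computes a gradient on its own data partition ...
    grads = []
    for idx in data_splits:
        residual = X[idx] @ w - y[idx]
        grads.append(X[idx].T @ residual / len(idx))
    # ... and the server applies the (synchronously) aggregated update.
    w -= lr * np.mean(grads, axis=0)

print("parameter error:", np.linalg.norm(w - w_true))
```

Real systems relax the synchronous aggregation step shown here (e.g., with bounded-staleness or asynchronous communication), which is precisely where the statistical insights discussed in the text—error tolerance of ML programs—become load-bearing.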
By exposing underlying statistical and algorithmic characteristics unique to ML programs but not typically seen in traditional computer programs, and by dissecting successful cases to reveal how we have harnessed these principles to design and develop both high-performance distributed ML software as well as general-purpose ML frameworks, we present opportunities for ML researchers and practitioners to further shape and enlarge the area that lies between ML and systems.
Geotagging is the process of labeling data and information with geographical identification metadata, and text mining refers to the process of deriving information from text through data analytics. Geotagging and text mining are used to mine rich sources of social media data, such as videos, websites, text, and Quick Response (QR) codes, and have frequently been used to model consumer behaviors and market trends. This study uses both techniques to understand the resilience of infrastructure in Chennai, India, using data mined from the 2015 flood. This paper presents a conceptual study on the potential use of social media (Twitter in this case) to better understand infrastructure resiliency. Using feature-extraction techniques, the research team extracted Twitter data from tweets generated by the Chennai population during the flood. First, this study shows that these techniques are useful in identifying locations, defects, and failure intensities of infrastructure using the location metadata from geotags, words containing the locations, and the frequencies of tweets from each location. However, more effort is needed to better utilize the texts generated from the tweets, including a better understanding of the cultural contexts of the words used in the tweets, the contexts of the words used to describe the incidents, and the least frequently used words.
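The two techniques can be sketched together: counting geotagged tweets per location as a rough proxy for failure intensity, and tallying the words used to describe incidents. This is an illustrative sketch, not the authors' pipeline; the tweet texts and stopword list are invented examples.

```python
import re
from collections import Counter

# Toy corpus: tweets with optional (lat, lon) geotags.
tweets = [
    {"text": "Bridge on Anna Salai flooded, traffic stopped", "geo": (13.06, 80.26)},
    {"text": "Power outage in Velachery, roads under water", "geo": (12.98, 80.22)},
    {"text": "Velachery subway completely flooded", "geo": (12.98, 80.22)},
    {"text": "Stay safe everyone", "geo": None},
]

def tweets_per_location(tweets):
    """Frequency of geotagged tweets per (lat, lon) cell — an intensity proxy."""
    return Counter(t["geo"] for t in tweets if t["geo"] is not None)

def word_frequencies(tweets, stopwords=frozenset({"on", "in", "under", "stay"})):
    """Simple word counts over tweet texts after lowercasing and tokenizing."""
    words = []
    for t in tweets:
        words += [w for w in re.findall(r"[a-z]+", t["text"].lower())
                  if w not in stopwords]
    return Counter(words)

print(tweets_per_location(tweets).most_common(1))  # densest location
print(word_frequencies(tweets).most_common(3))     # most frequent terms
```

The limitations noted in the abstract show up even in this toy version: simple tokenization misses cultural context and multi-word place names, and rare but informative words are drowned out by frequency counting.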
The type, model, quantity, and location of the sensors installed on intelligent vehicle test platforms differ, resulting in different sensor information processing modules. Moreover, the driving maps used on these platforms follow no uniform standard, which leads to different granularities of driving map information. Because the sensor information processing module is directly associated with the driving map information and the decision-making module, the interfaces of the intelligent driving system's software modules also lack a uniform standard. Based on the software and hardware architecture of an intelligent vehicle, this work processes sensor information and driving map information using a formal language of driving cognition to form a driving situation graph cluster, which is output to a decision-making module; the output of the decision-making module is expressed as a cognitive arrow cluster, completing the whole process of intelligent driving from perception to decision-making. The formalization of driving cognition reduces the influence of sensor type, model, quantity, and location on the overall software architecture, making the architecture portable across different intelligent driving hardware platforms.
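The portability argument can be sketched in code: once sensor and map information are fused into a sensor-agnostic situation representation, the decision module depends only on that representation, not on the sensor configuration. All class, field, and function names here are assumptions made for illustration; the paper's formal language of driving cognition is not reproduced.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SituationNode:
    """One element of a (hypothetical) driving situation graph."""
    kind: str          # e.g., "vehicle", "lane", "obstacle"
    relation: str      # spatial relation to the ego vehicle, e.g., "ahead"
    distance_m: float

def build_situation_graph(sensor_objects, map_lanes):
    """Fuse raw sensor detections and map data into a sensor-agnostic graph.
    Downstream code sees only SituationNodes, so swapping sensor type, model,
    quantity, or location does not change the decision interface."""
    graph = [SituationNode("lane", lane["relation"], lane["width_m"])
             for lane in map_lanes]
    graph += [SituationNode(o["kind"], o["relation"], o["distance_m"])
              for o in sensor_objects]
    return graph

def decide(graph):
    """Toy decision module: brake if any non-lane object is ahead within 20 m."""
    hazards = [n for n in graph
               if n.kind != "lane" and n.relation == "ahead" and n.distance_m < 20]
    return "brake" if hazards else "keep-lane"

graph = build_situation_graph(
    sensor_objects=[{"kind": "vehicle", "relation": "ahead", "distance_m": 12.0}],
    map_lanes=[{"relation": "ego", "width_m": 3.5}],
)
print(decide(graph))  # -> brake
```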
This study provides a definition for urban big data while exploring its features and its applications in China's city intelligence. The differences between city intelligence in China and the "smart city" concept in other countries are compared to highlight the unique definition and model of China's city intelligence presented in this paper. Furthermore, this paper examines the role of urban big data in city intelligence, showing that it not only serves as the cornerstone of this trend but also plays a core role in the diffusion of city intelligence technology and serves as an inexhaustible resource for the sustained development of city intelligence. This study also points out the challenges of shaping and developing China's urban big data. Considering the supporting and core role that urban big data plays in city intelligence, the study then expounds on the key points of urban big data, including infrastructure support, urban governance, public services, and economic and industrial development. Finally, this study points to the utility of city intelligence as an ideal policy tool for advancing the goals of China's urban development. In conclusion, it is imperative that China make full use of its unique advantages—including the nation's current state of development and resources, geographical advantages, and good human relations—in both subjective and objective conditions to promote the development of city intelligence through the proper application of urban big data.
Cellular spheroids serving as three-dimensional (3D) in vitro tissue models have attracted increasing interest for pathological study and drug-screening applications. Various methods, including microwells in particular, have been developed for engineering cellular spheroids. However, these methods usually suffer from either destructive molding operations or cell loss and non-uniform cell distribution among the wells due to two-step molding and cell seeding. We have developed a facile method that utilizes cell-embedded hydrogel arrays as templates for concave well fabrication and in situ MCF-7 cellular spheroid formation on a chip. A custom-built bioprinting system was applied for the fabrication of sacrificial gelatin arrays and, subsequently, concave wells in a high-throughput, flexible, and controlled manner. The ability to achieve in situ cell seeding for cellular spheroid construction was demonstrated, with the advantages of uniform cell seeding and the potential for programmed fabrication of tissue models on chips. The developed method holds great potential for applications in tissue engineering, regenerative medicine, and drug screening.
A high-throughput multi-plume pulsed-laser deposition (MPPLD) system has been demonstrated and compared to previous techniques. Whereas most combinatorial pulsed-laser deposition (PLD) systems have focused on achieving thickness uniformity using sequential multilayer deposition and masking followed by post-deposition annealing, MPPLD directly deposits a compositionally varied library of compounds using the directionality of PLD plumes and the resulting spatial variations of deposition rate. This system is more suitable for high-throughput compound thin-film fabrication.
China’s energy supply-and-demand model and two related carbon emission scenarios, including a planned peak scenario and an advanced peak scenario, are designed taking into consideration China’s economic development, technological progress, policies, resources, environmental capacity, and other factors. The analysis of the defined scenarios provides the following conclusions: Primary energy and power demand will continue to grow leading up to 2030, and the growth rate of power demand will be much higher than that of primary energy demand. Moreover, low carbonization will be a basic feature of energy supply-and-demand structural changes, and non-fossil energy will replace oil as the second largest energy source. Finally, energy-related carbon emissions could peak in 2025 through the application of more efficient energy consumption patterns and more low-carbon energy supply modes. The push toward decarbonization of the power industry is essential for reducing the peak value of carbon emissions.
Mineral consumption is increasing rapidly as more consumers enter the market for minerals and as the global standard of living increases. As a result, underground mining continues to progress to deeper levels in order to tackle the mineral supply crisis in the 21st century. However, deep mining occurs in a very technical and challenging environment, in which significant innovative solutions and best practice are required and additional safety standards must be implemented in order to overcome the challenges and reap huge economic gains. These challenges include the catastrophic events that are often met in deep mining engineering: rockbursts, gas outbursts, high in situ and redistributed stresses, large deformation, squeezing and creeping rocks, and high temperature. This review paper presents the current global status of deep mining and highlights some of the newest technological achievements and opportunities associated with rock mechanics and geotechnical engineering in deep mining. Of the various technical achievements, unmanned working-faces and unmanned mines based on fully automated mining and mineral extraction processes have become important fields in the 21st century.
An increased global supply of minerals is essential to meet the needs and expectations of a rapidly rising world population. This implies extraction from greater depths. Autonomous mining systems, developed through sustained R&D by equipment suppliers, reduce miner exposure to hostile work environments and increase safety. This places increased focus on “ground control” and on rock mechanics to define the depth to which minerals may be extracted economically. Although significant efforts have been made since the end of World War II to apply mechanics to mine design, there have been both technological and organizational obstacles. Rock in situ is a more complex engineering material than is typically encountered in most other engineering disciplines. Mining engineering has relied heavily on empirical procedures in design for thousands of years. These are no longer adequate to address the challenges of the 21st century, as mines venture to increasingly greater depths. The development of the synthetic rock mass (SRM) in 2008 provides researchers with the ability to analyze the deformational behavior of rock masses that are anisotropic and discontinuous—attributes that were described as the defining characteristics of in situ rock by Leopold Müller, the president and founder of the International Society for Rock Mechanics (ISRM), in 1966. Recent developments in the numerical modeling of large-scale mining operations (e.g., caving) using the SRM reveal unanticipated deformational behavior of the rock. The application of massive parallelization and cloud computational techniques offers major opportunities: for example, to assess uncertainties in numerical predictions; to establish the mechanics basis for the empirical rules now used in rock engineering and their validity for the prediction of rock mass behavior beyond current experience; and to use the discrete element method (DEM) in the optimization of deep mine design. 
For the first time, mining—and rock engineering—will have its own mechanics-based “laboratory.” This promises to be a major tool in future planning for effective mining at depth. The paper concludes with a discussion of an opportunity to demonstrate the application of DEM and SRM procedures as a laboratory, by back-analysis of mining methods used over the 80-year history of the Mount Lyell Copper Mine in Tasmania.
Based on the construction of an 8-inch fabrication line, advanced 8-inch wafer process technology, together with the fourth-generation high-voltage double-diffused metal-oxide-semiconductor (DMOS+) insulated-gate bipolar transistor (IGBT) technology and the fifth-generation trench-gate IGBT technology, has been developed, realizing a great leap forward in the manufacturing of high-voltage IGBTs from 6-inch to 8-inch wafers. The 1600 A/1.7 kV and 1500 A/3.3 kV IGBT modules have been successfully fabricated, qualified, and applied in rail transportation traction systems.