The grand challenges of climate change demand a new paradigm of urban design that takes the performance of urban systems into account, such as energy and water efficiency. Traditional urban design methods focus on the form-making process and lack performance dimensions. Geodesign is an emerging approach that emphasizes the links between systems thinking, digital technology, and geographic context. This paper presents the research results of the first phase of a larger research collaboration and proposes an extended geodesign method for a district-scale urban design to integrate systems of renewable energy production, energy consumption, and storm water management, as well as a measurement of human experiences in cities. The method incorporates geographic information system (GIS), parametric modeling techniques, and multidisciplinary design optimization (MDO) tools that enable collaborative design decision-making. The method is tested and refined in a test case with the objective of designing a near-zero-energy urban district. Our final method has three characteristics. ① Integrated geodesign and parametric design: It uses a parametric design approach to generate focal-scale district prototypes by means of a custom procedural algorithm, and applies geodesign to evaluate the performances of design proposals. ② A focus on design flow: It elaborates how to define problems, what information is selected, and what criteria are used in making design decisions. ③ Multi-objective optimization: The test case produces indicators from performance modeling and derives principles through a multi-objective computational experiment to inform how the design can be improved. This paper concludes with issues and next steps in modeling urban design and infrastructure systems based on MDO tools.
Urbanization is a potential factor in economic development, which is a main route to social development. As the scale of urbanization expands, the quality of the urban water environment may deteriorate, which can have a negative impact on sustainable urbanization. Therefore, a comprehensive understanding of the functions of the urban water environment is necessary, including its security, resources, ecology, landscape, culture, and economy. Furthermore, a deep analysis is required of the theoretical basis of the urban water environment, which is associated with geographical location, landscape ecology, and a low-carbon economy. In this paper, we expound the main principles for constructing a system for the urban water environment (including sustainable development, ecological priority, and regional differences), and suggest the content of an urban water environmental system. Such a system contains a natural water environment, an economic water environment, and a social water environment. The natural water environment is the base, an effective economic water environment is the focus, and a healthy social water environment is the essence of such a system. The construction of an urban water environment should rely on a comprehensive security system, complete scientific theory, and advanced technology.
Low-impact development (LID) technologies have a great potential to reduce water usage and stormwater runoff and are therefore seen as sustainable improvements that can be made to traditional water infrastructure. These technologies include bioretention areas, rainwater capturing, and xeriscaping, all of which can be used in residential zones. Within the City of Atlanta, residential water usage accounts for 53% of the total water consumption; therefore, residential zones offer significant impact potential for the implementation of LID. This study analyzes the use of LID strategies within the different residential zones of the City of Atlanta from an ecological perspective by drawing analogies to natural ecosystems. The analysis shows that these technologies, especially with the addition of a graywater system, work to improve the conventional residential water network based upon these ecological metrics. The higher metric values suggest greater parity with healthy, natural ecosystems.
Pavements require maintenance to prevent undue distress or to restore performance; however, pavement maintenance and its impacts do not receive enough attention in many cases, and are either ignored or treated as a low priority. Most current maintenance activities have budget issues and only focus on removing deteriorated pavement sections. Deferred pavement maintenance has impacts on the environment and on society, and may thus affect the costs associated with maintenance. A sustainability rating tool is a good way to list, explain, and evaluate such impacts. Various sustainability rating tools have been developed for pavement; however, pavement maintenance has its own features that are different from those of the new construction, expansion, or reconstruction of pavements. This research project reviews nine sustainability rating tools for pavement, although none of these tools fully describe maintenance features or can be directly applied to evaluate maintenance projects. A new sustainability rating tool is then developed for pavement maintenance; this new tool can be used to evaluate individual projects and raise public awareness about the importance of pavement maintenance. Its details are described, and its use is demonstrated through an example to show the evaluation process and results.
Materials and energy are transferred between natural and industrial systems, providing a standard that can be used to deduce the interactions between these systems. An examination of these flows is an essential part of the conversation on how industry impacts the environment. We propose that biological systems, which embody sustainability, provide methods and principles that can lead to more useful ways to organize industrial activity. Transposing these biological methods to steel manufacturing is manifested through an efficient use of available materials, waste reduction, and decreased energy demand with currently available technology. In this paper, we use ecological metrics to examine the change in structure and flows of materials in the Chinese steel industry over time by means of a systems-based mass flow analysis. Utilizing available data, the results of our analysis indicate that the Chinese steel manufacturing industry has increased its efficiency and sustainable use of resources over time at the unit process level. However, the appropriate organization of the steel production ecosystem remains a work in progress. Our results suggest that through the intelligent placement of cooperative industries, which can utilize the waste generated from steel manufacturing, the future of the Chinese steel industry can better reflect ecosystem maturity and health while minimizing waste.
Geotagging is the process of labeling data and information with geographical identification metadata, and text mining refers to the process of deriving information from text through data analytics. Geotagging and text mining are used to mine rich sources of social media data, such as videos, websites, text, and Quick Response (QR) codes. They have been frequently used to model consumer behaviors and market trends. This study uses both techniques to understand the resilience of infrastructure in Chennai, India, using data mined from the 2015 flood. This paper presents a conceptual study on the potential use of social media (Twitter in this case) to better understand infrastructure resiliency. Using feature-extraction techniques, the research team extracted Twitter data from tweets generated by the Chennai population during the flood. First, this study shows that these techniques are useful in identifying locations, defects, and failure intensities of infrastructure using the location metadata from geotags, words containing the locations, and the frequencies of tweets from each location. However, more efforts are needed to better utilize the texts generated from the tweets, including a better understanding of the cultural contexts of the words used in the tweets, the contexts of the words used to describe the incidents, and the least frequently used words.
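As an illustration of the kind of processing this study describes, the sketch below aggregates tweet-like records by geotag and word frequency. The records, field names, and coordinates are invented for illustration; real Twitter data would arrive through the platform's API in a different shape.

```python
from collections import Counter

def summarize_tweets(tweets):
    """Count tweets per geotagged location and overall word frequencies.

    `tweets` is a list of dicts with hypothetical fields:
    'text' (str) and optional 'geo' ((lat, lon) tuple or None).
    """
    per_location = Counter()
    words = Counter()
    for t in tweets:
        if t.get("geo") is not None:
            per_location[t["geo"]] += 1
        words.update(w.lower().strip(".,!?") for w in t["text"].split())
    return per_location, words

# Toy records standing in for flood-period tweets (invented data).
sample = [
    {"text": "Road flooded near Adyar bridge", "geo": (13.006, 80.257)},
    {"text": "Adyar bridge closed, water rising", "geo": (13.006, 80.257)},
    {"text": "Power outage reported", "geo": None},
]
locs, freqs = summarize_tweets(sample)
```

Locations with many tweets and frequent location words would then flag candidate failure hotspots, mirroring the geotag-plus-frequency analysis described above.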
With the rapid growth of the vehicle population and vehicle miles traveled, automobile emissions have become a severe issue in the metropolitan cities of China. There are policies that concentrate on the management of emission sources. However, improving the operation of the transportation system through apps on mobile devices, especially navigation apps, may have a unique role in promoting urban air quality. Real-time traveler information can not only help travelers avoid traffic congestion, but also advise them to adjust their departure time, mode, or route, or even to cancel trips. Will such changes in personal travel patterns have a significant impact in decreasing emissions? If so, to what extent will they impact urban air quality? The aim of this study is to determine how urban traffic emissions are affected by the use of navigation apps. With this work, we attempt to answer the question of whether the real-time traffic information provided by navigation apps can help to improve urban air quality. Our findings may provide references for the formulation of urban traffic and environmental policies.
Urban eco-environmental degradation is becoming inevitable due to the extensive urbanization, population growth, and socioeconomic development in China. One of the traffic arteries in Shenzhen is an urban expressway that is under construction and that runs across environmentally sensitive areas (ESAs). The environmental pollution from urban expressways is critical, due to the characteristics of expressways such as high runoff coefficients, considerable contaminant accumulation, and complex pollutant ingredients. ESAs are vulnerable to anthropogenic disturbances and hence should be given special attention. In order to evaluate the environmental sensitivity along this urban expressway and minimize the influences of the ongoing road construction and future operation on the surrounding ecosystem, the environmental sensitivity of the relevant area was evaluated based on the application of a geographic information system (GIS). A final ESA map was classified into four environmental sensitivity levels; this classification indicates that a large proportion of the expressway passes through areas of high sensitivity, representing 11.93 km or 52.3% of the total expressway, and more than 90% of the total expressway passes through ESAs. This study provides beneficial information for optimal layout schemes of initial rainfall runoff treatment facilities developed from low-impact development (LID) techniques in order to minimize the impact of polluted road runoff on the surrounding ecological environment.
This paper summarizes the experience that was gained during the construction of the 15.4 km long Ceneri Base Tunnel (CBT), which is the southern part of the flat railway line crossing the Swiss Alps from north to south. The project consisted of a twin tube with a diameter of 9 m, interconnected every 325 m by cross-passages. In the middle of the alignment and at its southern end, large caverns were excavated for logistical and operational requirements. The total excavation length amounted to approximately 40 km. The tunnel crossed Alpine rock formations comprising a variety of rock typologies and several fault zones. The maximum overburden amounted to 850 m. The excavation of the main tunnels and of the cross-passages was executed by means of drill-and-blast (D&B) excavation. The support consisted of bolts, meshes, fiber-reinforced shotcrete and, when required, steel ribs. A gripper tunnel boring machine (TBM) was used in order to excavate the access tunnel. The high overburden caused squeezing rock conditions, which are characterized by large anisotropic convergences when crossing weaker rock formations. The latter required the installation of a deformable support. At the north portal, the tunnel (with an enlarged cross-section) passed underneath the A2 Swiss highway (the major road axis connecting the north and south of Switzerland) at a small overburden and through soft ground. Vertical and subhorizontal jet grouting in combination with partial-face excavation was successfully implemented in order to limit the surface settlements. The south portal was located in a dense urban area. The excavation from the south portal included an approximately 220 m long cut-and-cover tunnel, followed by about 300 m of D&B excavation in a poor rock formation.
The very low overburden, poor rock quality, and demanding crossing with an existing road tunnel (at a vertical distance of only 4 m) required special excavation methods through reduced sectors and special blasting techniques in order to limit the blast-induced vibrations. The application of a comprehensive risk management procedure, the execution of an intensive surface survey, and the adaptability of the tunnel design to the encountered geological conditions allowed the successful completion of the excavation works.
Long undersea tunnels, and particularly those that are built for transportation purposes, are not commonplace infrastructure. Although their planning and construction take a considerable amount of time, they form important fixed links once in operation. The fact that these tunnels are located under the sea generally involves unique challenges including complex issues with construction and operations, which relate to the lack of intermediate access points along the final route of the tunnel. Similar issues are associated with long under-land tunnels, such as those under mountain ranges such as the Alps. This paper identifies the key issues related to the design and construction of such tunnels, and suggests a potential solution using proven technology from another engineering discipline.
The successful completion of the Zhengzhou–Xi’an high-speed railway project has greatly improved the construction level of China’s large-section loess tunnels, and has resulted in significant progress being made in both design theory and construction technology. This paper systematically summarizes the technical characteristics and main problems of the large-section loess tunnels on China’s high-speed railway, including classification of the surrounding rock, design of the supporting structure, surface settlement and cracking control, and safe and rapid construction methods. On this basis, the key construction techniques of loess tunnels with large sections for high-speed railway are expounded from the aspects of design and construction. The research results show that the classification of loess strata surrounding large tunnels should be based on the geological age of the loess, and be determined by combining the plasticity index and the water content. In addition, the influence of the buried depth should be considered. If the tensile stress during excavation-induced disturbance exceeds the soil’s tensile or shear strength, the surface part of the sliding-trend plane can be damaged, and visible cracks can form. The pressure of the surrounding rock of a large-section loess tunnel should be calculated according to the buried depth, using the corresponding formula. A three-bench seven-step excavation method of construction was used as the core technology system to ensure the safe and rapid construction of a large-section loess tunnel, following a field test to optimize the construction parameters and determine the engineering measures to stabilize the tunnel face. The conclusions and methods presented here are of great significance in revealing the strata and supporting mechanics of large-section loess tunnels, and in optimizing the supporting structure design and the technical parameters for construction.
The Upper Lillooet River Hydroelectric Project (ULHP) is a run-of-river power generation scheme located near Pemberton, British Columbia, Canada, consisting of two separate hydroelectric facilities (HEFs) with a combined capacity of 106.7 MW. These HEFs are owned by the Upper Lillooet River Power Limited Partnership and the Boulder Creek Power Limited Partnership, and civil and tunnel construction was completed by CRT-ebc. The Upper Lillooet River HEF includes the excavation of a 6 m wide by 5.5 m high and approximately 2500 m long tunnel along the Upper Lillooet River Valley. The project is in a mountainous area; severe restrictions imposed by weather conditions and the presence of sensitive wildlife species constrained the site operations in order to limit environmental impacts. The site is adjacent to the Mount Meager Volcanic Complex, the most recently active volcano in Western Canada. Tunneling conditions were very challenging, including a section through deposits associated with the most recent eruption from Mount Meager Volcanic Complex (~2360 years before the present). This tunnel section included welded breccia and unconsolidated deposits composed of loose pumice, organics (that represent an old forest floor), and till, before entering the underlying tonalite bedrock. The construction of this section of the tunnel required cover grouting, umbrella support, and excavation with a combination of roadheader, hydraulic hammer, and drilling-and-blasting method. This paper provides an overview of the project, a summary of the key design and construction schedule challenges, and a description of the successful excavation of the tunnel through deposits associated with the recent volcanic activity.
The objective of a bridge design is to produce a safe bridge that is elegant and satisfies all functionality requirements, at a cost that is acceptable to the owner. A successful bridge design must be natural, simple, original, and harmonious with its surroundings. Aesthetics is not an additional consideration in the design of a bridge, but is rather an integral part of bridge design. Both the structural configuration and the aesthetics of a bridge must be considered together during the conceptual design stage. To achieve such a task, the bridge design engineer must have a good understanding of structural theory and bridge aesthetics.
Topology optimization is a powerful design approach that is used to determine the optimal topology in order to obtain the desired functional performance. It has been widely used to improve structural performance in engineering fields such as in the aerospace and automobile industries. However, some gaps still exist between topology optimization and engineering application, which significantly hinder the application of topology optimization. One of these gaps is how to interpret topology results, especially those obtained using the density framework, into parametric computer-aided design (CAD) models that are ready for subsequent shape optimization and manufacturing. In this paper, a new method for interpreting topology optimization results into stereolithography (STL) models and parametric CAD models is proposed. First, we extract the skeleton of the topology optimization result in order to ensure shape preservation and use a filtering method to ensure characteristics preservation. After this process, the distribution of the nodes in the boundary of the topology optimization result is denser, which will benefit the subsequent curve fitting. Using the curvature and the derivative of curvature of the uniform B-spline curve, an adaptive B-spline fitting method is proposed in order to obtain a parametric CAD model with the fewest control points meeting the requirement of the fitting error. A case study is presented to provide a detailed description of the proposed method, and two more examples are shown to demonstrate the validity and versatility of the proposed method.
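The curve-fitting step described above can be illustrated with a uniform cubic B-spline. The sketch below evaluates such a curve and measures the fitting error against target boundary points; it is a minimal stand-in under stated assumptions, without the paper's adaptive control-point insertion or curvature analysis.

```python
def cubic_bspline_point(ctrl, seg, t):
    """Evaluate a uniform cubic B-spline at parameter t in [0, 1].

    `ctrl` is a list of (x, y) control points; `seg` selects the four
    points ctrl[seg:seg+4] influencing this span.
    """
    b = [(1 - t) ** 3,
         3 * t ** 3 - 6 * t ** 2 + 4,
         -3 * t ** 3 + 3 * t ** 2 + 3 * t + 1,
         t ** 3]
    x = sum(w * p[0] for w, p in zip(b, ctrl[seg:seg + 4])) / 6.0
    y = sum(w * p[1] for w, p in zip(b, ctrl[seg:seg + 4])) / 6.0
    return x, y

def max_fit_error(ctrl, targets, samples=50):
    """Largest distance from target boundary points to the sampled curve.

    An adaptive fitter would add control points until this error falls
    below a tolerance; here we only measure it.
    """
    pts = [cubic_bspline_point(ctrl, s, i / samples)
           for s in range(len(ctrl) - 3) for i in range(samples + 1)]
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    return max(min(dist(tp, cp) for cp in pts) for tp in targets)

# Collinear control points trace the segment y = 0, x in [1, 3].
ctrl = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]
err_on = max_fit_error(ctrl, [(1.5, 0.0), (2.5, 0.0)])   # points on curve
err_off = max_fit_error(ctrl, [(2.0, 1.0)])              # point 1 m away
```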
Tissue engineering, which involves the creation of new tissue by the deliberate and controlled stimulation of selected target cells through a systematic combination of molecular and mechanical signals, usually involves the assistance of biomaterials-based structures to deliver these signals and to give shape to the resulting tissue mass. The specifications for these structures, which used to be described as scaffolds but are now more correctly termed templates, have rarely been defined, mainly because this is difficult to do. Primarily, however, these specifications must relate to the need to develop the right microenvironment for the cells to create new tissue and to the need for the interactions between the cells and the template material to be consistent with the demands of the new viable tissues. These features are encompassed by the phenomena that are collectively called biocompatibility. However, the theories and putative mechanisms of conventional biocompatibility (mostly conceived through experiences with implantable medical devices) are inadequate to describe phenomena in tissue-engineering processes. The present author has recently redefined biocompatibility in terms of specific materials- and biology-based pathways; this opinion paper places tissue-engineering biocompatibility mechanisms in the context of these pathways.
An airbag is an effective protective device for vehicle occupant safety, but may cause unexpected injury from the excessive energy of ignition when it is deployed. This paper focuses on the design of a new tubular driver airbag from the perspective of reducing the dosage of gas generant. Three different dummies were selected for computer simulation to investigate the stiffness and protection performance of the new airbag. Next, a multi-objective optimization of the 50th percentile dummy was conducted. The results show that the static volume of the new airbag is only about 1/3 of the volume of an ordinary one, and the injury value of each type of dummy can meet legal requirements while reducing the gas dosage by at least 30%. The combined injury index (Pcomb) decreases by 22% and the gas dosage is reduced by 32% after optimization. This study demonstrates that the new tubular driver airbag has great potential for protection in terms of reducing the gas dosage.
Intelligent manufacturing is a general concept that is under continuous development. It can be categorized into three basic paradigms: digital manufacturing, digital-networked manufacturing, and new-generation intelligent manufacturing. New-generation intelligent manufacturing represents an in-depth integration of new-generation artificial intelligence (AI) technology and advanced manufacturing technology. It runs through every link in the full life-cycle of design, production, product, and service. The concept also relates to the optimization and integration of corresponding systems; the continuous improvement of enterprises’ product quality, performance, and service levels; and reductions in resource consumption. New-generation intelligent manufacturing acts as the core driving force of the new industrial revolution and will continue to be the main pathway for the transformation and upgrading of the manufacturing industry in the decades to come. Human-cyber-physical systems (HCPSs) reveal the technological mechanisms of new-generation intelligent manufacturing and can effectively guide related theoretical research and engineering practice. Given the sequential development, cross interaction, and iterative upgrading characteristics of the three basic paradigms of intelligent manufacturing, a technology roadmap for "parallel promotion and integrated development" should be developed in order to drive forward the intelligent transformation of the manufacturing industry in China.
Recommendation systems are crucially important for the delivery of personalized services to users. With personalized recommendation services, users can enjoy a variety of targeted recommendations such as movies, books, ads, restaurants, and more. In addition, personalized recommendation services have become extremely effective revenue drivers for online business. Despite the great benefits, deploying personalized recommendation services typically requires the collection of users’ personal data for processing and analytics, which undesirably makes users susceptible to serious privacy violation issues. Therefore, it is of paramount importance to develop practical privacy-preserving techniques to maintain the intelligence of personalized recommendation services while respecting user privacy. In this paper, we provide a comprehensive survey of the literature related to personalized recommendation services with privacy protection. We present the general architecture of personalized recommendation systems, the privacy issues therein, and existing works that focus on privacy-preserving personalized recommendation services. We classify the existing works according to their underlying techniques for personalized recommendation and privacy protection, and thoroughly discuss and compare their merits and demerits, especially in terms of privacy and recommendation accuracy. We also identify some future research directions.
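One widely used privacy-preserving primitive in this literature is local differential privacy via randomized response: each user perturbs a binary preference before reporting it, and the aggregator debiases the tally. The sketch below is a generic illustration under that idea, not the method of any particular surveyed system.

```python
import math
import random

def randomized_response(bit, epsilon, rng=random):
    """Report a binary preference under epsilon-local differential privacy.

    With probability e^eps / (e^eps + 1) the true bit is sent; otherwise
    it is flipped, so no single report reveals the user's true value.
    """
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if rng.random() < p_truth else 1 - bit

def estimate_mean(reports, epsilon):
    """Debias aggregated reports to estimate the true fraction of 1s."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed + p - 1.0) / (2.0 * p - 1.0)

# Simulate 10 000 users, 30% of whom truly like an item (seeded RNG).
rng = random.Random(0)
true_bits = [1] * 3000 + [0] * 7000
reports = [randomized_response(b, 1.0, rng) for b in true_bits]
est = estimate_mean(reports, 1.0)
```

The tension the survey highlights is visible here: smaller epsilon means stronger privacy but noisier aggregates, and hence lower recommendation accuracy.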
With the development of sophisticated image editing and manipulation tools, the originality and authenticity of a digital image is usually hard to determine visually. In order to detect digital image forgeries, various kinds of digital image forensics techniques have been proposed in the last decade. Compared with active forensics approaches that require embedding additional information, passive forensics approaches are more popular due to their wider application scenario, and have attracted increasing academic and industrial research interest. Generally speaking, passive digital image forensics detects image forgeries based on the fact that there are certain intrinsic patterns in the original image left during image acquisition or storage, or specific patterns in image forgeries left during the image storage or editing. By analyzing the above patterns, the originality of an image can be authenticated. In this paper, a brief review of passive digital image forensic methods is presented in order to provide a comprehensive introduction on recent advances in this rapidly developing research area. These forensics approaches are divided into three categories based on the various kinds of traces they can be used to track—that is, traces left in image acquisition, traces left in image storage, and traces left in image editing. For each category, the forensics scenario, the underlying rationale, and state-of-the-art methodologies are elaborated. Moreover, the major limitations of the current image forensics approaches are discussed in order to point out some possible research directions or focuses in these areas.
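A representative passive technique for traces left in image editing is copy-move detection. The sketch below uses exact matching of pixel blocks on a synthetic grayscale image; practical detectors compare robust block features (such as DCT coefficients) so that matches survive compression, but the idea of flagging repeated regions is the same.

```python
from collections import defaultdict

def find_duplicate_blocks(img, block=4):
    """Locate identical pixel blocks, a telltale sign of copy-move forgery.

    `img` is a 2D list of grayscale values. Returns groups of top-left
    coordinates whose `block` x `block` regions match exactly.
    """
    seen = defaultdict(list)
    h, w = len(img), len(img[0])
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            key = tuple(img[y + dy][x + dx]
                        for dy in range(block) for dx in range(block))
            seen[key].append((y, x))
    return [locs for locs in seen.values() if len(locs) > 1]

# Synthetic 16x16 image with all-distinct pixels, then a pasted region:
# copy the 4x4 patch at (0, 0) onto (8, 8), simulating a forgery.
img = [[16 * y + x for x in range(16)] for y in range(16)]
for dy in range(4):
    for dx in range(4):
        img[8 + dy][8 + dx] = img[dy][dx]
dups = find_duplicate_blocks(img)
```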
Social influence analysis (SIA) is a vast research field that has attracted research interest in many areas. In this paper, we present a survey of representative and state-of-the-art work in models, methods, and evaluation aspects related to SIA. We divide SIA models into two types: microscopic and macroscopic models. Microscopic models consider human interactions and the structure of the influence process, whereas macroscopic models consider the same transmission probability and identical influential power for all users. We analyze social influence methods including influence maximization, influence minimization, flow of influence, and individual influence. In social influence evaluation, influence evaluation metrics are introduced and social influence evaluation models are then analyzed. The objectives of this paper are to provide a comprehensive analysis, aid in understanding social behaviors, provide a theoretical basis for influencing public opinion, and unveil future research directions and potential applications.
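A canonical microscopic model of the kind surveyed above is the independent cascade, in which each newly activated user gets one chance to activate each neighbor with an edge-specific probability. A minimal simulation follows; the toy graph and its probabilities are invented, and chosen so the example run is deterministic.

```python
import random

def independent_cascade(graph, seeds, rng=random):
    """Simulate the independent cascade model of social influence.

    `graph` maps node -> list of (neighbor, activation_probability).
    Returns the set of all activated nodes.
    """
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for node in frontier:
            for neighbor, p in graph.get(node, []):
                if neighbor not in active and rng.random() < p:
                    active.add(neighbor)
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return active

# Chain a -> b -> c always transmits; b -> d never does.
graph = {"a": [("b", 1.0)], "b": [("c", 1.0), ("d", 0.0)], "c": [], "d": []}
activated = independent_cascade(graph, {"a"})
```

Influence maximization then amounts to choosing the seed set that maximizes the expected size of `activated` over many such simulations.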
Given the challenges facing the cyberspace of the nation, this paper presents the tripartite theory of cyberspace, based on the status quo of cyberspace. Corresponding strategies and a research architecture are proposed for common public networks (C space), secure classified networks (S space), and key infrastructure networks (K space), based on their individual characteristics. The features and security requirements of these networks are then discussed. Taking C space as an example, we introduce the SMCRC (which stands for "situation awareness, monitoring and management, cooperative defense, response and recovery, and countermeasures and traceback") loop for constructing a cyberspace security ecosystem. Following a discussion on its characteristics and information exchange, our analysis focuses on the critical technologies of the SMCRC loop. To obtain more insight into national cyberspace security, special attention should be paid to global sensing and precise mapping, continuous detection and active management, cross-domain cooperation and systematic defense, autonomous response and rapid processing, and accurate traceback and countermeasure deterrence.
Cyberattack forms are complex and varied, and the detection and prediction of dynamic types of attack are always challenging tasks. Research on knowledge graphs is becoming increasingly mature in many fields. Recently, some scholars have combined the concept of the knowledge graph with cybersecurity in order to construct cybersecurity knowledge bases. This paper presents a cybersecurity knowledge base and deduction rules based on a quintuple model. Using machine learning, we extract entities and build ontology to obtain a cybersecurity knowledge base. New rules are then deduced by calculating formulas and using the path-ranking algorithm. The Stanford named entity recognizer (NER) is also used to train an extractor to extract useful information. Experimental results show that the Stanford NER provides many features, and that the useGazettes parameter may be used to train a recognizer in the cybersecurity domain in preparation for future work.
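The flavor of rule deduction over such a knowledge base can be sketched with a two-hop path-composition rule over (head, relation, tail) facts. The representation, relation names, and identifiers below are invented for illustration and deliberately simplify the paper's quintuple model and path-ranking algorithm.

```python
def deduce(facts, rule):
    """Apply one two-hop composition rule to (head, relation, tail) facts.

    `rule` is ((rel1, rel2), new_rel): whenever a -rel1-> b and
    b -rel2-> c both hold, infer a -new_rel-> c.
    """
    (r1, r2), new_rel = rule
    inferred = set()
    for h1, rel_a, t1 in facts:
        if rel_a != r1:
            continue
        for h2, rel_b, t2 in facts:
            if rel_b == r2 and h2 == t1:
                inferred.add((h1, new_rel, t2))
    return inferred

# Invented security facts: a vulnerability exploits a service that runs
# on a host, so the vulnerability can be inferred to threaten the host.
facts = {
    ("CVE-0001", "exploits", "service-x"),   # hypothetical identifiers
    ("service-x", "runs_on", "host-a"),
}
rule = (("exploits", "runs_on"), "threatens")
new_facts = deduce(facts, rule)
```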
The biggest bottleneck in DNA computing is exponential explosion, in which the DNA molecules used as data in information processing grow exponentially with an increase of problem size. To overcome this bottleneck and improve the processing speed, we propose a DNA computing model to solve the graph vertex coloring problem. The main points of the model are as follows: ① The exponential explosion problem is solved by dividing subgraphs, reducing the vertex colors without losing the solutions, and ordering the vertices in subgraphs; and ② the bio-operation times are reduced considerably by a designed parallel polymerase chain reaction (PCR) technology that dramatically improves the processing speed. In this article, a 3-colorable graph with 61 vertices is used to illustrate the capability of the DNA computing model. The experiment showed that not only are all the solutions of the graph found, but also more than 99% of false solutions are deleted when the initial solution space is constructed. The powerful computational capability of the model was based on specific reactions among the large number of nanoscale oligonucleotide strands. All these tiny strands are operated by DNA self-assembly and parallel PCR. After thousands of accurate PCR operations, the solutions were found by recognizing, splicing, and assembling. We also prove that the searching capability of this model is up to O(3⁵⁹). By means of an exhaustive search, it would take more than 896 000 years for an electronic computer (5 × 10¹⁴ s⁻¹) to achieve this enormous task. This searching capability is the largest among both the electronic and non-electronic computers that have been developed since the DNA computing model was proposed by Adleman’s research group in 2002 (with a searching capability of O(2²⁰)).
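The quoted search time can be reproduced from the abstract's own figures: an exhaustive scan of 3⁵⁹ candidates at 5 × 10¹⁴ operations per second, using a 365-day year.

```python
# Exhaustive search of 3^59 candidates at 5 * 10^14 operations per second.
seconds = 3 ** 59 / 5e14
years = seconds / (365 * 24 * 3600)   # about 8.96 * 10^5 years
```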
The service and application of a network is a behavioral process that is oriented toward its operations and tasks, whose metrics and evaluation are still somewhat of a rough comparison. This paper describes scenes of network behavior as differential manifolds. Using the homeomorphic transformation of smooth differential manifolds, we provide a mathematical definition of network behavior and propose a mathematical description of the network behavior path and behavior utility. Based on the principles of differential geometry, this paper puts forward a function of network behavior and a calculation method to determine behavior utility, and establishes the calculation principle of network behavior utility. We also provide a calculation framework for assessing a network’s attack-defense confrontation based on behavior utility. Therefore, this paper establishes a mathematical foundation for the objective measurement and precise evaluation of network behavior.
With the development of online social networks (OSNs) and modern smartphones, sharing photos with friends has become one of the most popular social activities. Since people usually prefer to give others a positive impression, impression management during photo sharing is becoming increasingly important. However, most of the existing privacy-aware solutions have two main drawbacks: ① Users must decide manually whether to share each photo with others or not, in order to build the desired impression; and ② users run a high risk of leaking sensitive relational information in group photos during photo sharing, such as their position as part of a couple, or their sexual identity. In this paper, we propose a social relation impression-management (SRIM) scheme to protect relational privacy and to automatically recommend an appropriate photo-sharing policy to users. To be more specific, we have designed a lightweight face-distance measurement that calculates the distances between users’ faces within group photos by relying on photo metadata and face-detection results. These distances are then transformed into relations using proxemics. Furthermore, we propose a relation impression evaluation algorithm to evaluate and manage relational impressions. We developed a prototype and employed 21 volunteers to verify the functionalities of the SRIM scheme. The evaluation results show the effectiveness and efficiency of our proposed scheme.
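The face-distance-to-relation step described above can be sketched as follows. The face representation (center and width from a detector's bounding box) and the proxemics thresholds below are illustrative assumptions for demonstration, not the calibrated values used by the SRIM scheme:

```python
import math

# Illustrative proxemics bands, expressed in multiples of the average
# face width so that the measure is independent of image resolution.
# These thresholds are assumptions for this sketch only.
PROXEMICS_BANDS = [
    (1.5, "intimate"),
    (4.0, "personal"),
    (12.0, "social"),
]

def face_relation(face_a, face_b):
    """Classify the relation implied by two detected faces.

    Each face is (center_x, center_y, width) in pixels, as might be
    derived from face-detection bounding boxes and photo metadata.
    """
    ax, ay, aw = face_a
    bx, by, bw = face_b
    dist = math.hypot(ax - bx, ay - by)   # pixel distance between faces
    scale = (aw + bw) / 2                 # normalize by average face width
    ratio = dist / scale
    for limit, label in PROXEMICS_BANDS:
        if ratio <= limit:
            return label
    return "public"

# Two faces one face-width apart fall in the closest band.
print(face_relation((100, 100, 60), (160, 100, 60)))
```

Normalizing by face width is a simple way to make the distance comparable across photos taken at different zoom levels; a deployed system would also need to account for depth and pose.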
Although many different views of social media coexist in the field of information systems (IS), such theories are usually not introduced within a consistent framework based on philosophical foundations. This paper introduces the dimensions of lifeworld and consideration of others. The concept of lifeworld includes Descartes’ rationality and Heidegger’s historicity, and consideration of others is based on instrumentalism and Heidegger’s “being-with.” These philosophical foundations elaborate a framework in which different archetypal theories applied to social media may be compared: Goffman’s presentation of self, Bourdieu’s social capital, Sartre’s existential project, and Heidegger’s “shared-world.” While Goffman has become a frequent reference in social media research, the three other references are innovative in IS research. The concepts of these four theories of social media are compared with empirical findings in the IS literature. While some of these concepts match the empirical findings, others have not yet been investigated in the use of social media, suggesting future research directions.
Given the increasingly pronounced segmentation of underground space by existing subway tunnels, it is difficult to develop and utilize underground space effectively and adequately in busy parts of a city. This study presents a combined construction technology developed for use in such underground spaces; it includes a deformation buffer layer, a special grouting technique, jump excavation by compartment, back-pressure portal frame technology, a reinforcement technique, and a steel partitioning drum or plate. These technologies have been used successfully in practical engineering. The combined construction technology presented in this paper provides a new method of solving key technical problems in underground spaces that cross existing subway tunnels, allowing such spaces to be used effectively. As this technology has achieved significant economic and social benefits, it holds value for future applications.
The total length of the second stage of the water supply project in the northern areas of the Xinjiang Uygur Autonomous Region is 540 km, of which the tunnels total 516 km. The total tunneling mileage is 569 km, which includes 49 gently inclined shafts and vertical shafts. Among the tunnels constructed in the project, the Ka–Shuang tunnel, a single tunnel with a length of 283 km, is currently the longest water-conveyance tunnel in the world. The main tunnel of the Ka–Shuang tunnel is divided into 18 tunnel-boring machine (TBM) sections and 34 drilling-and-blasting sections, with 91 tunnel faces. The construction of the Ka–Shuang tunnel is an unprecedented challenge for project construction management, risk control, and safe and efficient construction; it also imposes higher requirements on the design, manufacture, operation, and maintenance of the TBMs and their supporting equipment. Based on the engineering characteristics and adverse geological conditions, it is necessary to analyze the major problems confronting the construction and to systematically locate disaster sources. In addition, the risk level should be reasonably ranked, responsibility should be clearly identified, and a hierarchical-control mechanism should be established. Several techniques are put forward in this paper to achieve these objectives, including advanced geological prospecting techniques, intelligent tunneling techniques combined with the sensing and fusion of information about rock and mechanical parameters, monitoring and early-warning techniques, and modern information technologies. The application of these techniques offers scientific guidance for risk control and puts forward technical ideas for improving the efficiency of safe tunneling. These techniques and ideas are of great significance for the development of modern tunneling technologies and research into major construction equipment.
The New Austrian Tunneling Method (NATM) has been widely used in the construction of mountain tunnels, urban metro lines, underground storage tanks, underground power houses, mining roadways, and so on. The variation patterns of advance geological prediction data, the stress–strain data of supporting structures, and the deformation data of the surrounding rock are vitally important in assessing the rationality and reliability of construction schemes, and provide essential information for ensuring the safety and scheduling of tunnel construction. However, as the quantity of these data increases significantly, their uncertainty and discreteness make it extremely difficult to produce a reasonable construction scheme; they also reduce the forecast accuracy for accidents and dangerous situations, creating huge challenges for tunnel construction safety. To solve this problem, a novel data service system is proposed that uses data-association technology and the NATM, with the support of a big data environment. This system can integrate data resources from distributed monitoring sensors during the construction process, and then identify associations and build relations among the data resources gathered under the same construction conditions. These data associations and relations are then stored in a data pool. As the data pool is developed and supplemented, similar relations can be reused under similar conditions, providing data references for construction schematic designs and resource allocation. The proposed data service system also provides valuable guidance for the construction of similar projects.
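The data-pool idea can be sketched as a condition-keyed store that returns previously learned associations when queried under similar conditions. The condition names, values, and similarity rule below are illustrative assumptions for this sketch, not the paper's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class DataPool:
    """Toy data pool: associations between monitoring readings are
    stored under the construction conditions that produced them and
    retrieved again under similar conditions."""
    records: list = field(default_factory=list)

    def add(self, conditions, association):
        """conditions: dict mapping condition name -> numeric value."""
        self.records.append((conditions, association))

    def similar(self, query, tolerance=0.1):
        """Return associations whose stored conditions all lie within
        `tolerance` relative difference of the queried values."""
        hits = []
        for cond, assoc in self.records:
            if all(
                key in cond
                and abs(cond[key] - val) <= tolerance * max(abs(val), 1e-9)
                for key, val in query.items()
            ):
                hits.append(assoc)
        return hits

pool = DataPool()
pool.add({"rock_grade": 3, "overburden_m": 120},
         "crown settlement correlates with face advance rate")
# A nearby overburden and the same rock grade retrieve the relation.
print(pool.similar({"rock_grade": 3, "overburden_m": 115}))
```

In a real system the similarity rule would be far richer (geological class matching, time windows, sensor provenance), but the lookup pattern, store under conditions and query by closeness, is the core of the proposal.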
An increasing number of tunnels are being constructed with tunnel-boring machines (TBMs) because of the greater efficiency and shorter completion times that result from their use. However, when a TBM encounters adverse geological conditions in the course of tunnel construction (e.g., karst caves, faults, or fractured zones), disasters such as water and mud inrush, collapse, or machine blockage may result and may severely imperil construction safety. Therefore, the advance detection of adverse geology and water-bearing conditions in front of the tunnel face is of great importance. This paper uses the TBM tunneling of the water conveyance project from the Songhua River as a case study to propose a comprehensive forward geological prospecting technical system suitable for TBM tunnel construction under complicated geological conditions. By combining geological analysis with forward geological prospecting using a three-dimensional (3D) induced polarization method and a 3D seismic method, the system can accurately forecast water-inrush geohazards or faults in front of the TBM tunnel face. In this way, disasters such as water and mud inrush, collapse, or machine blockage can be avoided. The prospecting technical system also serves as a reference for the forward prospecting of adverse geology in future TBM tunneling and for ensuring that a TBM can work efficiently.