With the popularization of the Internet, the permeation of sensor networks, the emergence of big data, the growth of the information community, and the interlinking and fusion of data and information across human society, physical space, and cyberspace, the information environment surrounding the development of artificial intelligence (AI) has changed profoundly. The goals of AI face important adjustments and its scientific foundations confront new breakthroughs as AI enters a new stage: AI 2.0. This paper briefly reviews the 60-year developmental history of AI, analyzes the external environment promoting the formation of AI 2.0 along with the changes in its goals, and describes both the beginnings of the technology and the core ideas behind AI 2.0 development. Furthermore, based on the combined social demands and information environment of China’s development, suggestions on the development of AI 2.0 are given.
Systems neuroengineering refers to the use of engineering tools and technologies to image, decode, and modulate the brain in order to comprehend its functions and to repair its dysfunction. In this paper, we review the current state-of-the-art techniques used for understanding the inner workings of the brain at a systems level. The neural activity that governs our everyday lives involves an intricate coordination of many processes that can be attributed to a variety of brain regions. On the surface, many of these functions can appear to be controlled by specific anatomical structures; however, in reality, numerous dynamic networks within the brain contribute to its function through an interconnected web of neuronal and synaptic pathways. The brain, in its healthy or pathological state, can therefore be best understood by taking a systems-level approach. While numerous neuroengineering technologies exist, we focus here on three major thrusts in the field of systems neuroengineering: neuroimaging, neural interfacing, and neuromodulation. Neuroimaging enables us to delineate the structural and functional organization of the brain, which is key to understanding how the neural system functions in both normal and disease states. Based on such knowledge, devices can be used either to communicate with the neural system, as in neural interface systems, or to modulate brain activity, as in neuromodulation systems. The consideration of these three fields is key to the development and application of neuro-devices. Feedback-based neuro-devices require the ability to sense neural activity (via a neuroimaging modality) through a neural interface (invasive or noninvasive) and ultimately to select a set of stimulation parameters in order to alter neural function via a neuromodulation modality.
Interactions between these fields will help to shape the future of systems neuroengineering—to develop neurotechniques for enhancing the understanding of whole-brain function and dysfunction, and the management of neurological and mental disorders.
Since its inception, endoscopy has aimed to establish an immediate diagnosis that is virtually consistent with a histologic diagnosis. In the past decade, confocal laser scanning microscopy has been brought into endoscopy, thus enabling in vivo microscopic tissue visualization with a magnification and resolution comparable to those obtained with the ex vivo microscopy of histological specimens. The major challenge in the development of instrumentation lies in the miniaturization of a fiber-optic probe for microscopic imaging with micron-scale resolution. Here, we present the design and construction of a confocal endoscope based on a fiber bundle with 1.4 μm lateral resolution and an imaging speed of 8 frames per second (fps). The fiber-optic probe has a diameter of 2.6 mm, which is compatible with the biopsy channel of a conventional endoscope. The prototype confocal endoscope has been used to observe epithelial cells in the gastrointestinal tracts of mice and will be further demonstrated in clinical trials. In addition, the confocal endoscope can be used for translational studies of epithelial function in order to monitor how molecules work and how cells interact in their natural environment.
Viral load measurements are an essential tool for the long-term clinical care of human immunodeficiency virus (HIV)-positive individuals. The gold standards in viral load instrumentation, however, are still too limited by their size, cost, and sophisticated operation for these measurements to be ubiquitous in remote settings with poor healthcare infrastructure, including parts of the world that are disproportionately affected by HIV infection. The challenge of developing a point-of-care platform capable of making viral load measurement more accessible has been frequently approached, but no solution has yet emerged that meets the practical requirements of low cost, portability, and ease of use. In this paper, we perform reverse-transcription loop-mediated isothermal amplification (RT-LAMP) on minimally processed HIV-spiked whole blood samples with a microfluidic and silicon microchip platform, and perform fluorescence measurements with a consumer smartphone. Our integrated assay shows amplification from as few as three viruses in a ~60 nL RT-LAMP droplet, corresponding to a whole blood concentration of 670 viruses per μL. The technology holds even greater power in a digital RT-LAMP approach that could be scaled up for the determination of viral load from a finger prick of blood in the clinical care of HIV-positive individuals. We demonstrate that all aspects of this viral load approach, from a drop of blood to imaging the RT-LAMP reaction, are compatible with lab-on-a-chip components and mobile instrumentation.
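Although the abstract does not specify the authors' quantification method, the digital RT-LAMP concept it mentions is conventionally quantified with Poisson statistics: the fraction of droplets that fail to amplify gives the mean number of template copies per droplet. The following sketch (function name and droplet counts are hypothetical, not from the paper) illustrates that calculation, assuming the ~60 nL droplet volume mentioned above:

```python
import math

def digital_concentration(n_positive, n_total, droplet_volume_ul):
    """Estimate target concentration (copies/uL) in a digital assay.

    Standard Poisson correction: the negative-droplet fraction
    f_neg = exp(-lam), where lam is the mean copies per droplet.
    """
    f_neg = (n_total - n_positive) / n_total
    if f_neg <= 0:
        raise ValueError("all droplets positive; sample too concentrated")
    lam = -math.log(f_neg)          # mean copies per droplet
    return lam / droplet_volume_ul  # copies per microliter of droplet volume

# Hypothetical counts: 300 of 1000 droplets of ~60 nL (0.06 uL) amplify
conc = digital_concentration(300, 1000, 0.06)
```

With all-positive droplets the estimate diverges, which is why digital assays are run at dilutions where a measurable fraction of droplets stays negative.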
Cutting-edge technologies in optical molecular imaging have ushered in new frontiers in cancer research, clinical translation, and medical practice, as evidenced by recent advances in optical multimodality imaging, Cerenkov luminescence imaging (CLI), and optical image-guided surgeries. These new capabilities allow in vivo cancer imaging with a sensitivity and accuracy that are unprecedented among conventional imaging approaches. The visualization of cellular and molecular behaviors and events within tumors in living subjects is deepening our understanding of tumors at a systems level. These advances are being rapidly used to capture tumor-to-tumor molecular heterogeneity, both dynamically and quantitatively, as well as to achieve more effective therapeutic interventions with the assistance of real-time imaging. In the era of molecular imaging, optical technologies hold great promise for facilitating the development of highly sensitive cancer diagnoses as well as personalized patient treatment—one of the ultimate goals of precision medicine.
This paper presents findings from an investigation of the large-scale construction solid waste (CSW) landslide that occurred at a landfill in Shenzhen, Guangdong, China, on December 20, 2015, killing 77 people and destroying 33 houses. The landslide involved 2.73 × 10⁶ m³ of CSW and affected an area about 1100 m in length and 630 m in maximum width, making it the largest landfill landslide in the world. The investigation of this disaster used a combination of unmanned aerial vehicle surveillance and multistage remote-sensing images to reveal the increasing volume of waste in the landfill and the shifting shape of the landfill slope for nearly two years before the landslide took place, beginning with the creation of the CSW landfill in March 2014. This history resulted in uncertainty about the landfill’s boundary conditions and an unstable hydrologic state, so applying the conventional stability analysis methods used for natural landslides to this case would be difficult. In order to analyze this disaster, we adopted a multistage modeling technique to analyze the varied characteristics of the landfill slope’s structure at various stages of CSW dumping, and used non-steady flow theory to address the groundwater seepage problem. The investigation showed that the landfill could be divided into two units based on moisture content: ① a front unit, consisting of the landfill slope, which had a low water content; and ② a rear unit, consisting of fresh waste, which had a high water content. This structure gave rise to two effects—surface-water infiltration and consolidation seepage—that triggered the landslide in the landfill. Surface-water infiltration induced a gradual increase in the pore water pressure head, or piezometric head, in the front slope, because the infiltrating position rose as the volume of waste placement increased. Consolidation seepage led to higher excess pore water pressures as the loading of waste increased.
We also investigated the post-failure soil dynamics parameters of the landslide deposit using cone penetration, triaxial, and ring-shear tests in order to simulate the characteristics of a flowing slide with a long run-out due to the liquefaction effect. Finally, we conclude the paper with lessons from the dozens of catastrophic municipal solid waste landslides that have occurred around the world and discuss how to better manage the geotechnical risks of urbanization.
HPR1000 is an advanced nuclear power plant (NPP) with the significant feature of an active and passive safety design philosophy, developed by the China National Nuclear Corporation. On one hand, it is an evolutionary design based on proven technology of the existing pressurized water reactor NPP; on the other hand, it incorporates advanced design features including a 177-fuel-assembly core loaded with CF3 fuel assemblies, active and passive safety systems, comprehensive severe accident prevention and mitigation measures, enhanced protection against external events, and improved emergency response capability. Extensive verification experiments and tests have been performed for critical innovative improvements on passive systems, the reactor core, and the main equipment. The design of HPR1000 fulfills the international utility requirements for advanced light water reactors and the latest nuclear safety requirements, and addresses the safety issues relevant to the Fukushima accident. Along with its outstanding safety and economy, HPR1000 provides an excellent and practicable solution for both domestic and international nuclear power markets.
A future smart grid must fulfill the vision of the Energy Internet in which millions of people produce their own energy from renewables in their homes, offices, and factories and share it with each other. Electric vehicles and local energy storage will be widely deployed. Internet technology will be utilized to transform the power grid into an energy-sharing inter-grid. To prepare for the future, a smart grid with intelligent periphery, or smart GRIP, is proposed. The building blocks of GRIP architecture are called clusters and include an energy-management system (EMS)-controlled transmission grid in the core and distribution grids, micro-grids, and smart buildings and homes on the periphery; all of which are hierarchically structured. The layered architecture of GRIP allows a seamless transition from the present to the future and plug-and-play interoperability. The basic functions of a cluster consist of ① dispatch, ② smoothing, and ③ mitigation. A risk-limiting dispatch methodology is presented; a new device, called the electric spring, is developed for smoothing out fluctuations in periphery clusters; and means to mitigate failures are discussed.
Based on the construction of an 8-inch fabrication line, advanced 8-inch wafer process technology, together with the fourth-generation high-voltage double-diffused metal-oxide semiconductor (DMOS+) insulated-gate bipolar transistor (IGBT) technology and the fifth-generation trench-gate IGBT technology, has been developed, realizing a great leap forward in the manufacturing of high-voltage IGBTs from 6-inch to 8-inch wafers. The 1600 A/1.7 kV and 1500 A/3.3 kV IGBT modules have been successfully fabricated, qualified, and applied in rail transportation traction systems.
Visual prostheses are now entering the clinical marketplace. Such prostheses were originally targeted for patients suffering from blindness through retinitis pigmentosa (RP). However, in late July of this year, for the first time a patient was given a retinal implant in order to treat dry age-related macular degeneration. Retinal implants are suitable solutions for diseases that attack photoreceptors but spare most of the remaining retinal neurons. For eye diseases that result in loss of retinal output, implants that interface with more central structures in the visual system are needed. The standard site for central visual prostheses under development is the visual cortex. This perspective discusses the technical and socioeconomic challenges faced by visual prostheses.
In 2011, the Chinese Academy of Sciences launched an engineering project to develop an accelerator-driven subcritical system (ADS) for nuclear waste transmutation. The China Lead-based Reactor (CLEAR), proposed by the Institute of Nuclear Energy Safety Technology, was selected as the reference reactor for ADS development, as well as for the technology development of the Generation IV lead-cooled fast reactor. The conceptual design of CLEAR-I, with 10 MW thermal power, has been completed. KYLIN series lead-bismuth eutectic experimental loops have been constructed to investigate the technologies of the coolant, key components, structural materials, fuel assembly, operation, and control. In order to validate and test the key components and integrated operating technology of the lead-based reactor, the lead alloy-cooled non-nuclear reactor CLEAR-S, the lead-based zero-power nuclear reactor CLEAR-0, and the lead-based virtual reactor CLEAR-V are under development.
This paper summarizes the development of hydro-projects in China, blended with an international perspective. It expounds major technical progress toward ensuring the safe construction of high dams and river harnessing, and covers the theorization of uneven non-equilibrium sediment transport, inter-basin water diversion, giant hydro-generator units, pumped storage power stations, underground caverns, ecological protection, and so on.
After the first concrete was poured on December 9, 2012 at the Shidao Bay site in Rongcheng, Shandong Province, China, the construction of the reactor building for the world’s first high-temperature gas-cooled reactor pebble-bed module (HTR-PM) demonstration power plant was completed in June 2015. Installation of the main equipment then began, and the power plant is currently progressing well toward connecting to the grid at the end of 2017. The thermal power of a single HTR-PM reactor module is 250 MWth, the helium temperatures at the reactor core inlet/outlet are 250 °C/750 °C, and steam at 13.25 MPa/567 °C is produced at the steam generator outlet. Two HTR-PM reactor modules are connected to one steam turbine to form a 210 MWe nuclear power plant. Thanks to China’s industrial capability, we were able to overcome great difficulties, manufacture first-of-a-kind equipment, and realize a series of major technological innovations. We have achieved successful results in many aspects, including planning and implementing R&D, establishing an industrial partnership, manufacturing equipment, fuel production, licensing, site preparation, and balancing safety and economics; the experience obtained may also serve as a reference for the global nuclear community.
To grow high-quality and large-size monocrystalline silicon at low cost, we proposed a single-seed casting technique. To realize this technique, two challenges—polycrystalline nucleation on the crucible wall and dislocation multiplication inside the crystal—needed to be addressed. Numerical analysis was used to develop solutions for these challenges. Based on an optimized furnace structure and operating conditions from numerical analysis, experiments were performed to grow monocrystalline silicon using the single-seed casting technique. The results revealed that this technique is highly superior to the popular high-performance multicrystalline and multiseed casting mono-like techniques.
The pressurized water reactor CAP1400 is one of the sixteen National Science and Technology Major Projects. Developed from China’s nuclear R&D system and manufacturing capability, as well as from the introduction and assimilation of AP1000 technology, CAP1400 is an advanced large passive nuclear power plant with independent intellectual property rights. By discussing its top-level design principles, main performance objectives, general parameters, safety design, and important improvements in safety, economy, and other advanced features, this paper reveals the technological innovation and competitiveness of CAP1400 as an internationally promising Gen-III PWR model. Moreover, the R&D of CAP1400 has greatly promoted the advancement of China’s domestic nuclear power industry from the Gen-II to the Gen-III level.
This paper gives a historical review of in-vessel melt retention (IVR), a severe accident mitigation measure extensively applied in Generation III pressurized water reactors (PWRs). The idea of IVR actually originated from the back-fitting of the Generation II reactor Loviisa VVER-440 in order to cope with the core-melt risk. It was then employed in new designs such as the Westinghouse AP1000, the Korean APR1400, and the Chinese advanced PWR designs HPR1000 and CAP1400. The phenomena with the greatest influence on the IVR strategy are in-vessel core melt evolution, the heat fluxes imposed on the vessel by the molten core, and the external cooling of the reactor pressure vessel (RPV). For in-vessel melt evolution, past focus has been placed only on melt pool convection in the lower plenum of the RPV; however, through our review and analysis, we believe that other in-vessel phenomena, including core degradation and relocation, debris formation and coolability, and melt pool formation, may all contribute to the final state of the melt pool and its thermal loads on the lower head. By looking into previous research on relevant topics, we aim to identify the missing pieces of the picture. Based on the state of the art, we conclude by proposing future research needs.
The installation of vast quantities of additional new sensing and communication equipment, in conjunction with building the computing infrastructure to store and manage data gathered by this equipment, has been the first step in the creation of what is generically referred to as the “smart grid” for the electric transmission system. With this enormous capital investment in equipment having been made, attention is now focused on developing methods to analyze and visualize this large data set. The most direct use of this large set of new data will be in data visualization. This paper presents a survey of some visualization techniques that have been deployed by the electric power industry for visualizing data over the past several years. These techniques include pie charts, animation, contouring, time-varying graphs, geographic-based displays, image blending, and data aggregation techniques. The paper then emphasizes a newer concept of using word-sized graphics called sparklines as an extremely effective method of showing large amounts of time-varying data.
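The idea behind sparklines is that a whole time series can be compressed into a word-sized glyph placed inline with text or in a table cell. As a minimal illustration (not from the paper, and using text characters rather than the graphics an industry display would render), a series can be mapped onto Unicode block characters:

```python
def sparkline(values):
    """Render a word-sized inline graphic (sparkline) from a numeric series."""
    blocks = "▁▂▃▄▅▆▇█"
    lo, hi = min(values), max(values)
    span = hi - lo or 1  # avoid division by zero for a flat series
    return "".join(blocks[int((v - lo) / span * (len(blocks) - 1))] for v in values)

# Hypothetical bus-voltage samples (per unit) over one hour
voltages = [1.00, 1.01, 0.99, 0.97, 0.95, 0.96, 0.98, 1.00, 1.02, 1.01]
print("Bus 12 V(pu): " + sparkline(voltages))
```

Because each series occupies only one table cell, hundreds of buses or tie lines can be scanned at a glance, which is exactly the "large amounts of time-varying data" use case described above.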
This paper presents an overview of the current status of the development of the smart grid in Great Britain (GB). The definition, policy and technical drivers, incentive mechanisms, technological focus, and the industry's progress in developing the smart grid are described. In particular, the Low Carbon Networks Fund and Electricity Network Innovation Competition projects, together with the rollout of smart metering, are detailed. A more observable, controllable, automated, and integrated electricity network will be supported by these investments in conjunction with smart meter installation. It is found that the focus has mainly been on distribution networks as well as on real-time flows of information and interaction between suppliers and consumers facilitated by improved information and communications technology, active power flow management, demand management, and energy storage. The learning from the GB smart grid initiatives will provide valuable guidelines for future smart grid development in GB and other countries.
Energy production based on fossil fuel reserves is largely responsible for carbon emissions, and hence global warming. The planet needs concerted action to reduce fossil fuel usage and to implement carbon mitigation measures. Ocean energy has huge potential, but there are major interdisciplinary problems to be overcome regarding technology, cost reduction, investment, environmental impact, governance, and so forth. This article briefly reviews ocean energy production from offshore wind, tidal stream, ocean current, tidal range, wave, thermal, salinity gradients, and biomass sources. Future areas of research and development are outlined that could make exploitation of the marine renewable energy (MRE) seascape a viable proposition; these areas include energy storage, advanced materials, robotics, and informatics. The article concludes with a sustainability perspective on the MRE seascape encompassing ethics, legislation, the regulatory environment, governance and consenting, economic, social, and environmental constraints. A new generation of engineers is needed with the ingenuity and spirit of adventure to meet the global challenge posed by MRE.
High-speed and precision positioning are fundamental requirements for high-acceleration low-load mechanisms in integrated circuit (IC) packaging equipment. In this paper, we derive the transient nonlinear dynamic-response equations of high-acceleration mechanisms, which reveal that stiffness, frequency, damping, and driving frequency are the primary factors. Therefore, we propose a new structural optimization and velocity-planning method for the precision positioning of a high-acceleration mechanism, based on the optimal spatial and temporal distribution of inertial energy. For structural optimization, we first reviewed the commonly used flexible multibody dynamic optimization based on the equivalent static load method (ESLM), and then selected the modified ESLM for the optimal spatial distribution of inertial energy; hence, not only the stiffness but also the inertia and frequency of the real modal shapes are considered. For velocity planning, we developed a new velocity-planning method based on nonlinear dynamic-response optimization with varying motion conditions. Our method was verified on a high-acceleration die bonder. The amplitude of residual vibration was decreased by more than 20% via structural optimization, and the positioning time was reduced by more than 40% via asymmetric variable velocity planning. This method provides effective theoretical support for the precision positioning of high-acceleration low-load mechanisms.
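One simple instance of asymmetric velocity planning (a sketch under assumed, hypothetical parameters, not the authors' optimized profiles) is a trapezoidal profile whose acceleration and deceleration limits differ, so the approach to the target is gentler and excites less residual vibration:

```python
def asym_trapezoid(distance, v_max, a_acc, a_dec, dt=1e-4):
    """Sample an asymmetric trapezoidal velocity profile at period dt.

    A hard acceleration phase (a_acc) and a gentler deceleration phase
    (a_dec) bracket a constant-velocity cruise. Assumes the move is long
    enough to reach v_max.
    """
    t_acc = v_max / a_acc
    t_dec = v_max / a_dec
    d_cruise = distance - 0.5 * v_max * (t_acc + t_dec)
    assert d_cruise >= 0, "move too short to reach v_max"
    t_cruise = d_cruise / v_max
    total = t_acc + t_cruise + t_dec
    profile, t = [], 0.0
    while t <= total:
        if t < t_acc:
            v = a_acc * t                                  # acceleration ramp
        elif t < t_acc + t_cruise:
            v = v_max                                      # cruise
        else:
            v = max(0.0, v_max - a_dec * (t - t_acc - t_cruise))  # gentle decel
        profile.append(v)
        t += dt
    return profile

# Hypothetical die-bonder move: 10 mm at 0.5 m/s, 50 m/s^2 accel, 20 m/s^2 decel
profile = asym_trapezoid(0.01, 0.5, 50.0, 20.0)
```

In practice the three segments would themselves be tuned against the mechanism's measured dynamic response, which is what the nonlinear dynamic-response optimization described above addresses.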
Modeling vapor pressure is crucial for studying the moisture reliability of microelectronics, as high vapor pressure can cause device failures in environments with high temperature and humidity. To minimize the impact of vapor pressure, a super-hydrophobic (SH) coating can be applied on the exterior surface of devices in order to prevent moisture penetration. The underlying mechanism of SH coating for enhancing device reliability, however, is still not fully understood. In this paper, we present several existing theories for predicting vapor pressure within microelectronic materials. In addition, we discuss the mechanism and effectiveness of SH coating in preventing water vapor from entering a device, based on experimental results. Two theoretical models, a micro-mechanics-based whole-field vapor pressure model and a convection-diffusion model, are described for predicting vapor pressure. Both methods have been successfully used to explain experimental results on uncoated samples. However, when a device was coated with an SH nanocomposite, weight gain was still observed, likely due to vapor penetration through the SH surface. This phenomenon may cast doubt on the effectiveness of SH coatings in microelectronic devices. Based on current theories and the available experimental results, we conclude that it is necessary to develop a new theory to understand how water vapor penetrates through SH coatings and impacts the materials underneath. Such a theory could greatly improve microelectronics reliability.
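As background for the convection-diffusion model mentioned above, moisture uptake into a package material is often idealized as 1D Fickian diffusion. The following finite-difference sketch (all parameter values are hypothetical, not the paper's) shows how a concentration profile develops from a humid surface into a slab:

```python
def moisture_profile(D, thickness, c_surface, t_end, nx=50, safety=0.4):
    """Explicit finite-difference solution of 1D Fickian diffusion
    into a slab: dc/dt = D * d2c/dx2.

    The surface (x = 0) is held at the ambient saturated concentration;
    the far boundary (x = L) is treated as impermeable (zero flux).
    """
    dx = thickness / (nx - 1)
    dt = safety * dx * dx / D  # stay under the explicit-scheme stability limit
    c = [0.0] * nx
    c[0] = c_surface
    t = 0.0
    while t < t_end:
        new = c[:]
        for i in range(1, nx - 1):
            new[i] = c[i] + D * dt / dx**2 * (c[i+1] - 2*c[i] + c[i-1])
        new[-1] = new[-2]   # zero-flux far boundary
        new[0] = c_surface  # fixed surface concentration
        c = new
        t += dt
    return c

# Hypothetical values: D = 1e-12 m^2/s, 0.5 mm slab, saturated surface, 1 h soak
profile = moisture_profile(D=1e-12, thickness=5e-4, c_surface=1.0, t_end=3600.0)
```

A coating that truly blocked vapor would appear in such a model as a near-zero effective D at the surface node; the weight gain reported above for SH-coated samples suggests the real boundary condition is more subtle.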
This paper collects and synthesizes the technical requirements, implementation, and validation methods for quasi-steady agent-based simulations of interconnection-scale models with particular attention to the integration of renewable generation and controllable loads. Approaches for modeling aggregated controllable loads are presented and placed in the same control and economic modeling framework as generation resources for interconnection planning studies. Model performance is examined with system parameters that are typical for an interconnection approximately the size of the Western Electricity Coordinating Council (WECC) and a control area about 1/100 the size of the system. These results are used to demonstrate and validate the methods presented.
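As a toy illustration of the quasi-steady agent-based approach (entirely hypothetical: a thousand on/off loads reacting to an hourly price signal, nothing like the full WECC-scale framework), aggregate controllable-load demand can be stepped through time as follows:

```python
import random

def simulate_aggregate_load(n_agents=1000, steps=24, seed=1):
    """Minimal quasi-steady agent-based sketch.

    Each agent is an on/off controllable load that responds
    probabilistically to a normalized price signal once per hour;
    aggregate demand is the count of 'on' agents times a rated power.
    """
    rng = random.Random(seed)
    rated_kw = 4.0
    # Hypothetical hourly price signal (normalized 0..1): cheap off-peak, dear peak
    price = [0.3 if h < 16 else 0.9 for h in range(steps)]
    demand = []
    for h in range(steps):
        on = sum(1 for _ in range(n_agents) if rng.random() > price[h])
        demand.append(on * rated_kw)
    return demand

demand = simulate_aggregate_load()
```

Even this crude model reproduces the qualitative behavior that matters for planning studies: aggregate demand falls when the price signal rises, which is the control-economic coupling the framework above formalizes.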
The research roots of fluorine-19 (19F) magnetic resonance imaging (MRI) date back over 35 years. Over that time span, 1H imaging flourished and was adopted worldwide, with an endless array of applications and imaging approaches, making magnetic resonance an indispensable pillar of biomedical diagnostic imaging. For many years during this timeframe, 19F imaging research continued at a slow pace as the various attributes of the technique were explored. However, over the last decade, and particularly the last several years, the pace and clinical relevance of 19F imaging have exploded. In part, this is due to advances in MRI instrumentation, 19F/1H coil designs, and ultrafast pulse sequence development for both preclinical and clinical scanners. These achievements, coupled with interest in the molecular imaging of anatomy and physiology, and combined with a cadre of innovative agents, have brought the concept of 19F imaging into early clinical evaluation. In this review, we attempt to provide a slice of this rich history of research and development, with a particular focus on liquid perfluorocarbon compound-based agents.
A high-throughput multi-plume pulsed-laser deposition (MPPLD) system has been demonstrated and compared to previous techniques. Whereas most combinatorial pulsed-laser deposition (PLD) systems have focused on achieving thickness uniformity using sequential multilayer deposition and masking followed by post-deposition annealing, MPPLD directly deposits a compositionally varied library of compounds using the directionality of PLD plumes and the resulting spatial variations of deposition rate. This system is more suitable for high-throughput compound thin-film fabrication.
The ultrasonic backscatter technique has shown promise as a noninvasive tool for cancellous bone assessment. A novel ultrasonic backscatter bone diagnostic (UBBD) instrument and an in vivo application for neonatal bone evaluation are introduced in this study. The UBBD offers several advantages, including noninvasiveness, non-ionizing radiation, portability, and simplicity. In this study, the backscatter signal could be measured within 5 s using the UBBD. Ultrasonic backscatter measurements were performed on 467 neonates (268 males and 199 females) at the left calcaneus. The backscatter signal was measured at a central frequency of 3.5 MHz. The delay (T1) and duration (T2) of the backscatter signal of interest (SOI) were varied, and the apparent integrated backscatter (AIB), frequency slope of apparent backscatter (FSAB), zero frequency intercept of apparent backscatter (FIAB), and spectral centroid shift (SCS) were calculated. The results showed that SOI selection had a direct influence on cancellous bone evaluation. The AIB and FIAB were positively correlated with gestational age (|R| up to 0.45, P < 0.001) when T1 was short (< 8 µs), while negative correlations (|R| up to 0.56, P < 0.001) were commonly observed for T1 > 10 µs. Moderate positive correlations (|R| up to 0.45, P < 0.001) were observed for FSAB and SCS with gestational age when T1 was long (> 10 µs). T2 mainly introduced fluctuations in the observed correlation coefficients. The moderate correlations observed with the UBBD demonstrate the feasibility of using the backscatter signal to evaluate neonatal bone status. This study also proposes an explicit standard for in vivo SOI selection and neonatal cancellous bone assessment.
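For readers unfamiliar with the backscatter parameters above, AIB is essentially the average backscattered power (in dB) within an analysis band, computed from the SOI selected by the delay T1 and duration T2. A minimal sketch follows; the window indices, band edges, and the naive DFT are illustrative choices, not the instrument's implementation:

```python
import cmath, math

def apparent_integrated_backscatter(signal, fs, t1, t2, f_lo, f_hi):
    """Mean power (dB) of the SOI spectrum over an analysis band.

    The SOI is the segment starting at delay t1 (s) with duration t2 (s),
    mirroring the T1/T2 windowing described above. A naive DFT is used
    for clarity; a real implementation would use an FFT.
    """
    i0, i1 = round(t1 * fs), round((t1 + t2) * fs)
    soi = signal[i0:i1]
    n = len(soi)
    powers = []
    for k in range(n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            x = sum(soi[m] * cmath.exp(-2j * math.pi * k * m / n) for m in range(n))
            powers.append(abs(x) ** 2 / n)
    return 10 * math.log10(sum(powers) / len(powers))

# Hypothetical echo: a 3.5 MHz tone sampled at 50 MHz
rf = [math.sin(2 * math.pi * 3.5e6 * m / 50e6) for m in range(600)]
aib = apparent_integrated_backscatter(rf, 50e6, 8e-6, 4e-6, 3.0e6, 4.0e6)
```

Shifting t1 moves the SOI deeper into the bone, which is why the study above finds the sign of the gestational-age correlation flipping between short and long delays.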
Bionics (the imitation or abstraction of the “inventions of nature”) and, to an even greater extent, synthetic biology will be as relevant to engineering development and industry as the silicon chip was over the last 50 years. Chemical industries already use so-called “white biotechnology” for new processes, new raw materials, and the more sustainable use of resources. Synthetic biology is also used for the development of second-generation biofuels and for harvesting the sun’s energy with the help of tailor-made microorganisms or biomimetically designed catalysts. The market potential for bionics in medicine, engineering processes, and DNA storage is huge. “Moonshot” projects are already aggressively focusing on diseases and new materials, and a US-led competition is currently underway with the aim of creating a thousand new molecules. This article describes a timeline that starts with current projects and then moves on to code engineering projects and their implications, artificial DNA, signaling molecules, and biological circuitry. Beyond these projects, one of the next frontiers in bionics is the design of synthetic metabolisms that include artificial food chains and foods, along with the bioengineering of raw materials; all of these will lead to new insights into biological principles. Bioengineering will be an innovation motor, just as digitalization is today. This article discusses pertinent examples of bioengineering, particularly the use of alternative carbon-based biofuels and the techniques and perils of cell modification. Big data, analytics, and massive storage are important factors in this next frontier. Although synthetic biology will be as pervasive and transformative in the next 50 years as digitization and the Internet are today, its applications and impacts are still in nascent stages.
This article provides a general taxonomy in which the development of bioengineering is classified in five stages (DNA analysis, bio-circuits, minimal genomes, protocells, xenobiology) from the familiar to the unknown, with implications for safety and security, industrial development, and the development of bioengineering and biotechnology as an interdisciplinary field. Ethical issues and the importance of a public debate about the consequences of bionics and synthetic biology are discussed.
It has long been a dream in the electronics industry to be able to write out electronics directly, as simply as printing a picture onto paper with an office printer. The first-ever prototype of a liquid-metal printer has been invented and demonstrated by our lab, bringing this goal a key step closer. As part of a continuous endeavor, this work is dedicated to significantly extending such technology to the consumer level by making a very practical desktop liquid-metal printer for society in the near future. Through the industrial design and technical optimization of a series of key technical issues such as working reliability, printing resolution, automatic control, human-machine interface design, software, hardware, and integration between software and hardware, a high-quality personal desktop liquid-metal printer that is ready for mass production in industry was fabricated. Its basic features and important technical mechanisms are explained in this paper, along with demonstrations of several possible consumer end-uses for making functional devices such as light-emitting diode (LED) displays. This liquid-metal printer is an automatic, easy-to-use, and low-cost personal electronics manufacturing tool with many possible applications. This paper discusses important roles that the new machine may play for a group of emerging needs. The prospective future of this cutting-edge technology is outlined, along with a comparative interpretation of several historical printing methods. This desktop liquid-metal printer is expected to become a basic electronics manufacturing tool for a wide variety of emerging practices in the academic realm, in industry, and in education as well as for individual end-users in the near future.
Convection-enhanced delivery (CED) is a promising technique that leverages pressure-driven flow to increase the penetration of infused drugs into interstitial spaces. We have developed a fiberoptic microneedle device for inducing local sub-lethal hyperthermia in order to further improve CED drug-distribution volumes, and this study seeks to characterize this approach quantitatively in agarose tissue phantoms. Infusions of dye were conducted in 0.6% (w/w) agarose tissue phantoms under isothermal conditions at 15 °C, 20 °C, 25 °C, and 30 °C. Infusion metrics were quantified using a custom shadowgraphy setup and image-processing algorithm. These data were used to build an empirical predictive temporal model of distribution volume as a function of phantom temperature. A second set of proof-of-concept experiments evaluated a novel fiberoptic device capable of generating local photothermal heating during fluid infusion. The isothermal infusions showed a positive correlation between temperature and distribution volume, with the volume at 30 °C showing a 7-fold increase at 100 min over the 15 °C isothermal case. Infusions during photothermal heating (1064 nm at 500 mW) showed a similar effect, with a 3.5-fold increase at 4 h over the control (0 mW). These results and analyses characterize the heat-mediated enhancement of volumetric dispersal and provide insight into its underlying behavior.
Given the demand for constantly scaling microelectronic devices to ever smaller dimensions, the SiO2 gate dielectric has been replaced with a higher-dielectric-constant material, Hf(Zr)O2, in order to minimize current leakage through the dielectric thin film. However, upon interfacing with high-dielectric-constant (high-κ) dielectrics, the electron mobility in the conventional Si channel degrades due to Coulomb scattering, surface-roughness scattering, remote-phonon scattering, and dielectric-charge trapping. III-V semiconductors and Ge are two promising channel candidates with mobility superior to that of Si. Nevertheless, Hf(Zr)O2/III-V(Ge) interfaces exhibit much more complicated bonding than Si-based interfaces. The successful fabrication of a high-quality device depends critically on understanding and engineering the bonding configurations at Hf(Zr)O2/III-V(Ge) interfaces for the optimal design of device interfaces. Thus, accurate atomic-level insight into the interface bonding and the mechanism of interface gap-state formation is essential. Here, we use first-principles calculations to investigate the interface between HfO2 and GaAs. Our study shows that As−As dimer bonds, partially oxidized Ga (between the 3+ and 1+ states), and Ga dangling bonds constitute the major contributions to the gap states. These findings provide insightful guidance for optimal interface passivation.
Based on systematic experiments on the influence of air entrainment on rock-block stability in plunge pools impacted by high-velocity jets, this study presents adaptations of a physically based scour model. The modifications regarding jet aeration are implemented in the Comprehensive Scour Model (CSM), allowing it to reproduce the physical-mechanical processes involved in scour formation across the three phases involved, namely water, rock, and air. The enhanced method accounts for both the reduced momentum of an aerated jet and the decreased energy dissipation in the jet's diffusive shear layer, both of which result from the entrainment of air bubbles. Block ejection from the rock mass depends on a combination of the aerated time-averaged pressure coefficient and the modified maximum dynamic impulsion coefficient, which was found to take a constant value of 0.2 for high-velocity jets in deep pools. The modified model is applied to the observed scour hole at the Kariba Dam, with good agreement.