High-resolution reconstruction of flow-field data from low-resolution and noisy measurements is of interest due to the prevalence of such problems in experimental fluid mechanics, where the measurement data are in general sparse, incomplete and noisy. Deep-learning approaches have been shown suitable for such super-resolution tasks. However, a large number of high-resolution examples is needed, which may not be available in many cases. Moreover, the obtained predictions may not comply with physical principles, e.g. mass and momentum conservation. Physics-informed deep learning provides frameworks for integrating data and physical laws for learning. In this study, we apply physics-informed neural networks (PINNs) for super-resolution of flow-field data both in time and space from a limited set of noisy measurements without having any high-resolution reference data. Our objective is to obtain a continuous solution of the problem, providing a physically-consistent prediction at any point in the solution domain. We demonstrate the applicability of PINNs for the super-resolution of flow-field data in time and space through three canonical cases: Burgers' equation, two-dimensional vortex shedding behind a circular cylinder and the minimal turbulent channel flow. The robustness of the models is also investigated by adding synthetic Gaussian noise. Furthermore, we show the capabilities of PINNs to improve the resolution and reduce the noise in a real experimental dataset consisting of hot-wire-anemometry measurements. Our results show the capabilities of PINNs in the context of data augmentation for experiments in fluid mechanics.
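The core idea can be made concrete with a minimal sketch (not the authors' implementation): a network u(t, x) is fitted to sparse, noisy measurements while the residual of the 1D Burgers' equation, u_t + u u_x - nu u_xx = 0, is penalized at collocation points. The network size, viscosity value and names such as `net` are illustrative assumptions.

```python
import torch

# Minimal PINN sketch for the 1D Burgers' equation (illustrative only):
# fit u(t, x) to sparse noisy data while penalizing u_t + u*u_x - nu*u_xx.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
nu = 0.01 / torch.pi  # assumed viscosity

def pde_residual(t, x):
    t = t.clone().requires_grad_(True)
    x = x.clone().requires_grad_(True)
    u = net(torch.cat([t, x], dim=1))
    grad = lambda out, inp: torch.autograd.grad(out, inp, torch.ones_like(out), create_graph=True)[0]
    u_t, u_x = grad(u, t), grad(u, x)
    u_xx = grad(u_x, x)
    return u_t + u * u_x - nu * u_xx

def total_loss(t_d, x_d, u_d, t_c, x_c):
    data_loss = torch.mean((net(torch.cat([t_d, x_d], dim=1)) - u_d) ** 2)  # misfit to noisy data
    phys_loss = torch.mean(pde_residual(t_c, x_c) ** 2)                     # physics consistency
    return data_loss + phys_loss
```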
ISSN: 1361-6501
Launched in 1923, Measurement Science and Technology was the world's first scientific instrumentation and measurement journal and the first research journal produced by the Institute of Physics. It covers all aspects of the theory, practice and application of measurement, instrumentation and sensing across science and engineering.
Hamidreza Eivazi et al 2024 Meas. Sci. Technol. 35 075303
Jaqueline Stauffenberg et al 2024 Meas. Sci. Technol. 35 085011
This paper explores large area application of tip-based nanofabrication by field emission scanning probe lithography and showcases the simultaneous possibility of atomic force microscopy on macroscopic scales. This is made possible by the combination of tip-based technology and a planar nanopositioning and nanomeasuring machine. Using long range atomic force microscopy measurement of regular grating structures, the performance of the machine is thoroughly characterized over the full 100 mm range of motion of the positioning machine, which was confirmed by repeated measurements. After initially focussing on achieving the minimum line width of 40 nm in microscopic areas, a grating with a pitch of 1 μm is additionally fabricated over a total length of 10 mm, whereby the dimensions and deviations are also considered.
Guanglin Chen et al 2024 Meas. Sci. Technol. 35 086202
Multi rotor unmanned aerial vehicles (UAVs) are extensively utilized across various domains, and the motor constitutes a pivotal element in the UAV power system. The majority of UAV failures and crashes stem from motor malfunctions, underscoring the imperative need for comprehensive research on fault diagnosis in UAV motors to ensure the stable and reliable execution of flight tasks. This study focuses on quadrotor UAVs as the research subject and devises targeted fault simulation experiments based on the structural features and operational characteristics of the DC brushless motor used in quadrotor UAVs, specifically examining the stator, rotor, and bearings. To address challenges related to the UAV's own loads, limited space for redundant parts, and the high cost and difficulty associated with installing sensors for traditional fault diagnostic signals such as vibration and temperature, this study opts to use current signals as a substitute. This approach resolves the issue of challenging data collection for UAVs and investigates a current signal based fault diagnosis method for UAV motors. Lastly, in response to the limited training samples available for fault data due to the UAV's highly sensitive characteristics regarding the health status of its components and flight stability, traditional machine learning and deep learning methods encounter difficulties in identifying representative features with a small number of training samples, leading to the risk of overfitting and reduced model accuracy in fault diagnosis. To overcome this challenge, we propose a hybrid neural network fault diagnosis model that incorporates a width learning system and a convolutional neural network (CNN). The width learning system eliminates temporal characteristics from the original current signal, capturing more comprehensive and representative sample features in the width feature space. Subsequently, the CNN is employed for feature extraction and classification tasks. In empirical small sample fault diagnosis experiments using current signal data for UAV motors, our proposed model outperforms other models used for comparison.
Simon Laflamme et al 2023 Meas. Sci. Technol. 34 093001
Structural health monitoring (SHM) is the automation of the condition assessment process of an engineered system. When applied to geometrically large components or structures, such as those found in civil and aerospace infrastructure and systems, a critical challenge is in designing the sensing solution that could yield actionable information. This is a difficult task to conduct cost-effectively, because of the large surfaces under consideration and the localized nature of typical defects and damages. There have been significant research efforts in empowering conventional measurement technologies for applications to SHM in order to improve performance of the condition assessment process. Yet, the field implementation of these SHM solutions is still in its infancy, attributable to various economic and technical challenges. The objective of this Roadmap publication is to discuss modern measurement technologies that were developed for SHM purposes, along with their associated challenges and opportunities, and to provide a path to research and development efforts that could yield impactful field applications. The Roadmap is organized into four sections: distributed embedded sensing systems, distributed surface sensing systems, multifunctional materials, and remote sensing. Recognizing that many measurement technologies may overlap between sections, we define distributed sensing solutions as those that involve or imply the utilization of numbers of sensors geometrically organized within (embedded) or over (surface) the monitored component or system. Multi-functional materials are sensing solutions that combine multiple capabilities, for example those also serving structural functions. Remote sensing refers to solutions that are contactless, for example cell phones, drones, and satellites; it also includes the notion of remotely controlled robots.
A Sciacchitano 2019 Meas. Sci. Technol. 30 092001
Particle image velocimetry (PIV) has become the chief experimental technique for velocity field measurements in fluid flows. The technique yields quantitative visualizations of the instantaneous flow patterns, which are typically used to support the development of phenomenological models for complex flows or for validation of numerical simulations. However, due to the complex relationship between measurement errors and experimental parameters, the quantification of the PIV uncertainty is far from being a trivial task and has often relied upon subjective considerations. Recognizing the importance of methodologies for the objective and reliable uncertainty quantification (UQ) of experimental data, several PIV-UQ approaches have been proposed in recent years that aim at the determination of objective uncertainty bounds in PIV measurements.
This topical review on PIV uncertainty quantification aims to provide the reader with an overview of error sources in PIV measurements and to inform them of the most up-to-date approaches for PIV uncertainty quantification and propagation. The paper first introduces the general definitions and classifications of measurement errors and uncertainties, following the guidelines of the International Organization for Standards (ISO) and of renowned books on the topic. Details on the main PIV error sources are given, considering the entire measurement chain from timing and synchronization of the data acquisition system, to illumination, mechanical properties of the tracer particles, imaging of those, analysis of the particle motion, data validation and reduction. The focus is on planar PIV experiments for the measurement of two- or three-component velocity fields.
Approaches for the quantification of the uncertainty of PIV data are discussed. Those are divided into a-priori UQ approaches, which provide a general figure for the uncertainty of PIV measurements, and a-posteriori UQ approaches, which are data-based and aim at quantifying the uncertainty of specific sets of data. The findings of a-priori PIV-UQ based on theoretical modelling of the measurement chain as well as on numerical or experimental assessments are discussed. The most up-to-date approaches for a-posteriori PIV-UQ are introduced, highlighting their capabilities and limitations.
As many PIV experiments aim at determining flow properties derived from the velocity fields (e.g. vorticity, time-average velocity, Reynolds stresses, pressure), the topic of PIV uncertainty propagation is tackled considering the recent investigations based on Taylor series and Monte Carlo methods. Finally, the uncertainty quantification of 3D velocity measurements by volumetric approaches (tomographic PIV and Lagrangian particle tracking) is discussed.
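As an illustration of the Monte Carlo propagation idea mentioned above (a generic sketch, not one of the specific PIV-UQ methods reviewed), the snippet below perturbs the two velocity components by assumed per-vector uncertainties and propagates the spread to a derived vorticity field; the fields, uncertainty levels and grid spacing are placeholders.

```python
import numpy as np

# Monte Carlo uncertainty propagation sketch: perturb u, v within their assumed
# uncertainties and observe the spread of a derived quantity (here the vorticity).
rng = np.random.default_rng(0)
u = rng.normal(size=(64, 64))
v = rng.normal(size=(64, 64))          # placeholder velocity fields
sigma_u = sigma_v = 0.05               # assumed per-vector velocity uncertainty
dx = dy = 1.0e-3                       # assumed grid spacing, m

def vorticity(u, v):
    return np.gradient(v, dx, axis=1) - np.gradient(u, dy, axis=0)

samples = np.stack([
    vorticity(u + rng.normal(0.0, sigma_u, u.shape),
              v + rng.normal(0.0, sigma_v, v.shape))
    for _ in range(1000)
])
omega_std = samples.std(axis=0)        # propagated uncertainty of the vorticity field
```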
Martin Kögler and Bryan Heilala 2020 Meas. Sci. Technol. 32 012002
Time-gated (TG) Raman spectroscopy (RS) has been shown to be an effective technical solution for the major problem whereby sample-induced fluorescence masks the Raman signal during spectral detection. Technical methods of fluorescence rejection have come a long way since the early implementations of large and expensive laboratory equipment, such as the optical Kerr gate. Today, more affordable small sized options are available. These improvements are largely due to advances in the production of spectroscopic and electronic components, leading to the reduction of device complexity and costs. An integral part of TG Raman spectroscopy is the temporally precise synchronization (picosecond range) between the pulsed laser excitation source and the sensitive and fast detector. The detector is able to collect the Raman signal during the short laser pulses, while fluorescence emission, which has a longer delay, is rejected during the detector dead-time. TG Raman is also resistant against ambient light as well as thermal emissions, due to its short measurement duty cycle.
In recent years, the focus in the study of ultra-sensitive and fast detectors has been on gated and intensified charge coupled devices (ICCDs), or on CMOS single-photon avalanche diode (SPAD) arrays, which are also suitable for performing TG RS. SPAD arrays have the advantage of being even more sensitive, with better temporal resolution compared to gated CCDs, and without the requirement for excessive detector cooling. This review aims to provide an overview of TG Raman from early to recent developments, its applications and extensions.
Louise Wright and Stuart Davidson 2024 Meas. Sci. Technol. 35 051001
Digital twinning is a rapidly growing area of research. Digital twins combine models and data to provide up-to-date information about the state of a system. They support reliable decision-making in fields such as structural monitoring and advanced manufacturing. The use of metrology data to update models in this way offers benefits in many areas, including metrology itself. The recent activities in digitalisation of metrology offer a great opportunity to make metrology data 'twin-friendly' and to incorporate digital twins into metrological processes. This paper discusses key features of digital twins that will inform their use in metrology and measurement, highlights the links between digital twins and virtual metrology, outlines what use metrology can make of digital twins and how metrology and measured data can support the use of digital twins, and suggests potential future developments that will maximise the benefits achieved.
Gustavo Quino et al 2021 Meas. Sci. Technol. 32 015203
Digital image correlation (DIC) is a widely used technique in experimental mechanics for full field measurement of displacements and strains. The subset matching based DIC requires surfaces containing a random pattern. Even though there are several techniques to create random speckle patterns, their applicability is still limited. For instance, traditional methods such as airbrush painting are not suitable in the following challenging scenarios: (i) when time available to produce the speckle pattern is limited and (ii) when dynamic loading conditions trigger peeling of the pattern. The development and application of some novel techniques to address these situations is presented in this paper. The developed techniques make use of commercially available materials such as temporary tattoo paper, adhesives and stamp kits. The presented techniques are shown to be quick, repeatable, consistent and stable even under impact loads and large deformations. Additionally, they offer the possibility to optimise and customise the speckle pattern. The speckling techniques presented in the paper are also versatile and can be quickly applied in a variety of materials.
Adam Thompson et al 2021 Meas. Sci. Technol. 32 105013
Maximum permissible errors (MPEs) are an important measurement system specification and form the basis of periodic verification of a measurement system's performance. However, there is no standard methodology for determining MPEs, so when they are not provided, or not suitable for the measurement procedure performed, it is unclear how to generate an appropriate value with which to verify the system. Whilst a simple approach might be to take many measurements of a calibrated artefact and then use the maximum observed error as the MPE, this method requires a large number of repeat measurements for high confidence in the calculated MPE. Here, we present a statistical method of MPE determination, capable of providing MPEs with high confidence and minimum data collection. The method is presented with 1000 synthetic experiments and is shown to determine an overestimated MPE within 10% of an analytically true value in 99.2% of experiments, while underestimating the MPE with respect to the analytically true value in 0.8% of experiments (overestimating the value, on average, by 1.24%). The method is then applied to a real test case (probing form error for a commercial fringe projection system), where the efficiently determined MPE is overestimated by 0.3% with respect to an MPE determined using an arbitrarily chosen large number of measurements.
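The naive baseline mentioned in the abstract (taking the largest observed error over many repeat measurements of a calibrated artefact) can be sketched as follows; the reference value and measurements are placeholders, and the paper's statistical method, which is not reproduced here, reaches a stated confidence with far fewer repeats.

```python
import numpy as np

# Naive MPE estimate described in the abstract: many repeats of a calibrated
# artefact, MPE taken as the largest observed error. Values are placeholders.
reference_value = 10.000                                         # calibrated artefact value
measurements = np.array([9.998, 10.003, 10.001, 9.997, 10.004])  # repeat measurements
mpe_naive = np.max(np.abs(measurements - reference_value))       # needs many repeats for confidence
```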
Aleksi Mattila et al 2024 Meas. Sci. Technol. 35 085025
Spectral scatterometry is a technique that allows rapid measurements of diffraction efficiencies of diffractive optical elements (DOEs). The analysis of such diffraction efficiencies has traditionally been laborious and time consuming. However, machine learning can be employed to aid in the analysis of measured diffraction efficiencies. In this paper we describe a novel system for providing measurements of multiple measurands rapidly and concurrently using a spectral scatterometer and an artificial neural network (ANN) which is trained utilising transfer learning. The ANN provides values for the pitch, height, and line widths of the DOEs. In addition, an uncertainty evaluation was performed. In the majority of the studied cases, the discrepancies between the values obtained using a scanning electron microscope (SEM) and artificial neural network assisted spectral scatterometer (ANNASS) for the grating parameters were below 5 nm. Furthermore, independent reference samples were used to perform a metrological validation. An expanded uncertainty (k = 2) of 5.3 nm was obtained from the uncertainty evaluation for the measurand height. The height value measurements performed employing ANNASS and SEM are demonstrated to be in agreement within this uncertainty.
Yi Tian et al 2024 Meas. Sci. Technol. 35 095004
The stable operation of power transformers depends on the oil-paper insulation system, in which the aging degree of insulating paper is the key to evaluate the remaining life of the transformer. Currently, methanol extraction is widely used to detect the furfural content in oil to evaluate the aging state of insulating paper, but it ignores the influence of methanol produced during the operation of the transformer on the extraction results. To address this issue, this paper proposes a new method that can replace methanol extraction for detecting furfural in oil through simulation and experiment. Firstly, through simulation and experiment, it is proved that the strong vibration absorption peaks at 1677 cm⁻¹ for furfural carbonyl and 2240 cm⁻¹ for acetonitrile cyano can be used to establish a new method for extracting furfural content in oil using acetonitrile. A quantitative model between the concentration x of furfural in the furfural-acetonitrile mixed solution and the area y of the infrared absorption peak at 1677 cm⁻¹ is established, with a goodness of fit of 0.9974. Secondly, a comparison between direct detection and acetonitrile extraction methods is conducted. The results show that direct detection is simple to operate, but the minimum detection concentration is 40 mg l⁻¹, which makes it difficult to meet practical requirements. Acetonitrile extraction can reduce the minimum detection concentration to 0.1 mg l⁻¹. At the same time, the extraction conditions are analyzed to determine the extraction ratio of 30, extraction times of 5, and extraction rate of 60%. Finally, the proposed detection method is applied to thermal aging tests on insulating paper of different types. The experimental results show that the proposed method has good repeatability and improves the detection resolution of furfural content. This paper combines infrared spectroscopy with acetonitrile extraction technology to open up a more efficient and practical new method for detecting furfural content in transformer oil.
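The calibration step described above amounts to fitting a curve of peak area against concentration and inverting it for unknown samples; a minimal sketch with made-up numbers (the reported goodness of fit of 0.9974 refers to the paper's own data, not to these placeholders):

```python
import numpy as np

# Calibration-curve sketch: fit the 1677 cm^-1 peak area y against furfural
# concentration x in acetonitrile. All numbers below are placeholders.
x = np.array([0.1, 1.0, 5.0, 10.0, 20.0, 40.0])      # furfural concentration, mg/l (assumed)
y = np.array([0.02, 0.21, 1.05, 2.08, 4.19, 8.35])   # integrated peak area (assumed)
slope, intercept = np.polyfit(x, y, 1)
y_fit = slope * x + intercept
r2 = 1 - np.sum((y - y_fit) ** 2) / np.sum((y - y.mean()) ** 2)  # goodness of fit
c_unknown = (3.0 - intercept) / slope                 # invert the model for a measured peak area
```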
Jinrui Wang et al 2024 Meas. Sci. Technol. 35 096109
Considerable attention has been garnered by the bearing fault diagnostic method, founded on sparse feature learning, for its exceptional robustness and adaptability to noise. Being an effective sparse learning method, parallel sparse filtering (PSF) has found successful applications in intelligent fault diagnosis. However, influenced by mechanical structural collisions and complex transmission paths, the obtained bearing signals come with abnormal impacts and noise, resulting in a reduced reliability of the PSF. Additionally, PSF constrains features and samples with norm, which is significantly disturbed by noise. To address these issues, generalized nonlinear hybrid-norm PSF (GNHNPSF) is proposed. Firstly, the Sigmoid nonlinear function is examined to suppress the amplitude of the anomalous shock. Subsequently, the hybrid-norm constraints, which involve the application of a norm to the column features of the input signal and a norm to the row features, can be employed to effectively extract distinctive information from the original data. The GNHNPSF is demonstrated to improve the accuracy of bearing fault diagnosis under complex interference, as evidenced by the results of simulation and experiments.
Meiying Qiao et al 2024 Meas. Sci. Technol. 35 096308
This work proposes an algorithm based on improved mixture correntropy cubature Kalman filtering to address the issues of low accuracy and susceptibility to complex non-Gaussian noise and outlier interference in inertial navigation attitude estimation. First, a combination of Gaussian kernel and Cauchy kernel is proposed to construct the mixture correntropy, aiming to address the issue of single kernel-based correntropy being inadequate when dealing with complex non-Gaussian noise. Second, the objective function is established utilizing the model fitting loss based on mean square error and measurement fitting loss based on mixture correntropy. The maximum correntropy criterion is utilized to replace the minimum mean square error criterion, and the fixed-point iteration method is used to solve the objective function. This process derives the mixture correntropy matrix, which adjusts the measurement noise covariance. Finally, a membership function is used to determine the mixture correntropy coefficient. Accordingly, the algorithm can adaptively select the proportions of each kernel function based on the respective noise interference scenario. Simulations and dynamic–static experiments have been conducted. The algorithm has been compared with single kernel-based correntropy algorithms and other robust algorithms to confirm its superior precision and stability under complex non-Gaussian noise and outlier interference conditions.
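A minimal sketch of a mixture correntropy built from Gaussian and Cauchy kernels is given below; the kernel forms, bandwidths and fixed mixing weight are common textbook choices and only stand in for the paper's adaptively weighted definition and fixed-point update.

```python
import numpy as np

# Mixture correntropy sketch over residuals e: weighted sum of a Gaussian and a
# Cauchy kernel. Kernel forms and the fixed weight alpha are assumptions.
def gaussian_kernel(e, sigma=1.0):
    return np.exp(-e**2 / (2.0 * sigma**2))

def cauchy_kernel(e, gamma=1.0):
    return 1.0 / (1.0 + (e / gamma) ** 2)

def mixture_correntropy(e, alpha=0.5):
    # In the paper alpha is chosen adaptively via a membership function.
    return np.mean(alpha * gaussian_kernel(e) + (1.0 - alpha) * cauchy_kernel(e))

residuals = np.array([0.1, -0.2, 0.05, 3.0])   # last value mimics an outlier
print(mixture_correntropy(residuals))
```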
Qing Li et al 2024 Meas. Sci. Technol. 35 096108
In practical engineering applications, the accuracy and stability of fault identification for centrifugal pumps is significantly reduced by the unbalanced distribution between normal and fault datasets, i.e., the number of normal working samples is far greater than the number of fault samples. To alleviate this bottleneck, this paper explores fault identification of centrifugal pumps based on a Wasserstein generative adversarial network with gradient penalty (WGAN-GP), combining kinematics simulation and an experimental case. Specifically, unbalanced vibration datasets for failure patterns such as a damaged impeller are simulated and collected using the ADAMS prototyping software, and the unbalanced vibration signals are then transformed into 2D grey-scale images. Further, once the Nash equilibrium of the WGAN-GP model is reached, the generated grey-scale image datasets are fed into the original grey-scale image dataset to form new training datasets. Eventually, the fault patterns of the centrifugal pump are identified using a confusion matrix graph. Meanwhile, another public centrifugal pump dataset is employed to verify the accuracy of the WGAN-GP model. Results indicate that fault identification accuracies of 95.07% and 98.0% are obtained for the kinematics simulation and the experimental case, respectively, and that the issues of unbalanced distribution and insufficient data can be overcome effectively.
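The gradient-penalty term that distinguishes WGAN-GP can be sketched in its standard form (which may differ in detail from the paper's configuration); `critic` and the tensor shapes for grey-scale images are assumptions.

```python
import torch

# Standard WGAN-GP gradient penalty: penalize the critic's gradient norm on
# random interpolations between real and generated grey-scale images.
def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(scores, interp,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()
```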
Pengfei Song et al 2024 Meas. Sci. Technol. 35 095203
In response to the problem that current traditional denoising algorithms cannot effectively remove noise from inclined tunnel point cloud data with irregular contours, this paper proposes a denoising algorithm for inclined tunnel point cloud data based on irregular contour features. The algorithm combines the DBSCAN clustering algorithm with polynomial curve fitting to obtain sequential point cloud slices along the perpendicular direction to the centerline of the inclined tunnel. By identifying and extracting irregular contour feature points from these slices, it achieves the extraction of irregular wall shapes inside the tunnel. Based on these irregular wall shape features, noise points are effectively removed using distance iteration calculations. Experimental results demonstrate that the proposed algorithm can effectively handle the irregular shapes and elevation variations in inclined tunnel point cloud data and achieve good denoising performance for various types of noise within the tunnel. This algorithm lays a solid foundation for subsequent three-dimensional modeling of tunnels with high precision.
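Two of the ingredients named in the abstract, DBSCAN clustering of a slice and polynomial fitting of the wall contour, can be sketched as below; the slice data, DBSCAN parameters, polynomial degree and distance threshold are assumptions, and the full irregular-contour feature extraction is not reproduced.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Sketch: cluster a 2D point-cloud slice, fit a polynomial to the dominant wall
# contour, and drop points far from the fitted curve. Parameters are assumptions.
rng = np.random.default_rng(0)
slice_xz = rng.random((500, 2))                               # placeholder slice points
labels = DBSCAN(eps=0.1, min_samples=5).fit_predict(slice_xz)
wall = slice_xz[labels == np.bincount(labels[labels >= 0]).argmax()]  # largest cluster

coeffs = np.polyfit(wall[:, 0], wall[:, 1], deg=3)            # polynomial contour model
dist = np.abs(slice_xz[:, 1] - np.polyval(coeffs, slice_xz[:, 0]))
denoised = slice_xz[dist < 0.02]                              # assumed distance threshold
```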
Mohanraj T et al 2024 Meas. Sci. Technol. 35 092002
Milling is an extremely adaptable process that can be utilized to fabricate a wide range of shapes and intricate 3D geometries. The versatility of the milling process renders it useful for the production of a diverse range of components and products in several industries, including aerospace, automotive, electronics, and medical equipment. Monitoring tool conditions is essential for maintaining product quality, minimizing production downtime, and maximizing tool life. Advances in this field have been driven by the need for increased productivity, reduced tool wear, and improved process efficiency. Tool condition monitoring (TCM) in the milling process is a critical aspect of machining operations. TCM involves assessing the health and performance of cutting tools used in milling machines. As technology evolves, staying updated with the latest developments in this field is essential for manufacturers seeking to optimize their milling operations. However, addressing the challenges associated with sensor integration, data analysis, and cost-effectiveness remains crucial. To fill this research gap, this paper provides an overview of the extensive literature on monitoring milling tool conditions. It summarizes the key focus areas, including tool wear sensors and the application of various machine learning and deep learning algorithms. It also discusses the potential applications of TCM beyond wear detection, such as predicting tool breakage, tool wear, the cutting tool's remaining lifetime, and the challenges faced by TCMs. This review also provides suggestions for potential future research endeavors and is anticipated to offer valuable insights for the development of advanced TCMs in terms of tool wear monitoring and predicting remaining useful life.
Junning Li et al 2024 Meas. Sci. Technol. 35 092001
Rolling bearings are critical components that are prone to faults in the operation of rotating equipment. Therefore, it is of utmost importance to accurately diagnose the state of rolling bearings. This review comprehensively discusses classical algorithms for fault diagnosis of rolling bearings based on vibration signal, focusing on three key aspects: data preprocessing, fault feature extraction, and fault feature identification. The main principles, key features, application difficulties, and suitable occasions for various algorithms are thoroughly examined. Additionally, different fault diagnosis methods are reviewed and compared using the Case Western Reserve University bearing dataset. Based on the current research status in bearing fault diagnosis, future development directions are also anticipated. It is expected that this review will serve as a valuable reference for researchers aiming to enhance their understanding and improve the technology of rolling bearing fault diagnosis.
Hanlin Guan et al 2024 Meas. Sci. Technol. 35 082001
Hydraulic component faults are characterized by nonlinear time-varying signals, strong concealment, and difficult feature extraction. Timely and accurate fault diagnosis of hydraulic components helps curb economic losses and accidents, so researchers have carried out extensive research on hydraulic components. Information fusion technology can combine multi-source data from multiple dimensions to mine fault data features, which effectively improves the accuracy and reliability of fault diagnosis results. However, there is currently a lack of a comprehensive and systematic review in this domain. Therefore, in this paper, information fusion fault diagnosis technologies for hydraulic components are summarized and analyzed, encompassing the main process of information fusion fault diagnosis and the research status of information fusion fault diagnosis for hydraulic systems. The methods and techniques involved in the fusion process, the data sources, and the fusion methods used in information fusion fault diagnosis of hydraulic components are elaborated and summarized. The problems of information fusion in fault diagnosis of hydraulic components are analyzed, solutions are discussed, and research ideas for improving information fusion fault diagnosis are put forward. Finally, digital twin (DT) technology is introduced, and the advantages and research status of intelligent fault diagnosis based on DT are summarized. On this basis, intelligent fault diagnosis of hydraulic components based on information fusion is summarized, and the challenges and future research directions of applying information fusion and DT to intelligent fault diagnosis of hydraulic components are put forward and analyzed comprehensively.
Xin Li et al 2024 Meas. Sci. Technol. 35 072002
The health condition of rolling bearings has a direct impact on the safe operation of rotating machinery. Their working environment is harsh and their working conditions are complex, which brings challenges to fault diagnosis. With the development of computer technology, deep learning has been applied to fault diagnosis and has developed rapidly. Among deep learning methods, the convolutional neural network (CNN) has received great attention from researchers due to its powerful data mining ability and adaptive feature learning ability. Based on recent research hotspots, the development history and trends of CNN are summarized and analyzed. Firstly, the basic structure of CNN is introduced, the important progress of classical CNN models for rolling bearing fault diagnosis in recent years is studied, and the problems with classic CNN algorithms are pointed out. Secondly, to solve these problems and drawing on recent research achievements, various methods and principles for optimizing CNN are introduced and compared from the perspectives of deep feature extraction, hyperparameter optimization, and network structure optimization. Although significant progress has been made in CNN-based fault diagnosis of rolling bearings, there is still room for improvement in addressing issues such as low accuracy on imbalanced data, weak model generalization, and poor network interpretability. Therefore, the future development trends of CNN are discussed: transfer learning models are introduced to improve the generalization ability of CNN, and interpretable CNN is used to increase the interpretability of CNN networks.
Victor H R Cardoso et al 2024 Meas. Sci. Technol. 35 072001
This work addresses the historical development of techniques and methodologies for measuring the internal diameter of transparent tubes since the original contributions of Anderson and Barr, published in 1923 in the first issue of Measurement Science and Technology. Progress in this field is summarized, highlighting the emergence and significance of measurement approaches based on optical fibers.
Zhu et al
Most point cloud simplification algorithms use k-order neighborhood parameters that are set by human experience; thus, the accuracy of point feature information is not high, and each point is repeatedly calculated. The proposed method avoids this problem. The first ordinal point of the original point cloud file was used as the starting point, and the same spatial domain was then described. The designed method filters out points located in the same spatial domain and stores them in the same V-P container. The normal vector angle information entropy was calculated for each point in each container. Points with information entropy values that met the threshold requirements were extracted and stored as simplified points and new seed points. In the second operation, a point from the seed point set was selected as the starting point, the same process as in the first operation was repeated, and the selected point was then deleted from the seed point set. This process was repeated until the seed point set was empty and the algorithm ended, with the resulting simplified point set as the output. Five experimental datasets were selected and compared using five advanced methods. The results indicate that the proposed method maintains a simplification rate of over 82% and reduces the maximum error, average error, and Hausdorff distance by 0.1099, 0.074, and 0.0062 (the highest values among the five datasets), respectively. The method performs well for single-object and multi-object point cloud sets and can serve as a reference for the study of simplification algorithms for more complex, multi-object and ultra-large point cloud sets obtained using terrestrial laser scanning and mobile laser scanning.
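The entropy criterion at the heart of the method can be illustrated with a small sketch: compute the angles between neighbour normals and a seed normal, histogram them, and keep points whose neighbourhood entropy exceeds a threshold. The binning, angle reference and threshold below are assumptions.

```python
import numpy as np

# Normal-vector angle information entropy for one neighbourhood (illustrative).
def normal_angle_entropy(normals, reference, bins=12):
    cosang = np.clip(normals @ reference, -1.0, 1.0)
    angles = np.arccos(cosang)                       # angle of each normal to the seed normal
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, np.pi))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))                   # high entropy -> feature-rich, keep the point

normals = np.random.randn(50, 3)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
keep = normal_angle_entropy(normals, normals[0]) > 1.5   # assumed threshold
```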
Wang et al
Unconventional resources have emerged as the primary source to meet the escalating demand for energy consumption, with hydraulic fracturing standing out as an effective means of boosting production. The utilization of microseismic monitoring is crucial for acquiring real-time or semi-real-time extension information of the fracture network to guide the fracturing process. The precise positioning of microseismic events is a fundamental aspect of microseismic monitoring. Traditional methods relying on (relative) arrival time significantly impact positioning accuracy due to picking errors. While waveform-based methods offer high accuracy, they require precise velocity models and are time-consuming. To overcome challenges associated with arrival time pickup and velocity accuracy, we introduce a virtual field optimization method (VFOM) based on arrival time correction. Initially, an equivalent velocity model is established, and the arrival time difference resulting from the model transformation of the master event is calculated to correct the observed arrival time of the target event. Subsequently, we match detector pairs, establish hyperboloids based on the corrected arrival time difference, and employ the intersection point of all hyperboloids as the positioning result. After that, we use the location results of the master event to enhance the accuracy of the target event. Finally, we apply the proposed method to both synthetic test and field datasets, demonstrating a significant improvement in the positioning accuracy and stability provided by the novel method. The robustness against arrival time error renders it a suitable choice for surface monitoring applications where signal quality is compromised. Furthermore, the simplified velocity model significantly diminishes the computational requirements in the positioning process, enhancing its efficiency, and consequently holds vast potential for application in real-time monitoring.
Liang et al
Multivariate time series (MTS) anomaly detection is vital for ensuring the safety and reliability of large-scale industrial systems. However, existing deep learning methods often overlook complex interrelationships between different time series and the study of anomalies has been limited to detection. To address this, we propose an MTS anomaly detection model based on transfer entropy (TE) and graph attention network (GAT). In the graph construction module, by combining modified TE with automatic structure learning, we extract intricate relationships between features. In the prediction module, we modify the GAT to implement the dynamic attention mechanism and non-linear interaction between different features to improve the accuracy of model prediction. Finally, our model combines the modified TE with anomaly detection task, which can be used to provide interpretability for the detected anomalies using the constructed causal graph. Experimental results on both real and public datasets show that our approach outperforms the mainstream methods, in particular, achieving optimal results in terms of F1 scores and recall.
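Transfer entropy itself can be estimated with a simple histogram plug-in estimator, sketched below for lag 1; the binning and this basic estimator stand in for the paper's modified TE and are only meant to make the quantity concrete.

```python
import numpy as np

# Histogram-based transfer entropy sketch TE(X -> Y) with lag 1 (plug-in estimator).
def transfer_entropy(x, y, bins=8):
    yf, yp, xp = y[1:], y[:-1], x[:-1]           # y_{t+1}, y_t, x_t
    joint, _ = np.histogramdd(np.column_stack([yf, yp, xp]), bins=bins)
    p_ypx = joint / joint.sum()                  # p(y_{t+1}, y_t, x_t)
    p_yp_x = p_ypx.sum(axis=0)                   # p(y_t, x_t)
    p_ff_yp = p_ypx.sum(axis=2)                  # p(y_{t+1}, y_t)
    p_yp = p_ff_yp.sum(axis=0)                   # p(y_t)
    te = 0.0
    for i, j, k in zip(*np.nonzero(p_ypx)):
        num = p_ypx[i, j, k] * p_yp[j]
        den = p_ff_yp[i, j] * p_yp_x[j, k]
        te += p_ypx[i, j, k] * np.log2(num / den)
    return te

x = np.random.randn(2000)
y = np.roll(x, 1) + 0.1 * np.random.randn(2000)  # y driven by past x
print(transfer_entropy(x, y))
```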
Chang et al
Bearing fault diagnosis holds significant importance, with widespread attention focused on enhancing its accuracy and efficiency. Existing diagnostic methods based on deep learning and transfer learning typically tackle this issue by introducing new function modules and diagnostic strategies, such as attention mechanisms, adversarial domain adaptation, etc. However, most studies do not consider optimization of the network structure and hyperparameters to improve the diagnostic performance of the network itself. To address this limitation, a novel multi-objective optimized deep auto-encoder (MODAE) is proposed in this paper. The optimal network structure and hyperparameters are determined by a multi-objective particle swarm optimization algorithm. Crucially, the method is based on a data-driven approach to automatically search for network structures with stronger generalization and feature extraction capabilities to address engineering problems in different scenarios. Finally, this method is examined in both multi-fault classification diagnosis and transfer diagnosis scenarios, demonstrating strong self-adaptability through experimental results. In comparison with typical deep learning fault diagnosis methods, the proposed method demonstrates higher diagnostic accuracy and superior generalization ability.
Wu et al
In horizontal intermittent flow, the long bubbles move toward the center of the pipe due to inertia, forming the thin liquid film above the long bubbles. Accurate measurement of the liquid film thickness is crucial for heat and mass transfer. In this paper, laser interferometric technology is innovatively introduced to measure the film thickness of the intermittent flow, and the thin liquid film is detected with a resolution of 100 nm. Considering the curvature of the circular pipe wall, which leads to divergent reflected light, the effect of the pipe wall on the interference pattern is explored by the ray tracing technique. A two-dimensional interpolation phase retrieval algorithm based on light intensity is proposed to reconstruct the thickness of the liquid film, and the average error is less than 1.86%. Benefiting from the exceptionally high resolution, research is conducted on the thin liquid film at the top of the horizontal intermittent flow, revealing its dependence on the sub-regimes and gas-liquid velocities.
Celina Hellmich et al 2024 Meas. Sci. Technol. 35 095001
A scalable wafer-based fabrication process for a new generation of 3D standards enabling the 3D calibration of optical microscopes is presented and validated. The 3D standards are based on step pyramids with several layers in the µm range and a system of cylindrical knops distributed across the layers as marks for coordinate-based calibration. This enables calibration for the three coordinate axes and the orthogonality error between them in a single measurement step. The requirements necessary for such a calibration, such as optical non-transparency, reproducible flatness of the pyramid step heights and the lowest possible deviations of the lateral mark coordinates, are met by optimizing the manufacturing process: the deviation of the height steps distributed over the wafer is ±3.6 nm and is primarily caused by the layer deposition processes. The lateral manufacturing accuracy was determined using a calibrated scanning electron microscope (SEM) and shows a mean deviation of 20 nm or 60 nm, depending on the lateral size of the structures. The electron beam lithography process and the level of inaccuracy of the SEM standard have an influence on the lateral scaling accuracy. Based on the tactilely generated height values and the coordinates of the marks determined by a calibrated SEM, an example calibration of a confocal laser scanning microscope was successfully performed and showed good conformity to conventional calibration techniques.
Geoffrey D de Villiers et al 2024 Meas. Sci. Technol. 35 095101
Gravity measurements have uses in a wide range of fields including geological mapping and mine-shaft inspection. The specific application under consideration sets limits on the survey and the amount of information that can be obtained. For example, in a conventional gravity survey at the Earth's surface a gravimeter is translated on a two-dimensional planar grid taking measurements of the vertical component of gravity. If, however, the survey points cannot be chosen so freely, for example if the gravimeter is constrained to operate in a tunnel where only a one-dimensional line of data could be taken, less information will be obtained. To address this situation, we investigate an alternative approach, in the form of an instrument which rotates around a central point measuring the gravitational potential or its radial derivative on the boundary of a sphere. The ability to record additional components of gravity by rotating the gravimeter will give more information than obtained with a single measurement traditionally taken at each point on a survey, consequently reducing ambiguities in interpretation. We term a device which measures the potential, or its radial derivatives, around the surface of a sphere a gravitational eye. In this article we explore ideas of resolution and propose a thought experiment for comparing the performance of diverse types of gravitational eye. We also discuss radial analytic continuation towards sources of gravity and the resulting resolution enhancement, before finally discussing the possibility of using cold-atom gravimetry and gradiometry to construct a gravitational eye. If realised, the gravitational eye will offer revolutionary capability enabling the maximum information to be obtained about features in all directions around it.
M Neumayer et al 2024 Meas. Sci. Technol. 35 096002
The application of electrical capacitance tomography (ECT) for monitoring of industrial processes has been studied and proposed by many researchers. Examples can be found in monitoring of multiphase flows or mixing processes in reactors. Demonstrations of the functionality based on lab and test rig measurements have proven the potential of the proposed principles. This paper discusses the application of an ECT system in a heavy industries application. The harsh operating conditions in the industrial environment pose several challenges for the application of sophisticated measurement technology. This work addresses key aspects for the application of an ECT measurement system in such an environment. Specifically the electrical system design and the influence of the temperature are addressed. Relevant parameters and possible solutions are discussed. Furthermore the application of ECT as instrument for mass flow metering in pneumatic conveying processes is addressed. Supportive measurement studies from test rig experiments and comparative simulation studies are presented. Therefore, the paper provides a concise discussion on the application of ECT under harsh operating conditions, as well as the use of ECT as a measurement device for process measurement. The work concludes with a presentation of a measurement system in an industrial plant in which the proposed concepts were successfully implemented.
Johannes Konrad et al 2024 Meas. Sci. Technol.
A uniform magnetic flux density that is effective in the movement range of the coil is essential for accurate Kibble balance experiments. By utilizing Hopkinson's law and Kirchhoff's circuit laws, basic formulas have been derived, providing a method to calculate the magnetic flux density distribution in the airgap of a cylindrical magnet system. A parabolic outer contour of the inner yoke has been found to be a suitable solution to achieve uniformity. Experiments show that this approach results in a relative change of magnetic flux density on the order of 3 × 10⁻⁴ over the 8 mm movement range, using a magnet system with a mass of only 2.3 kg. Therefore, the system will be integrated into the upgraded version of PTB's Planck-Balance – a compact variant of a Kibble balance – aiming for determination of the geometric factor at an accuracy level below 1 × 10⁻⁶. The solution described provides a comparatively easy means to design a cylindrical magnet system, using only one permanent magnet disc, without the use of complex simulation software.
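Hopkinson's law, mentioned above, treats the magnet system as a magnetic circuit in which a magnetomotive force drives flux through reluctances, in analogy with Ohm's law; a toy series-circuit calculation with placeholder geometry (not PTB's design values) looks like this:

```python
import numpy as np

# Magnetic-circuit sketch using Hopkinson's law F = Phi * R_m (the magnetic
# analogue of Ohm's law). All geometry and material numbers are placeholders.
mu0 = 4e-7 * np.pi
mmf = 800.0                                # magnetomotive force of the magnet, A (assumed)
l_gap, A_gap = 2e-3, 1.5e-3                # airgap length (m) and cross-section (m^2), assumed
l_yoke, A_yoke, mu_r = 0.2, 2e-3, 4000.0   # yoke path length, area, permeability, assumed

R_gap = l_gap / (mu0 * A_gap)              # airgap reluctance, 1/H
R_yoke = l_yoke / (mu0 * mu_r * A_yoke)    # yoke reluctance (usually much smaller)
phi = mmf / (R_gap + R_yoke)               # flux in the circuit, Wb
B_gap = phi / A_gap                        # flux density in the airgap, T
```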
Meng Zhang 2024 Meas. Sci. Technol. 35 086140
Rolling bearing fault diagnosis is crucial for ensuring the safe and reliable operation of mechanical equipment. Detecting faults directly from measurement signals is challenging due to severe noise and interference. Blind deconvolution (BD), as a preferred method, effectively recovers periodic pulses from the measured vibration signals of faulty bearings. This study introduces a simulated annealing-based BD approach to enhance the pulse signal components reflecting faults in vibration signals measured on rolling bearings. This method iteratively searches for the optimal coordinates in a high-dimensional orthogonal optimization space, where the optimal coordinates reflect the combination of the inverse filter coefficients. Compared to the generalized spherical optimization space used in the 'Optimization-Blind Deconvolution' method in previous works, the proposed finite high-dimensional optimization space helps overcome the problem of inverse filter coefficient convergence, allowing inverse filters to be designed without restrictions on their shape. To better accommodate the cyclostationary characteristics of bearing signals measured in practice, the proposed method employs a target vector that allows for uncertainty in pulse occurrence instants, thus overcoming challenges introduced by pseudo-periodic phenomena resulting from bearing slippage. Numerical simulations and experimental results on real bearing vibration signals confirm that the proposed method can design more flexible filters to enhance pulse-like patterns in signals and make effective use of limited filter resources. Its capacity to tolerate inaccurate fault period estimates, high background noise, and pulse randomness enables it to effectively address vibration measurement signals in real-world scenarios.
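A generic simulated-annealing loop for tuning FIR inverse-filter coefficients can be sketched as follows; the kurtosis objective is a common pulse-emphasis surrogate and, together with the filter length, step size and cooling schedule, is an assumption rather than the paper's target-vector formulation.

```python
import numpy as np

# Generic simulated-annealing blind-deconvolution sketch: perturb FIR inverse
# filter coefficients and keep changes that raise the kurtosis of the output.
rng = np.random.default_rng(1)
x = rng.standard_normal(4096)              # placeholder measured vibration signal
h = np.zeros(32); h[0] = 1.0               # initial inverse filter

def kurtosis(y):
    y = y - y.mean()
    return np.mean(y**4) / np.mean(y**2) ** 2

current = kurtosis(np.convolve(x, h, mode="same"))
T = 1.0
for step in range(2000):
    cand = h + rng.normal(0.0, 0.05, h.shape)
    score = kurtosis(np.convolve(x, cand, mode="same"))
    if score > current or rng.random() < np.exp((score - current) / T):
        h, current = cand, score           # accept improvement or occasional downhill move
    T *= 0.999                             # cooling schedule
```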
Boyao Liu and William Allison 2024 Meas. Sci. Technol. 35 087001
We describe a compact constant current power supply with µA precision designed to drive coils. The unit generates currents from −125 mA to 125 mA with a load up to 10 Ω using a precision 16-bit digital to analogue converter, driven from a microcontroller (e.g. Raspberry Pi Pico). All power for the unit is derived from the 5 V of the microcontroller. As a demonstration of the capability of the power supply, it was applied to spin manipulation in a helium spin echo system.
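The current-setting step reduces to mapping a requested coil current onto a 16-bit DAC code for the ±125 mA span; a small sketch of that mapping is below, with the linear scaling (and how the code reaches the DAC) treated as assumptions.

```python
# Sketch of a current-to-code mapping for a bipolar +/-125 mA range on a 16-bit DAC.
# The linear scaling and the code transfer (e.g. over SPI) are assumptions.
FULL_SCALE_MA = 125.0
MAX_CODE = 0xFFFF  # 16-bit DAC

def current_to_code(i_ma):
    i_ma = max(-FULL_SCALE_MA, min(FULL_SCALE_MA, i_ma))   # clamp to range
    frac = (i_ma + FULL_SCALE_MA) / (2 * FULL_SCALE_MA)    # 0.0 ... 1.0
    return round(frac * MAX_CODE)

print(current_to_code(0.0))      # mid-scale code
print(current_to_code(125.0))    # full-scale code 65535
```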
Mohammadmahdi Abedi et al 2024 Meas. Sci. Technol. 35 085606
This study investigates the synergistic effects of cement, water, and hybrid carbon nanotubes/graphene nanoplatelets (CNT/GNP) concentrations on the mechanical, microstructural, durability, and piezoresistive properties of self-sensing cementitious geocomposites. Varied concentrations of cement (8% to 18%), water (8% to 16%), and CNT/GNP (0.1% to 0.34%, 1:1) were incorporated into cementitious stabilized sand (CSS). Mechanical characterization involved compression and flexural tests, while microstructural analysis utilized dry density, apparent porosity, water absorption, and non-destructive ultrasonic testing, alongside TGA, SEM, EDS, and x-ray diffraction analyses. The durability of the composite was also assessed against 180 Freeze-thaw cycles. Moreover, the piezoresistive behavior of the nano-reinforced CSS was analyzed during cyclic flexural and compressive loading using the four-probe method. The optimal carbon nanomaterials (CNM) content was found to depend on the water and cement ratios. Generally, elevating the water content led to a rise in the CNM optimal concentration, primarily attributed to improved dispersion and adequate water for the cement hydration process. The maximum increments in flexural and compressive strengths, compared to plain CSS, were significant, reaching up to approximately 30% for flexural strength and 41% for compressive strength, for the specimen containing 18% cement, 12% water, and 0.17% CNM. This improvement was attributed to the nanoparticles' pore-filling function, acceleration of hydration, regulation of free water, and facilitation of crack-bridging mechanisms in the geocomposite. Further decreases in cement and water content adversely impacted the piezoresistive performance of the composite. Notably, specimens containing 8% cement (across all water content variations) and 10% cement (with 8% and 12% water content) showed a lack of piezoresistive responses. In contrast, specimens containing 14% and 18% cement displayed substantial sensitivity, evidenced by elevated gauge factors, under loading conditions.
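The piezoresistive sensitivity referred to above is normally quantified by the gauge factor GF = (ΔR/R0)/ε; a short calculation with placeholder values (not the paper's data) is shown below.

```python
# Standard gauge-factor relation used to quantify piezoresistive sensitivity:
# GF = (delta_R / R0) / strain. Numbers below are placeholders, not the paper's data.
R0 = 1.0e4          # unloaded resistance, ohm (assumed)
R_loaded = 1.012e4  # resistance under load, ohm (assumed)
strain = 5.0e-4     # applied strain (assumed)
gauge_factor = ((R_loaded - R0) / R0) / strain   # = 24 for these placeholder values
```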
Sven Schulze et al 2024 Meas. Sci. Technol. 35 085020
The 2019 redefinition of the kilogram not only changes the way mass is defined but also broadens the horizon for a direct realization of other standards. The true becquerel project at the National Institute of Standards and Technology is creating a new paradigm for realization and dissemination of radionuclide activity. Standard reference materials for radioactivity are supplied as aqueous solutions of specific radionuclides which are characterized by massic activity in the units becquerel per gram of solution, Bq/g. The new method requires measuring the mass of a few milligrams of dispensed radionuclide liquid solution. An electrostatic force balance is used due to its suitability for the milligram mass range. The goal is to measure the mass of dispensed fluid of 1 mg–5 mg with a relative uncertainty of less than 0.05%. A description of the balance operation is presented. Results of preliminary measurements with a reference mass indicate relative standard deviations of less than 0.5% over tens of tests and differ by 0.54% or less from an independent measurement of the reference mass.