Accurate offset measurement is crucial for recovering the size of past earthquakes and understanding the recurrence patterns of strike-slip faults. Traditional methods, which rely on manual delineation of displaced geomorphic markers from satellite images, often introduce significant uncertainties. This study aims to develop a more objective and accurate method for identifying faulted landforms and measuring offsets. Although supervised deep learning methods have great potential for image recognition and segmentation, the absence of suitable training data sets leads us to apply the k-means algorithm, a simple and practical unsupervised machine learning method with minimal parameters, to extract displaced geomorphic markers. Our research is conducted on the Kusai Lake segment of the Eastern Kunlun Fault using high-resolution satellite imagery. Initially, we identify multiple well-preserved geomorphic markers along the fault traces and manually label some of them to form an imagery dataset. The k-means algorithm extracts the landforms clearly when the number of classes is set to three. Furthermore, a quantitative comparison of several commonly used unsupervised algorithms confirms that k-means performs best, achieving a recall of 0.70 and an accuracy of 0.906 on the dataset. The validity of our measurements is corroborated by examining two specific sites around Hongshui Gou. Quantitative analysis comparing our imagery-based measurements with field data reveals a strong agreement, evidenced by a Pearson correlation coefficient of 0.997. Repeated measurements of the same geomorphic markers across different images indicate that the geomorphic features resolvable in the images significantly affect the accuracy of offset measurements. This study underscores the potential of our approach for both extracting faulted landforms and accurately measuring their offsets, emphasizing the importance of assessing the integrity of geomorphic markers when using satellite imagery.
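As a hedged illustration of the pixel-clustering step described above, the sketch below clusters the bands of an image tile into three classes with scikit-learn's k-means; the synthetic tile and band count are illustrative stand-ins, not the authors' data or workflow.

```python
# Minimal sketch (not the authors' code): clustering a satellite image's pixel
# values into three classes with k-means, as described for extracting displaced
# geomorphic markers. The synthetic tile below stands in for real imagery.
import numpy as np
from sklearn.cluster import KMeans

def segment_image(image: np.ndarray, n_clusters: int = 3) -> np.ndarray:
    """Cluster pixels of an (H, W, bands) image and return an (H, W) label map."""
    h, w, bands = image.shape
    pixels = image.reshape(-1, bands).astype(float)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(pixels)
    return labels.reshape(h, w)

if __name__ == "__main__":
    # Synthetic stand-in for a high-resolution image tile (three bands).
    rng = np.random.default_rng(0)
    fake_tile = rng.random((64, 64, 3))
    label_map = segment_image(fake_tile, n_clusters=3)
    print(label_map.shape, np.unique(label_map))
```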
The k-means algorithm is one of the most commonly used clustering algorithms in the field of artificial intelligence. However, as the amount of data increases, traditional electronic computers face challenges such as low computational efficiency and high energy consumption when executing the k-means algorithm. The ternary optical computer (TOC), a novel optoelectronic hybrid computer, employs optical processing and electrical control to perform data computations. This type of computer features a large number of data bits, carry-free addition, and low power consumption, making it well suited to address the issues encountered by electronic computers in implementing the k-means algorithm. Leveraging the characteristics of TOC, such as its abundant data bits and reconfigurable processor, this paper designs a parallel k-means algorithm based on TOC. This paper also proposes an implementation method for a TOC-based subtractor, which eliminates the need to negate the subtrahend and thus makes the TOC more convenient for implementing the k-means algorithm. Finally, experimental validation was conducted on the TOC experimental platform. The results demonstrate that the implementation of the subtractor is correct and that the parallel k-means algorithm on TOC is feasible. A performance comparison shows that the time complexity of the multiplication process in calculating the distance between sample points and cluster centers can be optimized from $O(n^2)$ to $O(n)$.
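The sketch below shows, on a conventional electronic computer, the sample-to-center distance step whose subtractions and multiplications the paper maps onto the TOC; it does not emulate TOC data bits, the optical subtractor, or the reconfigurable processor.

```python
# Ordinary NumPy reference for the sample-to-center distance step of k-means.
# This is the operation the paper offloads to the ternary optical computer; it
# is shown here only to make the hot spot concrete, not to emulate TOC hardware.
import numpy as np

def assign_clusters(samples: np.ndarray, centers: np.ndarray) -> np.ndarray:
    """Return, for each sample, the index of its nearest cluster center."""
    # Differences: shape (n_samples, k, n_features); these subtractions and the
    # squaring below dominate the cost that the TOC design parallelizes.
    diffs = samples[:, None, :] - centers[None, :, :]
    sq_dist = np.einsum("ikf,ikf->ik", diffs, diffs)   # squared Euclidean distances
    return np.argmin(sq_dist, axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.random((100, 4))
    C = rng.random((3, 4))
    print(assign_clusters(X, C)[:10])
```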
Aiming at the traffic dynamic shortest path allocation problem with triangular fuzzy numbers as attribute values, a traffic dynamic shortest path allocation model based on triangular fuzzy number weights is proposed. A weighted distance optimization model involving the triangular fuzzy numbers, the ideal solution, and the negative ideal solution is established, and the weight of each attribute is obtained by solving this optimization model. The similarity difference between attribute information is determined longitudinally by using the idea of deviation maximization, and the evaluation uncertainty of each scheme under different attributes is described horizontally by an entropy value. Attribute weights based on reliability are obtained by synthesizing the similarity difference index and the uncertainty index. Based on the triangular fuzzy weights, a subset R of the vertex set V is selected as representative vertices, and the shortest path distances between all vertex pairs in R are calculated. At the same time, considering the spatial correlation of road traffic flow and the dynamic change of traffic flow in each section of the road network, time is discretized, abrupt changes in road traffic state are taken as node pairs, and the road traffic flow is dynamically loaded. A prediction model of road section travel time under occasional congestion is then established. Experiments show that when the error upper limit is small, the retrieval results of the approximation algorithm are relatively accurate. Finally, an example analysis of traffic dynamic path allocation demonstrates the effectiveness and feasibility of this method for traffic dynamic shortest path allocation.
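A minimal sketch of the attribute-weighting idea follows, assuming triangular fuzzy evaluations stored as (l, m, u) triples; the distance measure and the rule for combining the deviation-maximization and entropy indices are illustrative choices, not the paper's exact formulas.

```python
# Minimal sketch, not the paper's exact formulation: deriving attribute weights
# from a decision matrix of triangular fuzzy numbers (l, m, u) by combining a
# deviation-maximization index with an entropy index. The distance formula and
# the multiplicative combination rule are illustrative assumptions.
import numpy as np

def tfn_distance(a, b):
    """Distance between two triangular fuzzy numbers a = (l, m, u) and b = (l, m, u)."""
    return np.sqrt(((np.asarray(a) - np.asarray(b)) ** 2).mean())

def attribute_weights(matrix):
    """matrix: shape (n_schemes, n_attrs, 3) of triangular fuzzy evaluations."""
    n, m, _ = matrix.shape
    # Deviation maximization: total pairwise distance between schemes under each attribute.
    deviation = np.zeros(m)
    for j in range(m):
        for i in range(n):
            for k in range(n):
                deviation[j] += tfn_distance(matrix[i, j], matrix[k, j])
    dev_w = deviation / deviation.sum()
    # Entropy: uncertainty of the defuzzified (centroid) values under each attribute.
    crisp = matrix.mean(axis=2)                        # centroid of each TFN
    p = crisp / crisp.sum(axis=0, keepdims=True)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=0) / np.log(n)
    ent_w = (1 - entropy) / (1 - entropy).sum()
    # Combine the two indices (a normalized product is one common choice).
    w = dev_w * ent_w
    return w / w.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    base = rng.random((5, 4, 1))
    tfn = np.concatenate([base * 0.9, base, base * 1.1], axis=2)  # (l, m, u)
    print(attribute_weights(tfn))
```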
To address the solution of large-scale linear algebraic equations, this study adopts two block discrimination criteria, Euclidean clustering and cosine clustering, to partition the rows of the coefficient matrix into blocks with the k-means clustering algorithm. Building on the uniformly distributed block Kaczmarz algorithm, weight coefficients are integrated into the probability criterion to construct a new and efficient weighted probability, yielding a block Kaczmarz algorithm based on weighted probability. Experimental results showed that the method was three to five times faster than the greedy Kaczmarz method. In all convergence scenarios tested, the weighted probability block Kaczmarz method required less computation and fewer iterations than the uniformly distributed block Kaczmarz method, with acceleration ratios ranging from 2.21 to 8.44. This approach helps establish and analyze mathematical models of complex systems, reduces memory consumption and computation time when solving large linear systems, and provides practical guidance for fields such as civil and electronic engineering.
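The sketch below illustrates the general scheme under stated assumptions: rows are grouped by k-means on their normalized directions (the cosine criterion), and blocks are sampled with a residual-weighted probability. The paper's precise weight-probability criterion is not reproduced; the residual-norm weighting is an illustrative choice.

```python
# Sketch of a block Kaczmarz iteration for Ax = b in which the blocks come from
# k-means clustering of the normalized rows and blocks are sampled with a
# residual-weighted probability (an assumed stand-in for the paper's criterion).
import numpy as np
from sklearn.cluster import KMeans

def block_kaczmarz(A, b, n_blocks=6, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    # "Cosine" blocking: cluster unit-normalized rows so similar directions share a block.
    rows = A / np.linalg.norm(A, axis=1, keepdims=True)
    labels = KMeans(n_clusters=n_blocks, n_init=10, random_state=seed).fit_predict(rows)
    blocks = [np.where(labels == k)[0] for k in range(n_blocks)]
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # Weight each block by its current residual norm (illustrative probability criterion).
        res = np.array([np.linalg.norm(b[idx] - A[idx] @ x) for idx in blocks])
        p = res**2 + 1e-30
        p /= p.sum()
        idx = blocks[rng.choice(n_blocks, p=p)]
        # Block Kaczmarz projection via the pseudo-inverse of the selected block.
        x = x + np.linalg.pinv(A[idx]) @ (b[idx] - A[idx] @ x)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    A = rng.standard_normal((240, 80))
    x_true = rng.standard_normal(80)
    x_est = block_kaczmarz(A, A @ x_true)
    print(np.linalg.norm(x_est - x_true))
```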
Because of limitations in instrument resolution, soil sample analysis using energy-dispersive X-ray fluorescence (EDXRF) often produces overlapping spectral peaks for heavy metals with similar energies. This peak overlap poses a significant problem for the quantitative analysis of heavy metals in soil samples, so the overlapping peaks must be decomposed. In this paper, a new method for decomposing overlapping peaks is presented in the context of Gaussian Mixture Model (GMM) parameter estimation, combining the k-means algorithm with the African Vulture Optimization Algorithm (AVOA). The data acquired from the EDXRF instrument are preprocessed before being input to the k-means algorithm, whose output is used to initialize the GMM parameters. Subsequently, AVOA is employed to optimize the GMM parameters based on a fitness function derived from the data, and the optimized parameters yield the final decomposition of the overlapping peaks. This approach effectively addresses the challenge of peak overlap in the determination of heavy metals in soil by EDXRF. Soil samples were obtained from national standard samples, peach gardens, and vegetable-growing regions. The results showed that the maximum absolute errors for the peak channel and peak area were 1.46% and 6.37%, respectively. A comparison with other models indicated that the k-means-GMM-AVOA model exhibits the best performance in terms of speed, accuracy, and stability.
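A minimal sketch of the initialization stage is given below, with scikit-learn's EM standing in for the AVOA refinement; the synthetic spectrum, peak parameters, and data handling are illustrative assumptions, not the paper's pipeline.

```python
# Sketch of the initialization stage only: a spectrum histogram is expanded into
# pseudo-samples, k-means provides initial component means, and a Gaussian
# mixture is then fitted. The African Vulture Optimization step in the paper is
# not reproduced; scikit-learn's EM stands in for it here.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def decompose_overlapping_peaks(channels, counts, n_peaks=2, seed=0):
    # Expand the histogram so each channel is weighted by its counts.
    samples = np.repeat(channels, counts.astype(int)).reshape(-1, 1)
    km = KMeans(n_clusters=n_peaks, n_init=10, random_state=seed).fit(samples)
    gmm = GaussianMixture(n_components=n_peaks, means_init=km.cluster_centers_,
                          random_state=seed).fit(samples)
    return gmm.means_.ravel(), gmm.covariances_.ravel(), gmm.weights_

if __name__ == "__main__":
    # Two synthetic overlapping peaks on a channel axis.
    ch = np.arange(0, 200)
    spectrum = (800 * np.exp(-0.5 * ((ch - 90) / 8) ** 2)
                + 500 * np.exp(-0.5 * ((ch - 110) / 10) ** 2))
    means, variances, weights = decompose_overlapping_peaks(ch, spectrum, n_peaks=2)
    print(np.sort(means))
```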
In the fierce competition of the electricity market, retaining and developing customers is particularly important. To analyze the electricity consumption characteristics of customer groups, this paper uses a k-means algorithm and optimizes it. The number of clusters is determined by the Davies-Bouldin index (DBI), and an improved Harris Hawks optimization (IHHO) algorithm is designed to select the initial cluster centers. Based on data such as electricity purchases and average electricity prices, electricity customer groups are clustered using the IHHO-k-means algorithm. The IHHO-k-means algorithm achieved the best clustering results on the Iris, Wine, and Glass datasets compared with the traditional k-means and PSO-k-means algorithms. Taking Iris as an example, the optimal value of the IHHO-k-means algorithm was 96.538, with an accuracy of 0.932, precision and recall of 0.941 and 0.793, respectively, an F-measure of 0.861, and an area under the curve (AUC) of 0.851. In the customer dataset, the number of clusters determined by DBI was 4. The power customers were divided into four groups with different electricity consumption characteristics, and their consumption behaviors were analyzed. The results demonstrate the reliability of the IHHO-k-means algorithm for analyzing the electricity consumption characteristics of customer groups, and it can be applied in practice.
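The cluster-number selection step can be sketched as follows, using standard k-means in place of the IHHO-initialized variant; the synthetic customer features are illustrative assumptions.

```python
# Sketch of the cluster-number selection step: run k-means for several candidate
# k values and keep the one with the lowest Davies-Bouldin index. The IHHO-based
# center initialization used in the paper is not reproduced here.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

def choose_k_by_dbi(X, k_range=range(2, 9), seed=0):
    scores = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
        scores[k] = davies_bouldin_score(X, labels)
    best_k = min(scores, key=scores.get)   # lower DBI means better-separated clusters
    return best_k, scores

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    # Synthetic stand-in for customer features (e.g., purchase volume, average price).
    X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2)) for c in (0, 3, 6, 9)])
    best_k, scores = choose_k_by_dbi(X)
    print(best_k, {k: round(v, 3) for k, v in scores.items()})
```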
Correcting interferometric synthetic aperture radar (InSAR) interferograms using Global Navigation Satellite System (GNSS) data can effectively improve their accuracy. However, most existing correction methods use the difference between GNSS and InSAR data for surface fitting; these methods can effectively correct overall long-wavelength errors, but they are insufficient for multiple medium-wavelength errors in localized areas. We therefore propose a method for correcting InSAR interferograms using GNSS data and the k-means spatial clustering algorithm, which obtains high-accuracy correction information, improves both the overall and the localized error correction, and contributes to obtaining high-precision InSAR deformation time series. In an application to the Central Valley of Southern California (CVSC), the experimental results show that the proposed correction method can effectively compensate for the deficiency of surface fitting in capturing error details and suppress the effect of low-quality interferograms. At the nine GNSS validation sites not included in the modeling process, the errors on ascending track 137A and descending track 144D are mostly less than 15 mm, and the average root mean square error values are 11.8 mm and 8.0 mm, respectively. Overall, the correction method not only achieves effective interferogram error correction, but is also accurate, efficient, and easy to generalize, making it suitable for large-scale, high-precision deformation monitoring.
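A highly simplified sketch of the clustering idea follows: GNSS-InSAR differences at the GNSS sites are grouped spatially by k-means and each pixel is corrected with its cluster's mean difference. The paper's within-cluster correction model is more elaborate, and all variable names and data here are assumptions.

```python
# Simplified sketch: difference GNSS and InSAR at the GNSS sites, cluster the
# sites spatially with k-means, and correct each InSAR pixel with the mean
# difference of its nearest cluster. Illustrative only; not the paper's model.
import numpy as np
from sklearn.cluster import KMeans

def correct_interferogram(pixel_xy, insar_los, gnss_xy, gnss_los, insar_at_gnss, n_clusters=3):
    diff = insar_at_gnss - gnss_los                        # residual error at GNSS sites
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(gnss_xy)
    cluster_bias = np.array([diff[km.labels_ == k].mean() for k in range(n_clusters)])
    pixel_cluster = km.predict(pixel_xy)                   # assign each pixel to nearest cluster
    return insar_los - cluster_bias[pixel_cluster]

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    gnss_xy = rng.uniform(0, 100, size=(30, 2))
    gnss_los = rng.normal(size=30)
    insar_at_gnss = gnss_los + rng.normal(1.0, 0.1, size=30)   # biased InSAR values
    pixel_xy = rng.uniform(0, 100, size=(500, 2))
    insar_los = rng.normal(size=500) + 1.0
    corrected = correct_interferogram(pixel_xy, insar_los, gnss_xy, gnss_los, insar_at_gnss)
    print(corrected.mean())
```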
In order to improve the efficiency of personalised classification of brand promotion information and shorten the time of personalised classification, this paper proposes a personalised classification method of brand p...
In this research work, we present a mathematical analysis of a fractional sixth-order laser model of a resonant, homogeneously extended, optically pumped three-level system. We use the Caputo fractional-order derivative in the proposed model. Our analysis includes an investigation of various chaotic behaviors under the fractional-order derivative and a qualitative theory for the existence of a solution to the proposed model. For the required qualitative analysis, we use formal analysis tools. Further, numerical simulations are performed with a clustering method based on the k-means algorithm and an Adams-Bashforth scheme. With the help of this scheme, we present different chaotic behaviors corresponding to various values of the fractional order. Finally, we compare the CPU time of the proposed method with that of the RK4 method.
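A minimal sketch of an explicit fractional Adams-Bashforth (product-rectangle) step for a Caputo derivative is shown below; it is a generic solver demonstrated on a scalar test problem with a known solution, not the paper's sixth-order laser model or its clustering step.

```python
# Minimal sketch (not the paper's code) of an explicit fractional
# Adams-Bashforth / product-rectangle rule for a Caputo derivative of order
# alpha in (0, 1]:  D^alpha y(t) = f(t, y),  y(0) = y0.
import numpy as np
from math import gamma

def fractional_adams_bashforth(f, y0, alpha, t_end, n_steps):
    h = t_end / n_steps
    t = np.linspace(0.0, t_end, n_steps + 1)
    y = np.zeros((n_steps + 1,) + np.shape(y0))
    y[0] = y0
    fvals = [np.asarray(f(t[0], y[0]))]
    coef = h**alpha / gamma(alpha + 1)
    for n in range(n_steps):
        # Convolution weights ((n+1-j)^alpha - (n-j)^alpha) over the full history.
        j = np.arange(n + 1)
        w = (n + 1 - j)**alpha - (n - j)**alpha
        y[n + 1] = y[0] + coef * np.tensordot(w, np.array(fvals), axes=1)
        fvals.append(np.asarray(f(t[n + 1], y[n + 1])))
    return t, y

if __name__ == "__main__":
    # Test problem with known solution: D^alpha y = t  =>  y = t^(1+alpha) / Gamma(2+alpha).
    alpha = 0.8
    t, y = fractional_adams_bashforth(lambda t, y: t, 0.0, alpha, 1.0, 400)
    exact = t**(1 + alpha) / gamma(2 + alpha)
    print(np.abs(y - exact).max())
```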
Since many attribute values of high-dimensional sparse data are zero, we combine the approximation set of rough set theory with the semi-supervised k-means algorithm to propose a rough-set-based semi-supervised k-means (RSkmeans) algorithm. Firstly, the proportion of non-zero values is calculated from a few labelled data samples, and a small number of important attributes in each cluster are selected to calculate the cluster centres. Secondly, the approximation set is used to calculate the information gain of each attribute. Thirdly, the attributes are partitioned into the corresponding approximation sets by comparing their information gain with the upper approximation and boundary thresholds. New attributes are then added and the above process is repeated to update the cluster centres. Experimental results on text data show that the RSkmeans algorithm can find the important attributes, filter out invalid information, and significantly improve performance.
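The first stage of the idea can be sketched as follows, under the assumption that attribute importance is ranked by the proportion of non-zero values in the labelled samples; the rough-set approximation sets and information-gain thresholds used later in the algorithm are not reproduced.

```python
# Sketch of the first stage only: for each labelled class, rank attributes by
# their proportion of non-zero values, keep a small set of important ones, and
# run k-means on that reduced attribute set. The rough-set approximation and
# information-gain-based attribute expansion are not reproduced here.
import numpy as np
from sklearn.cluster import KMeans

def select_important_attributes(X_labelled, y_labelled, top_n=10):
    selected = set()
    for cls in np.unique(y_labelled):
        nz_ratio = (X_labelled[y_labelled == cls] != 0).mean(axis=0)
        selected.update(np.argsort(nz_ratio)[::-1][:top_n])
    return np.array(sorted(selected))

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    # Synthetic high-dimensional sparse data: 300 samples, 1000 attributes, ~2% non-zero.
    X = rng.random((300, 1000)) * (rng.random((300, 1000)) < 0.02)
    y = rng.integers(0, 3, size=30)                      # labels for a small subset
    attrs = select_important_attributes(X[:30], y, top_n=10)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X[:, attrs])
    print(len(attrs), np.bincount(labels))
```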