Classification of imbalanced data is an important challenge in current research. Sampling is an important way to address imbalanced data classification, but some traditional sampling algorithms are susceptible to outliers. Therefore, an iF-ADASYN sampling algorithm is proposed in this paper. First, based on the ADASYN algorithm, we introduce the Isolation Forest algorithm to overcome ADASYN's vulnerability to outliers. Then, a method for computing an anomaly index that accurately removes outliers from the minority class is presented. Experimental results on four UCI public imbalanced datasets show that the algorithm effectively improves the accuracy of the minority class and increases stability. On a real thrombus dataset, the AUC value of the iF-ADASYN algorithm is higher than that of the SMOTE and ADASYN algorithms, and the recognition rate of patients with thrombosis increases by 20%. The iF-ADASYN algorithm is more resistant to outliers than the original ADASYN algorithm, and it improves the accuracy of decision-boundary partitioning for the minority class.
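The two-stage idea this abstract describes — score minority points for outlierness, discard the most anomalous ones, then oversample ADASYN-style — can be sketched as below. This is a minimal illustration, not the authors' implementation: a k-NN distance score stands in for the Isolation-Forest anomaly index, and all names and parameters (`knn_anomaly_index`, `contamination`, ...) are illustrative.

```python
import numpy as np

def knn_anomaly_index(X, k=5):
    """Anomaly index per point: mean distance to its k nearest
    neighbours within the same (minority) class.  A simplified
    stand-in for an Isolation-Forest-based anomaly score."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    return np.sort(d, axis=1)[:, :k].mean(axis=1)

def if_adasyn_sketch(X_min, X_maj, k=5, contamination=0.1, rng=None):
    """Drop the most anomalous minority points, then generate
    synthetic minority samples ADASYN-style: more synthesis where
    majority neighbours dominate."""
    rng = np.random.default_rng(rng)
    # 1. outlier removal on the minority class
    scores = knn_anomaly_index(X_min, k)
    X_min = X_min[scores <= np.quantile(scores, 1.0 - contamination)]
    # 2. ADASYN-style density ratio r_i = (#majority neighbours) / k
    X_all = np.vstack([X_min, X_maj])
    y_all = np.r_[np.zeros(len(X_min)), np.ones(len(X_maj))]
    d = np.linalg.norm(X_min[:, None, :] - X_all[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, 1:k + 1]          # skip self
    r = y_all[nn].mean(axis=1)
    n_new = max(len(X_maj) - len(X_min), 0)
    g = np.round(r / max(r.sum(), 1e-12) * n_new).astype(int)
    # 3. interpolate between each point and a random minority neighbour
    d_min = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=2)
    np.fill_diagonal(d_min, np.inf)
    nn_min = np.argsort(d_min, axis=1)[:, :k]
    synth = []
    for i, gi in enumerate(g):
        for _ in range(gi):
            j = rng.choice(nn_min[i])
            synth.append(X_min[i] + rng.random() * (X_min[j] - X_min[i]))
    return np.vstack([X_min] + synth) if synth else X_min
```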
This paper proposes a novel sampling algorithm for digital signal processing (DSP)-controlled 2 kW power factor correction (PFC) converters, which greatly improves switching-noise immunity in average-current-controlled power supplies. Based on the newly developed DSP chip TMS320F240, a 2 kW PFC stage is implemented. The novel sampling algorithm shows clear advantages when the converter operates at frequencies above 30 kHz.
We consider the Hanurav-Vijayan (HV) sampling design, which is the default method programmed in the SURVEYSELECT procedure of the SAS software. We prove that it is equivalent to the Sunter procedure but is capable of handling any set of inclusion probabilities. We prove that the Horvitz-Thompson (HT) estimator is not generally consistent under this sampling design. We propose a conditional HT estimator and prove its consistency under a nonstandard assumption on the first-order inclusion probabilities. Since this assumption seems difficult to control in practice, we recommend against using the HV sampling design.
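For reference, the plain Horvitz-Thompson estimator discussed above weights each sampled value by the inverse of its first-order inclusion probability. The sketch below illustrates it under Poisson sampling — a simplification, since the HV design is a fixed-size unequal-probability design, and the paper's conditional HT variant is not reproduced here; names are illustrative.

```python
import numpy as np

def horvitz_thompson_total(y_sample, pi_sample):
    """Horvitz-Thompson estimator of a population total: each
    sampled value is weighted by the inverse of its first-order
    inclusion probability pi_i."""
    return float(np.sum(np.asarray(y_sample) / np.asarray(pi_sample)))

# Demonstration under Poisson sampling: each unit enters the sample
# independently with probability pi_i, and the HT estimates average
# out to the true population total.
rng = np.random.default_rng(42)
y = rng.uniform(10.0, 20.0, 100)     # population values
pi = np.full(100, 0.3)               # inclusion probabilities
totals = []
for _ in range(2000):
    s = rng.random(100) < pi         # one Poisson sample
    totals.append(horvitz_thompson_total(y[s], pi[s]))
```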
ISBN:
(Print) 9781479962815
In this paper, we propose a new sampling algorithm combined with the multilevel UV (MLUV) factorization method to calculate the scattering from Gaussian random rough surfaces with an exponential correlation function. The new sampling algorithm is based on the steepness of the patch pairs that support the basis functions. The numerical analyses in this paper show that the proposed algorithm significantly improves the accuracy of the UV approximation of the matrix Z(K).
ISBN:
(Print) 9798350373172; 9798350373189
Perceiving the environment is vital for autonomous vehicles, as it serves as the foundation for decision making and path planning. LiDAR is a widely employed sensor that produces a voluminous and sparsely populated point cloud. For voxel-based 3D object detection methods, the initial step is dividing the raw point cloud into voxels, a process known as voxelization. However, once the number of points within a voxel reaches a certain threshold, no further points are assigned to that voxel, which leads to a greater degree of information loss. Scholars have primarily focused on the stages following voxelization, such as feature extraction and utilization; we instead focus on the sampling issue during voxelization itself. In this paper, we propose a general and elegant Points-in-Voxel sampling algorithm module named PVSA. During voxelization, points continue to be assigned to their respective voxels even after the per-voxel maximum has been reached. For voxels in which the number of points exceeds the threshold, the farthest-distance sampling method is utilized, as it ensures a genuine and uniform distribution of the point cloud within the voxel. We evaluated the proposed module on the KITTI dataset. Experimental findings show that incorporating the PVSA module enhances the object detection capabilities of voxel-based models, particularly for small targets such as pedestrians. Incorporating the PVSA module significantly enhances PillarNet's capacity to recognize pedestrians, yielding a 46.2-percentage-point performance improvement at a distance of 20 meters and an average improvement of 1.43 percentage points.
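The per-voxel selection step described above — keeping a spatially uniform subset once a voxel overflows, via farthest-distance sampling (commonly known as farthest point sampling) — can be sketched as follows. This is a minimal NumPy illustration, not the PVSA implementation; the function name is an assumption.

```python
import numpy as np

def farthest_point_sampling(points, m, rng=None):
    """Greedy farthest-point sampling: keep m points that are
    maximally spread out, so a voxel's spatial extent is preserved
    instead of keeping whichever m points arrived first."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    if n <= m:                       # voxel below the cap: keep everything
        return points
    rng = np.random.default_rng(rng)
    chosen = [int(rng.integers(n))]  # random seed point
    # dist[i] = distance from point i to its nearest chosen point
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(m - 1):
        nxt = int(np.argmax(dist))   # point farthest from the chosen set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return points[chosen]
```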
ISBN:
(Print) 9781509020478
Extending microwave frequency measurement from very low to very high frequencies requires improvements in acquisition time. In this paper, a new sampling algorithm is presented whose main purpose is to reduce the acquisition time by using a limited number of samples. The proposed adaptive algorithm computes only a limited number of samples and then reconstructs the entire circuit response using an interpolation model. The method uses adaptive step-size control, with the initial step and the error threshold predefined. The algorithm evaluates the difference between two consecutive S-parameters. The adaptive step-size scheme assumes that when the distance between the current S-parameter value and the previous one falls below a threshold epsilon, the exploration step size may be increased, up to a limit that keeps the step size moderate; otherwise, major S-parameter variations might be overlooked. The biggest challenge is the correlation between the S-parameter domain, the frequency domain, and the scaling values. The algorithm automatically finds the number of points needed to accurately capture large S-parameter variations, and it computes at high speed.
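The adaptive step-size rule described above can be sketched as below. The growth/shrink factors and step bounds are assumptions, since the abstract does not give the exact update rule; all names are illustrative.

```python
import numpy as np

def adaptive_sweep(s_of_f, f_start, f_stop, step0, eps,
                   grow=2.0, shrink=0.5, step_max=None, step_min=None):
    """Adaptive frequency sweep: when two consecutive S-parameter
    samples differ by less than eps, the step grows (up to step_max);
    otherwise it shrinks, so samples concentrate where S varies fast."""
    step_max = step_max if step_max is not None else 10 * step0
    step_min = step_min if step_min is not None else step0 / 10
    freqs, values = [f_start], [s_of_f(f_start)]
    step, f = step0, f_start
    while f < f_stop:
        f = min(f + step, f_stop)
        s = s_of_f(f)
        freqs.append(f)
        values.append(s)
        if abs(s - values[-2]) < eps:
            step = min(step * grow, step_max)    # smooth region: stride out
        else:
            step = max(step * shrink, step_min)  # fast variation: slow down
    return np.array(freqs), np.array(values)
```

On a response with a single sharp resonance, the sweep spends most of its samples around the resonance and crosses the flat regions in long strides, which is exactly the acquisition-time saving the abstract targets.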
ISBN:
(Print) 9781386649015
This paper presents a new frequency sampling algorithm whose main purpose is to reduce the number of samples and, as a consequence, the acquisition time. The second problem discussed in this paper is accuracy. To preserve informational consistency, the samples are concentrated in the regions around spikes. To this end, the algorithm identifies the extreme points of the amplitude-frequency response of the tested device, and a spline interpolation is then applied.
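The extremum-focused sampling step can be sketched as follows — a simplified illustration in which extra sample points are placed linearly around each detected extremum (the paper then applies spline interpolation, for which e.g. `scipy.interpolate.CubicSpline` could be used); the names and parameters are assumptions.

```python
import numpy as np

def extremum_focused_samples(f_grid, amp, n_extra=8, halfwidth=2):
    """Identify local extrema of an amplitude response on a coarse
    grid, then add denser sample points around each extremum so the
    spikes are preserved when the curve is interpolated."""
    f_grid = np.asarray(f_grid, dtype=float)
    amp = np.asarray(amp, dtype=float)
    interior = np.arange(1, len(amp) - 1)
    is_max = (amp[interior] > amp[interior - 1]) & (amp[interior] > amp[interior + 1])
    is_min = (amp[interior] < amp[interior - 1]) & (amp[interior] < amp[interior + 1])
    extrema = interior[is_max | is_min]
    fs = list(f_grid)
    for i in extrema:
        lo = f_grid[max(i - halfwidth, 0)]
        hi = f_grid[min(i + halfwidth, len(f_grid) - 1)]
        fs.extend(np.linspace(lo, hi, n_extra))   # densify around the spike
    return np.unique(fs)           # sorted, duplicate-free sample points
```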
Flight test data is a critical element for characterizing aircraft performance and status, serving essential roles throughout stages such as design, manufacturing, flight testing, and operations. Generated by onboard ...
The aim of this study was to develop and apply an algorithm that generates landslide susceptibility maps automatically. The proposed algorithm, based on a two-level random sampling (2LRS) strategy and machine learning classification, was implemented in MATLAB. Performing automatic susceptibility mapping with machine learning classification requires an automatic and robust algorithm for training the constructed model. The proposed algorithm contains 20 steps, most of which offer novel solutions for sampling. With this algorithm, the user can also change the training/testing ratio for automatic landslide susceptibility mapping. In this study, 28 parameters were used as input data sets: the Digital Elevation Model (DEM), slope, aspect, plan curvature, profile curvature, the convergence index, the Topographic Wetness Index (TWI), the LS factor, the Normalized Difference Vegetation Index (NDVI), the Kaolinite Index, the Calcite Index, the OH Index, distance to fault lines, distance to channels, and 14 decorrelation-stretched Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) bands. A total of 12 susceptibility models with different numbers of samples were automatically generated and tested with the produced algorithm in the study area, which contains active deep-seated rotational landslides (the Alakir catchment area, Western Antalya, Turkey). In addition, variants of the random forest (RF), support vector machine (SVM), and decision tree (DT) algorithms were compared in the MATLAB Classification Learner Toolbox according to their accuracies; the Medium Gaussian SVM achieved the highest accuracy (84%). The 12 models with different numbers of samples were also tested for spatial performance and processing time: the obtained area under the curve (AUC) values ranged from 0.90 (in 360.009 s) to 0.84 (in 78.307 s).
ISBN:
(Print) 9781612846835; 9781612846828
The last several years have seen rapidly growing interest in Oceanic Sensor Networks (OSNs), which consist of a variable number of sensor nodes working underwater to monitor a particular oceanic environment collaboratively. Compared with traditional sensor networks, OSNs face many additional challenges from the acoustic communication environment, such as large propagation delay, low communication bandwidth, and high packet drop rate. Moreover, an OSN is an inherently 3-dimensional network, so traditional localization schemes for 2-dimensional terrestrial sensor networks must be transformed into a 3-dimensional localization problem. In this paper, we first propose a large-scale OSN framework for distributed localization, and then describe in detail a distributed range-based localization algorithm designed for 3-dimensional OSNs. Localization accuracy and localized ratio are the two metrics used to verify the proposed scheme.
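A common building block of range-based 3-D localization — though not necessarily the exact scheme of this paper — is estimating an unknown node's position from measured ranges to anchor nodes with known positions. A minimal linearized least-squares sketch (the function name is illustrative):

```python
import numpy as np

def trilaterate_3d(anchors, ranges):
    """Linearized least-squares position estimate from >= 4 anchors
    with known 3-D positions and measured ranges.  Subtracting the
    first anchor's sphere equation |p - a_i|^2 = r_i^2 from the
    others cancels the quadratic term and yields A p = b."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p
```

With noisy acoustic ranges, the same least-squares formulation simply averages out the measurement errors across anchors rather than solving the spheres exactly.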