Wireless Sensor Networks (WSNs) play a critical role in environmental monitoring and early forest fire detection. However, they are susceptible to sensor malfunctions and network intrusions, which can compromise data integrity and lead to false alarms or missed detections. This study presents a hybrid anomaly detection framework that integrates a transformer-based autoencoder, Isolation Forest, and XGBoost to effectively classify normal sensor behavior, malfunctions, and intrusions. The transformer autoencoder models spatiotemporal dependencies in sensor data, while adaptive thresholding dynamically adjusts sensitivity to anomalies. Isolation Forest provides unsupervised anomaly validation, and XGBoost further refines classification, enhancing detection precision. Experimental evaluation using real-world sensor data demonstrates that our model achieves 95% accuracy, with high recall for intrusion detection, minimizing false negatives. The proposed approach improves the reliability of WSN-based fire monitoring by reducing false alarms, adapting to dynamic environmental conditions, and distinguishing between hardware failures and security threats.
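The adaptive-thresholding step described above can be sketched as follows. This is a minimal illustration assuming a rolling mean-plus-k-standard-deviations rule over recent reconstruction errors; the window size, multiplier `k`, and warm-up length are hypothetical choices, not values from the paper:

```python
import numpy as np

def adaptive_threshold(errors, window=50, k=3.0, warmup=3):
    """Flag a reconstruction error as anomalous when it exceeds a
    rolling mean + k * std threshold computed over the previous
    `window` scores; the first `warmup` points are never flagged."""
    errors = np.asarray(errors, dtype=float)
    flags = np.zeros(len(errors), dtype=bool)
    for i in range(warmup, len(errors)):
        history = errors[max(0, i - window):i]
        thr = history.mean() + k * history.std()
        flags[i] = errors[i] > thr
    return flags

# Mostly small errors with one large spike: only the spike is flagged.
scores = [0.1, 0.12, 0.09, 0.11, 0.10, 2.5, 0.1]
print(adaptive_threshold(scores).tolist())
# [False, False, False, False, False, True, False]
```

Because the threshold tracks recent error statistics, sensitivity adapts as environmental conditions drift, which is the behavior the abstract attributes to the adaptive-thresholding component.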
With the continual advancement of communication technologies, mobile devices have become indispensable tools in our daily lives. While existing sensor-based continuous authentication systems provide some level of user privacy protection, they often neglect the temporal characteristics of multisensor data and the unique information of each sensor. To further protect the privacy of mobile devices, we present FuMeAuth, a sensor-based continuous Authentication system using a Fused Memory-Augmented transformer autoencoder. FuMeAuth leverages the built-in accelerometer, gyroscope, and magnetometer of smartphones to implicitly gather user behavior patterns. The Fused global-local Memory network (FuMe) effectively captures and adaptively combines the shared-private features of sensor data in FuMeAuth. During the registration phase, FuMeAuth collects and preprocesses the sensor data and feeds the processed data to FuMe, which then records fused shared and private representations across different sensors for legitimate users. In the authentication phase, the trained FuMe reconstructs the current user's data, computes the reconstruction error between the user's input data and the corresponding reconstructed data, and compares it against a predefined authentication threshold. We evaluate FuMeAuth on our dataset in terms of its overall effectiveness, the effect of the number of sensors, the efficiency of the fused memory module, and comparison with state-of-the-art approaches. The experimental results demonstrate that FuMeAuth outperforms these approaches, achieving an accuracy of 99.84% and an equal error rate of 0.14% with 69 unseen users.
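The authentication-phase decision reduces to comparing a reconstruction error against the predefined threshold. A minimal sketch, where the threshold value and the reconstructions are hypothetical placeholders (in the real system the reconstruction comes from the trained FuMe network):

```python
import numpy as np

def authenticate(x, x_hat, threshold):
    """Accept the current user if the mean squared reconstruction
    error of the behavior window is at or below the threshold."""
    err = np.mean((np.asarray(x) - np.asarray(x_hat)) ** 2)
    return bool(err <= threshold), float(err)

# Legitimate behavior reconstructs well; impostor data reconstructs poorly.
x = np.array([0.2, 0.5, 0.3])                         # sensor feature window
ok, err = authenticate(x, x + 0.01, threshold=0.05)   # near-perfect reconstruction
bad, err2 = authenticate(x, x + 0.5, threshold=0.05)  # large error -> reject
print(ok, bad)  # True False
```

The threshold trades off false accepts against false rejects, which is what the reported equal error rate of 0.14% summarizes.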
Multi-modal signals have become essential data for emotion recognition since they can represent emotions more comprehensively. However, in real-world environments, it is often impossible to acquire complete multi-modal signals, and the problem of missing modalities causes severe performance degradation in emotion recognition. Therefore, this paper presents the first attempt to use a transformer-based architecture to fill in modality-incomplete data from partially observed data for multi-modal emotion recognition (MER). Concretely, this paper proposes a novel unified model called the transformer autoencoder (TAE), comprising a modality-specific hybrid transformer encoder, an inter-modality transformer encoder, and a convolutional decoder. The modality-specific hybrid transformer encoder bridges a convolutional encoder and a transformer encoder, allowing the encoder to learn local and global context information within each particular modality. The inter-modality transformer encoder builds and aligns global cross-modal correlations and models long-range contextual information across different modalities. The convolutional decoder decodes the encoded features to produce more precise recognition. In addition, a regularization term is introduced into the convolutional decoder to force the decoder to fully leverage both complete and incomplete data for emotion recognition on missing data. Accuracies of 96.33%, 95.64%, and 92.69% are attained on the available data of the DEAP and SEED-IV datasets, and accuracies of 93.25%, 92.23%, and 81.76% are obtained on the missing data. In particular, the model achieves a 5.61% advantage with 70% missing data, demonstrating that it outperforms some state-of-the-art approaches in incomplete multi-modal learning.
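The regularization on the decoder can be read as a weighted sum of reconstruction losses on complete and modality-masked inputs. A minimal numeric sketch; the mean-squared-error form and the weighting `lam` are assumptions for illustration, not the paper's exact loss:

```python
import numpy as np

def regularized_recon_loss(x, x_hat_full, x_hat_masked, lam=0.5):
    """Reconstruction loss on the complete input plus a weighted term
    on the reconstruction produced from the modality-masked input
    (hypothetical MSE formulation; `lam` balances the two terms)."""
    l_full = np.mean((x - x_hat_full) ** 2)
    l_masked = np.mean((x - x_hat_masked) ** 2)
    return float(l_full + lam * l_masked)

x = np.zeros(4)  # toy ground-truth features
loss = regularized_recon_loss(x, np.full(4, 0.1), np.full(4, 0.2))
print(loss)  # approximately 0.03 = 0.01 + 0.5 * 0.04
```

Penalizing the masked-input reconstruction alongside the full one pushes the decoder to recover modalities it never observed, which is the stated purpose of the regularization term.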
The success of transformer-based models has encouraged many researchers to learn CAD models using sequence-based approaches. However, learning CAD models is still a challenge, because they can be represented as complex shapes with long construction sequences. Furthermore, the same CAD model can be expressed using different CAD construction sequences. We propose a novel contrastive learning-based approach, named ContrastCAD, that effectively captures semantic information within the construction sequences of the CAD model. ContrastCAD generates augmented views using dropout techniques without altering the shape of the CAD model. We also propose a new CAD data augmentation method, called a Random Replace and Extrude (RRE) method, to enhance the learning performance of the model when training an imbalanced training CAD dataset. Experimental results show that the proposed RRE augmentation method significantly enhances the learning performance of transformer-based autoencoders, even for complex CAD models having very long construction sequences. The proposed ContrastCAD model is shown to be robust to permutation changes of construction sequences and performs better representation learning by generating representation spaces where similar CAD models are more closely clustered. Our codes are available at https://***/cm8908/ContrastCAD.
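The dropout-based view generation can be sketched as below. This is an illustrative numpy version operating on a single embedding vector; the dropout rate and embedding size are arbitrary, and in ContrastCAD itself the two views come from stochastic dropout inside the transformer rather than post-hoc masking:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_view(embedding, p=0.1):
    """Augmented view: randomly zero dimensions and rescale the rest
    (inverted dropout), leaving the underlying CAD model unchanged."""
    mask = rng.random(embedding.shape) >= p
    return embedding * mask / (1.0 - p)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

z = rng.normal(size=64)                     # embedding of one CAD sequence
v1, v2 = dropout_view(z), dropout_view(z)   # two stochastic views
print(round(cosine(v1, v2), 2))             # high similarity: same underlying model
```

A contrastive objective then pulls such view pairs together while pushing apart embeddings of different CAD models, producing the tighter clustering of similar models the abstract reports.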