ISBN (Print): 9783030953881; 9783030953874
Anti-forensics (AF) technology has become a new area of cybercrime. The weaknesses of existing forensic technologies should be considered from the criminal's perspective so that existing AF technologies can be improved. There are two types of AF methods, data hiding and data destruction, and most AF tools rely primarily on data hiding. If investigators intercept the data during the AF process, the criminal may destroy the remaining data, leaving investigators with no usable information. To address this issue, this paper proposes an AF scheme with multi-device storage based on Reed-Solomon codes that combines data hiding and data destruction. The data is divided into multiple out-of-order data blocks and parity blocks, and these blocks are stored separately on different devices. This method reduces storage cost and protects data privacy. Even if part of the data is destroyed, the scheme still allows the data to be recovered. Security analysis showed that this AF method can prevent the acquisition of malicious, erroneous, or invalid files and ensures data security even if the data is stolen. Theoretical analysis indicated that file recovery is difficult for investigators but easy for the AF user. Experimental results demonstrated that the proposed method is effective and practically efficient.
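The split-shuffle-scatter idea lends itself to a short illustration. The sketch below is a simplified stand-in, assuming a single XOR parity block instead of true Reed-Solomon parity (real RS(n, k) coding tolerates n − k lost blocks, XOR only one) and hypothetical in-memory "devices"; it only shows the split, shuffle, scatter, and recover steps, not the paper's actual scheme.

```python
# Minimal sketch of the scatter-and-recover idea, using one XOR parity block
# as a simplified stand-in for Reed-Solomon parity.
import random

def split_into_blocks(data: bytes, k: int) -> list[bytes]:
    """Pad `data` and split it into k equally sized data blocks."""
    size = -(-len(data) // k)               # ceiling division
    data = data.ljust(k * size, b"\0")
    return [data[i * size:(i + 1) * size] for i in range(k)]

def xor_parity(blocks: list[bytes]) -> bytes:
    """Parity block: byte-wise XOR of all given blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

# "AFer" side: split, add parity, shuffle, scatter to (hypothetical) devices.
secret = b"data the AF user wants to hide across devices"
k = 4
blocks = split_into_blocks(secret, k)
blocks.append(xor_parity(blocks))           # k data blocks + 1 parity block
order = list(range(len(blocks)))
random.shuffle(order)                       # out-of-order storage
devices = {f"device_{d}": (idx, blocks[idx]) for d, idx in enumerate(order)}

# Recovery after one device (one block) has been destroyed.
destroyed = random.choice(list(devices))
surviving = {name: v for name, v in devices.items() if name != destroyed}
recovered = [None] * len(blocks)
for idx, blk in surviving.values():
    recovered[idx] = blk
missing = recovered.index(None)
recovered[missing] = xor_parity([b for b in recovered if b is not None])
print(b"".join(recovered[:k]).rstrip(b"\0") == secret)   # True
```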
ISBN (Print): 9781510642782; 9781510642775
With image generation and manipulation being among the most impressive advances of convolutional neural networks (CNNs), facial image synthesis methods, e.g., DeepFakes, pose serious challenges to social and personal security. Specifically, we find that (1) CNN-based synthesized facial image detection methods generally fail to identify synthesized images generated by other synthesis methods; (2) classical detection methods that exploit one-class support vector machines (SVMs) and traditional features of video clips fail when only one image is available. In view of these challenges, we propose and experimentally verify a method combining CNN features and one-class SVMs, which not only effectively detects synthesized facial images generated by different methods but also is robust to variations in scene content.
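A minimal sketch of the combination described above, assuming a pretrained ResNet-18 as the fixed feature extractor and scikit-learn's OneClassSVM as the one-class model; the backbone choice, preprocessing, and the placeholder variables real_face_images and test_images are illustrative assumptions, not the paper's actual pipeline.

```python
# One-class SVM over deep CNN features: train only on real faces so that
# images from unseen synthesis methods fall outside the learned boundary.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import OneClassSVM

# Pretrained CNN used purely as a fixed feature extractor
# (weights enum requires torchvision >= 0.13).
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(pil_images):
    batch = torch.stack([preprocess(img) for img in pil_images])
    return feature_extractor(batch).flatten(1).numpy()     # (N, 512)

# Fit the one-class SVM on features of *real* faces only.
real_feats = extract_features(real_face_images)   # list of PIL images (user-provided)
detector = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(real_feats)

# At test time, +1 = consistent with real faces, -1 = likely synthesized.
predictions = detector.predict(extract_features(test_images))
```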
The standard kernel method is computationally expensive because it needs to store and invert the Gram matrix. Furthermore, the classification accuracy of a single-kernel method is limited by its ability to extract features. The random Fourier feature method establishes a connection between kernel methods and deep learning that addresses these problems. In this paper, we propose a novel slim deep random Fourier feature network named Slim-RFFNet, which introduces convolution into kernel learning. We use a hierarchical strategy and skip connections to construct a deep network structure, and we lighten the model by quantization. Experiments conducted on the classification benchmarks MNIST and CIFAR-10 demonstrate that the proposed Slim-RFFNet significantly outperforms current state-of-the-art deep kernel learning methods. Our algorithm also achieves a good trade-off between accuracy and latency. The proposed network can be applied to resource-constrained embedded AI devices. Experimental results on an edge computing system show that our algorithm has a small memory footprint and fast inference speed on small edge devices, and thus meets the requirements of practical applications. (c) 2021 Elsevier B.V. All rights reserved.
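For context, here is a minimal sketch of the plain random Fourier feature map that such networks build on: an explicit feature map whose inner products approximate an RBF kernel, so kernel machines can be trained without storing the Gram matrix. The convolutional, hierarchical, and quantized parts of Slim-RFFNet are not shown; the bandwidth and feature dimension below are arbitrary choices.

```python
# Random Fourier features: z(x)^T z(y) approximates exp(-||x - y||^2 / (2 sigma^2)).
import numpy as np

def rff_map(X, D=512, sigma=1.0, seed=0):
    """Map X of shape (n, d) to (n, D) random Fourier features."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(0.0, 1.0 / sigma, size=(d, D))   # spectral samples of the RBF kernel
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Sanity check: the feature inner product is close to the exact kernel value.
rng = np.random.default_rng(1)
x, y = rng.normal(size=(2, 10))
zx, zy = rff_map(np.vstack([x, y]), D=4096)
exact = np.exp(-np.sum((x - y) ** 2) / 2.0)          # sigma = 1
print(exact, float(zx @ zy))                         # approximately equal
```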
ISBN (Digital): 9798331509712
ISBN (Print): 9798331509729
The rapid advancement of generative artificial intelligence (GAI) has led to the creation of transformative applications such as ChatGPT, which significantly boosts text processing efficiency and diversifies audio, image, and video content. Beyond digital content creation, GAI’s capability to analyze complex data distributions holds immense potential for next-generation networks and communications, especially given the swift rise of video conferencing applications (VCAs). This paper presents a dynamic, real-time method for detecting anomalous network links in video conferencing applications. The proposed tool, MonkeyGPT, generates tracing representations of network activity and trains a large language model from scratch to serve as a detection system based on network traffic data. Unlike traditional methods, MonkeyGPT provides an unrestricted search space and does not rely on predefined rules or patterns, enabling it to detect a wider range of anomalies. We demonstrate the effectiveness of MonkeyGPT as an anomaly detection tool in real-world VCAs. The results indicate that the model possesses strong detection capabilities, achieving an accuracy rate of over 97%. It is applicable to various platforms, including Zoom, Microsoft Teams, Tencent Meeting, and Feishu, showcasing its robust adaptability.
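A hedged sketch of the detection pipeline the abstract outlines: network activity is serialized into token sequences and a language model scores how surprising a new trace is. A toy bigram model stands in for MonkeyGPT's from-scratch LLM here, and the flow fields, bucket sizes, and the variables normal_traces, candidate_traces, and threshold are illustrative assumptions rather than the paper's actual representation.

```python
# Trace tokenization plus a language-model-style anomaly score
# (bigram model as a toy stand-in for a trained-from-scratch LLM).
from collections import Counter, defaultdict
import math

def tokenize_flow(flow):
    """Turn one flow record (a dict) into a short token sequence."""
    return [
        f"proto={flow['proto']}",
        f"dport={flow['dport']}",
        f"pktlen_bucket={flow['pkt_len'] // 200}",
        f"iat_bucket={min(int(flow['inter_arrival_ms'] // 10), 9)}",
    ]

class BigramScorer:
    """Average negative log-likelihood of token bigrams; high = surprising."""
    def __init__(self):
        self.bigram = defaultdict(Counter)

    def fit(self, traces):
        for trace in traces:
            tokens = ["<s>"] + [t for flow in trace for t in tokenize_flow(flow)]
            for prev, cur in zip(tokens, tokens[1:]):
                self.bigram[prev][cur] += 1

    def score(self, trace):
        tokens = ["<s>"] + [t for flow in trace for t in tokenize_flow(flow)]
        nll = 0.0
        for prev, cur in zip(tokens, tokens[1:]):
            counts = self.bigram[prev]
            p = (counts[cur] + 1) / (sum(counts.values()) + 1000)   # add-one smoothing
            nll -= math.log(p)
        return nll / max(len(tokens) - 1, 1)

# Usage: fit on traces from normal meetings, flag high-scoring links as anomalous.
scorer = BigramScorer()
scorer.fit(normal_traces)                       # list of flow-record lists (user-provided)
anomalous = [t for t in candidate_traces if scorer.score(t) > threshold]
```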
Face recognition is an important technology in the field of digital image processing and has a crucial impact on subsequent work. This article introduces an example of facial recognition based on combining Hadoop with facial feature extraction. The experimental results were validated in the MATLAB environment, and the system was trained using neural networks to improve performance and adaptability. The test results indicated that the processing speed was between 0.84 and 0.97; the highest resource utilization rate was 0.06 and the lowest was 0.02; the throughput of facial data ranged from 13598 to 15479, and the throughput of user data ranged from 5037 to 5879. This indicates that the system has fast recognition speed and high-precision image output, and exhibits good fault tolerance and robustness compared to other methods. Exploring facial image recognition and processing algorithms in the Hadoop environment can exploit Hadoop's distributed computing capabilities to accelerate the processing of large-scale facial image data, and multiple computing nodes can be used for parallel processing to improve algorithm performance.
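One plausible way to realize the distributed part is Hadoop Streaming with a Python mapper that turns each image into a compact feature vector (a reducer aggregating per-person features is omitted). The histogram feature and the tab-separated "person_id<TAB>image_path" input format are assumptions for illustration, not the article's actual extractor.

```python
# Hadoop Streaming-style mapper: one feature vector per input image path.
# Example launch (illustrative): hadoop jar hadoop-streaming.jar -mapper mapper.py ...
# PIL and NumPy must be available on the worker nodes.
import sys
import numpy as np
from PIL import Image

def extract_features(path, bins=32):
    """Grayscale intensity histogram as a stand-in for real face features."""
    img = np.asarray(Image.open(path).convert("L").resize((128, 128)), dtype=np.float32)
    hist, _ = np.histogram(img, bins=bins, range=(0, 255), density=True)
    return hist

def mapper(stream=sys.stdin):
    for line in stream:
        person_id, path = line.rstrip("\n").split("\t")
        feats = extract_features(path)
        # Emit key<TAB>value so Hadoop can group features by person.
        print(person_id + "\t" + ",".join(f"{v:.5f}" for v in feats))

if __name__ == "__main__":
    mapper()
```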
Hyperspectral anomaly detection (AD), as a frontier research topic in the field of remotely sensed data processing, aims to identify targets of interest in complex and vast images. Existing AD methods typically involve complex models and many parameters, making it difficult to meet the computational efficiency requirements of hyperspectral AD. To address this issue, this article presents an AD acceleration algorithm based on the multivariate Gaussian model as well as its field-programmable gate array (FPGA) implementation. By exploiting the parallel processing capabilities of FPGAs, we introduce an innovative spectral dimensionality reduction method in which the data processing flow can be accomplished in a distributed manner. Then, we employ an improved linear rotation strategy based on correlation coefficients to accelerate the convergence rate of the proposed AD algorithm. The Gaussianization rotation in the improved strategy is independent of eigenvalue decomposition, thereby substantially reducing the computational complexity of the rotation procedure. Furthermore, we apply a pipelined parallel mechanism to facilitate the FPGA implementation of the AD algorithm and to significantly enhance computational efficiency. Experimental results on an embedded FPGA platform demonstrate that the FPGA implementation of the proposed hyperspectral AD algorithm achieves a significant acceleration rate with guaranteed high detection accuracy.
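As a point of reference, the multivariate-Gaussian anomaly score that such designs accelerate is the classic RX-style Mahalanobis distance of each pixel spectrum from the global background statistics; a minimal NumPy sketch follows. The paper's dimensionality reduction, correlation-based rotation, and FPGA pipelining are not reproduced here.

```python
# RX-style anomaly scores under a multivariate Gaussian background model.
import numpy as np

def rx_scores(cube):
    """cube: hyperspectral image of shape (H, W, B). Returns (H, W) anomaly scores."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(np.float64)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(B)   # regularize for stability
    cov_inv = np.linalg.inv(cov)
    centered = X - mu
    # Mahalanobis distance squared for every pixel spectrum.
    scores = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)
    return scores.reshape(H, W)

# Pixels whose scores fall far in the tail of the background distribution are
# flagged as anomalies.
```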
RGBT tracking has attracted increasing attention since RGB and thermal infrared data have strong complementary advantages, which could enable trackers to work all day and in all weather conditions. Existing works usually focus on extracting modality-shared or modality-specific information, but the potential of these two cues is not well explored and exploited in RGBT tracking. In this paper, we propose a novel multi-adapter network to jointly perform modality-shared, modality-specific, and instance-aware target representation learning for RGBT tracking. To this end, we design three kinds of adapters within an end-to-end deep learning framework. Specifically, we use a modified VGG-M as the generality adapter to extract modality-shared target representations. To extract modality-specific features while reducing computational complexity, we design a modality adapter, which adds a small block to the generality adapter in each layer and each modality in a parallel manner. Such a design can learn multilevel modality-specific representations with a modest number of parameters, as the vast majority of parameters are shared with the generality adapter. We also design an instance adapter to capture the appearance properties and temporal variations of a specific target. Moreover, to enhance the shared and specific features, we employ a multiple-kernel maximum mean discrepancy loss to measure the distribution divergence of different modal features and integrate it into each layer for more robust representation learning. Extensive experiments on two RGBT tracking benchmark datasets demonstrate the outstanding performance of the proposed tracker against state-of-the-art methods.
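A short sketch of the multiple-kernel maximum mean discrepancy (MK-MMD) term used to align the two modalities' feature distributions, assuming Gaussian kernels at a few fixed bandwidths; the bandwidth choices and the way per-layer features are pooled are assumptions rather than the paper's exact settings.

```python
# MK-MMD between RGB and thermal feature batches (PyTorch).
import torch

def mk_mmd(x, y, bandwidths=(1.0, 2.0, 4.0, 8.0)):
    """x, y: (N, D) feature batches from the two modalities. Returns scalar MMD^2."""
    z = torch.cat([x, y], dim=0)
    d2 = torch.cdist(z, z).pow(2)                     # pairwise squared distances
    k = sum(torch.exp(-d2 / (2.0 * b ** 2)) for b in bandwidths) / len(bandwidths)
    n = x.size(0)
    k_xx, k_yy, k_xy = k[:n, :n], k[n:, n:], k[:n, n:]
    return k_xx.mean() + k_yy.mean() - 2.0 * k_xy.mean()

# During training this term would be added to the tracking loss at each layer so
# that modality-shared features from the RGB and thermal branches stay close.
rgb_feat = torch.randn(16, 512)
tir_feat = torch.randn(16, 512)
loss_align = mk_mmd(rgb_feat, tir_feat)
```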
ISBN (Digital): 9781665485579
ISBN (Print): 9781665485586
The medical routine of the future is strongly influenced by medical information technology. The quality and efficiency of medicine are held to higher standards due to image-based methods and the increase in computational power. Parallel and distributed computing are attractive alternatives for improving processing performance in many areas of study, and this paper investigates such solutions for increasing throughput on large datasets in medical imaging. In this research, different edge detection techniques are used for image processing, as it is one of the most common and challenging tasks in the medical domain. The results demonstrate the effectiveness of the two processing approaches, compared to a single CPU, for accelerating medical image processing. The parallel approach uses the CUDA processing platform with a hybrid streams-based programming model. The distributed system is a Linux-based cluster that uses message passing for communication and efficient load balancing between nodes for high performance. This paper highlights parallel computing as a very good solution for processing large image datasets in the medical domain. Good results were also obtained for the distributed system.
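As a self-contained, CPU-only stand-in for the paper's CUDA-stream and MPI-cluster setups, the sketch below splits a batch of images across a process pool and applies Sobel edge detection to each; the worker count and image sizes are arbitrary illustrative choices.

```python
# Data-parallel edge detection over an image batch using a process pool.
from multiprocessing import Pool
import numpy as np
from scipy import ndimage

def sobel_edges(image):
    """Gradient-magnitude edge map of a single 2-D grayscale image."""
    gx = ndimage.sobel(image, axis=0, mode="reflect")
    gy = ndimage.sobel(image, axis=1, mode="reflect")
    return np.hypot(gx, gy)

def detect_edges_parallel(images, workers=4):
    """Apply Sobel edge detection to a list of images in parallel."""
    with Pool(processes=workers) as pool:
        return pool.map(sobel_edges, images)

if __name__ == "__main__":
    # Synthetic stand-ins for medical slices; replace with real data loading.
    dataset = [np.random.rand(512, 512) for _ in range(32)]
    edge_maps = detect_edges_parallel(dataset)
    print(len(edge_maps), edge_maps[0].shape)
```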
ISBN (Print): 9781665440660
The Fast Fourier Transform (FFT) is a fundamental algorithm in signal processing; significant efforts have been made to improve its performance using software optimizations and specialized hardware accelerators. Computational imaging modalities, such as MRI, often rely on the Non-uniform Fast Fourier Transform (NuFFT), a variant of the FFT for processing data acquired from non-uniform sampling patterns. The most time-consuming step of the NuFFT algorithm is "gridding," wherein non-uniform samples are interpolated to allow a uniform FFT to be computed over the data. Each non-uniform sample affects a window of non-contiguous memory locations, resulting in poor cache and memory bandwidth utilization. As a result, gridding can account for more than 99.6% of the NuFFT computation time, while the FFT requires less than 0.4%. We present Slice-and-Dice, a novel approach to the NuFFT's gridding step that eliminates the presorting operations required by prior methods and maps more efficiently to hardware. Our GPU implementation achieves gridding speedups of over 250x and 16x vs. prior state-of-the-art CPU and GPU implementations, respectively. We achieve further speedup and energy efficiency gains by implementing Slice-and-Dice in hardware with JIGSAW, a streaming hardware accelerator for non-uniform data gridding. JIGSAW uses stall-free fixed-point pipelines to process M non-uniform samples in approximately M cycles, irrespective of sampling pattern, yielding speedups of over 1500x the CPU baseline and 36x the state-of-the-art GPU implementation, consuming approximately 200 mW of power and approximately 12 mm² of area in 16 nm technology. The Slice-and-Dice GPU and JIGSAW ASIC implementations achieve unprecedented end-to-end NuFFT speedups of 8x and 36x compared to the state-of-the-art GPU implementation, respectively.
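To make the gridding bottleneck concrete, here is a naive 1-D sketch of the step Slice-and-Dice accelerates: each non-uniform sample is spread onto a small window of uniform grid points, producing exactly the scattered, non-contiguous writes the paper targets. The truncated Gaussian kernel and the window width are illustrative choices, not the paper's interpolation kernel.

```python
# Naive 1-D gridding of non-uniform samples onto a uniform grid, then a plain FFT.
import numpy as np

def grid_1d(coords, values, n_grid, width=4, sigma=0.8):
    """coords in [0, n_grid): non-uniform sample positions; values: complex samples."""
    grid = np.zeros(n_grid, dtype=complex)
    half = width // 2
    for x, v in zip(coords, values):
        base = int(np.floor(x))
        for j in range(base - half, base + half + 1):
            w = np.exp(-((x - j) ** 2) / (2 * sigma ** 2))   # interpolation weight
            grid[j % n_grid] += w * v                         # scattered, non-contiguous writes
    return grid

rng = np.random.default_rng(0)
coords = rng.uniform(0, 256, size=2000)          # non-uniform sampling pattern
values = rng.normal(size=2000) + 1j * rng.normal(size=2000)
uniform_grid = grid_1d(coords, values, n_grid=256)
spectrum = np.fft.fft(uniform_grid)              # uniform FFT over the gridded data
```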
Context. The chromospheric H alpha spectral line is a strong line in the spectrum of the Sun and other stars. In the stellar regime, this spectral line is already used as a powerful tracer of stellar activity. For the Sun, other tracers, such as Ca II K, are typically used to monitor solar activity. Nonetheless, the Sun is observed constantly in H alpha with globally distributed ground-based full-disk imagers. Aims. The aim of this study is to introduce the imaging H alpha excess and deficit as tracers of solar activity and compare them to other established indicators. Furthermore, we investigate whether the active region coverage fraction or the changing H alpha excess in the active regions dominates temporal variability in solar H alpha observations. Methods. We used observations of full-disk H alpha filtergrams of the Chromospheric Telescope and morphological image processing techniques to extract the imaging H alpha excess and deficit, which were derived from the intensities above or below 10% of the median intensity in the filtergrams, respectively. These thresholds allowed us to filter for bright features (plage regions) and dark absorption features (filaments and sunspots). In addition, the thresholds were used to calculate the mean intensity I_mean^(E/D) of the H alpha excess and deficit regions. We describe the evolution of the H alpha excess and deficit during Solar Cycle 24 and compare it to the mean intensity and other well-established tracers: the relative sunspot number, the F10.7 cm radio flux, and the Mg II index. In particular, we tried to determine how constant the H alpha excess and the number density of H alpha excess regions are between solar maximum and minimum. The number of pixels above or below the intensity thresholds was used to calculate the area coverage fraction of H alpha excess and deficit regions on the Sun, which was compared to the imaging H alpha excess and deficit and the respective mean intensities averaged over the length of one Carrington rotation.
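A minimal sketch of how the imaging H alpha excess and deficit could be computed from a single filtergram under the stated ±10%-of-median thresholds; the exact excess/deficit definition used here (summed intensity relative to the median), the disk-mask handling, and the Carrington-rotation averaging are simplifying assumptions, not the study's exact procedure.

```python
# Threshold-based excess/deficit masks, mean intensities, and coverage fractions.
import numpy as np

def halpha_excess_deficit(filtergram, disk_mask):
    """filtergram: 2-D intensity image; disk_mask: boolean solar-disk mask."""
    median = np.median(filtergram[disk_mask])
    excess_mask = disk_mask & (filtergram > 1.10 * median)    # bright plage regions
    deficit_mask = disk_mask & (filtergram < 0.90 * median)   # filaments, sunspots
    return {
        "excess": (filtergram[excess_mask] - median).sum(),
        "deficit": (filtergram[deficit_mask] - median).sum(),
        "mean_I_excess": filtergram[excess_mask].mean() if excess_mask.any() else np.nan,
        "mean_I_deficit": filtergram[deficit_mask].mean() if deficit_mask.any() else np.nan,
        "coverage_excess": excess_mask.sum() / disk_mask.sum(),
        "coverage_deficit": deficit_mask.sum() / disk_mask.sum(),
    }
```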