ISBN (digital): 9781510644250
ISBN (print): 9781510644250
Medical applications are among the tasks of optical technology, and the processing of two-dimensional optical signals and images remains an urgent problem. One of the most dangerous eye diseases is diabetic macular retinopathy. The first stage of the laser coagulation procedure is segmentation of the fundus image, and the calculation of texture features for this problem is time-consuming. In this paper, we consider a high-performance algorithm for calculating texture features based on distributed computing to speed up the processing and analysis of medical images. Various configurations of the high-performance algorithm on a single node were investigated and compared with sequential and parallel algorithms. The high-performance algorithm achieves speedups of 40x or more under some parameter settings, and analysis and segmentation of standard images are completed in less than one minute. Its use for the analysis and segmentation of fundus images also avoids the need for a sequential skip-step algorithm, which shortens execution time through interpolation but at the cost of accuracy.
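A minimal sketch of the distribution idea follows: texture statistics are computed per image window and the windows are farmed out to worker processes. The window size, the feature set (mean, variance, entropy), and the use of Python multiprocessing are illustrative assumptions, not the authors' algorithm.

import numpy as np
from multiprocessing import Pool

WIN = 32  # assumed window size

def window_features(args):
    # Return (row, col, mean, variance, entropy) for one image window.
    r, c, win = args
    hist, _ = np.histogram(win, bins=32, range=(0.0, 1.0))
    p = hist / (hist.sum() + 1e-12)
    entropy = -np.sum(p * np.log2(p + 1e-12))
    return r, c, win.mean(), win.var(), entropy

def texture_map(image, workers=8):
    # Split the image into non-overlapping windows and process them in parallel.
    h, w = image.shape
    tasks = [(r, c, image[r:r + WIN, c:c + WIN])
             for r in range(0, h - WIN + 1, WIN)
             for c in range(0, w - WIN + 1, WIN)]
    with Pool(workers) as pool:
        return pool.map(window_features, tasks)

if __name__ == "__main__":
    img = np.random.rand(512, 512)   # stand-in for a normalized fundus image
    feats = texture_map(img)
    print(len(feats), "windows processed")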
ISBN (digital): 9798350368741
ISBN (print): 9798350368758
Deep-unfolded networks (DUNs) have set new performance benchmarks in fields such as compressed sensing, image restoration, and wireless communications. DUNs are built from conventional iterative algorithms, where each iteration is transformed into a layer/block of a network with learnable parameters. Despite their huge success, the reasons behind their superior performance over their iterative counterparts are not fully understood. This paper focuses on enhancing the explainability of DUNs by investigating potential reasons for that superiority. We concentrate on the Learnt Iterative Shrinkage-Thresholding Algorithm (LISTA), a foundational contribution that achieves sparse recovery with significantly fewer layers than the iterations required by its iterative counterpart, ISTA. Our findings reveal that the learnt matrices in LISTA always have Gaussian-distributed entries, regardless of whether the sensing matrix is random Gaussian, Bernoulli, exponential, or uniform. The findings also show that the singular values of the learnt matrices exceed unity, despite which the reconstruction scheme remains stable. We conjecture that the activation function may play a role in ensuring stability. We also present an unbiasing technique that substantially improves sparse recovery performance by re-estimating the amplitudes based on the converged support.
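For reference, a minimal ISTA iteration is sketched below; LISTA unfolds this recursion into a fixed number of layers and makes the two matrices and the threshold learnable per layer. The step size, threshold, and problem sizes are illustrative assumptions.

import numpy as np

def soft_threshold(v, theta):
    # Elementwise soft-thresholding (proximal operator of the l1 norm).
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def ista(A, y, lam=0.1, n_iter=200):
    # Solve min_x 0.5*||y - A x||^2 + lam*||x||_1 with ISTA.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    W = A.T / L                            # corresponds to LISTA's learnable input matrix
    S = np.eye(A.shape[1]) - A.T @ A / L   # corresponds to LISTA's learnable state matrix
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(W @ y + S @ x, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
y = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.linalg.norm(ista(A, y) - x_true))   # recovery error of the sparse vector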
Image encryption is a reliable method for securely transmitting images over a network, and the time required to encrypt and decrypt an image is critical in online applications. Although cellular automata cryptography is an appropriate technique for parallelizing and accelerating cryptographic methods, its full capacity cannot be demonstrated on multi-core platforms alone. Cellular automata cryptography therefore needs to be parallelized on Graphics Processing Units (GPUs) to significantly decrease the encryption/decryption time. In this paper, we propose a new parallel algorithm for two-dimensional cellular automata cryptography implemented on a GPU. The proposed algorithm uses multiple threads at once to accelerate the bit-level permutation and substitution operations, taking into account the capacity of cellular automata for parallel processing. According to the experimental findings, the proposed algorithm runs faster on a GPU than on a multi-core platform while maintaining the same level of security as the serial algorithm.
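The abstract does not specify the automaton rule or key schedule; the sketch below assumes an XOR-of-neighbours diffusion step followed by an XOR with a key plane, vectorized with NumPy. On a GPU each pixel update would map to one thread, and CuPy's NumPy-compatible API could run the same array code on the device.

import numpy as np

def ca_step(state, key_plane):
    # One CA generation: XOR each cell with its 4-neighbourhood (diffusion),
    # then with a key-derived plane (substitution). Assumed illustrative rule.
    up    = np.roll(state, -1, axis=0)
    down  = np.roll(state,  1, axis=0)
    left  = np.roll(state, -1, axis=1)
    right = np.roll(state,  1, axis=1)
    diffused = state ^ up ^ down ^ left ^ right
    return diffused ^ key_plane

def encrypt(image, key_plane, rounds=8):
    state = image.copy()
    for _ in range(rounds):
        state = ca_step(state, key_plane)
    return state

img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)   # stand-in image
key = np.random.randint(0, 256, (256, 256), dtype=np.uint8)   # stand-in key plane
cipher = encrypt(img, key)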
The coexistence of technologies such as big data applications, cloud computing, and the numerous images on the Web has created the need for new image processing algorithms that exploit the processed image for diverse appli...
ISBN (digital): 9798331518981
ISBN (print): 9798331518998
The exponential growth in data volume and the complexity of Machine Learning (ML) algorithms have led to increasing computational limitations that affect artificial intelligence (AI). This research seeks to establish whether cloud-based distributed computing can effectively address these limitations and improve the performance of ML. Our approach is a new architecture that deploys the computing load over a cloud framework in order to distribute and process in parallel the most computationally intensive parts of learning algorithms. To further enhance the framework's data-processing efficiency, a combination of adaptive resource allocation techniques, data partitioning strategies, and a new synchronization protocol has been proposed and implemented. The framework was assessed with deep neural networks, random forests, and gradient-boosting machines on different datasets and complex tasks. The results show that the proposed methods yield up to an 87% improvement in training speed and a 5-12% increase in model accuracy compared with traditional single-node implementations. Moreover, the approach scaled without degrading performance, achieving near-linear speedup across up to 1000 distributed nodes. These results suggest that cloud-based AI can significantly reduce the time required to train numerous models on larger datasets. Distributed computing in the cloud therefore offers a way to meet the computational demands of today's AI applications while opening the door to new advancements in ML across different domains.
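A hedged sketch of the underlying data-parallel pattern is given below: the training data are partitioned across nodes, each node computes a local gradient, and a synchronization step averages them. The linear model, plain SGD, and synchronous averaging are illustrative assumptions; the paper's adaptive resource allocation and synchronization protocol are not detailed in the abstract.

import numpy as np

def local_gradient(w, X, y):
    # Least-squares gradient computed on one node's data shard.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def distributed_sgd(X, y, n_nodes=4, lr=0.1, steps=100):
    # Data partitioning: one shard per (simulated) node.
    shards = list(zip(np.array_split(X, n_nodes), np.array_split(y, n_nodes)))
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grads = [local_gradient(w, Xs, ys) for Xs, ys in shards]  # parallel in a real deployment
        w -= lr * np.mean(grads, axis=0)                          # synchronization: average, then update
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 10))
w_true = rng.standard_normal(10)
y = X @ w_true + 0.01 * rng.standard_normal(1000)
print(np.linalg.norm(distributed_sgd(X, y) - w_true))   # error of the recovered weights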
ISBN (print): 9781450397612
Generative Adversarial Networks (GANs) are used for data augmentation, which facilitates the development of more accurate detection models for unusual or unbalanced datasets, and computer-assisted diagnostic methods can be made more reliable by using synthetic pictures generated by GANs. GANs are challenging to train because unpredictable training dynamics, such as mode collapse and vanishing gradients, may occur throughout the learning process. For accurate and faster results, the GAN needs to be trained in a parallel and distributed manner. We enhance the speed and precision of the Deep Convolutional Generative Adversarial Network (DCGAN) architecture by exploiting its parallelism and executing it on High-Performance Computing platforms. We analyze the execution of a DCGAN on Graphics Processing Unit (GPU) and Tensor Processing Unit (TPU) platforms, examining the execution pattern of each layer and identifying the bottleneck of the GAN structure on each platform. The Central Processing Unit (CPU) is capable of processing neural network models, but it requires a great deal of time to do so. GPUs, in contrast, are about a hundred times faster than CPUs for neural networks, but they are prohibitively expensive compared with CPUs. Thanks to its systolic array structure, the TPU performs well on neural networks with large batch sizes, but for GANs the data movement between the CPU and TPU is substantial, so it does not perform well.
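As a point of reference, the sketch below shows a minimal DCGAN training step in PyTorch together with the standard single-node multi-GPU wrapper. The image size (1x32x32), channel widths, and hyperparameters are assumptions, not the configuration analyzed in the paper.

import torch
import torch.nn as nn

nz = 100  # latent dimension (assumed)

netG = nn.Sequential(                                  # z -> 1x32x32 image
    nn.ConvTranspose2d(nz, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
    nn.ConvTranspose2d(64, 1, 4, 2, 1), nn.Tanh())

netD = nn.Sequential(                                  # 1x32x32 image -> real/fake score
    nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
    nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),
    nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True),
    nn.Conv2d(256, 1, 4, 1, 0), nn.Sigmoid(), nn.Flatten())

criterion = nn.BCELoss()
optG = torch.optim.Adam(netG.parameters(), lr=2e-4, betas=(0.5, 0.999))
optD = torch.optim.Adam(netD.parameters(), lr=2e-4, betas=(0.5, 0.999))

def train_step(real):
    b = real.size(0)
    # 1) Discriminator step on real and generated batches.
    optD.zero_grad()
    fake = netG(torch.randn(b, nz, 1, 1))
    loss_d = criterion(netD(real), torch.ones(b, 1)) + \
             criterion(netD(fake.detach()), torch.zeros(b, 1))
    loss_d.backward(); optD.step()
    # 2) Generator step: try to fool the discriminator.
    optG.zero_grad()
    loss_g = criterion(netD(fake), torch.ones(b, 1))
    loss_g.backward(); optG.step()
    return loss_d.item(), loss_g.item()

print(train_step(torch.randn(16, 1, 32, 32)))          # random stand-in batch

# Multi-GPU data parallelism on one node (one replica per visible GPU):
# netG = nn.DataParallel(netG).cuda(); netD = nn.DataParallel(netD).cuda()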
The role-oriented learning approach could improve the performance of multi-agent reinforcement learning by decomposing complex multi-agent tasks into different roles. However, due to the dynamic environment and intera...
Since X-ray radiation is detrimental to patients, low-dose computed tomography (CT) has been developed in the medical field. Nevertheless, it may degrade the quality of CT images and affect the clinical diagnosis. ...
ISBN (print): 9781665462198
Real-time video surveillance through CCTV camera systems has become essential for ensuring public safety, which is a priority today. Although CCTV cameras contribute greatly to security, these systems require constant human interaction and monitoring. To address this issue, intelligent surveillance systems can be built using deep learning video classification techniques that automate surveillance and detect violence as it happens. In this research, we explore such techniques. Traditional image classification techniques fall short when applied to videos because they classify each frame separately, which causes the predictions to flicker. Therefore, many researchers are developing video classification techniques that consider spatio-temporal features. However, deploying deep learning models that rely on skeleton points obtained through pose estimation or on optical flow obtained through depth sensors is not always practical in an IoT environment: although these techniques achieve higher accuracy, they are computationally heavier. Keeping these constraints in mind, we experimented with various video classification and action recognition techniques such as ConvLSTM, LRCN (with both custom CNN layers and VGG-16 as the feature extractor), CNN-Transformer, and C3D. We achieved test accuracies of 80% with ConvLSTM, 83.33% with CNN-BiLSTM, 70% with VGG16-BiLSTM, 76.76% with CNN-Transformer, and 80% with C3D.
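The sketch below illustrates the LRCN pattern used in the experiments: a small CNN extracts per-frame features, an LSTM models the temporal sequence, and a linear head classifies the clip. Layer sizes, frame count, and resolution are assumptions, not the paper's configuration.

import torch
import torch.nn as nn

class LRCN(nn.Module):
    def __init__(self, n_classes=2, feat_dim=128, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                      # per-frame feature extractor
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, feat_dim), nn.ReLU())
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)       # violence vs. non-violence

    def forward(self, clip):                           # clip: (batch, time, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.cnn(clip.reshape(b * t, *clip.shape[2:])).reshape(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])                   # classify from the last time step

clip = torch.randn(4, 16, 3, 64, 64)                   # 4 clips of 16 frames, 64x64 RGB
print(LRCN()(clip).shape)                              # -> torch.Size([4, 2])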
ISBN (digital): 9798350368741
ISBN (print): 9798350368758
Sea surface temperature (SST) prediction is crucial for understanding global climate and marine ecosystems, and its anomalies can lead to extreme weather events. SST exhibits complex non-stationarity over natural spatio-temporal processes. However, most existing deep learning methods for SST prediction only extract non-stationary features through the simple state transitions of classic CNNs or RNNs, which are too simplistic to capture higher-order non-stationary trends in complex SST sequences. Therefore, we propose the DSNet_SST network, which aims to enhance the extraction of non-stationary information from the spatio-temporal evolution of SST. It incorporates two parallel modules: one captures high-order temporal non-stationarity using stacked Memory In Memory (MIM) blocks, and the other extracts spatial-correlation non-stationarity through multiscale difference operations. A third module adaptively integrates these features to improve the accuracy and stability of SST prediction. Experimental results on OISST data, obtained from both remote sensing satellites and in situ platforms, demonstrate the advantages of the proposed DSNet_SST over baseline methods in terms of SST prediction accuracy.
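A simplified sketch of the two-branch idea is given below; the real model stacks Memory In Memory (MIM) blocks, which are reduced here to a single 3D convolution, and all sizes are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchSST(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.temporal = nn.Conv3d(1, channels, kernel_size=3, padding=1)   # stand-in for the MIM stack
        self.spatial = nn.Conv2d(2, channels, kernel_size=3, padding=1)    # acts on difference maps
        self.gate = nn.Conv2d(2 * channels, channels, kernel_size=1)       # adaptive fusion weights
        self.head = nn.Conv2d(channels, 1, kernel_size=3, padding=1)       # next SST field

    def forward(self, seq):                        # seq: (batch, time, H, W) of SST fields
        b, t, h, w = seq.shape
        temp = self.temporal(seq.unsqueeze(1)).mean(dim=2)                 # temporal branch: (b, c, H, W)
        last = seq[:, -1:]
        coarse = F.interpolate(F.avg_pool2d(last, 2), size=(h, w))         # multiscale difference input
        diffs = torch.cat([last - seq[:, -2:-1], last - coarse], dim=1)
        spat = self.spatial(diffs)                                          # spatial branch: (b, c, H, W)
        g = torch.sigmoid(self.gate(torch.cat([temp, spat], dim=1)))
        return self.head(g * temp + (1 - g) * spat)                         # adaptively fused prediction

seq = torch.randn(2, 8, 32, 32)        # 2 sequences of 8 SST fields on a 32x32 grid
print(TwoBranchSST()(seq).shape)       # -> torch.Size([2, 1, 32, 32])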