The implementation of Gated Recurrent Unit (GRU) networks to generate background music (BGM) combines deep learning with music composed for visual content such as commercials and educational material. This BGM is needed to enhance the intended message conveyed to the audience. This work aimed to provide a GRU network model, based on the RNN, to generate music across multiple genre labels, using the open-source GTZAN dataset to evaluate the new BGM. Our GRU networks mitigate the vanishing-gradient problem by utilizing both a reset gate and an update gate in the network. As a result, we obtained new BGM that synchronized with human mood and produced a greater variety of sounds.
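The reset and update gates mentioned above can be sketched in a few lines of NumPy. This is a minimal single-step GRU cell for illustration only (random toy weights, not the paper's trained model): the update gate z interpolates between the previous hidden state and a candidate state, which is how the gradient path stays open over long sequences.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: the update gate z and reset gate r control how
    much of the previous hidden state is kept or overwritten."""
    z = sigmoid(Wz @ x + Uz @ h_prev)               # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))   # candidate state
    return (1 - z) * h_prev + z * h_tilde           # interpolated new state

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
params = [rng.standard_normal((n_hid, d)) * 0.1
          for d in (n_in, n_hid, n_in, n_hid, n_in, n_hid)]
h = np.zeros(n_hid)
for t in range(5):                  # run over a short random input sequence
    h = gru_step(rng.standard_normal(n_in), h, *params)
print(h.shape)  # → (8,)
```

Because the new state is a convex combination of `h_prev` and `h_tilde`, a gate value of z ≈ 0 copies the old state through almost unchanged, which is the mechanism that counters vanishing gradients.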
This research utilizes statistical path modeling to examine the relationship between the number of Venezuelans migrating to Colombia due to economic collapse and the media's coverage of specific topics. We focus o...
We developed a terahertz time-domain spectroscopy system to generate high-resolution 2-dimensional images of paraffin-embedded murine pancreatic ductal adenocarcinoma tissues using the refractive index and absorption ...
ISBN (digital): 9798350370249
ISBN (print): 9798350370270
Supervised deep learning (SDL) has shown remarkable success in various financial applications, such as stock prediction and fraud detection. However, SDL’s reliance on class labels renders it unsuitable for portfolio management (PM) tasks, where such labels are often unavailable. To address this limitation, we propose a novel two-level architecture based on deep reinforcement learning (DRL) for PM, which does not require class labels. Our approach comprises several local agents that provide trading decisions and uncertainty assessments for individual stocks, and a global agent that makes portfolio management decisions based on the outputs of the local agents. Additionally, we incorporate the concept of explainable AI (XAI) into our framework using the SHAP (Shapley additive explanations) method, enhancing the transparency and interpretability of the global agent’s decisions. Our experimental results demonstrate that the proposed architecture consistently yields profitable outcomes in the market.
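The two-level architecture described above can be sketched as follows. This is a hypothetical illustration, not the paper's DRL implementation: each local agent emits a trading signal plus an uncertainty estimate for one stock, and a simple global rule discounts each signal by its uncertainty before forming long-only portfolio weights (the field names and the weighting rule are assumptions for the sketch).

```python
from dataclasses import dataclass

@dataclass
class LocalSignal:
    ticker: str
    action: float       # hypothetical trading signal in [-1, 1]
    uncertainty: float  # hypothetical uncertainty estimate in [0, 1]

def global_allocate(signals):
    """Toy global agent: weight each stock by its local signal,
    discounted by the local agent's uncertainty, then normalize
    the long-only weights to sum to 1."""
    raw = {s.ticker: max(s.action * (1.0 - s.uncertainty), 0.0)
           for s in signals}
    total = sum(raw.values())
    if total == 0:
        return {t: 1.0 / len(raw) for t in raw}  # fall back to equal weights
    return {t: w / total for t, w in raw.items()}

signals = [LocalSignal("AAA", 0.8, 0.2),
           LocalSignal("BBB", -0.3, 0.1),   # short signal → excluded long-only
           LocalSignal("CCC", 0.5, 0.5)]
weights = global_allocate(signals)
print(weights)
```

In the actual architecture the global decision is learned by a DRL agent rather than a fixed rule, and SHAP values would then be computed over its inputs (the local actions and uncertainties) to explain each allocation.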
ISBN (digital): 9798350380163
ISBN (print): 9798350380170
Data coding and decoding are essential steps in communicating and interpreting information. These encodings are basic units of data representation, which can be combined and are generally organized by the layout of a data visualization technique, a common scenario for digital data visualizations. Technological advances in manipulating materials have allowed us to represent data physically, creating an opportunity to use senses other than vision and hearing to encode and decode data, such as touch, smell, and taste. In everyday life, touch is already widely used to decode information with few values using vibrations, such as turning devices on/off, communicating alerts on devices, and providing feedback during interaction in video games, among others. Still, it is generally not used to decode a larger set of values. Therefore, this article aims to investigate the encoding and decoding of data through tactile vibrations, identifying the minimum and maximum limits of tactile perception without causing discomfort, and the minimum intervals that differentiate two consecutive vibration values. This preliminary information allowed us to understand how to map data using vibrotactile scales. The data are mapped according to their value, changing the vibration frequency of the motors. Two stages of testing were carried out. The first aims to find the limits of tactile perception and create the vibrotactile scale, and the second aims to perform data analysis tasks, such as comparison, ordering, and identification. The results demonstrate that users considered the tasks of finding minimum and maximum vibration values easy, while clustering and sorting tasks were rated as the most complex.
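The value-to-frequency mapping described above can be sketched as a quantized linear scale. The perception limits and step size below are illustrative placeholders, not the study's measured thresholds: values are mapped linearly between the lower and upper perception limits, then snapped to a fixed step so that any two consecutive levels remain distinguishable.

```python
def vibrotactile_scale(values, f_min=50.0, f_max=250.0, step=25.0):
    """Map data values onto vibration frequencies (Hz) between the
    perception limits f_min/f_max, quantized to multiples of `step`
    so consecutive levels stay distinguishable. The limits and step
    here are assumed, illustrative numbers."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0                      # avoid division by zero
    freqs = []
    for v in values:
        f = f_min + (v - lo) / span * (f_max - f_min)   # linear mapping
        freqs.append(round((f - f_min) / step) * step + f_min)  # quantize
    return freqs

print(vibrotactile_scale([0, 5, 10]))  # → [50.0, 150.0, 250.0]
```

The quantization step plays the role of the "minimum interval that differentiates two consecutive vibration values" identified in the first test stage.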
In a cyber-physical system such as an autonomous vehicle (AV), machine learning (ML) models can be used to navigate and identify objects that may interfere with the vehicle’s operation. However, ML models are unlikel...
ISBN (digital): 9783903176515
ISBN (print): 9781665489928
Cloud computing and video streaming services have been in constant expansion in recent years. Along with this, the demand for computing resources has also increased significantly. In this context, monitoring the use of these resources is crucial to maintain a satisfactory level of Quality of Service and, consequently, Quality of Experience, especially in video transmission services. This work discusses a new method of monitoring resource and Quality of Service metrics on content servers, involving CPU utilization and server throughput, obtained in a distributed way. For that, a distributed collector system based on a modified version of the ring election algorithm is developed to retrieve the Quality of Service metrics from each server. Evaluation results show no performance gains for end users, such as content loading faster; there are, however, improvements in overall system scalability: the greater the number of servers being monitored, the better the approach performs compared to the traditional request-and-response method of monitoring resources.
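The classic ring election underlying the collector can be sketched in a few lines. This is the textbook algorithm, not the paper's modified variant: an election message circulates once around the ring collecting server ids, and the highest id becomes the coordinator that aggregates the CPU and throughput metrics from the others.

```python
def ring_election(server_ids, initiator):
    """Textbook ring election: the election message visits every
    node once, starting at the initiator, and the highest id seen
    becomes the coordinator."""
    n = len(server_ids)
    start = server_ids.index(initiator)
    collected = []
    for i in range(n):                        # one full pass around the ring
        collected.append(server_ids[(start + i) % n])
    return max(collected)                     # coordinator id

servers = [3, 7, 1, 9, 4]                     # ids in ring order
print(ring_election(servers, initiator=1))   # → 9
```

In a metrics-collection setting, the elected coordinator is the natural place to gather and expose the per-server QoS measurements, avoiding a central monitor that polls every server directly.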
Motivation: Accurately predicting drug-target protein interactions (DTI) is a cornerstone of drug discovery, enabling the identification of potential therapeutic compounds. Sequence-based prediction models, despite th...
In recent years, deep learning methods have been introduced for segmentation and classification of leaf lesions caused by pests and diseases. Among the commonly used approaches, convolutional neural networks have provided results with high accuracy. The purpose of this work is to present an effective and practical system capable of segmenting and classifying different types of leaf lesions and estimating the severity of stress caused by biotic agents in coffee leaves using convolutional neural networks. The proposed approach consists of two stages: a semantic segmentation stage with severity calculation and a symptom lesion classification stage. Each stage was tested separately, highlighting the positive and negative points of each one. We obtained very good results for the severity estimation, suggesting that the model can estimate severity values very close to the real ones. For the biotic stress classification, the accuracy rates were greater than 97%. Due to the promising results obtained, an app for the Android platform was developed and implemented, consisting of semantic segmentation and severity calculation, as well as symptom classification, to assist both specialists and farmers in identifying and quantifying biotic stresses using images of coffee leaves acquired by smartphone.
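The severity calculation that follows the segmentation stage can be sketched as a pixel-count ratio over the predicted mask. The label values below are assumptions for illustration (any per-pixel class map works the same way); the idea is simply the fraction of leaf pixels that the segmentation network classified as lesion.

```python
import numpy as np

def severity(mask, leaf_label=1, lesion_label=2):
    """Estimate biotic stress severity from a semantic-segmentation
    mask as the fraction of leaf pixels classified as lesion.
    Label values are illustrative assumptions."""
    lesion = np.sum(mask == lesion_label)
    leaf = np.sum(mask == leaf_label) + lesion   # lesion pixels are leaf too
    return lesion / leaf if leaf else 0.0

# Toy 3x4 mask: 0 = background, 1 = healthy leaf, 2 = lesion
mask = np.array([[0, 1, 1, 2],
                 [0, 1, 2, 2],
                 [0, 1, 1, 1]])
print(round(severity(mask), 3))  # → 0.333
```

Computing severity this way means the estimate is only as good as the segmentation itself, which is why the two stages were evaluated separately in the work above.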
Although Twitter is a popular platform for social interaction analysis and text data mining, it faces challenges with geolocation automation. To address this problem, the researchers propose the utilization of a Support Vector Machine (SVM) model to develop an automated Twitter crawling system. The system aims to collect data related to weather in Indonesia by employing Twin, a Python-based Twitter scraping software. To overcome null geolocations, the study incorporates aliases created based on the common practice of Indonesian users mentioning the country's location in tweets. The results demonstrate that the SVM model, combined with automated smart crawling, achieves an 85% accuracy rate.
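The alias-based handling of null geolocations described above can be sketched as a simple text lookup. The alias table below is a hypothetical stand-in (the study built its aliases from how Indonesian users actually mention locations in tweets): when a tweet's geolocation field is null, the text is scanned for a known alias and that location is used instead.

```python
# Hypothetical alias table mapping informal location mentions in
# tweet text to canonical Indonesian location names.
ALIASES = {"jakarta": "Jakarta", "jkt": "Jakarta",
           "bandung": "Bandung", "sby": "Surabaya"}

def fill_geolocation(tweet_text, geo):
    """If the tweet's geolocation is null, scan the text for a known
    alias and use it as the inferred location; otherwise keep the
    original geolocation."""
    if geo:
        return geo
    for token in tweet_text.lower().split():
        if token.strip(",.!?") in ALIASES:
            return ALIASES[token.strip(",.!?")]
    return None  # no alias found; geolocation stays null

print(fill_geolocation("Hujan deras di jkt hari ini", None))  # → Jakarta
```

In the full system this back-filled location becomes a feature for the SVM classifier rather than the final answer, so alias misses degrade gracefully instead of discarding the tweet.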