ISBN (Digital): 9798331507671
ISBN (Print): 9798331507688
Ring resonators are pivotal components in photonics, enabling light manipulation for applications in telecommunications, optical sensing, and quantum computing. Integrating machine learning with photonic design opens new avenues for enhancing the performance and efficiency of these devices. In this study, we leverage the Lumerical software to create a comprehensive database of ring resonator designs and their performance metrics. Utilizing the k-Nearest Neighbors (KNN) regression algorithm, we develop a predictive model to optimize resonator parameters for specific applications. This approach allows for rapid exploration of the design space, uncovering innovative resonator configurations and improving sensitivity and efficiency. The proposed method demonstrates the potential of machine learning to accelerate photonic device development, offering a systematic, data-driven strategy for advancing ring resonator technology. The model obtained an accuracy of 82.8%.
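A minimal sketch of the kind of KNN-regression surrogate described above, assuming a tabular database where each row holds ring-resonator geometry and one simulated performance metric such as Q factor; the file name, column names, and hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical KNN-regression surrogate for ring-resonator design.
# CSV layout and column names are assumptions for the sketch.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Load a Lumerical-generated design database (assumed layout).
df = pd.read_csv("ring_resonator_sweeps.csv")
X = df[["radius_um", "gap_nm", "width_nm"]]  # geometric design parameters
y = df["q_factor"]                           # simulated performance metric

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Scale features so the distance-based KNN treats each parameter comparably.
model = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5, weights="distance"))
model.fit(X_train, y_train)

print(f"R^2 on held-out designs: {model.score(X_test, y_test):.3f}")
```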
ISBN (Digital): 9798331527495
ISBN (Print): 9798331527501
Type II diabetes mellitus is regarded as one of the growing health concerns globally, which clearly raises the need for accurate forecasting of diabetes. The risk of Type II diabetes can be predicted using machine learning, making predictions considerably more accurate than traditional methods. This paper gives a broad overview of the available literature on the ML algorithms used in the management of Type II diabetes, including supervised algorithms such as logistic regression, alphabet regression, random forest, and support vector regression, along with other methods such as ensemble learning, deep learning, and hybrid approaches. It analyzes the main aspects of model performance, such as parameter selection, strategies for coping with class imbalance, and interpretability and generalizability across different populations; it also considers the possibility of using real-time data collected with wearable devices and applying tissue and other biomarkers for better prediction. Finally, the key obstacles and future directions toward developing explainable and clinically relevant ML algorithms and models are introduced, to help researchers and practitioners deliver effective, personalized, and scalable interventions.
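As a hedged illustration of one technique of the kind this review surveys, the sketch below trains a class-weighted random forest on a tabular diabetes dataset; the file path, column names, and class weighting are placeholders, and weighting is just one of several imbalance-handling strategies such reviews cover.

```python
# Hypothetical class-weighted random forest for Type II diabetes risk.
# Dataset path and columns are assumptions for the sketch, not from the paper.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("diabetes.csv")  # assumed: feature columns + binary 'Outcome'
X, y = df.drop(columns=["Outcome"]), df["Outcome"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.2, random_state=42
)

# class_weight="balanced" reweights the minority (diabetic) class,
# one common answer to the class-imbalance issue discussed above.
clf = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=42)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
```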
ISBN (Print): 9783030953881; 9783030953874
With the popularization of mobile wireless networks and Internet of Things (IoT) technologies, energy-hungry and delay-intensive applications continue to surge. Due to their limited computing power and battery capacity, mobile terminals rarely satisfy the increasing demands of application services. Mobile Edge Computing (MEC) deploys communication and computing resources near the network edge, close to the user side, which effectively reduces devices' energy consumption and enhances system performance. However, MEC requires infrastructure on which edge services can be deployed and is limited by the geographical environment. UAV-assisted MEC offers better flexibility and Line-of-Sight (LoS) communication, which expands the service scope while improving the versatility of MEC. Meanwhile, the dynamic task arrival rate, channel conditions, and environmental factors pose challenges for task offloading and resource allocation strategies. In this paper, we jointly optimize UAV deployment, frequency scaling, and task scheduling to minimize energy consumption for devices while ensuring long-term system stability. Due to the dynamics and randomness of the task arrival rate and wireless channel, the original problem is formulated as a stochastic optimization problem. The Drone Placement and Online Task oFFloading (DPOTFF) algorithm is designed to decouple the original problem into several sub-problems and solve them with bounded time complexity. It is also proved theoretically that DPOTFF obtains close-to-optimal energy consumption while ensuring system stability. The effectiveness and reliability of the algorithm are further verified by simulation and comparative experiments.
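Long-term energy minimization under a queue-stability constraint is the classic setting for Lyapunov drift-plus-penalty optimization, so a per-slot decision of that general shape is sketched below. This is a hedged illustration only: the candidate action set, energy and service models, and the trade-off parameter V are invented, and this is not the DPOTFF formulation itself.

```python
# Hypothetical drift-plus-penalty sketch for per-slot offloading decisions.
# All models and constants are illustrative assumptions, not the paper's DPOTFF.
import itertools

V = 50.0  # energy/stability trade-off knob: larger V favors energy savings

def drift_plus_penalty(queue, action, arrivals):
    """Score one candidate action for the current time slot."""
    served = action["local_bits"] + action["offload_bits"]
    next_queue = max(queue - served, 0.0) + arrivals
    drift = 0.5 * (next_queue**2 - queue**2)  # quadratic Lyapunov drift
    return drift + V * action["energy_j"], next_queue

def choose_action(queue, arrivals, candidates):
    """Greedy per-slot minimizer over a finite candidate action set."""
    return min(candidates, key=lambda a: drift_plus_penalty(queue, a, arrivals)[0])

# Toy candidate set: (CPU frequency level, offload share) combinations.
candidates = [
    {"local_bits": f * 1e3, "offload_bits": o * 1e3, "energy_j": 0.3 * f**2 + 0.1 * o}
    for f, o in itertools.product([0.5, 1.0, 2.0], [0.0, 1.0, 2.0])
]

queue = 0.0
for slot, arrivals in enumerate([1500.0, 3000.0, 500.0, 2500.0]):
    best = choose_action(queue, arrivals, candidates)
    _, queue = drift_plus_penalty(queue, best, arrivals)
    print(f"slot {slot}: offload={best['offload_bits']:.0f} bits, queue={queue:.0f}")
```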
In recent years, Internet of Things (IoT) devices have begun to take an important place in our daily lives, allowing the collection of massive data in various domains. IoT combined with other technologies like Cloud, Fog...
ISBN (Print): 9781665409841
Internet of Things (IoT) applications have emerged as a critical component in improving people's quality of life. The increasing volume of data generated by IoT devices, on the other hand, places a strain on the resources of traditional cloud data centres, leaving them unable to meet the demands of IoT applications. Due to resource constraints and the massive connectivity of IoT devices, efficient task (incoming request) allocation is critical for service providers. We create a novel processing-power-based task scheduling technique that employs a broker manager to distribute tasks and the processing strategy across multiple devices. This is important in cloud technology because it enables organisations to scale their processes effectively. We use multithreading to parallelize processes and evaluate the performance of the proposed model against existing task allocation algorithms: (i) round-robin, (ii) weighted round-robin, (iii) random allocation, (iv) genetic algorithm, and (v) dynamic algorithm. Our results show that when there are few tasks and one cluster has more processing capacity than the others, the weighted round-robin algorithm works well; however, weight-based allocation shows no improvement in task completion time as task size increases. Using our proposed model, we show that the dynamic algorithm outperforms all other task scheduling algorithms by 1.37 times across all task sizes.
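For intuition, here is a hedged sketch of weighted round-robin dispatch, one of the baselines compared above; the cluster names and weights are invented for the example, and the paper's broker manager and dynamic algorithm are not reproduced here.

```python
# Hypothetical weighted round-robin task dispatcher (baseline (ii) above).
# Cluster names and processing-power weights are illustrative assumptions.
from itertools import cycle

def weighted_round_robin(clusters):
    """Yield cluster names in proportion to their processing-power weights."""
    # Expand each cluster into 'weight' slots, then cycle through them.
    schedule = [name for name, weight in clusters for _ in range(weight)]
    return cycle(schedule)

clusters = [("cluster-a", 3), ("cluster-b", 1), ("cluster-c", 2)]
dispatcher = weighted_round_robin(clusters)

for task_id in range(8):
    print(f"task {task_id} -> {next(dispatcher)}")
```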
ISBN (Print): 9781510661561; 9781510661578
The detection and recognition of targets within imagery and video analysis is vital for military and commercial applications. The development of infrared sensor devices for tactical aviation systems imagery has increased the performance of target detection. Due to advancements in infrared sensor capabilities, their use for field operations such as visual operations (visops) or reconnaissance missions that take place in a variety of operational environments has become paramount. Many of the techniques in use stretch back to 1970 but were limited by computational power. The AI industry has recently been able to bridge the gap between traditional signal processing tools and machine learning. Current state-of-the-art target detection and recognition algorithms are too bloated to be applied to on-ground or aerial mission reconnaissance. Therefore, this paper proposes the Edge IR Vision Transformer (EIR-ViT), a novel algorithm for automatic target detection utilizing infrared images that is lightweight and operates on the edge for easier deployability.
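As a rough sketch of what a lightweight, edge-oriented vision transformer can look like, the PyTorch model below uses a small patch embedding and a shallow encoder over single-channel IR frames; all dimensions are invented for illustration and this is not the EIR-ViT architecture described in the paper.

```python
# Hypothetical lightweight ViT for single-channel IR imagery (illustrative only;
# not the EIR-ViT architecture, whose details are given in the paper).
import torch
import torch.nn as nn

class TinyIRViT(nn.Module):
    def __init__(self, img=64, patch=8, dim=64, depth=2, heads=4, classes=2):
        super().__init__()
        n_patches = (img // patch) ** 2
        # Patch embedding via strided conv: (B,1,H,W) -> (B, n_patches, dim).
        self.embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 2, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, classes)  # target / no-target logits

    def forward(self, x):
        x = self.embed(x).flatten(2).transpose(1, 2) + self.pos
        x = self.encoder(x)
        return self.head(x.mean(dim=1))  # mean-pool patches, then classify

model = TinyIRViT()
logits = model(torch.randn(1, 1, 64, 64))  # one fake IR frame
print(logits.shape)  # torch.Size([1, 2])
```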
ISBN (Print): 9781665494250
In the last few years, Android mobile devices have spread widely, and nowadays a huge part of the traffic traversing the Internet is related to them. In parallel, the number of possible threats and attacks has also increased, emphasizing the need for accurate automatic malware detection systems. In this paper, we design and evaluate a system to detect whether a traffic object (biflow) is benign or malicious, possibly understanding its specific nature in the latter case. The proposal leverages machine learning in a hierarchical fashion, in order to capitalize on the structure of the traffic data and reap both design and performance benefits. The comparative evaluation, performed on the public CICAndMal2017 dataset, assesses the performance of several machine-learning algorithms and shows that the hierarchical approach leads to improved performance w.r.t. the flat approach (up to +0.18 F1-score, depending on the granularity of the analysis and the machine learning algorithm considered). In addition, we evaluate the impact of a reject-option mechanism, showing the trade-off between classification accuracy and the ratio of classified biflows.
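A hedged sketch of the hierarchical idea: a first-level model separates benign from malicious biflows, a second-level model labels the malware family, and predictions whose top-class probability falls below a threshold are rejected. The features, models, family names, and threshold are illustrative assumptions, not the paper's exact pipeline.

```python
# Hypothetical two-level hierarchical traffic classifier with a reject option.
# Models, features, and the threshold are illustrative, not the paper's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

REJECT_THRESHOLD = 0.7  # below this top-class probability, abstain

def fit_hierarchy(X, y_binary, y_family):
    """Level 1: benign (0) vs malicious (1). Level 2: family, malicious only."""
    l1 = RandomForestClassifier(random_state=0).fit(X, y_binary)
    mal = y_binary == 1
    l2 = RandomForestClassifier(random_state=0).fit(X[mal], y_family[mal])
    return l1, l2

def predict(l1, l2, x):
    x = x.reshape(1, -1)
    p1 = l1.predict_proba(x)[0]
    if p1.max() < REJECT_THRESHOLD:
        return "rejected"
    if p1.argmax() == 0:
        return "benign"
    p2 = l2.predict_proba(x)[0]
    return l2.classes_[p2.argmax()] if p2.max() >= REJECT_THRESHOLD else "rejected"

# Toy data: 200 biflows, 10 features, binary label plus family label.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y_bin = rng.integers(0, 2, 200)
y_fam = rng.choice(["adware", "ransomware", "scareware"], 200)
l1, l2 = fit_hierarchy(X, y_bin, y_fam)
print(predict(l1, l2, X[0]))
```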
Research focusing on the trade-off between diagnostic accuracy for Alzheimer's disease (AD) and time management for diagnosis is very limited. This study proposes a novel two-stage feature selection framework integra...
ISBN (Digital): 9781665471770
ISBN (Print): 9781665471770
Deep neural networks (DNNs) are widely used in various computer vision tasks as they can achieve very high accuracy. However, the large number of parameters employed in DNNs can result in long inference times for vision tasks, making it even more challenging to deploy them on compute- and memory-constrained mobile/edge devices. To speed up DNN inference, some existing works employ compression (model pruning or quantization) or enhanced hardware; however, most prior works focus on improving the model structure or implementing custom accelerators. In contrast, in this paper we target video data processed by edge devices and study the similarity between frames. Based on that, we propose two runtime approaches to boost the performance of the inference process while maintaining high accuracy. Specifically, exploiting the similarities between successive video frames, we propose a frame-level compute reuse algorithm based on the motion vectors of each frame. With frame-level reuse, we are able to skip 53% of frames during inference with negligible overhead and less than a 1% mAP (accuracy) drop for the object detection task. Additionally, we implement a partial inference scheme to enable region/tile-level reuse. Our experiments on a representative mobile device (Pixel 3 phone) show that the proposed partial inference scheme achieves a 2x speedup over the baseline approach that performs full inference on every frame. We integrate these two data reuse algorithms to accelerate neural network inference and improve its energy efficiency. More specifically, for each frame in the video, we can dynamically select between (i) performing a full inference, (ii) performing a partial inference, or (iii) skipping the inference altogether. Our experimental evaluations using six different videos reveal that the proposed schemes are up to 80% (56% on average) more energy efficient and up to 2.2x more performance efficient than the conventional approach.
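A hedged sketch of the frame-level reuse idea: if the aggregate motion between the current frame and the last fully processed frame is small, reuse the cached detections instead of running the network. The motion metric, threshold, detector stub, and toy video generator are placeholders, not the paper's algorithm.

```python
# Hypothetical frame-level compute reuse keyed on motion-vector magnitude.
# Threshold, motion metric, and detector stub are illustrative assumptions.
import numpy as np

MOTION_THRESHOLD = 2.0  # mean |MV| (pixels) below which we reuse results

def mean_motion(mvs):
    """Average magnitude of a frame's motion vectors, shape (N, 2)."""
    return float(np.linalg.norm(mvs, axis=1).mean())

def run_detector(frame):
    """Stub standing in for the real object-detection DNN."""
    return [("car", 0.91, (10, 20, 50, 60))]

def video_stream(n=9):
    """Toy stand-in for a decoded video: yields (frame, motion_vectors)."""
    rng = np.random.default_rng(0)
    for i in range(n):
        frame = rng.integers(0, 255, (64, 64), dtype=np.uint8)
        mvs = rng.normal(scale=4.0 if i % 3 == 0 else 1.0, size=(16, 2))
        yield frame, mvs

cached = None
for i, (frame, mvs) in enumerate(video_stream()):
    if cached is not None and mean_motion(mvs) < MOTION_THRESHOLD:
        detections, ran = cached, False          # reuse: skip inference
    else:
        detections, ran = run_detector(frame), True
        cached = detections
    print(f"frame {i}: inference={'yes' if ran else 'no (reused)'}")
```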
ISBN (Print): 9783030953881; 9783030953874
Edge computing has emerged as a promising line of research for processing large-scale data and providing low-latency services. Unfortunately, deploying deep neural networks (DNNs) on resource-limited edge devices incurs unacceptable latency, hindering artificial intelligence from empowering edge devices. Prior solutions attempted to address this issue by offloading the workload to the remote cloud. However, the cloud-assisted approach overlooks the fact that devices in the edge environment tend to exist as clusters. In this paper, we propose EdgeSP, a scalable multi-device parallel DNN inference framework that maximizes the resource utilization of heterogeneous edge device clusters. We design a parallelization strategy based on multiple fused-layer blocks to reduce inter-device communication during parallel inference. Further, we add early-exit branches to DNNs, empowering devices to trade off latency and accuracy for a variety of sophisticated tasks. Experimental results show that EdgeSP accelerates inference latency by 2.3x to 3.7x for DNN inference tasks of various scales and outperforms the existing naive parallel inference method. Additionally, EdgeSP can provide high-accuracy inference services under various latency requirements.
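To make the early-exit idea concrete, here is a hedged PyTorch sketch of a backbone with one intermediate exit branch that returns early when its softmax confidence clears a threshold; the layer sizes and threshold are invented and do not reflect EdgeSP's actual design.

```python
# Hypothetical early-exit DNN: return from the side branch when confident.
# Layer sizes and the threshold are illustrative, not EdgeSP's actual design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitNet(nn.Module):
    def __init__(self, classes=10, threshold=0.9):
        super().__init__()
        self.threshold = threshold
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.exit1 = nn.Linear(16, classes)  # cheap intermediate classifier
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.exit2 = nn.Linear(32, classes)  # final classifier

    def forward(self, x):  # assumes batch size 1, as in edge inference
        x = self.stage1(x)
        logits1 = self.exit1(x.mean(dim=(2, 3)))  # global-average-pooled features
        if F.softmax(logits1, dim=1).max() >= self.threshold:
            return logits1, "early"               # confident: skip the rest
        x = self.stage2(x)
        return self.exit2(x.mean(dim=(2, 3))), "full"

net = EarlyExitNet().eval()
with torch.no_grad():
    logits, path = net(torch.randn(1, 3, 32, 32))
print(path, logits.shape)
```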