ISBN (Print): 9781665454698
The fifth generation (5G) of mobile networks and beyond is expected to host a variety of services for industry verticals with a diverse range of requirements. Network slicing (NS) is considered the fundamental enabling technology for addressing legacy networks' shortcomings by tailoring logical virtual networks, called network slices, over the same infrastructure. Adopting the concepts of virtualization and open interfaces, virtual radio access networks (vRAN) and open RAN (ORAN) are two of the most promising architectures proposed for slicing radio access networks. To deploy these architectures efficiently (i.e., with increased flexibility and scalability and decreased CAPEX and OPEX), a proper network planning approach is essential. This paper introduces a novel approach to planning and designing the ORAN architecture that simultaneously takes into account QoS, CAPEX, OPEX, and the transport network. ORAN slice planning and design is formulated as a multi-objective optimization problem with binary variables and solved by simulated annealing. The paper provides a comprehensive discussion of the results. The proposed approach can be used to design 5G ORAN network slices, but it can also serve as a transitional network solution that integrates the 4G tier with 5G, enabling a smooth and less costly transition.
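The abstract above formulates slice planning as a multi-objective optimization over binary variables solved by simulated annealing. The sketch below shows only the shape of such a solver, with a made-up scalarized CAPEX/QoS objective; the paper's actual cost model, constraints, and weights are not reproduced here.

```python
import math
import random

def simulated_annealing(cost, n_vars, t0=10.0, t_min=1e-3, alpha=0.95,
                        iters_per_t=100, seed=0):
    """Minimize `cost` over binary decision variables with simulated annealing."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_vars)]     # random initial placement
    best, best_cost = x[:], cost(x)
    t = t0
    while t > t_min:
        for _ in range(iters_per_t):
            y = x[:]
            y[rng.randrange(n_vars)] ^= 1              # flip one binary decision
            delta = cost(y) - cost(x)
            if delta < 0 or rng.random() < math.exp(-delta / t):
                x = y                                  # accept (possibly uphill) move
                if cost(x) < best_cost:
                    best, best_cost = x[:], cost(x)
        t *= alpha                                     # geometric cooling schedule
    return best, best_cost

# Hypothetical scalarized objective: site-activation CAPEX plus a QoS penalty
# if fewer than two sites serve the slice (all numbers are made up).
CAPEX = [5, 3, 4, 6]

def objective(x):
    capex = sum(c for c, on in zip(CAPEX, x) if on)
    qos_penalty = 0 if sum(x) >= 2 else 50
    return capex + qos_penalty

best, best_cost = simulated_annealing(objective, n_vars=4)
# Optimum activates the two cheapest sites: best == [0, 1, 1, 0], cost 7.
```

A real multi-objective formulation would keep QoS, CAPEX, and OPEX as separate objectives (e.g., via weighted sums or Pareto ranking) rather than the single penalty used here.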
ISBN (Digital): 9798350360059
ISBN (Print): 9798350360066
Deep learning and machine learning are pivotal in diverse fields, particularly in mathematical problem-solving. This study introduces a novel approach for solving ordinary differential equations (ODEs) using a single-layer convolutional neural network (CNN) with a logistic sigmoid activation function. The methodology employs a control variable approach, allowing exploration of hyperparameters and optimizer choices. The primary objective is to design a CNN capable of accurately resolving ODEs, trained with the Adam optimizer and backpropagation. Extensive experiments involve custom loss functions tailored to ODE characteristics, examining the impact of varying neural network architectures on solutions. Key findings include the successful application of the CNN-based approach, adaptability assessments through control variables, and insights into the influence of network structure on ODE solutions. This research contributes to the intersection of neural networks and mathematical problem-solving, offering a promising avenue for efficiently addressing ODEs. The findings provide a foundation for enhanced mathematical problem-solving, demonstrating the potential of CNNs and custom loss functions. The implications extend to various scientific and engineering domains, opening avenues for improved efficiency and accuracy in solving ODEs in real-world applications.
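As a rough illustration of the ingredients named above (sigmoid network, custom ODE loss), the sketch below trains a tiny single-hidden-layer sigmoid network on y' = -y, y(0) = 1, using a trial form that satisfies the initial condition by construction. The architecture, the finite-difference gradients standing in for backpropagation, and all hyperparameters are assumptions for illustration, not the paper's CNN/Adam setup.

```python
import numpy as np

rng = np.random.default_rng(0)
# Single hidden layer with logistic sigmoid (a stand-in for the paper's
# single-layer CNN; sizes and hyperparameters are invented).
W = rng.normal(size=(5, 1)); b = rng.normal(size=(5, 1)); v = rng.normal(size=(1, 5))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def net(x, W, b, v):
    return (v @ sigmoid(W @ x + b)).ravel()

def trial(x, W, b, v):
    # Trial solution y(x) = 1 + x * N(x) satisfies y(0) = 1 by construction.
    return 1.0 + x.ravel() * net(x, W, b, v)

def loss(params, x, eps=1e-4):
    W, b, v = params
    # Custom ODE loss for y' = -y: mean squared residual, derivative taken
    # by central differences on the trial solution.
    dy = (trial(x + eps, W, b, v) - trial(x - eps, W, b, v)) / (2 * eps)
    return np.mean((dy + trial(x, W, b, v)) ** 2)

def num_grad(params, x, h=1e-5):
    # Finite-difference gradient (a slow stand-in for backpropagation).
    grads = []
    for p in params:
        g = np.zeros_like(p)
        it = np.nditer(p, flags=["multi_index"])
        for _ in it:
            i = it.multi_index
            old = p[i]
            p[i] = old + h; lp = loss(params, x)
            p[i] = old - h; lm = loss(params, x)
            p[i] = old
            g[i] = (lp - lm) / (2 * h)
        grads.append(g)
    return grads

x = np.linspace(0.0, 1.0, 20).reshape(1, -1)
params = [W, b, v]
l0 = loss(params, x)
lr = 0.1
for _ in range(100):
    grads = num_grad(params, x)
    cand = [p - lr * g for p, g in zip(params, grads)]
    if loss(cand, x) < loss(params, x):
        params = cand          # accept the descent step
    else:
        lr *= 0.5              # crude backtracking on the step size
l1 = loss(params, x)
```

After training, `trial` approximates exp(-x) on [0, 1], and the residual loss `l1` is lower than the initial `l0`.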
ISBN (Print): 9781450380751
Tensors are a popular programming interface for developing AI algorithms. Representative AI programming frameworks require developers to be constantly aware of tensor layouts, thereby reducing their productivity when integrating an existing operation with a new library and/or writing a new operation. We propose VTensor, a layout-oblivious virtual tensor programming interface, together with a global layout inference mechanism to resolve the layouts required by virtual tensors. Furthermore, VTensor leverages a layout-oriented optimization to globally minimize the number of layout conversion operations, together with a straggler-aware scheduling algorithm and a pool-based memory allocation scheme to globally allocate resources. VTensor yields significant speedup and LOC (lines of code) reduction compared to TensorFlow.
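The layout-oblivious idea can be caricatured in a few lines: user code calls operations without naming layouts, and a wrapper converts lazily only when a kernel's required layout differs from the tensor's current one. The class and API below are invented for illustration; VTensor's real interface, global inference, and conversion-minimization are far more elaborate.

```python
import numpy as np

CONVERSIONS = 0    # global counter of layout-conversion operations

class LazyTensor:
    """Toy layout-oblivious wrapper: converts NCHW <-> NHWC only on demand."""
    def __init__(self, data_nchw):
        self.data = data_nchw
        self.layout = "NCHW"

    def as_layout(self, layout):
        global CONVERSIONS
        if layout != self.layout:
            CONVERSIONS += 1
            # transpose between NCHW and NHWC
            perm = (0, 2, 3, 1) if layout == "NHWC" else (0, 3, 1, 2)
            self.data = np.transpose(self.data, perm)
            self.layout = layout
        return self.data

def nhwc_op(t):    # an op whose kernel requires NHWC input
    return t.as_layout("NHWC").sum()

def nchw_op(t):    # an op whose kernel requires NCHW input
    return t.as_layout("NCHW").sum()

t = LazyTensor(np.ones((1, 3, 4, 4)))
nchw_op(t); nchw_op(t); nhwc_op(t)   # only the last call triggers a conversion
```

A global inference pass, as in the paper, would go further: it would analyze the whole operator graph up front and place conversions so their total count is minimal, rather than converting greedily at each call.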
ISBN (Digital): 9798331516857
ISBN (Print): 9798331516864
In underwater missions, energy-efficient operation is critical to maximizing the operability of autonomous robots and the mission duration. At the same time, accurate pose estimation at reduced computational cost is essential for various underwater robotics and exploration applications. This paper presents an energy-efficient model for pose regression tasks based on direct Spike-Coding-Decoding (SCD) with an adaptive threshold mechanism. The proposed approach enables efficient processing of multimodal continuous-valued images and IMU data. The effectiveness of the proposed module is tested using an open-sourced underwater simulator. In the underwater robotics domain, real-world testing remains challenging due to limited access to actual underwater environments, making simulators invaluable for rapid algorithm development and refinement. Accordingly, the HoloOcean simulator is utilized to collect images and IMU sensor data from various underwater scenarios, which are then fed into a pose estimation framework. The SCD module uses 2D and 1D convolutional layers followed by LIF neurons with an adaptive threshold to convert continuous-valued data into sparse spike-based signals. During spike decoding, the spike-based signals are converted back to continuous values by utilizing the accumulated membrane potential of the output LIF neurons. The feasibility of the SCD module is evaluated by its ability to introduce sparsity into spike-based representations with minimal information loss, which enables efficient signal reconstruction. Custom multimodal data is collected using the HoloOcean simulator and used to train the network. We compared the performance of the adaptive learnable threshold method with fixed-threshold methods. The experimental results show that spike coding with an adaptive threshold mechanism is highly effective compared to a fixed threshold mechanism, as it can encode the continuous-valued input into its spike-based representation with ≈ 2× higher sparsity and a lower spike rate.
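A minimal single-neuron sketch of the encoder side: a leaky integrate-and-fire neuron whose threshold rises after each spike and decays back toward a baseline, which trades spike rate for sparsity. The constants (leak, decay, the adaptation step `beta`) are illustrative guesses, not the paper's trained values, and the real SCD module places such neurons behind 2D/1D convolutional layers.

```python
import numpy as np

def lif_encode(signal, v_th0=1.0, beta=0.2, decay=0.9, leak=0.8):
    """Leaky integrate-and-fire encoder with an adaptive firing threshold."""
    v, v_th = 0.0, v_th0
    spikes = []
    for x in signal:
        v = leak * v + x                            # leaky integration of input
        if v >= v_th:
            spikes.append(1)
            v -= v_th                               # soft reset by subtraction
            v_th += beta                            # raise threshold after a spike
        else:
            spikes.append(0)
            v_th = v_th0 + decay * (v_th - v_th0)   # decay threshold to baseline
    return np.array(spikes)

t = np.linspace(0.0, 2.0 * np.pi, 100)
signal = np.clip(np.sin(t), 0.0, None)              # continuous-valued test input
spikes = lif_encode(signal)
sparsity = 1.0 - spikes.mean()                      # fraction of silent time steps
```

Decoding, as described above, would reconstruct continuous values from the accumulated membrane potential of output neurons; a learnable `beta` (rather than the fixed constant here) is what makes the threshold adaptive in the paper's sense.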
With the widespread deployment of deep neural networks (DNNs), ensuring the reliability of DNN-based systems is of great importance. Serious reliability issues such as system failures can be caused by numerical defects, one of the most frequent defects in DNNs. To assure high reliability against numerical defects, in this paper, we propose the RANUM approach including novel techniques for three reliability assurance tasks: detection of potential numerical defects, confirmation of potential-defect feasibility, and suggestion of defect fixes. To the best of our knowledge, RANUM is the first approach that confirms potential-defect feasibility with failure-exhibiting tests and suggests fixes automatically. Extensive experiments on the benchmarks of 63 real-world DNN architectures show that RANUM outperforms state-of-the-art approaches across the three reliability assurance tasks. In addition, when the RANUM-generated fixes are compared with developers' fixes on open-source projects, in 37 out of 40 cases, RANUM-generated fixes are equivalent to or even better than human fixes.
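RANUM's detection task centers on numerical defects such as feeding `log` or `exp` values outside their safe ranges. The toy interval checker below conveys the flavor of static range checking only; it is not RANUM's algorithm, and every name in it is invented. The one factual constant is that float64 `exp` overflows a little above exp(709).

```python
# Toy static range checker for numerical defects: track a [lo, hi] interval
# per tensor and flag ops whose valid input range can be violated.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

def check_log(x):
    # log is only defined for strictly positive inputs
    return "potential defect: log of non-positive input" if x.lo <= 0 else "ok"

def check_exp(x, max_exp=709.0):
    # float64 exp overflows slightly above exp(709)
    return "potential defect: exp overflow" if x.hi > max_exp else "ok"

def stabilized_softmax_input(x):
    # The standard softmax fix subtracts the row max, bounding exp inputs by 0.
    return Interval(x.lo - x.hi, 0.0)

logits = Interval(-1000.0, 1000.0)
print(check_exp(logits))                            # flags the naive softmax
print(check_exp(stabilized_softmax_input(logits)))  # the common fix passes
```

The softmax stabilization shown is one of the classic defect fixes in this space; RANUM additionally generates failure-exhibiting tests to confirm that a flagged defect is actually feasible before suggesting a fix.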
ISBN (Digital): 9798331521691
ISBN (Print): 9798331521707
Artificial intelligence and its subfields, machine learning and deep learning in particular, have become vital across disciplines, including the solving of mathematical problems. This work proposes a novel approach to computing numerical solutions of ODEs with a specific deep learning architecture: a single-layer CNN with logistic sigmoid activation. It uses a control variable approach that facilitates the manipulation of hyperparameters and the choice of optimizer. The primary goal is to develop a CNN that solves ODEs with high precision, trained with the Adam optimizer and the backpropagation method. The extensive experiments include bespoke loss functions that capture ODE features and an analysis of the effects of different neural network structures on the solutions. Outcomes include the application of the CNN-based approach, the assessment of adaptability through control variables, and the exploration of the effects of network structure on ODE solutions. This research applies neural networks to solving mathematical problems, particularly ODEs, offering a new way of solving them efficiently. The paper offers grounds for improved mathematical problem-solving and establishes the efficacy of CNNs and custom loss functions. The consequences extend to numerous scientific and engineering fields, with the possibility of enhanced calculation accuracy and efficiency for ODEs in many practice areas. The paper presents the advantages of using convolutional neural networks (CNNs) to solve ordinary differential equations (ODEs), particularly with reference to loss functions that capture important ODE features. The method, which incorporates traditional mathematical techniques, reduces computational time by an average of 30% and improves accuracy by an average of 25%. Applications range from dynamic systems modeling in engineering...
In this paper, we present a six-degree of freedom (DOF) robotic arm that can be directly controlled by brainwaves, also known as electroencephalogram (EEG) signals. The EEG signals are acquired using an open-source de...
ISBN (Print): 9780738133201
Collaboration skills are important for future software engineers. In computer science education, these skills are often practiced through group assignments, where students develop software collaboratively. The approach that students take in these assignments varies widely, but often involves a division of labour; it is then debatable whether collaboration still takes place. The discipline of computing education is especially interesting in this context because some of its specific features (such as the variation in entry skill level and the use of source code repositories as collaboration platforms) are likely to influence the approach taken within group work. The aim of this research is to gain insight into the work division and allocation strategies applied by computer science students during group assignments. To this end, we interviewed twenty students from four universities. The thematic analysis shows that students tend to divide up the workload to enable working independently, with pair programming and code reviews often being employed. Motivated primarily by grade and efficiency considerations, students choose and allocate tasks based on their prior expertise and preferences. Based on our findings, we argue that the setup of group assignments can limit students' motivation for practicing new software engineering skills, and that interventions are needed to encourage experimentation and learning.
ISBN (Print): 9781665442787
Digital processing in-memory (DPIM) provides very low-overhead, highly parallel computation in conventional memory, which significantly accelerates data-intensive workloads like deep neural networks (DNNs). DPIM-based DNN accelerators require that data be properly laid out to make the best use of the available in-memory operations. However, existing DPIM accelerators tend to optimize for a particular DNN dataflow, neglecting the large design space of data layout. This work systematically investigates data layout for DPIM DNN acceleration. We propose a mapping framework to represent the whole design space of DPIM data layout for general DNN models. Our investigation shows that an exhaustive exploration of the whole design space for mapping a DNN application to a DPIM architecture is not computationally tractable. Therefore, we propose a compiler-level optimization, PIM-DL, that finds highly efficient data layouts for DPIM DNN acceleration using a two-level dynamic programming algorithm and a heuristic-based search. Our experiments show that the DNN DPIM solutions created by PIM-DL provide 3.7× better performance and 4.3× better energy efficiency compared to the state of the art under the same hardware constraints.
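PIM-DL replaces exhaustive layout search with dynamic programming plus heuristics. The toy below shows why a chain-structured layout choice admits a DP that matches brute force at far lower cost; the per-layer costs and the single re-layout penalty are invented stand-ins for PIM-DL's real hardware cost model and two-level algorithm.

```python
from itertools import product

# Each layer of a toy 4-layer network can run under one of three in-memory
# layouts; switching layouts between consecutive layers costs a re-layout pass.
COMPUTE = [          # COMPUTE[i][j]: cost of layer i under layout j (made up)
    [4, 6, 5],
    [7, 3, 6],
    [5, 5, 2],
    [6, 4, 4],
]
RELAYOUT = 3         # cost of converting data between two different layouts

def brute_force():
    # Exhaustive search: 3^L assignments, intractable for real networks.
    best = float("inf")
    for assign in product(range(3), repeat=len(COMPUTE)):
        c = sum(COMPUTE[i][j] for i, j in enumerate(assign))
        c += RELAYOUT * sum(a != b for a, b in zip(assign, assign[1:]))
        best = min(best, c)
    return best

def dp():
    # cur[j] = cheapest cost so far if the current layer runs under layout j;
    # linear in the number of layers instead of exponential.
    cur = COMPUTE[0][:]
    for row in COMPUTE[1:]:
        cur = [row[j] + min(cur[k] + (RELAYOUT if k != j else 0)
                            for k in range(3))
               for j in range(3)]
    return min(cur)

assert dp() == brute_force()   # DP matches exhaustive search on this instance
```

Real DNN graphs branch rather than form a pure chain, which is part of why the paper needs a second DP level and heuristic search on top of this basic recurrence.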
ISBN (Digital): 9781665462976
ISBN (Print): 9781665462976
Modern cloud-edge-device computational platforms do not match the needs of artificial intelligence at the edge of the network. Indeed, the lack of computing power means that only some AI processes can be performed on edge devices, which must also operate within their constrained energy capacity. Moreover, the lack of a computing continuum between nodes of the same layer, i.e., edge-to-edge, forces nodes to operate independently within the layer, sensing only the environment in which they reside. In this paper, we propose a lightweight framework for collaborative nodes with decentralized edge intelligence. Organized like a swarm, the groups of nodes emphasize the edge-to-edge continuum of the device-edge-cloud paradigm. This supports a paradigm shift from programming environments for individual devices to dynamic, cooperating groups of nodes. The nodes' coordination relies on green overlay and offloading mechanisms. Innovative mesh architectures with mixed topologies allow building overlays for swarm coordination. Task offloading exploits the overlays to balance the swarm in near real time, according to forecasted energy consumption. Stemming from the proposed reference architecture, we also discuss a series of open challenges, which we believe represent relevant research directions for the near future.