In this paper, we consider the learning of a Reduced-Order Linear Parameter-Varying Model (ROLPVM) of a nonlinear dynamical system based on data. This is achieved by a two-step procedure. In the first step, we learn a projection to a lower-dimensional state-space. In the second step, an LPV model is learned on the reduced-order state-space using a novel, efficient parameterization in terms of neural networks. The improved modeling accuracy of the method compared to an existing method is demonstrated by simulation examples.
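The abstract only summarizes the two-step procedure, so the following is a minimal, hypothetical PyTorch sketch of how such a pipeline could look: an encoder/decoder projection to a reduced state, followed by a reduced-order LPV model whose state-space matrices depend on a neural-network scheduling map. The layer sizes, the affine dependence on the scheduling signal, and all class names are illustrative assumptions, not the parameterization proposed in the paper.

```python
# Hedged sketch of the two-step idea in the abstract (not the authors' code).
# Step 1 learns a projection to a reduced state; step 2 fits an LPV model whose
# matrices depend on a neural-network scheduling map. All names are illustrative.
import torch
import torch.nn as nn

class StateProjection(nn.Module):
    """Step 1: encoder/decoder pair mapping the full state x to a reduced state z."""
    def __init__(self, nx, nz):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(nx, 64), nn.Tanh(), nn.Linear(64, nz))
        self.dec = nn.Sequential(nn.Linear(nz, 64), nn.Tanh(), nn.Linear(64, nx))

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

class ReducedOrderLPV(nn.Module):
    """Step 2: z_next = A(p) z + B(p) u, with the scheduling signal p produced by a small network."""
    def __init__(self, nz, nu, npar):
        super().__init__()
        self.sched = nn.Sequential(nn.Linear(nz + nu, 32), nn.Tanh(), nn.Linear(32, npar))
        self.A = nn.Linear(npar, nz * nz)   # matrix entries depend affinely on p (an assumption)
        self.B = nn.Linear(npar, nz * nu)
        self.nz, self.nu = nz, nu

    def forward(self, z, u):
        p = self.sched(torch.cat([z, u], dim=-1))
        A = self.A(p).view(-1, self.nz, self.nz)
        B = self.B(p).view(-1, self.nz, self.nu)
        return (A @ z.unsqueeze(-1) + B @ u.unsqueeze(-1)).squeeze(-1)
```

In such a sketch, step 1 would be trained on a reconstruction loss and step 2 on a one-step prediction loss over the reduced state, mirroring the two-step procedure described in the abstract.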
It is critical to assess the standards of disabled facilities in order to ensure the comfort and safety of disabled individuals who use them. In this study, deep convolutional neural networks (CNNs) with multi-label c...
Obtaining valuable information from massive data efficiently has become our research goal in the era of Big Data. Text summarization technology has been continuously developed to meet this demand. Recent work has also...
As a classic topic in computer graphics, the semantic part segmentation of 3D data is helpful for 3D part-level editing and modeling. A single-view point cloud is the raw format of 3D data. Giving each point a semantic annotation in a single-view point cloud, i.e., single-view point cloud semantic part segmentation, is meaningful and challenging.
Robustness of deep neural networks (DNNs) has caused great concerns in the academic and industrial communities, especially in safety-critical applications. Instead of verifying whether the robustness property holds or not in certain neural networks, this paper focuses on training robust neural networks with respect to given perturbations. State-of-the-art training methods, interval bound propagation (IBP) and CROWN-IBP, perform well with respect to small perturbations, but their performance declines significantly in large perturbation cases, which is termed "drawdown risk" in this paper. Specifically, drawdown risk refers to the phenomenon that IBP-family training methods cannot provide expected robust neural networks in larger perturbation cases, as in smaller perturbation cases. To alleviate the unexpected drawdown risk, we propose a global and monotonically decreasing robustness training strategy that takes multiple perturbations into account during each training epoch (global robustness training), and the corresponding robustness losses are combined with monotonically decreasing weights (monotonically decreasing robustness training). With experimental demonstrations, our presented strategy maintains performance on small perturbations and the drawdown risk on large perturbations is alleviated to a great extent. It is also noteworthy that our training method achieves higher model accuracy than the original training methods, which means that our presented training strategy gives more balanced consideration to robustness and accuracy.
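As a rough illustration of the strategy described in the abstract (not the authors' implementation), the sketch below combines IBP robust losses computed at several perturbation radii in a single training step and weights them with a monotonically decreasing schedule. The MLP layer structure, the choice of radii, and the 1, 1/2, 1/3 weight schedule are assumptions for illustration.

```python
# Minimal sketch: plain interval bound propagation (IBP) for a small ReLU MLP, with
# robust losses at several radii combined using monotonically decreasing weights.
# e.g. layers = [nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10)]
import torch
import torch.nn as nn
import torch.nn.functional as F

def ibp_bounds(layers, x, eps):
    """Propagate interval bounds through Linear/ReLU layers for ||delta||_inf <= eps."""
    lo, hi = x - eps, x + eps
    for layer in layers:
        if isinstance(layer, nn.Linear):
            c, r = (lo + hi) / 2, (hi - lo) / 2
            c = layer(c)                      # new center: W c + b
            r = r @ layer.weight.abs().t()    # new radius: |W| r
            lo, hi = c - r, c + r
        else:                                 # ReLU is monotone, clamp the bounds
            lo, hi = lo.clamp(min=0), hi.clamp(min=0)
    return lo, hi

def robust_loss(layers, x, y, eps):
    """Cross-entropy on worst-case logits: lower bound for the true class, upper bound otherwise."""
    lo, hi = ibp_bounds(layers, x, eps)
    onehot = F.one_hot(y, hi.size(-1)).bool()
    worst = torch.where(onehot, lo, hi)
    return F.cross_entropy(worst, y)

def global_decreasing_loss(layers, x, y, eps_list=(2/255, 4/255, 8/255)):
    """Combine robust losses over several radii with a monotonically decreasing weight schedule."""
    weights = [1.0 / (i + 1) for i in range(len(eps_list))]   # assumed schedule: 1, 1/2, 1/3
    weights = [w / sum(weights) for w in weights]
    return sum(w * robust_loss(layers, x, y, e) for w, e in zip(weights, eps_list))
```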
This paper proposes a Complex-Valued Neural Network (CVNN) for glucose sensing in millimeter wave (mmWave). Based on the propagation characteristics of millimeter waves in a glucose medium, we obtain the S21 parameter o...
Wireless federated learning (FL) is a collaborative machine learning (ML) framework in which wireless client-devices independently train their ML models and send the locally trained models to the FL server for aggregation. In this paper, we consider the coexistence of privacy-sensitive client-devices and privacy-insensitive yet computing-resource constrained client-devices, and propose an FL framework with hybrid centralized training and local training. Specifically, the privacy-sensitive client-devices perform local ML model training and send their local models to the FL server. Each privacy-insensitive client-device has two options, i.e., (i) conducting a local training and then sending its local model to the FL server, or (ii) directly sending its local data to the FL server for centralized training. The FL server, after collecting the data from the privacy-insensitive client-devices (which choose to upload their local data), conducts a centralized training with the received datasets. The global model is then generated by aggregating (i) the local models uploaded by the client-devices and (ii) the model trained by the FL server centrally. Focusing on this hybrid FL framework, we first analyze its convergence with respect to the client-devices' selections of local training or centralized training. We then formulate a joint optimization of the client-devices' selections of local training or centralized training, the FL training configuration (i.e., the number of local iterations and the number of global iterations), and the bandwidth allocations to the client-devices, with the objective of minimizing the overall latency for reaching FL convergence. Despite the non-convexity of the joint optimization problem, we identify its layered structure and propose an efficient algorithm to solve it. Numerical results demonstrate the advantage of our proposed FL framework with the hybrid local and centralized training as well as our proposed algorithm.
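A minimal sketch of one global round of the hybrid scheme described above might look as follows (this is not the paper's implementation): devices that keep their data perform local training and upload models, data uploaded by the remaining devices is pooled and trained on at the server, and the resulting models are combined by a simple sample-weighted average standing in for the aggregation step. The function names, the averaging rule, and the SGD settings are assumptions.

```python
# Illustrative sketch of one global round of hybrid local + centralized FL training.
import copy
import torch

def local_update(model, loader, epochs, lr=0.01):
    """Plain SGD training, standing in for a device's (or the server's) local iterations."""
    m = copy.deepcopy(model)
    opt = torch.optim.SGD(m.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(m(x), y).backward()
            opt.step()
    return m.state_dict()

def hybrid_round(global_model, local_loaders, uploaded_loaders, local_epochs):
    """local_loaders: devices training locally; uploaded_loaders: raw data sent to the server."""
    updates, sizes = [], []
    for loader in local_loaders:                       # local-training branch
        updates.append(local_update(global_model, loader, local_epochs))
        sizes.append(len(loader.dataset))
    if uploaded_loaders:                               # centralized-training branch
        pooled = torch.utils.data.ConcatDataset([l.dataset for l in uploaded_loaders])
        pooled_loader = torch.utils.data.DataLoader(pooled, batch_size=64, shuffle=True)
        updates.append(local_update(global_model, pooled_loader, local_epochs))
        sizes.append(len(pooled))
    total = sum(sizes)                                 # sample-weighted model averaging (assumed)
    new_state = {k: sum(u[k].float() * (n / total) for u, n in zip(updates, sizes))
                 for k in updates[0]}
    global_model.load_state_dict(new_state)
    return global_model
```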
This paper is focused on uncertain fractional order systems. In this context, a new modelling approach of uncertain fractional order systems represented by an explicit fractional order interval transfer function is pr...
Versatile video coding (H.266/VVC), which was newly released by the Joint Video Exploration Team (JVET), introduces the quad-tree plus multi-type tree (QTMT) partition structure on the basis of the quad-tree (QT) partition structure in High Efficiency Video Coding (H.265/HEVC). The more complicated coding unit (CU) partitioning process in H.266/VVC significantly improves video compression efficiency, but greatly increases the computational complexity. This ultra-high encoding complexity has obstructed its real-time application. In order to solve this problem, a CU partition algorithm using a convolutional neural network (CNN) is proposed in this paper to speed up the H.266/VVC CU partition process. First, each 64×64 CU is divided into smooth texture CU, mildly complex texture CU, and complex texture CU according to the CU texture complexity. Then, a CU texture complexity classification convolutional neural network (CUTCC-CNN) is proposed to classify the CUs. Finally, according to the classification results, the encoder is guided to skip different RDO search processes, and the optimal CU partition results are obtained. Experimental results show that the proposed method reduces the average coding time by 32.2% with only 0.55% BD-BR loss compared with VTM 10.2.
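Since the abstract does not give the CUTCC-CNN architecture, the sketch below is only a hypothetical 3-class texture-complexity classifier over a 64×64 luma block, together with an illustrative mapping from the predicted class to the QTMT split modes left in the RDO search. Layer sizes and the class-to-mode mapping are assumptions, not the paper's design.

```python
# Hypothetical sketch, not the paper's CUTCC-CNN: classify a 64x64 luma block into
# three texture-complexity classes and gate which QTMT split candidates to evaluate.
import torch
import torch.nn as nn

class TextureComplexityCNN(nn.Module):
    def __init__(self, num_classes=3):   # smooth / mildly complex / complex texture
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, cu):               # cu: (batch, 1, 64, 64) luma samples
        return self.classifier(self.features(cu).flatten(1))

def rdo_modes_to_search(label):
    """Hypothetical gating: smoother blocks skip more of the QTMT split candidates."""
    return {0: ["NO_SPLIT"],                                              # smooth texture
            1: ["NO_SPLIT", "QT"],                                        # mildly complex
            2: ["NO_SPLIT", "QT", "BT_H", "BT_V", "TT_H", "TT_V"]}[label] # complex texture
```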
Electromagnetic waves with orbital angular momentum (OAM) have garnered significant interest because of their potential in designing advanced communication systems. This paper reports on a specific design of a flexi...