Engine-generator set (EGS) is an important energy supply component of the high-voltage microgrid in a series hybrid electric powertrain (SHEP). Sustained and steady energy supply from the EGS is one of the conditions for balancing energy supply and demand. In some high-power processes, this balance can be broken and the dynamic speed of the EGS can deviate from expectations, which may result in unstable working states of the EGS. Knowing these unstable working states in advance is significant for research on unstable-state identification and avoidance. Predicting the rotational speed of the EGS can provide such early warning, but the scarcity of unstable-state data causes overfitting in common prediction methods, so improving the prediction of dynamic speed, and thereby accurately predicting unstable states, is a challenge. Based on the above problems, a physics-informed learning algorithm with an adaptive mechanism is proposed for EGS rotational speed prediction in this paper. First, a prediction problem related to the stability of the SHEP running state, identified from engineering knowledge, is studied. Second, a new mechanism is proposed for the physics-informed learning algorithm that makes the physical information fed to the learning algorithm more selective. Third, a professional adaptive function is formed according to the speed characteristics, which bridges the physics and the learning algorithm. On experimental data, the prediction accuracy of the proposed method on one of the test cycles is better than that of the baseline methods, specifically by 27.11% and 3.49%, 11.90% and 7.94%, and 53.83% and 27.62%. In summary, the proposed method yields better predictions than the baseline methods.
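As a rough illustration of how such a physics-informed loss with adaptive weighting might be assembled (the first-order rotor model, the parameter values, and the function names `net_torque_fn` and `adaptive_weight_fn` below are assumptions for the sketch, not the paper's actual formulation), consider:

```python
import torch

def physics_informed_loss(model, t, speed_true, J=0.2, b=0.05,
                          net_torque_fn=None, adaptive_weight_fn=None):
    """Hypothetical physics-informed loss for EGS speed prediction.

    Combines a data-fit term with the residual of an assumed rotor model
        J * dw/dt = T_net - b * w,
    weighted by an adaptive function that can emphasise high-dynamic
    (potentially unstable) speed segments.
    """
    t = t.clone().requires_grad_(True)
    speed_pred = model(t)                                  # predicted w(t)
    data_loss = torch.mean((speed_pred - speed_true) ** 2)

    # dw/dt via autograd, needed for the physics residual
    dspeed_dt = torch.autograd.grad(speed_pred.sum(), t, create_graph=True)[0]
    t_net = net_torque_fn(t) if net_torque_fn else torch.zeros_like(speed_pred)
    residual = J * dspeed_dt - t_net + b * speed_pred

    # per-sample adaptive weight bridging the physics and the learner
    w = adaptive_weight_fn(speed_pred) if adaptive_weight_fn else 1.0
    phys_loss = torch.mean(w * residual ** 2)
    return data_loss + phys_loss
```

The adaptive weight is the natural place to encode the paper's speed-characteristic function, since it controls how strongly the physics residual constrains the network in different operating regions.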
Artificial neural networks have shown great success in solving real-world problems in recent years. Nowadays, most widely used neural network algorithms run on silicon-based computers, where resource requirements and energy consumption become a challenge as the network size grows. In contrast, the brain contains billions of neurons and trillions of synapses and naturally processes information with far less energy than silicon-based computers. In vitro biological neural networks created from dissociated neurons may be used for computing and performing machine learning tasks. In this dissertation, learning algorithms are explored for networks of neurons with detailed biological properties in feedforward and recurrent structures. For feedforward networks, a two-layer hybrid bio-silicon platform is constructed and a five-step design method is proposed for the fast development of living neural network algorithms. Neural variations and dynamics are verified by fitting model parameters to biological experimental results. Random connections are generated under different connection probabilities to vary network sparsity. A multi-layer perceptron algorithm is tested with biological constraints to investigate the impact of neural variations and random connections. The results show that a reasonable inference accuracy can be achieved despite the presence of neural variations and random network connections. A new adaptive pre-processing technique is proposed to ensure good learning accuracy at different living neural network sparsities. On the training side, a supervised STDP-based learning algorithm is proposed for networks with biological constraints. For recurrent spiking neural networks (RSNN), temporal dynamics are studied with detailed biological features. An automatic fitting tool is used to match the precise spike timing of the in vitro neurons and the modeled neurons to obtain the fitted neuron parameters. Model fidelity and learning performance of dif…
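The supervised STDP-based rule itself is not detailed in the abstract; a minimal pair-based STDP update, which a supervised variant would typically gate with a teacher or error signal, might look like the following sketch (all constants are illustrative defaults, not the dissertation's fitted values):

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: pre-before-post spiking potentiates the synapse,
    post-before-pre depresses it, each within an exponential time window.
    A supervised variant would scale or gate dw with a teacher signal.
    """
    dt = t_post - t_pre                         # spike-time difference (ms)
    if dt > 0:
        dw = a_plus * np.exp(-dt / tau_plus)    # potentiation
    else:
        dw = -a_minus * np.exp(dt / tau_minus)  # depression
    return float(np.clip(w + dw, w_min, w_max))
```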
This paper presents preliminary results of work on a control algorithm for a two-finger gripper equipped with an electronic skin (e-skin). The e-skin measures the magnitude and location of the pressure applied to it. Contact localization enabled the development of a reliable control algorithm for robotic grasping. The main contribution of this work is a learning algorithm that adjusts the pose of the gripper during the pre-grasp approach step based on contact information. The algorithm was tested on different objects and showed grasping reliability comparable to that of a vision-based approach. The developed tactile-sensor-rich gripper with a dedicated control algorithm may find applications in various fields, from industrial robotics to advanced interactive robots.
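One plausible shape for such a contact-driven pose adjustment (this is an illustrative proportional correction, not the paper's learned control law; the contact representation is assumed) is:

```python
def adjust_pregrasp_pose(pose, contacts, gain=0.5, force_threshold=0.1):
    """Shift the gripper so the pressure centroid reported by the e-skin
    moves toward the finger centre during the pre-grasp approach.

    pose:     (x, y) lateral gripper position in the approach plane
    contacts: list of (location_along_finger in [-1, 1], pressure) tuples
    """
    valid = [(loc, p) for loc, p in contacts if p > force_threshold]
    if not valid:
        return pose                       # no contact yet, keep approaching
    total = sum(p for _, p in valid)
    centroid = sum(loc * p for loc, p in valid) / total
    x, y = pose
    # proportional correction: move opposite the off-centre contact
    return (x - gain * centroid, y)
```

A learning algorithm like the paper's would effectively replace the fixed `gain` and centroid rule with a mapping tuned from grasping experience.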
Deep learning is a branch of machine learning, which has been used to solve many problems in the fields of computer vision, speech recognition, natural language processing and so on. Deep neural networks are trained b...
In this study, a precise and efficient eigenvalue-based machine learning algorithm, denoted the Eigenvalue Classification (EigenClass) algorithm, is presented to deal with classification problems. The EigenClass algorithm is constructed by exploiting an eigenvalue-based proximity evaluation. To assess the classification performance of EigenClass, it is compared with well-known algorithms such as k-nearest neighbours, fuzzy k-nearest neighbours, random forest (RF), and multi-class support vector machines. A total of 20 different datasets with various attributes and classes are used for the comparison. Every algorithm is trained and tested over 30 runs of 5-fold cross-validation. The results are then compared in terms of the most commonly used measures: accuracy, precision, recall, micro-F-measure, and macro-F-measure. EigenClass exhibits the best classification performance on 15 datasets in terms of every metric and, in pairwise comparison, outperforms each of the other algorithms on at least 16 datasets for each metric. Moreover, the algorithms are also compared through statistical analysis and computational complexity. The achieved results show that EigenClass is a precise and stable algorithm and the most successful one in terms of overall classification performance.
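The abstract does not spell out the proximity evaluation, so the following is only one plausible reading of an eigenvalue-based proximity rule, written as a sketch (the perturbation-of-leading-eigenvalue criterion is an assumption, not the published EigenClass construction):

```python
import numpy as np

def eigenclass_predict(X_train, y_train, x_test):
    """Assign the class whose covariance spectrum is least perturbed
    when the test point is appended to that class's sample matrix."""
    best_class, best_shift = None, np.inf
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        base = np.linalg.eigvalsh(np.atleast_2d(np.cov(Xc, rowvar=False)))[-1]
        augmented = np.vstack([Xc, x_test])
        pert = np.linalg.eigvalsh(
            np.atleast_2d(np.cov(augmented, rowvar=False)))[-1]
        shift = abs(pert - base)          # eigenvalue-based proximity score
        if shift < best_shift:
            best_class, best_shift = c, shift
    return best_class
```

Under the paper's protocol, such a predictor would be evaluated over 30 repetitions of 5-fold cross-validation on each of the 20 datasets.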
In the manufacturing industry, production efficiency is the core competitiveness of enterprises and an important goal of enterprise production planning management. In recent years, the production efficiency of China's manufacturing industry has not met requirements, and many problems remain to be solved. In response, related fields have explored intelligent manufacturing and proposed intelligent manufacturing technology, which integrates information technology with traditional manufacturing systems to realize intelligence, automation, digitalization, and informatization in the manufacturing process, helping to solve the current key problems. Therefore, this paper takes ship manufacturing as the background and studies real-time optimization technology for intelligent ship manufacturing based on a learning algorithm. This technology can effectively improve ship production efficiency and reduce production costs.
Edge computing and the IIoT (Industrial Internet of Things) are two representative application scenarios in 5G (5th Generation) mobile communication networks. This paper therefore studies resource allocation methods for these two typical scenarios and proposes an energy-efficient resource allocation method based on a DRL (deep reinforcement learning) algorithm, which enhances network performance and reduces network operation cost. First, according to the characteristics of edge computing, a resource allocation model with quality-of-service (QoS) guarantees for users is designed, taking the connection relationships between base stations and users and the transmission power allocated by base stations to users as decision variables, overall energy efficiency as the optimization goal, and the needs of mobile users as constraints. Second, simulation experiments verify that the proposed method trains and converges faster, meets the quality-of-service requirements of mobile users, and realizes intelligent, energy-saving resource allocation. The approximate numbers of steps to convergence in the four cases are 500, 350, 250, and 220, and the corresponding reward values are 21.76, 21.09, 20.38, and 20.25. Third, for the IIoT scenario, this paper proposes a resource allocation method for 5G wide-connection, low-delay services based on the asynchronous advantage actor-critic (A3C) algorithm. According to the characteristics of the IIoT environment, a resource allocation model for 5G wide-connection, low-delay services is designed, taking the connection relationships between base stations and users and the transmission power allocated by base stations to users as decision variables, and maximizing energy efficiency while satisfying the needs of each user as the optimization goal. On this basis, an energy-efficient resource allocation method based on A3C is proposed. The hierarchical agglomerative clustering algorithm…
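To make the objective concrete, here is a sketch of a standard network energy-efficiency computation over the stated decision variables (the interference model, matrix layout, and all constants are illustrative assumptions, not the paper's exact system model):

```python
import numpy as np

def energy_efficiency_reward(assoc, power, gains, bandwidth=1e6,
                             noise=1e-13, circuit_power=1.0):
    """Energy efficiency (bit/J) for a base-station/user allocation.

    assoc[b, u] in {0, 1}: base station b serves user u
    power[b, u]:           transmit power (W) allocated by b to u
    gains[b, u]:           channel gain between b and u
    """
    signal = assoc * power * gains
    # total received power at each user minus the own signal = interference
    interference = (power * gains).sum(axis=0, keepdims=True) - signal
    sinr = signal / (interference + noise)
    rate = bandwidth * np.log2(1.0 + sinr) * assoc       # bit/s per link
    total_power = (assoc * power).sum() + circuit_power  # W
    return rate.sum() / total_power                      # bit/J
```

A DRL agent would treat `assoc` and `power` as its action, this ratio as the reward, and the users' QoS requirements as constraints (for example, as penalties on unmet per-user rates).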
In this article, we address the one-step time-delay problem in power converter and motor drive applications implemented on digital signal processors. For these wide-ranging applications, a novel one-step time-delay compensation algorithm is proposed, comprising a standard Luenberger-type observer with a learning algorithm and a disturbance observer (DOB). The first novelty is a learning algorithm that automatically determines the observer feedback gains according to the magnitude of the estimation error. The second is the introduction of DOBs with estimation-error integrators that remove steady-state estimation errors, eliminating the need for additional adaptive and nonlinear damping compensation terms. The closed-loop observer behavior is rigorously analyzed to establish useful properties. Experimental data obtained using a 3-kW prototype dc-dc boost converter validate the effectiveness of the proposed algorithm.
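A scalar sketch of the two ideas together, an observer gain that grows with the estimation-error magnitude plus an integrating DOB, might look as follows (the plant model, gain law, and all coefficients are illustrative assumptions, not the article's design):

```python
def observer_step(x_hat, d_hat, y, u, dt, a=-50.0, b=200.0,
                  l0=10.0, l1=5.0, ki=20.0):
    """One Euler step of a Luenberger observer with an error-magnitude-
    dependent gain and a disturbance observer (DOB).

        plant:    x_dot = a*x + b*u + d   (d: lumped disturbance)
        measured: y = x
    """
    e = y - x_hat                      # estimation error
    l_gain = l0 + l1 * abs(e)          # gain adapted to error magnitude
    x_hat_dot = a * x_hat + b * u + d_hat + l_gain * e
    d_hat_dot = ki * e                 # DOB: integral action on the error
    return x_hat + dt * x_hat_dot, d_hat + dt * d_hat_dot
```

The integral channel `d_hat` is what removes steady-state estimation error, so no separate adaptive or nonlinear damping term is needed in this simplified picture.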
Multi-target tracking is an important field in computer vision that aims to match and associate targets between adjacent frames. In this paper, we propose a fine-grained track-recoverable (FGTR) matching strategy and a heuristic empirical learning (HEL) algorithm. The FGTR matching strategy divides the detected targets into two different sets according to the distance between them and applies a different matching strategy to each. To reduce false matches, we evaluate the trust degree of a target's appearance features and location features and adjust the proportion of the two accordingly, improving the accuracy of target matching. To solve the trajectory drift caused by the cumulative growth of Kalman filter error during occlusion, the HEL algorithm predicts the position of the target over the next few frames based on the effective information of other previous target trajectories and the motion characteristics of related targets, making the predicted trajectory closer to the real one. Our proposed method is tested on MOT16 and MOT17, and the experimental results verify the effectiveness of each module; the method effectively handles occlusion and makes tracking more accurate and stable.
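The trust-weighted blend of appearance and location cues could be sketched as follows (the occlusion-driven weighting rule and value ranges are assumptions for illustration, not the paper's exact formula):

```python
import numpy as np

def matching_cost(app_dist, loc_dist, occlusion_score):
    """Trust-weighted association cost for one detection-track pair.

    app_dist:        appearance-embedding distance (e.g. cosine), in [0, 2]
    loc_dist:        normalised motion/location distance between boxes
    occlusion_score: in [0, 1]; heavy occlusion makes appearance
                     features less trustworthy, so weight shifts to location
    """
    trust_app = 1.0 - occlusion_score
    trust_loc = 1.0 - trust_app
    return trust_app * app_dist + trust_loc * loc_dist

# Association would then solve an assignment over the cost matrix, e.g.
# scipy.optimize.linear_sum_assignment(cost_matrix).
```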
Over the past ten years, researchers have attached great importance to applying ontologies in their relevant specific fields, and applying learning algorithms to ontology problems has also become a hot topic. For example, ontology learning technology and knowledge are used in semantic retrieval and machine translation, and the fields of knowledge discovery and information systems can also be integrated with ontology learning techniques. Among ontology learning techniques, multi-dividing ontology learning is the most popular and has proved highly efficient for similarity calculation on tree-structured ontologies. In this work, we study the multi-dividing ontology learning algorithm from a mathematical point of view, and an approximation conclusion is presented under the linear representation assumption. The theoretical result obtained here has constructive meaning for similarity calculation and concrete engineering applications of tree-shaped ontologies. Finally, linear-combination multi-dividing ontology learning is applied to university ontologies and mathematical ontologies, and the experimental results demonstrate the high efficiency of the proposed approach in practical applications.
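For readers unfamiliar with the setting, the following is a hedged sketch of the multi-dividing framework under the linear representation assumption; the notation is illustrative and simplified from the general formulation in the multi-dividing ontology learning literature.

```latex
% Sketch of multi-dividing ontology learning with linear representation
% (notation illustrative; not the paper's exact statement).
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Each ontology vertex $v$ is encoded as a vector and scored by an ontology
function assumed to be a linear combination of base functions:
\begin{equation}
  f(v) \;=\; \sum_{j=1}^{m} \beta_j\, f_j(v), \qquad \beta_j \in \mathbb{R}.
\end{equation}
In the $k$-dividing setting, vertices carry rates $1,\dots,k$; for rates
$a<b$ the learned $f$ should rank rate-$a$ vertices above rate-$b$ ones,
so training minimises a pairwise empirical risk of the form
\begin{equation}
  \widehat{R}(f) \;=\; \sum_{1\le a<b\le k}
  \frac{1}{n_a n_b} \sum_{i=1}^{n_a}\sum_{j=1}^{n_b}
  \mathbf{1}\!\left[f(v_i^{a}) < f(v_j^{b})\right].
\end{equation}
Similarity between vertices is then read off from $|f(u)-f(v)|$ on the
real line.
\end{document}
```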