ISBN (print): 9781424460434
A rapid reentry trajectory planning method for a common aero vehicle (CAV) subject to all path constraints is developed. The reentry trajectory is divided into an initial descent phase and a quasi-equilibrium glide phase, which constitutes the majority of the reentry period. An improved quasi-equilibrium glide condition is used to convert the reentry corridor constraints into constraints on the control variable and to obtain a simple relation between range-to-go and velocity. Accordingly, the longitudinal reference trajectory is computed through a one-parameter search of the bank angle model, and the corresponding tracking law is designed using linear quadratic regulator theory. To enhance maneuvering capability, a geometric control approach that accounts for no-fly zone constraints is applied to the lateral guidance. Finally, the performance of the method is verified by computer simulation.
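For intuition, the following is a minimal numerical sketch, not the paper's implementation, of two ingredients named in the abstract: the quasi-equilibrium glide condition (QEGC), which ties the bank angle to the current velocity and altitude, and a linear quadratic regulator gain for a toy linearized longitudinal tracking-error model. The vehicle parameters, state-space matrices, and weighting matrices are illustrative assumptions.

```python
# Sketch of the QEGC bank-angle relation and an LQR tracking gain,
# assuming a point-mass glider over a spherical, non-rotating Earth.
# All numbers below are placeholders, not values from the paper.
import numpy as np
from scipy.linalg import solve_continuous_are

MU = 3.986e14   # Earth gravitational parameter, m^3/s^2
RE = 6.371e6    # Earth radius, m

def qegc_bank_angle(V, h, lift_accel):
    """Bank angle implied by the quasi-equilibrium glide condition.

    With flight-path angle ~ 0 and nearly constant, QEGC reads
        (L/m) * cos(sigma) = g - V**2 / r,
    so cos(sigma) = (g - V**2 / r) / (L/m).
    """
    r = RE + h
    g = MU / r**2
    cos_sigma = (g - V**2 / r) / lift_accel
    return np.arccos(np.clip(cos_sigma, -1.0, 1.0))

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR gain K for error dynamics x_dot = A x + B u."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

if __name__ == "__main__":
    # Illustrative glide point: V = 5 km/s, h = 40 km, lift acceleration 8 m/s^2.
    sigma = qegc_bank_angle(5000.0, 40e3, 8.0)
    print(f"QEGC bank angle: {np.degrees(sigma):.1f} deg")

    # Toy linearized longitudinal tracking-error model (placeholder matrices):
    # state = [altitude error, flight-path-angle error], input = bank perturbation.
    A = np.array([[0.0, 5000.0],
                  [-1e-6, 0.0]])
    B = np.array([[0.0],
                  [1e-3]])
    K = lqr_gain(A, B, Q=np.diag([1e-6, 1.0]), R=np.array([[1.0]]))
    print("LQR tracking gain:", K)
```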
Pre-trained language models learn informative word representations from large-scale text corpora through self-supervised learning and, after fine-tuning, have achieved promising performance across natural language processing (NLP) tasks. These models, however, suffer from poor robustness and a lack of interpretability. We refer to pre-trained language models with knowledge injection as knowledge-enhanced pre-trained language models (KEPLMs). Such models demonstrate deeper understanding and logical reasoning and introduce interpretability. In this survey, we provide a comprehensive overview of KEPLMs in NLP. We first discuss the advancements in pre-trained language models and knowledge representation learning. We then systematically categorize existing KEPLMs from three different perspectives. Finally, we outline some potential directions for future KEPLM research.
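As context for the pre-train/fine-tune paradigm mentioned above, the following is a minimal sketch of fine-tuning a generic pre-trained language model on a toy classification task with the Hugging Face Transformers library. It does not implement knowledge injection or any specific KEPLM from the survey; the model name, data, and hyperparameters are illustrative assumptions.

```python
# Minimal fine-tuning sketch: a generic pre-trained LM adapted to a
# two-class downstream task. Real fine-tuning iterates over a full dataset.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"  # a generic pre-trained LM, not a KEPLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy labelled examples for the downstream classification task.
texts = ["the movie was great", "the movie was terrible"]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few gradient steps, for illustration only
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```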