Predicting the cycle life of lithium-ion batteries (LIBs) remains a great challenge due to their complicated degradation mechanisms. The present work employs an interpretable machine learning technique, symbolic regression (SR), to discover an analytic formula for LIB life prediction with newly defined features. The novel features are based on the discharging energies under the constant-current (CC) and constant-voltage (CV) modes at every five cycles. The cycle life is affected by the CC-discharging energy at the 15th cycle (E15-CCD) and the difference between the CC-discharging energies at the 45th and 95th cycles (Δ45-95). The cycle life correlates highly with a simple indicator, (E15-CCD − 3)/Δ45-95, with a Pearson correlation coefficient of ***. The machine learning tools provide a rapid and accurate prediction of cycle life at the early stage.
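A minimal sketch of how the indicator above could be evaluated from measured discharge energies. The function name and the example energy values (in arbitrary units) are hypothetical; the constant 3 comes from the indicator as written:

```python
def life_indicator(e15_ccd: float, e45_ccd: float, e95_ccd: float) -> float:
    """Early-life indicator (E15-CCD - 3) / Δ45-95 from the abstract.

    Δ45-95 is the difference between the CC-discharging energies at
    the 45th and 95th cycles, i.e. an early measure of fade rate.
    """
    delta_45_95 = e45_ccd - e95_ccd
    return (e15_ccd - 3.0) / delta_45_95

# Hypothetical CC-discharging energies (arbitrary units):
indicator = life_indicator(5.0, 4.8, 4.6)
```

The indicator combines an early energy level with an early fade rate, which is why it is available for prediction by cycle 95.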
Software trustworthiness includes many attributes. The weight allocation of trustworthy attributes plays a key role in software trustworthiness measurement. In practical applications, attribute weights usually come from experts' evaluation of attributes and from hidden information derived from the attributes. Therefore, when the weight of attributes is researched, it is necessary to consider weights from both subjective and objective perspectives. Thus, a novel weight allocation method is proposed by combining the fuzzy analytic hierarchy process (FAHP) method and the criteria importance through intercriteria correlation (CRITIC) method. Then, based on the weight allocation method, trustworthiness measurement models of component-based software are established according to the seven combination structures of components. Afterwards, the model reasonability is verified by proving some metric criteria. Finally, a case study is carried out. According to the comparison with other models, the result shows that the model has the advantage of fully utilizing hidden information and analyzing the combination of components. It is an important guide for measuring the trustworthiness of component-based software.
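The CRITIC half of the proposed combination can be sketched compactly. The decision matrix below is hypothetical, and the FAHP subjective weights (and how the two weight sets are fused) are omitted:

```python
import statistics

def pearson(x, y):
    """Population Pearson correlation between two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def critic_weights(matrix):
    """Objective CRITIC weights. Rows are alternatives, columns are
    (already normalized) criteria; weight_j ∝ std_j * Σ_k (1 - r_jk)."""
    cols = list(zip(*matrix))
    info = [
        statistics.pstdev(col) * sum(1 - pearson(col, other) for other in cols)
        for col in cols
    ]
    total = sum(info)
    return [c / total for c in info]

# Hypothetical 3-alternative, 3-criterion decision matrix:
weights = critic_weights([[0.2, 0.8, 0.4], [0.5, 0.1, 0.9], [0.9, 0.6, 0.3]])
```

Criteria with high contrast (standard deviation) and low correlation to the others receive larger objective weights, which is the "hidden information" the abstract refers to.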
People-centric activity recognition is one of the most critical technologies in a wide range of real-world applications, including intelligent transportation systems, healthcare services, and brain-computer interfaces. Large-scale data collection and annotation make the application of machine learning algorithms prohibitively expensive when adapting to new tasks. One way of circumventing this limitation is to train the model in a semi-supervised manner that utilizes a percentage of unlabeled data to reduce the labeling burden in prediction tasks. Despite their appeal, these models often assume that labeled and unlabeled data come from similar distributions, which leads to the domain shift problem caused by the presence of distribution gaps. To address these limitations, we propose herein a novel method for people-centric activity recognition, called domain generalization with semi-supervised learning (DGSSL), that effectively enhances the representation learning and domain alignment capabilities of a model. We first design a new autoregressive discriminator for adversarial training between unlabeled and labeled source domains, extracting domain-specific features to reduce the distribution gaps. Second, we introduce two reconstruction tasks to capture the task-specific features to avoid losing information related to representation learning while maintaining task-specific consistency. Finally, benefiting from the collaborative optimization of these two tasks, the model can accurately predict both the domain and category labels of the source domains for the classification task. We conduct extensive experiments on three real-world sensing datasets. The experimental results show that DGSSL surpasses three state-of-the-art methods with better performance and generalization.
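The "collaborative optimization" of the adversarial, reconstruction, and classification objectives is, in essence, a weighted multi-task loss. A minimal sketch, where the trade-off weights lam and beta are illustrative assumptions not given in the text:

```python
def dgssl_objective(cls_loss, domain_loss, recon_losses, lam=1.0, beta=0.1):
    """Hypothetical combined training objective: classification loss,
    adversarial domain loss, and the two reconstruction losses.
    lam and beta are assumed trade-off weights, not values from the paper."""
    return cls_loss + lam * domain_loss + beta * sum(recon_losses)
```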
The proliferation of cooking videos on the internet necessitates the conversion of these lengthy videos into concise text recipes. Many online platforms now host a large number of cooking videos, and it is a challenge for viewers to extract comprehensive recipes from lengthy visual content. Effective summarization is necessary to translate the abundance of culinary knowledge found in videos into text recipes that are easy to read and follow. This makes the cooking process easier for individuals searching for precise step-by-step cooking instructions. Such a system satisfies the needs of a broad spectrum of learners while also improving accessibility and ease of use. As the need for easy-to-follow recipes derived from cooking videos grows, researchers are investigating automated summarization using advanced techniques. One such approach is presented in our work, which combines simple image-based models, audio processing, and GPT-based models to create a system that makes it easier to turn long culinary videos into in-depth recipe texts. A systematic workflow is adopted to achieve this objective. Initially, focus is given to frame summary generation, which employs a combination of two convolutional neural networks and a GPT-based model. A pre-trained CNN, Inception-V3, is fine-tuned on a food image dataset for dish recognition, and another custom-made CNN is trained on ingredient images for ingredient recognition. A GPT-based model then combines the results produced by the two CNNs to give the frame summary in the desired format. Subsequently, audio summary generation is tackled by performing speech-to-text conversion in Python. A GPT-based model is then used to summarize the resulting textual representation of the audio in the desired format. Finally, to refine the summaries obtained from the visual and auditory content, another GPT-based model is used.
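The workflow above can be sketched as a small orchestration function. Every model argument is a hypothetical callable: dish_model and ingredient_model stand in for the two CNNs, llm for the GPT-based models, and the transcript is assumed to come from a prior speech-to-text step:

```python
def summarize_video(frames, transcript, dish_model, ingredient_model, llm):
    """Orchestrate the three-stage pipeline described in the text:
    frame summary (two CNNs + GPT merge), audio summary (GPT over a
    speech-to-text transcript), and a final GPT refinement pass."""
    dish = dish_model(frames)
    ingredients = ingredient_model(frames)
    frame_summary = llm(f"dish={dish}; ingredients={', '.join(ingredients)}")
    audio_summary = llm(f"steps: {transcript}")
    return llm(f"recipe from: {frame_summary} | {audio_summary}")
```

With stub callables this runs end to end; in the described system, each stub would be replaced by the corresponding fine-tuned model.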
In the domain of sound event detection (SED), Convolutional Recurrent Neural Network (CRNN) has become the most successful architecture, which adopts Recurrent Neural Network (RNN) to model temporal dependencies from ...
Vertical federated learning (VFL) has recently emerged as an appealing distributed paradigm empowering multi-party collaboration for training high-quality models over vertically partitioned datasets. Gradient boosting...
Automatic post-editing (APE) aims to automatically correct the outputs of the machine translation system by learning from manual correction examples. We present the system developed by HIT-MI&T Lab for the CCMT 2...
In recent decades, brain tumors have been regarded as a severe illness that causes significant damage to an individual's health and can ultimately result in death. Hence, Brain Tumor Segmentation and Classification (BTSC) has gained attention among research communities. BTSC is the process of finding brain tumor tissues and classifying them by tumor type. Manual tumor segmentation from MRI images is error-prone and time-consuming. A precise and fast BTSC model is developed in this manuscript based on a transfer-learning-based Convolutional Neural Network (CNN) model. A variant of CNN is used because of its superiority in distinct tasks. In the initial phase, the Magnetic Resonance Imaging (MRI) brain images are acquired from the Brain Tumor Image Segmentation Challenge (BRATS) 2019, 2020 and 2021 databases. Image augmentation is then performed on the gathered images using zoom-in, rotation, zoom-out, flipping, scaling, and shifting methods, which effectively reduces overfitting in the classification model. The augmented images are segmented using the layers of the Visual Geometry Group (VGG-19) model. Then feature extraction using an Attribute Aware Attention (AWA) methodology is carried out on the segmented images following the segmentation block in the VGG-19 model. The crucial features are then selected using the attribute category reciprocal attention phase. These features are input to the Model Agnostic Concept Extractor (MACE) to generate relevance scores between the features, assisting the final classification process. The relevance scores obtained from the MACE are provided to the max-pooling layer of the VGG-19 model. Then, the final classified output is obtained from the modified VGG-19 architecture. The implemented relevance score with the AWA-based VGG-19 model is used to classify the tumor as whole tumor, enhanced tumor, or tumor core. In the classification section, the proposed...
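Two of the augmentations listed above, horizontal flipping and rotation, have one-line implementations on a pixel grid. This sketch uses nested lists rather than a real image library:

```python
def hflip(image):
    """Horizontal flip of a 2-D pixel grid (list of rows)."""
    return [row[::-1] for row in image]

def rot90(image):
    """Rotate a 2-D pixel grid 90 degrees clockwise."""
    return [list(col) for col in zip(*image[::-1])]

grid = [[1, 2],
        [3, 4]]
```

Applying several such label-preserving transforms multiplies the effective training-set size, which is why the text credits augmentation with reducing overfitting.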
The maintainability of source code is a key quality characteristic for software maintenance. Many approaches have been proposed to quantitatively measure code maintainability. Such approaches rely heavily on code metrics, e.g., the number of Lines of Code and McCabe's Cyclomatic Complexity. The employed code metrics are essentially statistics regarding code elements, e.g., the numbers of tokens, lines, references, and branch statements. However, natural language in source code, especially identifiers, is rarely exploited by such approaches. As a result, replacing meaningful identifiers with nonsense tokens would not significantly influence their outputs, although the replacement should have significantly reduced code maintainability. To this end, in this paper, we propose a novel approach (called DeepM) to measure code maintainability by exploiting the lexical semantics of text in source code. DeepM leverages deep learning techniques (e.g., LSTM and attention mechanism) to exploit these lexical semantics in measuring code maintainability. The key rationale of DeepM is that measuring code maintainability is complex and often far beyond the capabilities of statistics or simple heuristics. Consequently, DeepM leverages deep learning techniques to automatically select useful features from complex and lengthy inputs and to construct a complex mapping (rather than simple heuristics) from the input to the output (the code maintainability index). DeepM is evaluated on a manually-assessed dataset. The evaluation results suggest that DeepM is accurate, and it generates the same rankings of code maintainability as those of experienced programmers on 87.5% of manually ranked pairs of Java classes.
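The lexical signal that DeepM consumes, and that token-count metrics ignore, starts with splitting identifiers into words. A minimal sketch; the splitting rule below handles snake_case and simple camelCase only, and is our illustration rather than DeepM's actual tokenizer:

```python
import re

def split_identifier(name):
    """Split a snake_case or simple camelCase identifier into lowercase
    words, the lexical units a DeepM-style model would embed.
    Illustrative splitter only, not DeepM's tokenizer."""
    parts = re.split(r"_|(?<=[a-z0-9])(?=[A-Z])", name)
    return [p.lower() for p in parts if p]
```

Replacing `parseHttpRequest` with a nonsense token destroys exactly the words this step recovers, which is the abstract's argument against purely statistical metrics.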
Concerns about the durability of transportation infrastructure due to freeze-thaw (F-T) cycles are particularly significant in the Chinese plateau region, where concrete aging and performance deterioration pose substantial challenges. The current national standards for the frost resistance design of concrete structures are based predominantly on the coldest monthly average temperature and do not adequately address the comprehensive effects of the spatiotemporal variance, amplitude, and frequency of F-T cycles. To address this issue, this study introduced a spatiotemporal distribution model to analyze the long-term impact of F-T action on concrete structures by employing statistical analysis and spatial interpolation techniques. The analysis was applied to create a nationwide zonation of F-T action levels from data on the freezing temperature, temperature difference, and the number of F-T cycles. Furthermore, this study explored the similarity between natural environmental conditions and laboratory-accelerated tests using hydraulic pressure and cumulative damage theories. A visualization platform that incorporates tools for meteorological data queries, environmental characteristic analyses, and F-T action similarity calculations was developed. This research lays theoretical groundwork and provides technical guidance for assessing service life and enhancing the quantitative durability design of concrete structures in the Chinese plateau region.
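Counting F-T cycles from a temperature series, one of the three inputs to the zonation, can be sketched as a threshold-crossing count. The -3 °C/3 °C thresholds below are illustrative assumptions, not the values used by the standard or the study:

```python
def count_ft_cycles(daily_temps, freeze=-3.0, thaw=3.0):
    """Count freeze-thaw cycles in a temperature series (°C).

    A cycle is one freeze event (temperature at or below `freeze`)
    followed by a thaw event (temperature at or above `thaw`).
    Thresholds are illustrative assumptions."""
    cycles, frozen = 0, False
    for t in daily_temps:
        if not frozen and t <= freeze:
            frozen = True
        elif frozen and t >= thaw:
            frozen = False
            cycles += 1
    return cycles
```

Run per station and year over gridded meteorological data, a count like this is the kind of frequency statistic the spatial interpolation step would then map nationwide.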