With the increasing automation and intelligence of substations and the promotion of unattended operation, batteries play an increasingly important role. Battery life assessment methods usually rely...
The emergence of mobile financial technology (mobile fintech) services raises numerous public concerns regarding privacy issues; consequently, researchers in mobile technology acceptance have focused on consumers' privacy self-disclosure behaviors under ordinary circumstances. However, there is still a lack of understanding of how external influences, such as a public health crisis, affect consumers' privacy decision-making processes. Therefore, in this article, we examine the effects of privacy- and pandemic-related antecedents on mobile fintech users' information self-disclosure behavior during the coronavirus disease 2019 pandemic. The present research adopts a self-administered questionnaire with 712 effective responses for data collection and a two-stage partial least squares-structural equation modeling-artificial neural network (PLS-SEM-ANN) approach to test the proposed theoretical lens. The results indicate that the significant structural paths in the model are consistent with the proposed hypotheses and the existing literature. Surprisingly, face-to-face avoidance (FFA) does not significantly influence consumers' self-disclosure willingness, and neither infection severity nor infection susceptibility is significantly associated with FFA. The present research is the first to investigate consumers' privacy-related behavior by integrating the privacy-calculus framework with control agency theory. It focuses on consumers' decision-making during the pandemic, explicitly highlighting the macroenvironment's role in influencing an individual's behavior.
Data analyses are usually designed to identify some property of the population from which the data are drawn, generalizing beyond the specific data sample. For this reason, data analyses are often designed in a way that guarantees that they produce a low generalization error. That is, they are designed so that the result of a data analysis run on a sample does not differ too much from the result one would achieve by running the analysis over the entire population. An adaptive data analysis can be seen as a process composed of multiple queries interrogating some data, where the choice of which query to run next may rely on the results of previous queries. The generalization error of each individual query/analysis can be controlled by using an array of well-established statistical techniques. However, when queries are arbitrarily composed, the individual errors can propagate through the chain of queries and lead to a high overall generalization error. To address this issue, researchers have designed several techniques that not only guarantee bounds on the generalization errors of single queries but also guarantee bounds on the generalization error of the composed analyses. The choice of which of these techniques to use often depends on the chain of queries that an adaptive data analysis can generate. In this work, we consider adaptive data analyses implemented as while-like programs, and we design a program analysis that can help identify which technique to use to control their generalization errors. More specifically, we formalize the intuitive notion of adaptivity as a quantitative property of programs, because the adaptivity level of a data analysis is a key measure for choosing the right technique. Based on this definition, we design a program analysis for soundly approximating this quantity. The program analysis generates a representation of the data analysis as a weighted dependency graph, where the weight is an upper bound on the...
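The weighted-dependency-graph idea in this abstract can be illustrated with a minimal sketch: queries are nodes, an edge from one query to another records that the later query's choice depends on the earlier query's result, and each node carries an upper bound on how many times a loop may execute it. The adaptivity is then approximated as the heaviest dependency chain. All names and the exact weighting scheme below are illustrative assumptions, not the paper's actual formalism.

```python
from functools import lru_cache

def adaptivity(deps: dict[str, list[str]], weight: dict[str, int]) -> int:
    """Approximate adaptivity as the heaviest path (sum of node weights)
    through a DAG of query dependencies."""
    @lru_cache(maxsize=None)
    def heaviest(node: str) -> int:
        preds = deps.get(node, [])  # queries this node's choice depends on
        return weight[node] + max((heaviest(p) for p in preds), default=0)
    return max(heaviest(n) for n in weight)

# q2 depends on q1's result; q3 depends on q2 and sits in a loop
# bounded by 10 iterations, so it may be re-chosen up to 10 times.
deps = {"q2": ["q1"], "q3": ["q2"]}
weight = {"q1": 1, "q2": 1, "q3": 10}
print(adaptivity(deps, weight))  # 12
```

A bound like this is what would let an analyst pick between composition techniques: a long adaptive chain calls for stronger mechanisms than a shallow, mostly non-adaptive one.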
Diagnosing flight route faults based on QAR data is crucial for preventing flight accidents and reducing maintenance costs. This paper presents a method for diagnosing flight route faults using a Long Short-Term Memor...
ISBN: (Print) 9798400709760
Financial sentiment analysis, the task of discerning market sentiment from financial texts, plays a crucial role in investment decisions, risk assessment, and understanding economic trends. Traditional sentiment analysis techniques have often faced limitations in handling the complexities and nuances of financial language. The advent of large language models (LLMs) has brought a paradigm shift in this field. With their remarkable ability to process and understand natural language, LLMs are enabling new approaches that increase the accuracy and sophistication of financial sentiment analysis. This paper provides a comparative overview of cutting-edge LLM-based techniques for financial sentiment analysis. We introduce a six-pronged classification framework covering data types, sentiment granularity, model architectures, training approaches, methodological focus, and evaluation metrics. This framework aims to provide a structured perspective for understanding recent research trends. Our analysis reveals several key developments in the field. We discuss the challenges and opportunities associated with advanced techniques, such as instruction-tuning approaches and retrieval-augmented methods. While LLMs offer clear advantages, ensuring data quality, mitigating bias, enhancing model explainability, and scaling these models to real-world applications remain active research areas. This review offers investors and financial researchers a comprehensive guide to the evolving landscape of financial sentiment analysis, facilitating well-informed choices for different use cases and laying the groundwork for future research.
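The aspect-level, instruction-style setup surveyed here can be sketched with a simple prompt builder; the survey prescribes no particular model or API, so the actual LLM call is left out and the template wording is an illustrative assumption.

```python
def build_prompt(text: str, aspect: str) -> str:
    """Build an instruction-style prompt for aspect-level financial
    sentiment classification (template wording is illustrative)."""
    return (
        "Instruction: classify the sentiment of the following financial "
        f"text toward the aspect '{aspect}' as positive, negative, or neutral.\n"
        f"Text: {text}\n"
        "Answer:"
    )

prompt = build_prompt("Q3 revenue beat guidance but margins compressed.", "margins")
print(prompt)
```

The completion an LLM returns for such a prompt (one of the three labels) is then scored against annotated data, which is where the evaluation-metric axis of the paper's framework applies.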
This paper introduces a simulation-based approach to enhance quality defect detection in a prominent French textile company's manufacturing process. The production workflow is initially simulated and analyzed to identify key parameters influencing defect creation and detection. After assessing the existing control process using performance metrics, a novel Inspection Quality Plan (IQP) is formulated. To validate the effectiveness of the proposed IQP, a simulation model is developed and calibrated to mirror real-world conditions. This model, initially validated against the current solution, acts as a benchmark for evaluating predefined scenarios and determining optimal configurations. The implemented scenario, tested in the workshop under authentic conditions, results in a remarkable 42% improvement in defect detection within the knitting process. The positive outcomes are meticulously examined and discussed. This study, employing a holistic approach that integrates simulation modeling, performance evaluation, and real-world deployment, establishes an optimized strategy for enhancing defect detection in the textile industry. It underscores the effectiveness of the proposed IQP, showcasing significant improvements observed in practical settings based on a simulation model.
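The kind of scenario comparison described above can be illustrated with a toy Monte Carlo sketch: simulate defect creation along a production flow and measure how many defects each inspection plan catches. The defect, inspection, and catch probabilities below are invented for illustration and bear no relation to the company's actual data or the paper's calibrated model.

```python
import random

def detection_rate(n_items, p_defect, inspect_prob, p_catch, seed=0):
    """Fraction of created defects caught under a given inspection plan."""
    rng = random.Random(seed)
    defects = caught = 0
    for _ in range(n_items):
        if rng.random() < p_defect:          # item develops a defect
            defects += 1
            if rng.random() < inspect_prob:  # item happens to be inspected
                if rng.random() < p_catch:   # inspection spots the defect
                    caught += 1
    return caught / defects if defects else 0.0

# Compare a baseline plan against a denser inspection plan.
baseline = detection_rate(10_000, 0.05, inspect_prob=0.3, p_catch=0.8)
denser = detection_rate(10_000, 0.05, inspect_prob=0.6, p_catch=0.8)
print(f"baseline {baseline:.2f} -> denser plan {denser:.2f}")
```

In the paper's setting the simulation is calibrated against the real workflow before scenarios are ranked, which is what lets the chosen configuration transfer to the workshop.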
ISBN: (Print) 9781665490320
This article describes the FModal tool that has been designed to bridge the gap between the structural dynamics and guidance and control domains to facilitate the development and use of high-fidelity flexible body dynamics models. FModal streamlines the process of generating modal data (including residual vectors and modal integral terms) from component NASTRAN structural dynamics models. The data generation process can be tailored to meet simulation fidelity and performance needs. FModal's output is a portable HDF5 file with hierarchical, well-organized, and labeled data that can be used to automate, simplify, and speed up the creation of flexible multibody dynamics models, resulting in faster design iterations and reduced costs. This paper uses an interface between FModal and the DARTS flexible multibody dynamics tool to carry out several numerical studies to exercise and validate the FModal pipeline.
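Consuming a hierarchical HDF5 export of the kind described above might look like the following sketch, using the standard `h5py` library. The group names, dataset names, and attributes here are assumptions made for illustration; they are not FModal's actual schema.

```python
import h5py
import numpy as np

# Write a toy file shaped like a component's modal-data export
# (hypothetical layout, not FModal's real one).
with h5py.File("modal.h5", "w") as f:
    comp = f.create_group("components/solar_array")
    comp.create_dataset("frequencies_hz", data=np.array([0.7, 1.9, 4.2]))
    comp.create_dataset("mode_shapes", data=np.zeros((3, 6)))
    comp.attrs["n_modes"] = 3

# A downstream multibody tool can then discover and load the data
# by walking the hierarchy with labeled names.
with h5py.File("modal.h5", "r") as f:
    comp = f["components/solar_array"]
    freqs = comp["frequencies_hz"][:]
    print(int(comp.attrs["n_modes"]), freqs.tolist())
```

Because the data are self-describing (named groups, labeled datasets, attributes), such a file can drive automated model construction without hand-written bookkeeping, which is the cost and iteration-speed benefit the paper emphasizes.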
This study addresses the problem of training deep learning models with limited datasets, a significant challenge in sectors like medical imaging and food quality analysis. To tackle this issue, generative adversarial networks (GANs) will be employed to augment the available data and improve model performance. An innovative approach is introduced here, integrating semi-supervised learning and generative modeling to maximize the use of small datasets in developing robust models. The method involves reversing the conventional distribution of training and testing data to focus on model evaluation and generalization from limited samples. Wasserstein GANs (WGANs) and Semi-Supervised GANs (SGANs) are utilized to supplement datasets with synthetic but realistic examples, enhancing the training process in scenarios of data scarcity. These techniques are applied in the context of visible reflectance spectroscopy to analyze tomato sauces, demonstrating the method's effectiveness in non-invasively assessing key quality parameters such as oil content, degrees Brix, and pH. The results show significant improvements in model performance metrics: for %Oil content, overall accuracy increased from 0.47 to 0.66; for degrees Brix, it rose from 0.65 to 0.71; and for pH measurement, accuracy improved from 0.43 to 0.62. These outcomes highlight the model's improved capability to generalize and maintain accuracy with limited data.
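The "reversed split" idea above can be sketched in a few lines: train on a small real subset, evaluate generalization on the larger remainder, and supplement the small training set with synthetic samples. Here jittered copies stand in for GAN-synthesized spectra, and the data, labels, and classifier are toy stand-ins invented for illustration, not the paper's WGAN/SGAN pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))            # toy stand-in for spectra
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy quality label

# Reversed split: only 20 real samples to train on; 80 held out to test.
X_train, y_train, X_test, y_test = X[:20], y[:20], X[20:], y[20:]

def centroid_acc(Xtr, ytr, Xte, yte):
    """Nearest-class-centroid accuracy: a deliberately simple classifier."""
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = np.linalg.norm(Xte - c1, axis=1) < np.linalg.norm(Xte - c0, axis=1)
    return (pred.astype(int) == yte).mean()

# Mock augmentation: jittered copies stand in for GAN-generated samples.
X_aug = np.vstack([X_train, X_train + rng.normal(scale=0.05, size=X_train.shape)])
y_aug = np.concatenate([y_train, y_train])

acc_plain = centroid_acc(X_train, y_train, X_test, y_test)
acc_aug = centroid_acc(X_aug, y_aug, X_test, y_test)
print(f"small-train acc {acc_plain:.2f}, augmented acc {acc_aug:.2f}")
```

In the paper the synthetic samples come from trained WGANs and SGANs rather than noise jitter, which is what makes the augmentation realistic enough to lift the reported accuracies.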
Polymer-based semiconductors and organic electronics encapsulate a significant research thrust for informatics-driven materials development. However, device measurements are described by a complex array of design and parameter choices, many of which are sparsely reported. For example, the mobility of a polymer-based organic field-effect transistor (OFET) may vary by several orders of magnitude for a given polymer, as a plethora of parameters related to solution processing, interface design/surface treatment, thin-film deposition, postprocessing, and measurement settings have a profound effect on the value of the final measurement. Incomplete contextual experimental details hamper the availability of reusable data applicable for data-driven optimization, modeling (e.g., machine learning), and analysis of new organic devices. To curate organic device databases that contain reproducible and findable, accessible, interoperable, and reusable (FAIR) experimental data records, data ontologies that fully describe sample provenance and process history are required. However, standards for generating such process ontologies are not widely adopted for experimental materials domains. In this work, we design and implement an object-relational database for storing experimental records of OFETs. A data structure is generated by drawing on an international standard for batch process control (ISA-88) to facilitate the design. We then mobilize these representative data records, curated from the literature and laboratory experiments, to enable data-driven learning of process-structure-property relationships. The work presented herein opens the door for the broader adoption of data management practices and design standards for both the organic electronics and the wider materials community.
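A relational schema in the spirit described above links each measurement to its sample and to an ordered process history, so the provenance that determines mobility is never lost. The following sqlite sketch is an illustrative assumption: the table and column names are invented for this example and are not the paper's actual schema.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE sample (
  id INTEGER PRIMARY KEY,
  polymer TEXT NOT NULL
);
CREATE TABLE process_step (           -- ISA-88-style ordered operations
  id INTEGER PRIMARY KEY,
  sample_id INTEGER REFERENCES sample(id),
  step_order INTEGER,
  operation TEXT,                     -- e.g. spin-coating, annealing
  parameters TEXT                     -- JSON blob of process parameters
);
CREATE TABLE measurement (
  id INTEGER PRIMARY KEY,
  sample_id INTEGER REFERENCES sample(id),
  mobility_cm2_vs REAL
);
""")
db.execute("INSERT INTO sample VALUES (1, 'P3HT')")
db.executemany(
    "INSERT INTO process_step VALUES (?, 1, ?, ?, ?)",
    [(1, 1, "spin-coating", '{"rpm": 2000}'),
     (2, 2, "annealing", '{"temp_C": 150}')],
)
db.execute("INSERT INTO measurement VALUES (1, 1, 0.08)")

# Recover a measurement together with its full process history.
rows = db.execute("""
  SELECT s.polymer, p.operation, m.mobility_cm2_vs
  FROM measurement m
  JOIN sample s ON s.id = m.sample_id
  JOIN process_step p ON p.sample_id = s.id
  ORDER BY p.step_order
""").fetchall()
print(rows)
```

Keeping process steps as first-class rows, rather than free text in a paper's methods section, is what makes records queryable and FAIR enough for downstream machine learning.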
In high-precision machining, inevitable tool wear significantly affects surface quality. Traditional tool wear modeling is complicated by the complex mathematical reasoning involved and by the identification of numerous unknown parameters linked to the wear mechanism. Data-driven modeling for tool wear prediction can avoid these issues but suffers from high cost and low efficiency, owing to the expensive tool consumption and time-consuming experiments required to generate the training dataset. To fill this gap, this study proposed an innovative approach to accurately and effortlessly identifying tool wear in high-precision machining. First, an unsupervised Modified Toeplitz Inverse Covariance Clustering (MTICC) algorithm was proposed to objectively categorize tool wear phases from multi-channel time-series data, moving beyond the traditional manual, experience-based division; its effectiveness was validated by well-designed experiments. Then, a hybrid deep learning model with multi-scale CNN-BiLSTM-GCN and cross-attention structures was developed to deeply extract spatial-temporal features from multi-channel signals, taking into account the interdependencies of the sensor network for higher accuracy. After hyperparameter optimization, feature importance analysis identified the most important features, namely "X-force", "Y-force", and "Phase1 Active Power", and hyperparameter importance analysis quantified the respective contributions of the CNN, BiLSTM, and GCN modules. In comparative studies, the proposed multi-scale CNN-BiLSTM-GCN model performed better than other models, with a weighted average F1 score of 0.987. The proposed model was finally deployed on an intelligent IoT platform and successfully achieved real-time identification of tool wear in the high-precision machining process.