ISBN (print): 9783031498954; 9783031498961
The state-of-the-art in time series classification has come a long way, from the 1NN-DTW algorithm to the ROCKET family of classifiers. However, amid the current fast-paced development of new classifiers, taking a step back and performing simple baseline checks is essential. These checks are often overlooked, as researchers focus on establishing new state-of-the-art results, developing scalable algorithms, and making models explainable. Nevertheless, many datasets look like time series at first glance, yet classic tabular methods that ignore time ordering may perform better on them. For example, on spectroscopy datasets, tabular methods tend to significantly outperform recent time series methods. In this study, we compare the performance of tabular models using classic machine learning approaches (e.g., Ridge, LDA, RandomForest) with the ROCKET family of classifiers (e.g., Rocket, MiniRocket, MultiRocket). Tabular models are simple and very efficient, while the ROCKET family is more complex and offers state-of-the-art accuracy and efficiency among recent time series classifiers. We find that tabular models outperform the ROCKET family on approximately 19% of univariate and 28% of multivariate datasets in the UCR/UEA benchmark and achieve accuracy within 10 percentage points on about 50% of datasets. Our results suggest that it is important to consider simple tabular models as baselines when developing time series classifiers: these models are very fast, can be as effective as more complex methods, and may be easier to understand and deploy.
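A minimal sketch of the tabular-baseline idea described in this abstract, using only synthetic data and scikit-learn: each time step is treated as an ordinary feature column and temporal order is ignored. The dataset, shapes, and hyperparameters below are illustrative assumptions, not the UCR/UEA setup used in the study; a ROCKET-family comparison would additionally require a package such as sktime.

    # Hedged sketch: a tabular baseline for time series classification.
    # The data are synthetic placeholders, not the UCR/UEA benchmark.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import RidgeClassifierCV
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_series, n_timesteps = 200, 150
    X = rng.normal(size=(n_series, n_timesteps))   # each row is one series
    y = (X[:, :75].mean(axis=1) > 0).astype(int)   # toy labels

    # Tabular view: time steps become plain feature columns (order is ignored).
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for name, clf in [
        ("Ridge", RidgeClassifierCV(alphas=np.logspace(-3, 3, 10))),
        ("RandomForest", RandomForestClassifier(n_estimators=200, random_state=0)),
    ]:
        clf.fit(X_train, y_train)
        print(name, clf.score(X_test, y_test))

    # A ROCKET-family comparison would first transform X with random
    # convolutional kernels (e.g., sktime's Rocket/MiniRocket transformers)
    # and then fit the same linear classifier on the resulting features.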
ISBN (print): 9783031414558; 9783031414565
In this paper, a performance comparison between Proton, a Linux-based compatibility layer, and native Windows is carried out for the purpose of video gaming. Proton is a fork of Wine: just as Wine allows Windows software in general to run on Linux, Proton is specialized to run Windows-native games on Linux. Proton has recently gained traction because it is the platform used by Valve's gaming console, the Steam Deck. The ability to play Windows games is therefore crucial for new owners of this Linux-based console, whose entire game libraries have so far been played only on Windows. With this study, through a more detailed analysis, we try to answer the general question: "Is Proton good enough?"
Authors: Mu, Caihong; Ying, Jiahui; Fang, Yunfei; Liu, Yi
Affiliations: Xidian Univ, Collaborative Innovation Center of Quantum Information of Shaanxi Province, School of Artificial Intelligence, Ministry of Education Key Lab of Intelligent Perception & Image Understanding, Xi'an, People's Republic of China; Xidian Univ, School of Electronic Engineering, Xi'an, People's Republic of China
ISBN (digital): 9783031402890
ISBN (print): 9783031402883; 9783031402890
Cross-domain recommendation (CDR) is an effective method for dealing with the problem of data sparsity in recommender systems. However, most existing CDR methods belong to single-target CDR, which only improves the recommendation performance of the target domain without considering the source domain. Meanwhile, existing dual-target or multi-target CDR methods do not consider the differences between domains during feature transfer. To address these problems, this paper proposes a graph neural network for CDR based on transfer and inter-domain contrastive learning (TCLCDR). Firstly, user-item graphs of the two domains are constructed, and data from both domains are used to alleviate the problem of data sparsity. Secondly, a graph convolutional transfer layer is introduced so that information transfers bidirectionally between the two domains, alleviating the problem of negative transfer. Finally, contrastive learning is performed on the overlapping users or items of the two domains, and the self-supervised contrastive learning task and the supervised learning task are jointly trained to alleviate the differences between the two domains.
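A minimal PyTorch sketch of the inter-domain contrastive idea on overlapping users: each user's embedding learned in domain A is pulled toward the same user's embedding learned in domain B, with the other users in the batch acting as negatives (an InfoNCE-style loss). The shapes, temperature, and random embeddings are illustrative assumptions, not the exact TCLCDR loss.

    # Hedged sketch of an InfoNCE-style inter-domain contrastive loss on
    # overlapping users; not the exact loss used by TCLCDR.
    import torch
    import torch.nn.functional as F

    def inter_domain_contrastive_loss(z_a, z_b, temperature=0.2):
        """z_a, z_b: (n_overlap_users, dim) embeddings of the same users
        learned in domain A and domain B, row-aligned."""
        z_a = F.normalize(z_a, dim=1)
        z_b = F.normalize(z_b, dim=1)
        logits = z_a @ z_b.t() / temperature      # pairwise similarities
        targets = torch.arange(z_a.size(0))       # diagonal entries are positives
        # Symmetric loss: domain A -> B and domain B -> A.
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))

    # Toy usage with random embeddings for 64 overlapping users.
    z_a = torch.randn(64, 32, requires_grad=True)
    z_b = torch.randn(64, 32, requires_grad=True)
    loss = inter_domain_contrastive_loss(z_a, z_b)
    loss.backward()
    print(float(loss))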
ISBN (print): 9783031414558; 9783031414565
Credit scoring is a vital task in the financial industry for assessing the creditworthiness of companies and mitigating credit risks. In recent years, machine learning algorithms have shown promising results in credit scoring by leveraging large amounts of tabular data. However, traditional tabular data alone may not capture all the information relevant to credit scoring that is typically used by credit risk analysts. In this paper, we propose a novel approach for company credit scoring that integrates text and tabular data. Our method uses natural language processing techniques to extract key features from risk assessments written by credit risk experts, which are then combined with financial data to predict the likelihood of default within a one-year horizon. We compare different machine-learning-based models across different text embedding techniques. Our results show that adding a textual feature improves the model's ability to capture defaulted companies. More concretely, adding a categorical feature generated by applying sentiment analysis to the text risk assessments yields the best results.
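A minimal scikit-learn sketch of the feature-combination step: tabular financial features are scaled and concatenated with a categorical sentiment label that, in the best-performing setup described above, would come from sentiment analysis of the analyst's written risk assessment. The column names, toy rows, and gradient-boosting model are illustrative assumptions, not the paper's exact models or data.

    # Hedged sketch: tabular financial features combined with a categorical
    # sentiment feature derived from text risk assessments. Toy data only.
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    df = pd.DataFrame({
        "leverage":       [0.4, 0.9, 0.2, 0.7],
        "ebitda_margin":  [0.15, -0.05, 0.22, 0.01],
        # In practice this label would be produced by sentiment analysis of
        # the credit analyst's written risk assessment.
        "text_sentiment": ["positive", "negative", "positive", "neutral"],
        "default_1y":     [0, 1, 0, 1],
    })

    pre = ColumnTransformer([
        ("num", StandardScaler(), ["leverage", "ebitda_margin"]),
        ("txt", OneHotEncoder(handle_unknown="ignore"), ["text_sentiment"]),
    ])
    model = Pipeline([("pre", pre), ("clf", GradientBoostingClassifier())])
    model.fit(df.drop(columns="default_1y"), df["default_1y"])
    print(model.predict_proba(df.drop(columns="default_1y"))[:, 1])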
ISBN (print): 9783031358937; 9783031358944
Accelerating global language loss, associated with elevated incidence of illicit substance use, type 2 diabetes, binge drinking, and assault, as well as sixfold higher youth suicide rates, poses a mounting challenge for minority, Indigenous, refugee, colonized, and immigrant communities. In environments where intergenerational transmission is often disrupted, artificial intelligence neural machine translation systems have the potential to revitalize heritage languages and empower new speakers by allowing them to understand and be understood via instantaneous translation. Yet artificial intelligence solutions pose problems of their own, such as prohibitive cost and output quality issues. A solution is to couple neural engines to classical, rule-based ones, which empower engineers to purge loanwords and neutralize interference from dominant languages. This work describes an overhaul of the engine deployed at *** to enable translation into and out of Lemko, a severely endangered minority lect of Ukrainian genetic classification indigenous to the borderlands between Poland and Slovakia (where it is also referred to as Rusyn). Dictionary-based translation modules were fitted with morphologically and syntactically informed noun, verb, and adjective generators fueled by 877 lemmata together with 708 glossary entries, and the entire system was backed by 9,518 automatic, codification-referencing, must-pass quality-control tests. The fruits of this labor are a 23% improvement since the last publication in translation quality into English and a 35% increase in quality when translating from English into Lemko, providing translations that outperform every Google Translate service by every metric and score 396% higher than Google's Ukrainian service when translating into Lemko.
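A minimal sketch of the kind of rule-based pass described above, in which loanwords from a dominant language are replaced from a curated glossary before the translation is emitted. The glossary entries and the regex-based tokenization are hypothetical placeholders, not actual Lemko data or the deployed engine's logic.

    # Hedged sketch of a dictionary-based loanword purge over a draft
    # translation. The glossary below is a hypothetical placeholder; the real
    # system relies on curated, codification-referencing entries.
    import re

    GLOSSARY = {
        "loanword_a": "native_form_a",   # hypothetical entries
        "loanword_b": "native_form_b",
    }

    def purge_loanwords(text: str, glossary: dict) -> str:
        def replace(match: re.Match) -> str:
            word = match.group(0)
            return glossary.get(word.lower(), word)
        return re.sub(r"\w+", replace, text)

    print(purge_loanwords("draft output with loanword_a inside", GLOSSARY))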
ISBN (digital): 9783031409608
ISBN (print): 9783031409592; 9783031409608
In the classical Human-Machine Dialogue (HMD) setting, existing research has mainly focused on the objective quality of the machine's answer. However, it has recently been shown that humans do not perceive a human-made answer and a machine-made answer in the same manner. In this paper, we place ourselves in the context of conversational artificial intelligence software and introduce the setting of postmodern human-machine dialogues by focusing on the factual relativism of the human perception of the interaction. We demonstrate the above-mentioned setting in practice via a pedagogical experiment using ChatGPT3.
ISBN (print): 9783031333736; 9783031333743
How can we detect traffic disturbances from international flight transportation logs, or changes to collaboration dynamics in academic networks? These problems can be formulated as detecting anomalous change points in a dynamic graph. Current solutions do not scale well to large real-world graphs, lack robustness to large numbers of node additions/deletions, and overlook changes in node attributes. To address these limitations, we propose a novel spectral method: Scalable Change Point Detection (SCPD). SCPD generates an embedding for each graph snapshot by efficiently approximating the distribution of the Laplacian spectrum at each step. SCPD can also capture shifts in node attributes by tracking correlations between attributes and eigenvectors. Through extensive experiments using synthetic and real-world data, we show that SCPD (a) achieves state-of-the-art performance, (b) is significantly faster than state-of-the-art methods and can easily process millions of edges in a few CPU minutes, (c) can effectively handle a large number of node attributes, additions, or deletions, and (d) discovers interesting events in large real-world graphs. Code is publicly available at https://***/shenyangHuang/***.
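A minimal sketch of the spectral-embedding idea behind SCPD: summarize each snapshot by a histogram of its normalized Laplacian eigenvalues and flag snapshots whose embedding jumps sharply from the previous one. Dense eigendecomposition via networkx is used here only because the toy graphs are tiny; SCPD itself approximates the spectral density efficiently so that it scales to millions of edges.

    # Hedged sketch: embed each snapshot by a histogram of its normalized
    # Laplacian spectrum, then flag large jumps between consecutive snapshots.
    import numpy as np
    import networkx as nx

    def spectrum_embedding(G, bins=20):
        eigs = nx.normalized_laplacian_spectrum(G)   # eigenvalues lie in [0, 2]
        hist, _ = np.histogram(eigs, bins=bins, range=(0.0, 2.0), density=True)
        return hist

    # Toy dynamic graph: Erdos-Renyi snapshots with a structural change at t=5.
    snapshots = [nx.erdos_renyi_graph(100, 0.05, seed=t) for t in range(5)]
    snapshots += [nx.erdos_renyi_graph(100, 0.20, seed=t) for t in range(5, 10)]

    embs = np.array([spectrum_embedding(G) for G in snapshots])
    jumps = np.linalg.norm(np.diff(embs, axis=0), axis=1)  # distance to previous
    t_star = int(np.argmax(jumps))
    print("largest jump between snapshots", t_star, "and", t_star + 1)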
ISBN (print): 9783031358937; 9783031358944
In this study, we have reviewed the possible negative side effects (NSEs) of existing artificial intelligence (AI) secretarial services and propose a task performance method that can mitigate these effects. An AI assistant is a voice user interface (VUI) that combines voice recognition with AI technology to support self-learning. NSEs may occur when a user encounters unintended behavior of an AI agent or when not all the outcomes of using an agent can be predicted at the development stage. Reducing NSEs in AI has emerged as a major research topic, yet there is a lack of research on applications and solutions for AI secretaries. In this study, we performed a user interface (UI) analysis of actual services; designed NSE-mitigating task execution methods; and developed three prototypes: A (the existing AI secretarial method), B (a confirmation request method), and C (a question guidance method). The usability assessment comprised these factors: efficiency, flexibility, meaningfulness, accuracy, trust, error count, and task execution time. Prototype C showed higher efficiency, flexibility, meaningfulness, accuracy, and trust than prototypes A and B, with B showing higher error counts and task execution times. Most users preferred prototype C since it presented a verifiable option that enabled tasks to be executed quickly with short commands. The results of this study can serve as basic data for research related to the NSEs of using AI and as a reference in designing and evaluating AI services.
ISBN (print): 9783031425073; 9783031425080
We propose a smartphone-based computer vision system for visually impaired people that uses a neural network to classify objects and estimate image depth in order to improve spatial orientation in the environment. For this purpose, we have developed and implemented a spatial orientation algorithm with a recursive function for calculating the sum of image array values to estimate depth. The advantage of this algorithm is its low computational complexity, which ensures high performance in real time. Our system is designed to be easy to use, portable, and affordable, making it accessible to a wide range of users. The proposed system utilizes a smartphone camera and computer vision algorithms to analyze the user's environment and provide real-time feedback through audio and haptic channels. The neural depth estimation model is trained on a large dataset of images and corresponding depth maps, which allows the system to accurately avoid various objects in the user's field of view.
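One possible reading of the recursive region-sum idea, sketched below in Python: the predicted depth map is recursively split in half, the values in each half are summed, and the system steers toward the half with the larger total (more estimated free space). This is only an illustrative interpretation under that assumption, not the authors' exact algorithm, and the random depth map stands in for the neural network's output.

    # Hedged sketch: recursively sum halves of a predicted depth map and
    # steer toward the side with more estimated free space.
    import numpy as np

    def region_sum_heading(depth_map, level=0, max_level=3):
        """Recursively split the map into left/right halves and return the
        sequence of half-splits with the largest total depth."""
        if level == max_level or depth_map.shape[1] < 2:
            return []
        mid = depth_map.shape[1] // 2
        left, right = depth_map[:, :mid], depth_map[:, mid:]
        if left.sum() >= right.sum():
            return ["left"] + region_sum_heading(left, level + 1, max_level)
        return ["right"] + region_sum_heading(right, level + 1, max_level)

    # Toy depth map (larger values = farther away); a real one would come
    # from the neural depth-estimation model.
    depth_map = np.random.default_rng(0).uniform(0.5, 5.0, size=(240, 320))
    print("suggested heading:", region_sum_heading(depth_map))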
ISBN (print): 9783031284687; 9783031284694
This paper presents an open-source omnidirectional walk controller that provides bipedal walking for non-parallel robots through parameter optimization. The approach relies on pattern generation with quintic splines in Cartesian space. Additionally, baselines of the walk velocities achieved in simulation are provided for all robots of the Humanoid Virtual Season, as well as for some commercial robot models.
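A minimal worked example of the quintic-spline building block used for pattern generation: a fifth-order polynomial for one Cartesian coordinate of a foot trajectory, with position, velocity, and acceleration prescribed at both endpoints (six boundary conditions for six coefficients). The step length and timing below are illustrative assumptions, not optimized walk parameters.

    # Hedged sketch: one quintic spline segment x(t) = sum(c_k * t^k, k=0..5)
    # with position/velocity/acceleration fixed at t=0 and t=T.
    import numpy as np

    def quintic_coefficients(x0, v0, a0, x1, v1, a1, T):
        # Rows encode x(0), x'(0), x''(0), x(T), x'(T), x''(T).
        A = np.array([
            [1, 0, 0,    0,      0,       0],
            [0, 1, 0,    0,      0,       0],
            [0, 0, 2,    0,      0,       0],
            [1, T, T**2, T**3,   T**4,    T**5],
            [0, 1, 2*T,  3*T**2, 4*T**3,  5*T**4],
            [0, 0, 2,    6*T,    12*T**2, 20*T**3],
        ], dtype=float)
        b = np.array([x0, v0, a0, x1, v1, a1], dtype=float)
        return np.linalg.solve(A, b)

    # Example: the foot moves 0.04 m forward in 0.25 s, starting and ending at rest.
    c = quintic_coefficients(0.0, 0.0, 0.0, 0.04, 0.0, 0.0, T=0.25)
    t = np.linspace(0.0, 0.25, 5)
    x = sum(c[k] * t**k for k in range(6))
    print(np.round(x, 4))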