Current arbitrary style transfer models are limited to either the image or the video domain. To achieve satisfying image and video style transfer, two different models have so far been required, trained separately on the image and video domains. In this paper, we show that this can be avoided by introducing UniST, a Unified Style Transfer framework for both images and videos. At the core of UniST is a domain interaction transformer (DIT), which first explores context information within each specific domain and then lets the contextualized domain information interact for joint learning. In particular, DIT enables the exploration of temporal information from videos for the image style transfer task, while allowing rich appearance texture from images to benefit video style transfer, leading to mutual benefits. Considering the heavy computation of traditional multi-head self-attention, we present a simple yet effective axial multi-head self-attention (AMSA) for DIT, which improves computational efficiency while maintaining style transfer performance. To verify the effectiveness of UniST, we conduct extensive experiments on both image and video style transfer tasks and show that UniST performs favorably against state-of-the-art approaches on both. Code is available at https://***/NevSNev/UniST.
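The abstract does not spell out the exact form of AMSA. Below is a minimal sketch, assuming the usual axial-attention factorization it alludes to: full self-attention over all H×W tokens is replaced by a row-wise pass and a column-wise pass, reducing the per-layer cost from O((HW)^2) to O(HW(H+W)). The class name and tensor layout are illustrative assumptions, not the authors' code.

```python
# Illustrative axial multi-head self-attention: attend along each spatial
# axis separately instead of over all H*W tokens at once. Names and layout
# are assumptions; the paper's AMSA may differ in detail.
import torch
import torch.nn as nn

class AxialSelfAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, h, w, c = x.shape                        # x: (batch, H, W, channels)
        rows = x.reshape(b * h, w, c)               # each row is a W-token sequence
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(b, h, w, c)
        cols = x.transpose(1, 2).reshape(b * w, h, c)  # each column: H tokens
        cols, _ = self.col_attn(cols, cols, cols)
        x = cols.reshape(b, w, h, c).transpose(1, 2)
        return x

x = torch.randn(2, 16, 16, 64)                      # toy feature map
print(AxialSelfAttention(64)(x).shape)              # torch.Size([2, 16, 16, 64])
```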
ISBN (print): 9781450394758
Automated program repair (APR) techniques have shown great success in automatically finding fixes for programs in programming languages such as C or Java. In this work, we focus on repairing formal specifications, in particular for the Alloy specification language. As opposed to most APR tools, our approach to repair Alloy specifications, named ICEBAR, does not use test-based oracles for patch assessment. Instead, ICEBAR relies on the use of property-based oracles, commonly found in Alloy specifications as predicates and assertions. These property-based oracles define stronger conditions for patch assessment, thus reducing the notorious overfitting issue caused by using test-based oracles, typically observed in APR contexts. Moreover, as assertions and predicates are inherent to Alloy, whereas test cases are not, our tool is potentially more appealing to Alloy users than test-based Alloy repair tools. At a high level, ICEBAR is an iterative, counterexample-based process that generates and validates repair candidates. ICEBAR receives a faulty Alloy specification with a failing property-based oracle, and uses Alloy's counterexamples to build tests and feed ARepair, a test-based Alloy repair tool, in order to produce a repair candidate. The candidate is then checked against the property oracle for overfitting: if the candidate passes, a repair has been found; if not, further counterexamples are generated to construct tests and enhance the test suite, and the process is iterated. ICEBAR includes different mechanisms, with different degrees of reliability, to generate counterexamples from failing predicates and assertions. Our evaluation shows that ICEBAR significantly improves over ARepair, both in reducing overfitting and in improving the repair rate. Moreover, ICEBAR shows that iterative refinement allows us to significantly improve a state-of-the-art tool for automated repair of Alloy specifications without any modifications to the tool.
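The iterative loop this abstract describes can be summarized in a short sketch. All helper callables (counterexample generation, test construction, the ARepair invocation, and the oracle check) are hypothetical stand-ins injected as parameters, not ICEBAR's actual API:

```python
# Sketch of the counterexample-driven repair loop described in the abstract.
# All helpers are hypothetical callables supplied by the caller.
def icebar_loop(spec, oracle, find_counterexamples, build_tests,
                run_arepair, passes_oracle, max_iterations=30):
    tests = []
    candidate = spec
    for _ in range(max_iterations):
        # derive tests from counterexamples of the failing property oracle
        counterexamples = find_counterexamples(candidate, oracle)
        tests.extend(build_tests(counterexamples))
        # ARepair (test-based) proposes a candidate that passes the suite
        candidate = run_arepair(spec, tests)
        if candidate is None:
            return None                     # no candidate could be produced
        # the property oracle guards against overfitting to the test suite
        if passes_oracle(candidate, oracle):
            return candidate                # genuine repair found
    return None                             # iteration budget exhausted
```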
Drones have become a vital part of the technology-driven world we live in today. From delivering our day-to-day items to high-grade military purposes, drones have spread to various sectors o...
Context: Stakeholders constantly make assumptions in the development of deep learning (DL) frameworks. These assumptions are related to various types of software artifacts (e.g., requirements, design decisions, and technical debt) and can turn out to be invalid, leading to system failures. Existing approaches and tools for assumption management usually depend on manual identification of assumptions. However, assumptions are scattered across various sources (e.g., code comments, commits, pull requests, and issues) of DL framework development, and manually identifying them incurs high costs (e.g., time and resources). Objective: The objective of the study is to evaluate different classification models for identifying assumptions, from the point of view of developers and users, in the context of DL framework projects (i.e., issues, pull requests, and commits) on GitHub. Method: First, we constructed a new dataset, the largest of its kind (the AssuEval dataset), of assumptions collected from the TensorFlow and Keras repositories on GitHub. Then we explored the performance of seven non-transformer-based models (e.g., Support Vector Machine, Classification and Regression Trees), the ALBERT model, and three decoder-only models (i.e., ChatGPT, Claude, and Gemini) for identifying assumptions on the AssuEval dataset. Results: The study results show that ALBERT achieves the best performance (f1-score: 0.9584) for identifying assumptions on the AssuEval dataset, much better than the other models (the 2nd-best f1-score is 0.8858, achieved by the Claude 3.5 Sonnet model). Though ChatGPT, Claude, and Gemini are popular models, we do not recommend using them to identify assumptions in DL framework development because of their low performance. Fine-tuning ChatGPT, Claude, Gemini, or other language models (e.g., Llama3, Falcon, and BLOOM) specifically for assumptions might improve their performance for assumption identification. Conclusions: This stu...
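For a sense of scale, one of the non-transformer baselines named above (a Support Vector Machine) reduces to a few lines of scikit-learn. The texts and labels below are invented placeholders, not the AssuEval dataset:

```python
# Hedged sketch: an SVM baseline for binary assumption identification.
# Training texts/labels are toy placeholders, not the AssuEval dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "We assume the input tensors are already on the GPU.",
    "Assuming channels-last layout, the kernel indexes rows first.",
    "Fix a typo in the installation guide.",
    "Bump the version number for the 2.4 release.",
]
labels = [1, 1, 0, 0]                 # 1 = assumption, 0 = not an assumption

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["We assume CUDA is available and skip the CPU fallback."]))
```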
Fuzzy Cognitive Maps (FCMs) are graph-based simulation models commonly used to model complex systems. They are often built by participants and aggregated to compare the viewpoints of homogeneous groups (e.g., anglers and ecologists) and to increase the reliability of the FCM. However, the default approach for aggregation may propagate the errors of an individual participant, producing an aggregate FCM whose structure and simulation outcomes do not align with the system of interest. Alternative aggregation methods exist; however, there are no criteria to assess the quality of aggregation methods. We define nine desirable criteria for FCM aggregation algorithms and demonstrate how three existing aggregation procedures from social choice theory can aggregate FCMs and fulfill these criteria, enabling the assessment and comparison of FCM aggregation procedures to support modelers in selecting an aggregation algorithm. Moreover, we classify existing aggregation algorithms to provide structure to the growing body of aggregation approaches.
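For reference, the "default approach for aggregation" criticized here is commonly understood as per-edge averaging of the participants' adjacency matrices. A minimal sketch with invented numbers shows how a single participant's erroneous edge propagates into the aggregate:

```python
# Minimal sketch of default FCM aggregation: per-edge averaging of
# participants' adjacency matrices. The social-choice procedures the paper
# evaluates as alternatives are not reproduced here.
import numpy as np

def aggregate_fcms(adjacency_matrices: list[np.ndarray]) -> np.ndarray:
    """Average edge weights across participants' FCMs (concepts aligned)."""
    stacked = np.stack(adjacency_matrices)  # shape: (participants, n, n)
    return stacked.mean(axis=0)             # any single outlier shifts an edge

# Two 3-concept maps; participant B's spurious strong edge (concept 0 -> 2)
# carries into the aggregate, illustrating the failure mode described above.
fcm_a = np.array([[0.0, 0.7, 0.0], [0.0, 0.0, 0.5], [0.2, 0.0, 0.0]])
fcm_b = np.array([[0.0, 0.6, 0.9], [0.0, 0.0, 0.4], [0.3, 0.0, 0.0]])
print(aggregate_fcms([fcm_a, fcm_b]))
```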
This article discusses the issues of systematically processing electronic workshop content to prepare competitive graduates of various majors in knowledge-intensive areas. The research proposes an approach to an aut...
ISBN (digital): 9798350353266
ISBN (print): 9798350353273
This study examines the efficacy of machine learning techniques, namely Multilayer Perceptron (MLP) and Random Forest, in forecasting Nigeria's Gross Domestic Product (GDP). Traditional statistical methods frequently fail to capture intricate, nonlinear relationships, necessitating more advanced techniques. The dataset employed in this research encompasses 22 instances from 2000 to 2021. It includes variables such as GDP, healthcare expenditure, population, and the index of economic freedom obtained from the World Bank, as well as the Nigerian Corruption Perceptions Index. The data was prepared for training and testing following the CRISP-DM methodology. The models were trained and evaluated using five-fold cross-validation. Performance metrics indicated that MLP outperformed Random Forest, with MAE values of 16.8228 and 25.3074 and RMSE values of 23.2111 and 33.2027, respectively. These findings suggest that MLP offers more accurate GDP predictions for Nigeria, effectively managing complex data relationships. This study provides valuable insights for enhancing predictive accuracy and supporting more effective economic planning and decision-making.
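A hedged sketch of the evaluation protocol described (MLP versus Random Forest under five-fold cross-validation, scored with MAE and RMSE) follows; the data is a random placeholder, not the World Bank / corruption-index dataset used in the study:

```python
# Sketch of the described evaluation: MLP vs Random Forest, 5-fold CV,
# MAE and RMSE. Random placeholder data stands in for the real dataset.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_validate
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(22, 4))    # 22 yearly instances, 4 predictors
y = rng.normal(size=22)         # stand-in for GDP

for name, model in [("MLP", MLPRegressor(max_iter=2000)),
                    ("RandomForest", RandomForestRegressor(random_state=0))]:
    res = cross_validate(model, X, y, cv=5,
                         scoring=("neg_mean_absolute_error",
                                  "neg_root_mean_squared_error"))
    print(name,
          "MAE:", -res["test_neg_mean_absolute_error"].mean(),
          "RMSE:", -res["test_neg_root_mean_squared_error"].mean())
```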
Mental illness caused by depression may have a deep negative impact on individuals or on society as a whole. It is a growing and severe problem that tends to increase over time due to the exte...
ISBN (print): 9798350345728
Software development companies commonly use Global Software Development (GSD) in their industry. A competent Scrum team supports the success of a GSD project. This research aims to identify the game components, in the form of game types, elements, and technologies, that contribute to building experience for the Scrum team in a GSD environment. We conducted a Systematic Literature Review (SLR) following the Kitchenham guidelines and the snowballing method. From the 12 selected papers, we concluded that none of the games specifically focuses on the competence of the Scrum team in GSD. The SLR results also identify ten game elements and technologies as a reference for game development to meet the challenges in GSD. The conclusion is that there are still great opportunities to develop a game that can improve the competence of the GSD Scrum team. Such game development should draw on the identified elements and technologies.
We propose two quantum algorithms for communication between two observers, Alice and Bob, situated at spacelike-separated locations. Alice and Bob share one qubit each of a Bell state. While sending classical informat...
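The record is cut off, but the shared resource it names, a Bell state, is standard. A small numpy illustration of preparing the |Φ+⟩ = (|00⟩ + |11⟩)/√2 pair follows; which of the four Bell states the paper actually uses is not recoverable from this snippet:

```python
# Prepare the Bell pair |Phi+> = (|00> + |11>)/sqrt(2): Hadamard on qubit 0,
# then CNOT with qubit 0 as control. Which Bell state the paper uses is an
# assumption; the snippet above does not say.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]])     # controlled-NOT gate
I = np.eye(2)

state = np.zeros(4)
state[0] = 1.0                                    # start in |00>
state = CNOT @ np.kron(H, I) @ state              # entangle the two qubits
print(state)                                      # [0.707, 0, 0, 0.707]
```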