ISBN:
(Print) 9783031637742; 9783031637759
Given an optimization problem, combining knowledge from both (i) known structural or algorithmic results and (ii) new solving techniques helps gain insight into the problem by tightening the gap between lower and upper bounds on the sought optimal value. Additionally, this gain may be further improved by iterating (i) and (ii) until a fixed point is reached. In this paper, we illustrate the above through the classical Cyclic Bandwidth problem, an optimization problem which takes as input an undirected graph G = (V, E) with |V| = n, and asks for a labeling phi of V in which every vertex v takes a unique value phi(v) in [1; n], in such a way that B_c(G, phi) = max_{uv in E(G)} min{|phi(u) - phi(v)|, n - |phi(u) - phi(v)|}, called the cyclic bandwidth of G, is minimized. Using the classic benchmark from the Harwell-Boeing sparse matrix collection introduced in [16], we show how to combine (i) previous results from the Cyclic Bandwidth literature, and (ii) new solving techniques, which we first present and then implement, starting from the best results obtained in step (i). We show that this process allows us to determine the optimal cyclic bandwidth value for half of the instances of our benchmark, and improves the best known bounds for a large number of the remaining instances.
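As a concrete illustration, the cyclic bandwidth of a given labeling can be evaluated directly from the definition; the sketch below (an illustration only, not one of the paper's solving techniques) computes B_c(G, phi) for a small cycle:

```python
# Illustrative sketch: evaluate the cyclic bandwidth B_c(G, phi) of a
# labeling phi on a graph with n vertices, straight from the definition.
def cyclic_bandwidth(n, edges, phi):
    """edges: iterable of (u, v) pairs; phi: dict vertex -> unique label in 1..n."""
    return max(
        min(abs(phi[u] - phi[v]), n - abs(phi[u] - phi[v]))
        for u, v in edges
    )

# Example: a 4-cycle labeled in cyclic order has cyclic bandwidth 1,
# since the "wrap-around" edge (3, 0) has cyclic distance n - 3 = 1.
cycle_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(cyclic_bandwidth(4, cycle_edges, {0: 1, 1: 2, 2: 3, 3: 4}))  # -> 1
```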
Teaching computer architecture to students focused on high-level programming is a challenging task. Such students often struggle with the complexities of this subject, exhibiting difficulties in grasping connections b...
Most existing coupling schemes for the Finite Element Method and the Material Point Method (FEM-MPM) are designed for explicit solvers. By contrast, implicit schemes offer the advantage of substantially larger time steps while maintaining enhanced stability, which is particularly beneficial for tackling stiff nonlinear problems. Despite this, the development of implicit FEM-MPM coupling has not been extensively explored, leaving a notable gap in the context of contact and elastoplastic deformation challenges. Thus, this paper proposes a novel unified FEM-MPM coupling approach with implicit time integration under the framework of the multivariable variational principle and convex cone programming, termed CP-FEMP. CP-FEMP is the first successful attempt to impose contact constraints via Lagrange multipliers and the barrier method under convex cone programming, which can tackle not only tie constraints but also frictional contact between MPM and FEM domains while ensuring convergence and feasibility regardless of the time step size or mesh resolution. The contact locking issue in tie contact is circumvented using a well-defined interpolation space. The governing equations, the associated frictional contact model, and the associated elastoplastic constitutive law are formulated into a global convex optimisation problem, which is efficiently solved using a primal-dual interior-point method. Through a succession of standard contact and elastoplastic benchmarks, CP-FEMP demonstrates its proficiency in the precise transfer of contact forces across MPM and FEM domains while showcasing commendable energy conservation. Finally, CP-FEMP is applied to a slope-retaining wall interaction problem. All results demonstrate that CP-FEMP provides a comprehensive solution for FEM-MPM coupling, allowing for large incremental steps under nonlinear contact and elastoplastic large deformation while guaranteeing strict, hard non-penetration conditions without convergence issues.
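The barrier idea can be illustrated loosely with a minimal 1D sketch (the names k, x_target, and mu are assumptions for illustration, not CP-FEMP's formulation): a log-barrier term keeps the iterate strictly feasible while damped Newton steps drive it toward the constrained optimum.

```python
# Minimal 1D sketch of a log-barrier for a non-penetration constraint:
# minimize 0.5*k*(x - x_target)^2 subject to x >= 0, by adding -mu*log(x).
# The barrier is convex, so Newton's method converges reliably; damping
# the step keeps x strictly positive (feasible) at every iteration.
def barrier_minimize(k=1.0, x_target=-0.5, mu=1e-4, x=1.0, iters=100):
    for _ in range(iters):
        grad = k * (x - x_target) - mu / x   # gradient of objective + barrier
        hess = k + mu / x**2                 # Hessian, always positive (convex)
        step = grad / hess
        while x - step <= 0:                 # damp so x stays strictly feasible
            step *= 0.5
        x -= step
    return x

x_star = barrier_minimize()
print(x_star > 0)  # the barrier keeps the solution strictly feasible
```

The unconstrained minimum sits at x_target = -0.5, which violates x >= 0; the barrier pushes the solution to a small positive value instead, mimicking how a hard non-penetration condition is enforced without an infeasible iterate ever occurring.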
The rapid growth of technology and computer science has led to a surge in demand for skilled professionals in this field. The skill set required for computer science jobs has evolved rapidly, creating challenges for those already in the workforce who need to adapt their skills quickly to meet industry demands. To stay ahead of the curve, it is essential to understand the hottest skills needed in the field. The article introduces a new method for analyzing job advertisements using social network analysis to identify the most critical skills required by employers. In this research, to form the communication network of skills, first 5763 skills were collected from the LinkedIn social network, then the relationships between skills were collected and searched in 7777 computer science job advertisements, and finally the balanced communication network of skills was formed. The study analyzes the formed communication network of skills in the computer science job market and identifies four distinct communities of skills: Generalists, Infrastructure and Security, Software Development, and Embedded Systems. The findings reveal that employers value both hard and soft skills, such as programming languages and teamwork. Communication was found to be the most important skill in the labor market. Additionally, certain skills were highlighted based on their centrality indices, including communication, English, SQL, Git, and business skills, among others. The study provides valuable insights into the current state of the computer science job market and can help guide individuals and organizations in making informed decisions about skills acquisition and hiring practices.
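The network-construction step can be sketched in a few lines: build a skill co-occurrence network from job ads and rank skills by degree centrality. The ads and skills below are invented for illustration; the paper's pipeline and centrality measures may differ.

```python
from itertools import combinations
from collections import Counter

# Toy job ads, each listing the skills it mentions (invented data).
ads = [
    {"Python", "SQL", "Communication"},
    {"Java", "Git", "Communication"},
    {"SQL", "Git", "English"},
]

# Edge weight = number of ads in which two skills co-occur.
edges = Counter()
for ad in ads:
    for a, b in combinations(sorted(ad), 2):
        edges[(a, b)] += 1

# Degree centrality: number of distinct skills each skill co-occurs with.
degree = Counter()
for (a, b), _w in edges.items():
    degree[a] += 1
    degree[b] += 1

print(degree.most_common(3))
```

On this toy data, broadly co-advertised skills such as Communication end up with the highest degree, mirroring the paper's finding that communication tops the centrality ranking.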
The growing importance of wearable technology in ice and snow sports highlights its role in injury prevention, where environmental hazards elevate injury risks. To address this, we propose a decision-making model using interval-valued bipolar fuzzy programming (IVBFP) for the optimal selection of wearable devices focused on athlete safety. The model employs multi-criteria decision-making (MCDM) methods to evaluate critical factors such as comfort, safety, durability, and real-time monitoring. Fuzzy logic enhances the precision and consistency of decision-making. The IVBFP model addresses vital challenges, including the diverse performance metrics of wearable devices and the uncertainty in expert evaluations. In comparative analyses, the model exhibited a 15% improvement in judgment accuracy and a 12% decrease in uncertainty relative to conventional techniques. The results underscore the model's proficiency in correctly forecasting which devices mitigate injury risks, providing improved athlete protection. The approach effectively incorporates expert viewpoints and subjective evaluations, reducing injury risk in both simulated and real datasets. This research is significant both theoretically and practically: it offers a comprehensive framework to guarantee athlete safety in extreme conditions, connecting scholars and practitioners.
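A minimal sketch of interval-valued multi-criteria scoring, with hypothetical weights, devices, and ratings (not the paper's IVBFP formulation): each device receives an interval [lo, hi] per criterion, and devices are ranked by the weighted midpoints of those intervals.

```python
# Hypothetical criteria weights (sum to 1) and interval-valued ratings.
criteria_weights = {"comfort": 0.3, "safety": 0.4, "durability": 0.2, "monitoring": 0.1}

devices = {
    "DeviceA": {"comfort": (0.6, 0.8), "safety": (0.7, 0.9),
                "durability": (0.5, 0.7), "monitoring": (0.8, 1.0)},
    "DeviceB": {"comfort": (0.5, 0.7), "safety": (0.6, 0.8),
                "durability": (0.7, 0.9), "monitoring": (0.6, 0.8)},
}

def score(ratings):
    # Weighted sum of interval midpoints: a simple way to collapse the
    # uncertainty interval into a single comparable number.
    return sum(w * (ratings[c][0] + ratings[c][1]) / 2
               for c, w in criteria_weights.items())

best = max(devices, key=lambda d: score(devices[d]))
print(best)  # -> DeviceA (higher weighted safety and comfort midpoints)
```

A fuller treatment would keep the intervals throughout and compare them with an interval order or a bipolar membership function; the midpoint collapse here is only the simplest defensible choice.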
ISBN:
(Print) 9783031737084; 9783031737091
Concurrent Constraint Programming (CCP) originated in the late 1980s with Vijay A. Saraswat's work. In the early 2000s, a "soft" perspective of the constraint store based on a parametric algebraic structure (a c-semiring) was proposed, namely soft CCP (SCCP). This paper enhances the SCCP language with local constraint spaces, where agents can see and interact with only a portion of the stored information. Thus, it is possible to represent areas where an agent can perform operations without affecting other local spaces. The resulting language is security-oriented, as actions are checked against (e.g., read/write) rights, and it is quite expressive because of nonmonotonic operations on the store (e.g., the removal of constraints is allowed), thus making the coordination of several agents more flexible and adaptive to personal and global goals.
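A toy sketch of the idea (not the paper's formal SCCP semantics, and with invented class and method names): a store partitioned into named local spaces, with per-agent read/write rights and nonmonotonic retraction of constraints.

```python
class LocalSpaceStore:
    """Toy constraint store with local spaces and per-agent rights."""

    def __init__(self):
        self.spaces = {}   # space name -> set of constraint tokens
        self.rights = {}   # (agent, space) -> set of allowed operations

    def grant(self, agent, space, ops):
        self.spaces.setdefault(space, set())
        self.rights[(agent, space)] = set(ops)

    def _check(self, agent, space, op):
        if op not in self.rights.get((agent, space), set()):
            raise PermissionError(f"{agent} may not {op} in {space}")

    def tell(self, agent, space, c):
        self._check(agent, space, "write")
        self.spaces[space].add(c)

    def retract(self, agent, space, c):   # nonmonotonic: constraints can be removed
        self._check(agent, space, "write")
        self.spaces[space].discard(c)

    def ask(self, agent, space, c):
        self._check(agent, space, "read")
        return c in self.spaces[space]

store = LocalSpaceStore()
store.grant("alice", "local1", {"read", "write"})
store.grant("bob", "local1", {"read"})        # bob can observe but not modify
store.tell("alice", "local1", "x<=5")
print(store.ask("bob", "local1", "x<=5"))     # -> True
```

In the actual SCCP setting, constraints live in a c-semiring and entailment replaces the set membership used here; the sketch only mirrors the access-control and removal behaviour described in the abstract.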
Dermatologists typically require extensive experience to accurately classify skin cancer. In recent years, the development of computer vision and machine learning has provided new methods for assisted diagnosis. Existing skin cancer image classification methods have certain limitations, such as poor interpretability, the requirement of domain knowledge for feature extraction, and the neglect of lesion area information in skin images. This paper proposes a new genetic programming (GP) approach to automatically learn global and/or local features from skin images for classification. To achieve this, a new function set and a new terminal set have been developed. The proposed GP method can automatically and flexibly extract effective local/global features from different types of input images, thus providing a comprehensive description of skin images. A new region detection function has been developed to select the lesion areas from skin images for feature extraction. The performance of this approach is evaluated on three skin cancer image classification tasks, and compared with three GP methods and six non-GP methods. The experimental results show that the new approach achieves significantly better or similar performance in most cases. Further analysis validates the effectiveness of our parameter settings, visualizes the multiple region detection functions used in the individuals evolved by the proposed approach, and demonstrates its good convergence ability.
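The region-detection idea can be illustrated with a minimal sketch (the function names here are invented, not the paper's GP function set): crop a candidate lesion area from an image and extract a simple local feature alongside a global one.

```python
import statistics

def region(image, top, left, height, width):
    """Crop a rectangular patch from a 2D image (list of pixel rows)."""
    return [row[left:left + width] for row in image[top:top + height]]

def mean_feature(patch):
    """A trivial feature: mean pixel intensity of a patch."""
    return statistics.mean(v for row in patch for v in row)

# Toy 4x4 "image" with a bright region in the top-right corner.
image = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]

global_feat = mean_feature(image)                     # whole-image feature
local_feat = mean_feature(region(image, 0, 2, 2, 2))  # "lesion-area" feature
print(global_feat, local_feat)  # -> 2.25 9
```

In the GP setting, the crop coordinates and the choice between global and local extraction would be evolved rather than fixed, which is what lets the method adapt its feature descriptions to different image types.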
ISBN:
(Print) 9798400706035
The proceedings contain 171 papers. The topics discussed include: teaching programming in the age of generative AI; early adoption of generative artificial intelligence in computing education: emergent student use cases and perspectives in 2023; guidelines for the evolving role of generative AI in introductory programming based on emerging practice; AI-grading standup updates to improve project-based learning outcomes; artificial intelligence in everyday life 2.0: educating university students from different majors; in-person vs blended learning: an examination of grades, attendance, peer support, competitiveness, and belonging; on using physiological sensors and AI to monitor emotions in a bug-hunting game; fostering and assessing dispositions by providing grades a meaning in a computing education context; ordered network analysis in CS education: unveiling patterns of success and struggle in automated programming assessment; and with great power comes great responsibility - integrating data ethics into computing education.
Computing education plays a significant role in shaping the calibre of future computing professionals; hence, improving its quality is a valuable endeavour. A promising approach to enhance computing education is leveraging student data from version control systems (VCS). While previous studies have utilised VCS data to predict academic performance, there remains a gap in harnessing this data for learning analytics to understand student learning behaviours in real time. In this research, we introduce the Polivr ecosystem, a comprehensive platform designed to address this gap by utilising VCS data for learning analytics in computing education. The Polivr ecosystem comprises three key modules: Polivr Anonymiser, which ensures data privacy by anonymising student identities; Polivr Core, which mines learning metrics from Git repositories; and Polivr Web Viewer, which transforms the raw metrics into insightful visualisations for educators. We evaluated Polivr using anonymised repositories collected from undergraduate computing courses. The resulting visualisations revealed trends and patterns in student learning behaviours, such as coding habits and progression over time. These insights provide valuable information for educators to enhance teaching strategies and potentially identify at-risk students. This research demonstrates the potential of version control systems as a rich source of learning analytics, contributing to improving computing education by enabling data-driven decision-making in instructional design and student support.
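The kind of metric mining a module like Polivr Core performs might look like the following sketch; the log format and fields below are assumptions for illustration, not Polivr's actual implementation.

```python
from collections import Counter
from datetime import datetime

# A tiny sample in an assumed "timestamp sha message" format, as might be
# produced by a git log pretty-format; real repositories would be read
# via `git log` rather than from a string.
sample_log = """\
2024-03-01T09:15:00 a1b2c3 add starter tests
2024-03-01T22:40:00 d4e5f6 implement parser
2024-03-03T23:55:00 0a1b2c fix off-by-one before deadline
"""

# Mine a simple learning metric: commits per day, exposing cadence
# (steady work vs deadline-driven bursts).
commits_per_day = Counter()
for line in sample_log.splitlines():
    timestamp, _sha, _msg = line.split(" ", 2)
    day = datetime.fromisoformat(timestamp).date()
    commits_per_day[day] += 1

print(dict(commits_per_day))
```

Metrics like these per-day counts (or late-night commit ratios, gaps between sessions, and similar) are the raw material a web viewer can turn into the behavioural visualisations the abstract describes.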
During testing, an error in software can either propagate to the output variables or be masked before it reaches them. Error propagation and masking characteristics are crucial in software testing because they directly influence testing effectiveness. The testing community has hence focused on developing test mechanisms to promote the propagation and reduce masking of errors in software under test. However, little is known about how severely the errors propagate and are masked during software execution. Moreover, little is known about what influences software's propagation and masking characteristics. Thus, in this study, we performed experiments to investigate the behavior of errors during software execution. We chose six case examples and created mutants per case example by seeding an error into each one. We then executed the mutants with randomly selected test cases and investigated how many errors were propagated and masked. If an error was masked, we examined the cause. In addition, we investigated what percentage of errors in the mutants reached the output variables. This study provides insight into the characteristics of errors in terms of propagation and masking, which will help testers design test inputs and choose monitoring variables to properly handle the masking of errors during testing.
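The propagation-versus-masking distinction can be illustrated with a toy mutant (not one of the study's six case examples): the same seeded error reaches the output variable for some inputs and is masked by downstream computation for others.

```python
def original(x):
    y = x + 1
    return max(y, 0)        # output variable

def mutant(x):
    y = x + 2               # seeded error: '+ 1' mutated to '+ 2'
    return max(y, 0)

def propagated(x):
    """True if the seeded error is visible at the output for input x."""
    return original(x) != mutant(x)

print(propagated(5))    # -> True: 6 vs 7, the error reaches the output
print(propagated(-10))  # -> False: max(...) clamps both to 0, masking the error
```

Here the masking cause is easy to name (the clamp discards the corrupted intermediate value); the study's experiments perform this comparison at scale across mutants and random test cases, and additionally diagnose the cause whenever an error fails to propagate.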