Much attention has been paid to computational thinking (CT) as a problem-solving approach across various curricula, particularly in mathematics. Most studies solely used a digital instrument or examined transfer of problem-solving ability, neglecting the mathematics knowledge domain or how the novel digital instrument functions alongside the dominant paper-and-pencil instrument in a classroom. Using Instrument-Mediated Activity Theory, our qualitative case study compares how secondary-level students appropriated computer programming (as a means of using CT) and paper-and-pencil instruments to solve mathematics textbook word problems, via the analysis of three cases. Our results show that each instrument privileged certain ways of thinking and, by extension, de-emphasized others. This finding implies that teachers seeking to introduce computational concepts should be aware of an epistemic clash arising from the long-term use of paper-and-pencil for solving mathematics problems. We suggest that a more effective way to bring CT into secondary-level mathematics is to introduce new types of problems or tasks that are less likely to interfere with the dominant instrument.
ISBN (digital): 9783031605994
ISBN (print): 9783031606014; 9783031605994
Clustering is an unsupervised learning task that aims to partition data into a set of clusters. In many applications, these clusters correspond to real-world constructs (e.g., electoral districts, playlists, TV channels) whose benefit can only be attained by groups when they reach a minimum level of representation (e.g., 50% to elect their desired candidate). In this paper, we study the k-means clustering problem with the additional constraint that each group (e.g., demographic group) must have a minimum level of representation in at least a given number of clusters. We formulate the problem through a mixed-integer optimization framework and present an alternating minimization algorithm, called MiniReL, that directly incorporates the fairness constraints. While incorporating the fairness criteria leads to an NP-hard assignment problem within the algorithm, we present computational approaches that make the algorithm practical even for large datasets. Numerical results show that this approach can produce fair clusters with practically no increase in the clustering cost across standard benchmark datasets.
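The alternating-minimization idea described in this abstract can be sketched in a few lines. This is a toy illustration only: MiniReL solves the constrained assignment step exactly as a MILP, whereas here a greedy repair stands in for it, and all function names and the initialization are assumptions.

```python
def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def group_fraction(assign, groups, c, g):
    """Fraction of cluster c's members that belong to group g."""
    size = sum(1 for a in assign if a == c)
    hits = sum(1 for a, gg in zip(assign, groups) if a == c and gg == g)
    return hits / size if size else 0.0

def kmeans_fair(points, groups, k, min_frac=0.5, iters=20):
    """Toy alternating minimization for k-means under a representation
    constraint: each group must make up at least min_frac of at least
    one cluster. A greedy repair replaces MiniReL's exact MILP step."""
    centers = [list(p) for p in points[:k]]   # naive initialization
    assign = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest center
        assign = [min(range(k), key=lambda c, p=p: dist2(p, centers[c]))
                  for p in points]
        # greedy repair: a group with no represented cluster is pulled
        # together into the cluster where it is already most numerous
        for g in set(groups):
            if not any(group_fraction(assign, groups, c, g) >= min_frac
                       for c in range(k)):
                best = max(range(k), key=lambda c: sum(
                    1 for a, gg in zip(assign, groups) if a == c and gg == g))
                assign = [best if gg == g else a
                          for a, gg in zip(assign, groups)]
        # update step: recompute centroids of non-empty clusters
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centers[c] = [sum(x) / len(members) for x in zip(*members)]
    return assign
```

On well-separated data whose natural clusters already satisfy the constraint, this behaves like plain Lloyd's algorithm; the repair step only fires when a group would otherwise be left without a represented cluster.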
Block-based programming environments, widely used for teaching beginners, pose accessibility challenges for individuals with visual impairments due to limited support for screen readers and keyboard navigation. To add...
ISBN (print): 9783031569562; 9783031569579
Genetic programming-based evolutionary feature construction is a widely used technique for automatically enhancing the performance of a regression algorithm. While it has achieved great success, a challenging problem in feature construction is the issue of overfitting, which has led to the development of many multi-objective methods to control overfitting. However, for multi-objective methods, a key issue is how to select the final model from the front with different trade-offs. To address this challenge, in this paper, we propose a novel minimal complexity knee point selection strategy in evolutionary multi-objective feature construction for regression to select the final model for making predictions. Experimental results on 58 datasets demonstrate the effectiveness and competitiveness of this strategy when compared to eight existing methods. Furthermore, an ensemble of the proposed strategy and existing model selection strategies achieves the best performance and outperforms four popular machine learning algorithms.
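The general knee-point idea behind such a selection strategy can be sketched as follows. This is a hedged illustration, not the paper's exact minimal-complexity rule: the distance-to-extreme-line heuristic and the tie-break toward lower complexity are assumptions.

```python
def knee_point(front):
    """Pick a knee point from a 2-D Pareto front of (error, complexity)
    pairs: the solution farthest from the straight line joining the two
    extreme solutions, breaking ties toward lower complexity."""
    front = sorted(front)                      # sort by error, then complexity
    (x1, y1), (x2, y2) = front[0], front[-1]   # extremes of the front
    def dist(p):
        x0, y0 = p
        # perpendicular distance from p to the line through the extremes
        num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
        den = ((y2 - y1) ** 2 + (x2 - x1) ** 2) ** 0.5
        return num / den if den else 0.0
    return max(front, key=lambda p: (dist(p), -p[1]))
```

The tie-break `-p[1]` is what makes the selection lean toward simpler models, mirroring the "minimal complexity" motivation above.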
Computational thinking (CT) is an essential skill required for every individual in the digital era to become a creative problem solver. The purpose of this research is to investigate the relationships between computational thinking skills, the Big Five personality factors, and learning motivation using structural equation modeling (SEM). The research administered the computational thinking scale, the NEO-FFI scale, and the Motivated Strategies for Learning Questionnaire to a sample of 92 students pursuing degrees in Computer Science and Engineering. The analysis determined that both personality and learning motivation had positive and significant impacts on computational thinking skills. Personality made a major contribution to the prediction of CT, with conscientiousness being the most influential predictor. The findings of this study suggest that educators and academics should focus on the significance of the psychological side of CT for the improvement of students' CT skills.
Block-based programming is an effective way to introduce students to computer science programming [3], [7], [8]. As the researcher community keeps lowering the barrier to entry, BBP environments now support learners a...
ISBN (digital): 9783031605994
ISBN (print): 9783031606014; 9783031605994
In this paper, we propose a new approach for the top-down compilation of relaxed Binary Decision Diagrams (BDDs) for Discrete Optimization: Lookahead, Merge and Reduce. The approach is inspired by the bottom-up algorithm for reducing exact BDDs, in which equivalent nodes, that is, nodes with the same partial completions, are merged. In our top-down compilation approach, we apply this reduction algorithm to determine which states to merge: we construct a lookahead layer, merge the lookahead-layer nodes according to some heuristic, and then deem nodes having the same feasible completions in the lookahead BDD approximately equivalent. Moreover, under certain structural properties we prove an upper bound on the size of the reduced layers given the size of the merged lookahead layer. In a set of preliminary computational experiments, we evaluate our approach on the 0/1 Knapsack problem, showing that it often achieves much stronger bounds than the traditional top-down compilation scheme.
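The baseline scheme this paper improves on can be sketched for the 0/1 Knapsack problem. The sketch below shows the traditional top-down relaxed compilation (exact merging of equal states, plus a width-limited heuristic merge), without the paper's lookahead step; the merge heuristic (combine the two smallest capacities) is an assumption for illustration.

```python
def relaxed_knapsack_bound(weights, values, capacity, max_width=4):
    """Top-down relaxed decision diagram for 0/1 knapsack. A node's
    state is its remaining capacity; nodes with equal state are merged
    exactly, and when a layer exceeds max_width the two smallest states
    are merged by keeping the larger capacity -- a relaxation, so the
    result is an upper bound on the optimal value."""
    layer = {capacity: 0}                          # state -> best value so far
    for w, v in zip(weights, values):
        nxt = {}
        for cap, val in layer.items():
            for take in (0, 1):
                if take and w > cap:
                    continue                       # infeasible transition
                s = cap - w if take else cap
                gain = val + (v if take else 0)
                nxt[s] = max(nxt.get(s, -1), gain)  # exact merge: equal states
        # relaxation: shrink the layer, never losing a feasible completion
        states = sorted(nxt)
        while len(states) > max_width:
            lo, hi = states[0], states[1]
            nxt[hi] = max(nxt.pop(lo), nxt[hi])    # keep the larger capacity
            states = sorted(nxt)
        layer = nxt
    return max(layer.values())                     # upper bound on the optimum
```

With a generous width the diagram is exact and the bound equals the optimum; as the width shrinks, the bound weakens, which is exactly the gap the lookahead-based merging heuristic targets.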
The remanufacturing process, driven by human-robot collaboration technology, is becoming an important carrier for the circular economy, contributing to economic development while significantly reducing environmental pressure. However, existing research on human-robot collaborative disassembly lines has certain limitations and fails to address economic and environmental considerations adequately. Therefore, in this study, a techno-economic and environmental benefit-oriented human-robot collaborative disassembly line balancing problem (TEBHRC-DLBP) was formulated that requires the utilization of human and robot resources to improve disassembly efficiency and quality, and to facilitate decision-making on recycling options for disassembled parts to maximize economic and environmental benefits. First, a mixed-integer programming (MIP) model was developed for the TEBHRC-DLBP to minimize the number of workstations, minimize the smoothing index, and maximize techno-economic and environmental benefits. Second, a multi-objective immune genetic algorithm (MIGA) was developed to solve the proposed TEBHRC-DLBP efficiently. A five-layer encoding method and a triple decoding strategy were constructed based on the problem characteristics, and immune operators were introduced into the well-known NSGA-II structure to perturb the global search with a controlled strength, avoiding wasted effort and improving algorithm efficiency. In addition, the correctness and validity of the proposed MIP model and the MIGA were verified using the case of a small-scale personal computer. In 21 benchmark tests with scales ranging from 7 to 148, the proposed MIGA achieved significantly better results than the seven algorithms reported in the literature and obtained the best value on 16 benchmarks. Finally, the application of the MIGA to the disassembly case of a power battery module demonstrates its good stability, convergence, and diversity, as well as its excellent practical ap…
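The core filter inside NSGA-II-style multi-objective algorithms such as the MIGA above is Pareto non-dominated sorting. A minimal sketch of the dominance filter, for a minimization problem (the full algorithm adds fronts, crowding distance, and here immune operators, none of which are shown):

```python
def nondominated(solutions):
    """Return the Pareto-optimal subset for minimization. Each solution
    is a tuple of objective values; a dominates b if it is no worse in
    every objective and strictly better in at least one."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t != s)]
```

In the problem above the objective tuples would hold (workstation count, smoothing index, negated techno-economic/environmental benefit), so all three can be minimized together.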
ISBN (print): 9783031504846; 9783031504853
Foundational language models, i.e. large, pre-trained neural transformer models such as Google BERT and OpenAI's ChatGPT, GPT-3 or GPT-4, have attracted considerable general media attention. Microsoft's *** service has also integrated a foundational model (CodePilot) to make programmers more productive. Some have gone so far as to herald the end of the programming profession, an unsubstantiated claim. We investigate the research question of to what extent individuals without the necessary technical background can still use such systems to achieve a set task. Our preliminary evidence, based on a single case study, suggests that using such systems may lead to a good task-completion rate, but without much deepening of understanding along the way.
ISBN (print): 9783031664557; 9783031664564
Deep Neural Networks (DNNs) have found successful applications in various non-safety-critical domains. However, given the inherent lack of interpretability in DNNs, ensuring their prediction accuracy through robustness verification becomes imperative before deploying them in safety-critical applications. Neural Network Verification (NNV) approaches can broadly be categorized into exact and approximate solutions. Exact solutions are complete but time-consuming, making them unsuitable for large network architectures. In contrast, approximate solutions, aided by abstraction techniques, can handle larger networks, although they may be incomplete. This paper introduces AccMILP, an approach that leverages abstraction to transform NNV problems into Mixed Integer Linear Programming (MILP) problems. AccMILP considers the impact of individual neurons on target labels in DNNs and combines various relaxation methods to reduce the size of NNV models while ensuring verification accuracy. The experimental results indicate that AccMILP can reduce the size of the verification model by approximately 30% and decrease the solution time by at least 80% while maintaining performance equal to or greater than 60% of MIPVerify. In other words, AccMILP is well-suited for the verification of large-scale DNNs.
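The cheapest abstraction that MILP-based verifiers of this kind tighten is interval bound propagation: sound but incomplete output bounds for a ReLU network. A minimal sketch (not AccMILP itself; it assumes a ReLU after every layer and represents weights as row-major nested lists):

```python
def interval_bounds(layers, lo, hi):
    """Interval bound propagation through a ReLU network. `layers` is a
    list of (weights, biases) pairs, weights given row by row; lo/hi
    bound each input coordinate. Returns sound per-neuron output bounds."""
    for W, b in layers:
        new_lo, new_hi = [], []
        for row, bias in zip(W, b):
            # split each weight by sign to get sound pre-activation bounds
            l = bias + sum(w * (lo[j] if w >= 0 else hi[j])
                           for j, w in enumerate(row))
            h = bias + sum(w * (hi[j] if w >= 0 else lo[j])
                           for j, w in enumerate(row))
            new_lo.append(max(0.0, l))         # ReLU is monotone, so it
            new_hi.append(max(0.0, h))         # maps bounds to bounds
        lo, hi = new_lo, new_hi
    return lo, hi
```

If such cheap bounds already separate the target label from all others, no MILP needs to be solved; otherwise exact big-M encodings of the unstable ReLUs are added, which is where the model-size reduction reported above pays off.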