Currently, when assessing and giving feedback on students' argumentative writing, educators must spend a considerable amount of time reading and analyzing each essay individually. This process is complicated and time-consuming, especially when the number of students to be assessed is large. The problem addressed by this research is to find the most effective algorithm for providing accurate and reliable predictions in the context of evaluating and giving feedback on students' argumentation. This study compares three algorithms (logistic regression, Naive Bayes, and Random Forest) for predicting the quality of student argumentation using essays from grades 6-12. Logistic regression performed best with 94.34% accuracy, followed by Random Forest with 91.98% and Naive Bayes with 88.93%. The study optimized preprocessing and selected algorithms for an automated guidance model; it is the first stage of a three-part study on developing automated guidance models. The data came from Kaggle, and the study aims to improve the accuracy of automated guidance models for student argumentation.
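The three-way comparison described above can be sketched with scikit-learn. The essay corpus, labels, and TF-IDF features below are hypothetical stand-ins, since the abstract does not specify the feature representation or preprocessing used.

```python
# Sketch: compare the three classifiers from the study by cross-validation.
# The toy corpus and labels here are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

essays = [
    "Claim: homework helps. Evidence: studies show measurable gains.",
    "School uniforms are bad because I said so.",
    "Claim: recess matters. Evidence: focus improves after breaks.",
    "Video games are fun and everyone likes them.",
] * 5  # repeated so 5-fold cross-validation is possible on a toy corpus
labels = [1, 0, 1, 0] * 5  # 1 = effective argumentation (hypothetical label)

X = TfidfVectorizer().fit_transform(essays)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "naive_bayes": MultinomialNB(),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
# Mean cross-validated accuracy per model, mirroring the study's comparison.
scores = {name: cross_val_score(model, X, labels, cv=5).mean()
          for name, model in models.items()}
print(scores)
```

On real grade 6-12 essays the preprocessing (tokenization, stop-word handling, feature selection) would matter far more than this sketch suggests.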
Consider a totally ordered set S of n elements; as an example, a set of tennis players and their rankings. Further assume that their ranking is a total order and thus satisfies transitivity and anti-symmetry. Following Frances Yao (1974), an element (player) is said to be (i, j)-mediocre if it is neither among the top i nor among the bottom j elements of S. Finding a mediocre element is closely related to finding the median element. More than 40 years ago, Yao suggested a very simple and elegant algorithm for finding an (i, j)-mediocre element: Pick i+j+1 elements arbitrarily and select the (i+1)-th largest among them. She also asked: "Is this the best algorithm?" No one seems to have found a better algorithm ever since. We first provide a deterministic algorithm that beats the worst-case comparison bound in Yao's algorithm for a large range of values of i (and corresponding suitable j = j(i)) even if the current best selection algorithm is used. We then repeat the exercise for randomized algorithms; the average number of comparisons of our algorithm beats the average comparison bound in Yao's algorithm for another large range of values of i (and corresponding suitable j = j(i)) even if the best selection algorithm is used; the improvement is most notable in the symmetric case i = j. Moreover, the tight bound obtained in the analysis of Yao's algorithm allows us to give a definite answer for this class of algorithms. In summary, we answer Yao's question as follows: (i) "Presently not" for deterministic algorithms and (ii) "Definitely not" for randomized algorithms. (In fairness, it should be said however that Yao posed the question in the context of deterministic algorithms.) (C) 2021 Elsevier B.V. All rights reserved.
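Yao's 1974 procedure is short enough to state directly in code. A minimal sketch, using sorting in place of a proper selection algorithm (which would give the comparison bounds the paper analyzes) and a toy ranking list:

```python
# Yao's algorithm: pick any i+j+1 elements, return the (i+1)-th largest.
# Within the sample, i elements are larger and j are smaller than the result,
# so it is neither among the top i nor among the bottom j of the whole set.
def mediocre(s, i, j):
    """Return an (i, j)-mediocre element of s (requires len(s) >= i+j+1)."""
    sample = s[: i + j + 1]        # "arbitrarily" chosen: take a prefix
    sample.sort(reverse=True)      # stand-in for a selection algorithm
    return sample[i]               # the (i+1)-th largest of the sample

# Hypothetical player ratings; any totally ordered values would do.
players = [50, 3, 27, 99, 42, 8, 61, 74, 15, 36]
m = mediocre(players, 2, 3)        # neither in the top 2 nor the bottom 3
```

The cost is dominated by selecting the (i+1)-th largest of i+j+1 elements; the paper's contribution is showing this simple scheme is not comparison-optimal.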
Forensic firearm examination provides the court of law with information about the source of fired cartridge cases. We assessed the validity of source decisions of a computer-based method and of 73 firearm examiners who compared breechface and firing pin impressions of 48 comparison sets. We also compared the computer-based method's comparison scores with the examiners' degree-of-support judgments and assessed the validity of the latter. The true-positive rate (sensitivity) and true-negative rate (specificity) of the computer-based method (for the comparison of both the breechface and firing pin impressions) were 94.4% and at least 91.7%, respectively. For the examiners, the true-positive rate was at least 95.3% and the true-negative rate was at least 86.2%. The validity of the source decisions improved when the evaluations of breechface and firing pin impressions were combined and, for the examiners, also when the perceived difficulty of the comparison decreased. The examiners were reluctant to provide source decisions for "difficult" comparisons even though their source decisions were mostly correct. The correlation between the computer-based method's comparison scores and the examiners' degree-of-support judgments ranged from low for the same-source comparisons to negligible for the different-source comparisons. Combining the outcomes of computer-based methods with the judgments of examiners could increase the validity of firearm examinations. The examiners' numerical degree-of-support judgments for their source decisions were not well calibrated and showed clear signs of overconfidence. We suggest studying the merits of performance feedback to calibrate these judgments.
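The two validity measures used throughout the study reduce to confusion-matrix arithmetic. The counts below are a hypothetical split of the 48 comparison sets, chosen only to be consistent with the reported 94.4%/91.7% rates; the study's actual counts are not given in the abstract.

```python
# Sensitivity (true-positive rate) and specificity (true-negative rate)
# from raw decision counts.
def rates(true_pos, false_neg, true_neg, false_pos):
    sensitivity = true_pos / (true_pos + false_neg)  # same-source sets called correctly
    specificity = true_neg / (true_neg + false_pos)  # different-source sets called correctly
    return sensitivity, specificity

# Hypothetical: 34 of 36 same-source and 11 of 12 different-source sets correct.
sens, spec = rates(34, 2, 11, 1)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
```

With these assumed counts the sketch reproduces the abstract's 94.4% and 91.7%.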
We revisit the selection problem, namely that of computing the ith order statistic of n given elements, in particular the classic deterministic algorithm by grouping and partition due to Blum, Floyd, Pratt, Rivest, and Tarjan (1973). Whereas the original algorithm uses groups of odd size at least 5 and runs in linear time, it has been perpetuated in the literature that using smaller group sizes will force the worst-case running time to become superlinear, namely Omega(n log n). We first point out that the usual arguments found in the literature justifying the superlinear worst-case running time fall short of proving this claim. We further prove that it is possible to use group sizes smaller than 5 while maintaining worst-case linear running time. To this end we introduce three simple variants of the classic algorithm, the repeated step algorithm, the shifting target algorithm, and the hyperpair algorithm, all running in linear time.
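For reference, the classic 1973 grouping-and-partition scheme with the usual group size 5 can be sketched as follows; the paper's variants modify this scheme to admit smaller groups.

```python
# Median-of-medians selection: worst-case linear time via groups of 5.
def select(a, i):
    """Return the i-th smallest element of a (0-indexed)."""
    if len(a) <= 5:
        return sorted(a)[i]
    # 1. Split into groups of (at most) 5 and take each group's median.
    medians = [sorted(a[k:k + 5])[len(a[k:k + 5]) // 2]
               for k in range(0, len(a), 5)]
    # 2. Pivot = median of the medians, found recursively.
    pivot = select(medians, len(medians) // 2)
    # 3. Partition around the pivot and recurse on one side.
    lo = [x for x in a if x < pivot]
    hi = [x for x in a if x > pivot]
    if i < len(lo):
        return select(lo, i)
    if i < len(a) - len(hi):
        return pivot               # i falls among elements equal to the pivot
    return select(hi, i - (len(a) - len(hi)))
```

The pivot is guaranteed to discard a constant fraction of the elements, which is what makes the recurrence solve to O(n); the debate the paper settles is whether groups of 3 or 4 can still guarantee this.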
Forensic firearm examiners compare the features in cartridge cases to provide a judgment addressing the question about their source: do they originate from one and the same firearm or from two different firearms? In this article, the validity and reliability of these judgments are studied and compared to the outcomes of a computer-based method. The features we looked at were the striation patterns of the firing pin aperture shear marks of four hundred test shots from two hundred Glock pistols, which were compared by a computer-based method. Sixty of the resulting 79,800 comparisons were shown to 77 firearm examiners. They were asked to judge whether the cartridge cases had the same source or a different source, and to indicate the degree of support the evidence provided for those judgments. The results show that the true-positive rates (sensitivity) and the true-negative rates (specificity) of firearm examiners are quite high. The examiners seem to be slightly less proficient at identifying same-source comparisons correctly, while they outperform the computer-based method used here at identifying different-source comparisons. The degrees of support judged by examiners who report likelihood ratios are not well calibrated. The examiners are overconfident, giving judgments of evidential strength that are too high. The judgments of the examiners and the outcomes of the computer-based method are only moderately correlated. We suggest implementing performance feedback to reduce overconfidence and improve the calibration of degree-of-support judgments, and studying the possibility of combining the judgments of examiners with the outcomes of computer-based methods to increase overall validity. (C) 2019 Elsevier B.V. All rights reserved.
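The likelihood-ratio reporting discussed above can be illustrated numerically. The Gaussian score distributions below are assumptions made for this sketch, not the study's actual models of same-source and different-source comparison scores.

```python
# Likelihood ratio for a comparison score under two hypotheses:
#   LR = P(score | same source) / P(score | different source).
# The Gaussian parameters are hypothetical placeholders.
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def likelihood_ratio(score):
    same_source = normal_pdf(score, 0.8, 0.10)   # assumed same-source score model
    diff_source = normal_pdf(score, 0.3, 0.15)   # assumed different-source score model
    return same_source / diff_source

lr_high = likelihood_ratio(0.75)   # high score: supports same source (LR > 1)
lr_low = likelihood_ratio(0.20)    # low score: supports different source (LR < 1)
```

A well-calibrated examiner's reported LRs would match empirical outcome frequencies; the study found reported LRs systematically too high.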
ISBN:
(Print) 9781467385152
With the popularization and application of computer technology, digital information fills every corner of human society. Manual management of digital information can no longer keep pace with the development of society; therefore, efficient management and accurate positioning of massive amounts of information has become a research hotspot for many research groups. The work proposed in this paper is a two-stage information extraction method based on singular value decomposition. The approach applies singular value decomposition to the training matrix, modifies the eigenvalues with a comparison algorithm, generates a new training matrix, and then uses principal component analysis for secondary data processing. Our approach not only reduces the redundant information and the dimension of the original data, but also eliminates the correlation between the input variables. Using extensive simulations and a number of experiments involving proportional data, we show the merits and the accuracy of the proposed approach.
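The two-stage pipeline can be sketched with NumPy. The paper does not specify its eigenvalue-modifying comparison rule, so the threshold on singular values below is an assumption, as is the random training matrix.

```python
# Sketch of the two-stage extraction: SVD + eigenvalue modification,
# reconstruction, then PCA for secondary processing.
import numpy as np

rng = np.random.default_rng(0)
train = rng.random((20, 8))                    # hypothetical training matrix

# Stage 1: SVD, modify singular values (assumed rule: zero out small ones),
# then rebuild the training matrix.
U, s, Vt = np.linalg.svd(train, full_matrices=False)
s_mod = np.where(s >= 0.5 * s.max(), s, 0.0)   # stand-in for the comparison algorithm
rebuilt = U @ np.diag(s_mod) @ Vt

# Stage 2: PCA on the rebuilt matrix, keeping the top 3 components.
centered = rebuilt - rebuilt.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)         # eigh returns ascending order
components = eigvecs[:, ::-1][:, :3]           # largest-eigenvalue directions
reduced = centered @ components
print(reduced.shape)                           # 20 samples, 3 features
```

The first stage denoises (rank reduction), and the PCA stage decorrelates the remaining input variables, matching the two claimed benefits.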
ISBN:
(Print) 9781509045211
Pathfinding games are a kind of mobile game that falls under puzzle, board, game show, trivia, and card games. Pathfinding games require search techniques to obtain the fastest, most efficient route in the shortest time. Search techniques that can be used are the A* (A Star) and Basic Theta* (Basic Theta Star) algorithms. The A* algorithm was originally introduced by Peter Hart et al.; A* calculates cost using a heuristic function to prioritize the nodes (vertices) to be traversed, where the cost is the distance it takes to walk from one node to another. The Basic Theta* algorithm is a variant of A* created by Alex Nash et al. in 2007, because A* has the disadvantage that the resulting route is often not the true shortest path, since the routing path is constrained to the grid, whereas Basic Theta* relaxes the routing path to all sides/angles. This study measures which search method performs better on square-grid maps in Android-based pathfinding games, using variables such as completeness, time complexity, and optimality. The input parameters for the simulation consist of a square-grid map with a starting point, a goal point, and unwalkable cells; the output consists of completeness, running time, path length, and the number of nodes searched. The results of this study conclude that the A* and Basic Theta* algorithms meet the same completeness criteria and have roughly the same time complexity; the A* algorithm has the advantage of optimality in searching fewer nodes, whereas the Basic Theta* algorithm has the advantage of optimality in producing the shortest results.
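A minimal A* sketch on a 4-connected square grid follows; Basic Theta* would additionally relax each path segment with line-of-sight checks, which is omitted here. The grid, start, and goal are hypothetical.

```python
# A* on a square grid: 0 = walkable, 1 = unwalkable, unit step cost.
import heapq

def a_star(grid, start, goal):
    """Return the path length in steps from start to goal, or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start)]          # (f = g + h, g, node)
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue                            # stale heap entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None                                 # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
steps = a_star(grid, (0, 0), (2, 0))            # must detour around the wall
```

Because the heuristic is admissible, the first time the goal is popped its g-value is the true shortest grid path, which is the optimality property the study measures.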
This work presents the design of a low-rate, low-power sensor node in a wireless sensor network for early detection and monitoring of fire in the environment. It uses the DHT11 digital temperature-humidity sensor for measurement and the MSP430 microcontroller for processing, implementing the better of two algorithms: a comparison algorithm and a DST algorithm based on Dempster-Shafer theory for fire detection. The detected information is transmitted wirelessly over short distances to the computer system through an XBee Pro module. The performance of the algorithms is analyzed and tested with the undetected-fire metric. The information is shared in real time with the owner, fire station, police, and other responsible persons by the computer system, using TeamViewer software over the Internet. The DST algorithm has more software design complexity than the comparison algorithm, but it gives better performance.
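Dempster's rule of combination, the core of the DST approach named above, can be sketched for two evidence sources over the frame {fire, no_fire}. The mass assignments below are hypothetical readings, not the paper's sensor model.

```python
# Dempster's rule of combination for two basic mass assignments over the
# frame {fire, no_fire}; "any" denotes the full frame (total uncertainty).
def combine(m1, m2):
    frame = {"fire", "no_fire"}
    combined, conflict = {}, 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            sa = frame if a == "any" else {a}
            sb = frame if b == "any" else {b}
            inter = sa & sb
            if not inter:
                conflict += pa * pb            # contradictory evidence mass
            else:
                key = "any" if inter == frame else next(iter(inter))
                combined[key] = combined.get(key, 0.0) + pa * pb
    # Normalize by the non-conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

m_temp = {"fire": 0.7, "any": 0.3}                      # high temperature reading
m_hum = {"fire": 0.6, "no_fire": 0.1, "any": 0.3}       # low humidity reading
fused = combine(m_temp, m_hum)                          # belief in "fire" rises
```

Fusing the two sensors concentrates mass on "fire" beyond what either sensor assigns alone, which is why DST beats the single-threshold comparison algorithm on undetected fires despite its extra complexity.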
ISBN:
(Print) 9781467385169
With the popularization and application of computer technology, digital information fills every corner of human society. Manual management of digital information can no longer keep pace with the development of society; therefore, efficient management and accurate positioning of massive amounts of information has become a research hotspot for many research groups. The work proposed in this paper is a two-stage information extraction method based on singular value decomposition. The approach applies singular value decomposition to the training matrix, modifies the eigenvalues with a comparison algorithm, generates a new training matrix, and then uses principal component analysis for secondary data processing. Our approach not only reduces the redundant information and the dimension of the original data, but also eliminates the correlation between the input variables. Using extensive simulations and a number of experiments involving proportional data, we show the merits and the accuracy of the proposed approach.