This paper examines how the use of multiple platforms is tied to awareness of algorithms. It builds on the premise that users interact with ecologies or environments of technologies rather than with single platforms. It also supplements work on algorithmic awareness with a mixed-method design that accounts for how Costa Rican users of Netflix and Spotify understood and related to the algorithms of these platforms. The study combined a survey of 258 participants with 21 in-depth semi-structured interviews. Findings demonstrate that multi-platform users were more aware of algorithms and carried out more practical actions to obtain algorithmic recommendations than single-platform users. Although user type did not predict participants' attitudes towards algorithmic recommendations, higher levels of awareness were associated with more positive attitudes towards algorithms. The study also shows that differences in levels of awareness explained users' emotional arousal derived from algorithms.
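To make the reported association concrete, the following is a minimal analysis sketch in Python: it regresses a synthetic, stand-in attitude score on an awareness score while controlling for single- vs. multi-platform use, mirroring the kind of relationship described above. The column names, scales, and data are illustrative assumptions, not the study's instruments or results.

```python
# Hypothetical re-analysis sketch: does algorithmic awareness predict attitudes
# towards recommendations, controlling for single- vs. multi-platform use?
# Column names and the synthetic data below are illustrative, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 258  # survey sample size reported in the abstract
df = pd.DataFrame({
    "awareness": rng.integers(1, 6, n),               # 1-5 Likert-style score
    "user_type": rng.choice(["single", "multi"], n),  # platform-use group
})
# Synthetic outcome: attitudes rise with awareness (stand-in for the real survey data)
df["attitude"] = 2 + 0.4 * df["awareness"] + rng.normal(0, 1, n)

model = smf.ols("attitude ~ awareness + C(user_type)", data=df).fit()
print(model.summary().tables[1])  # coefficient table: awareness vs. user_type effects
```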
Near-term quantum devices promise to revolutionize quantum chemistry, but simulations using the current noisy intermediate-scale quantum (NISQ) devices are not practical due to their high susceptibility to errors. This motivated the design of NISQ algorithms leveraging classical and quantum resources. While several developments have shown promising results for ground-state simulations, extending the algorithms to excited states remains challenging. This paper presents two cost-efficient excited-state algorithms inspired by the classical Davidson algorithm. We implemented the Davidson method into the quantum self-consistent equation-of-motion unitary coupled-cluster (q-sc-EOM-UCC) excited-state method adapted for quantum hardware. The circuit strategies for generating desired excited states are discussed, implemented, and tested. We demonstrate the performance and accuracy of the proposed algorithms (q-sc-EOM-UCC/Davidson and its variational variant) by simulations of H2, H4, LiH, and H2O molecules. Similar to the classical Davidson scheme, the q-sc-EOM-UCC/Davidson algorithms are capable of targeting a small number of excited states of the desired character.
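For readers unfamiliar with the classical scheme the quantum algorithms build on, below is a minimal NumPy sketch of Davidson diagonalization targeting the lowest few eigenvalues of a diagonally dominant symmetric matrix. It illustrates only the projection/correction loop; it is not the q-sc-EOM-UCC implementation, and the test matrix is arbitrary.

```python
# Classical Davidson diagonalization for the lowest k eigenvalues of a large,
# diagonally dominant symmetric matrix -- the scheme the quantum algorithms adapt.
import numpy as np

def davidson(A, k=2, tol=1e-8, max_iter=100):
    n = A.shape[0]
    V = np.eye(n, k)                      # initial guess: unit vectors
    diag = np.diag(A)
    for _ in range(max_iter):
        V, _ = np.linalg.qr(V)            # orthonormalize the search subspace
        H = V.T @ A @ V                   # project A into the subspace
        vals, vecs = np.linalg.eigh(H)
        ritz_vecs = V @ vecs[:, :k]       # Ritz vectors for the k lowest roots
        new_dirs = []
        for i in range(k):
            r = A @ ritz_vecs[:, i] - vals[i] * ritz_vecs[:, i]   # residual
            if np.linalg.norm(r) > tol:
                # Diagonal (Davidson) preconditioner for the correction vector
                new_dirs.append(r / (vals[i] - diag + 1e-12))
        if not new_dirs:                  # all residuals converged
            return vals[:k], ritz_vecs
        V = np.hstack([V] + [d[:, None] for d in new_dirs])
    return vals[:k], ritz_vecs

# Quick check against full diagonalization on a diagonally dominant test matrix
rng = np.random.default_rng(1)
A = np.diag(np.arange(1.0, 101.0)) + 1e-2 * rng.standard_normal((100, 100))
A = (A + A.T) / 2
print(davidson(A, k=3)[0], np.linalg.eigvalsh(A)[:3])
```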
This paper aims to investigate the capabilities of exploiting optical line-of-sight navigation using star trackers. First, a synthetic image simulator is developed to generate realistic images, which is later exploited to test the star tracker's performance. Then, generic considerations regarding attitude estimation are drawn, highlighting how the camera's characteristics influence the accuracy of the estimation. The full attitude estimation chain is designed and analyzed in order to maximize performance in a deep-space cruising scenario. The focus then shifts to the planet-centroiding algorithm itself, with particular emphasis on the illumination compensation routine, which is shown to be fundamental to achieving the required navigation accuracy. The influence of the planet center's position within a single pixel is investigated, showing how this uncontrollable parameter can lower performance. Finally, the complete algorithm chain is tested with the synthetic image simulator in a wide range of scenarios. The promising final results show that, with the selected hardware and even in the higher-noise condition, it is possible to achieve azimuth and elevation errors on the line-of-sight direction on the order of 1-2 arcsec for Venus, and below 1 arcsec for Jupiter, for a spacecraft placed at 1 AU from the Sun. These values allow for a positioning error below 1000 km, which is in line with the current non-autonomous navigation state of the art.
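As a rough illustration of the centroiding step, here is a toy Python sketch that estimates a planet's sub-pixel center on a synthetic frame via background subtraction, thresholding, and an intensity-weighted centroid. The simple background subtraction only stands in for the paper's illumination compensation routine, and all numbers are made up.

```python
# Intensity-weighted centroiding of a planet blob on a synthetic detector frame.
# A toy stand-in for the planet-centroiding step described above.
import numpy as np

def centroid(frame, threshold_sigma=3.0):
    bkg = np.median(frame)                  # crude background estimate
    noise = np.std(frame)
    img = np.clip(frame - bkg, 0.0, None)   # remove background pedestal
    mask = img > threshold_sigma * noise    # keep only bright planet pixels
    ys, xs = np.nonzero(mask)
    w = img[mask]
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()

# Synthetic frame: Gaussian planet blob centered at (120.3, 87.6) plus read-out noise
rng = np.random.default_rng(2)
y, x = np.mgrid[0:256, 0:256]
frame = 500 * np.exp(-((x - 120.3) ** 2 + (y - 87.6) ** 2) / (2 * 4.0 ** 2))
frame += rng.normal(100, 5, frame.shape)
print(centroid(frame))   # should land close to (120.3, 87.6)
```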
COVID-19 is a contagious disease, and its several variants have put all walks of life and the economy under stress. Diagnosis of the virus is a crucial task to prevent its spread, as it is a threat to life around the whole world. However, with the advancement of technology, the Internet of Things (IoT), and social IoT (SIoT), the versatile data produced by smart devices have helped a lot in overcoming this lethal disease. Data mining is a technique that can be used for extracting useful information from massive data. In this study, we used five supervised ML strategies to create a model to analyze and forecast the presence of COVID-19 using the Kaggle dataset "COVID-19 Symptoms and Presence." RapidMiner Studio ML software was used to apply the Decision Tree (DT), Random Forest (RF), K-Nearest Neighbors (K-NN), Naive Bayes (NB), and Integrated Decision Tree (ID3) algorithms. To develop the model, the performance of each algorithm was tested using 10-fold cross-validation and compared on the major accuracy measures, Cohen's kappa statistic, correctly or incorrectly classified cases, and root mean square error. The results demonstrate that DT outperforms the other methods, with an accuracy of 98.42% and a root mean square error of ***. In the future, the devised model will be highly recommendable and supportive for early prediction/diagnosis of the disease when provided with different data sets.
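A hedged sketch of the evaluation protocol follows: 10-fold cross-validated accuracy for the named classifiers using scikit-learn. Synthetic binary symptom data stand in for the Kaggle file (swap in the real CSV to approach the reported numbers); ID3 is omitted because scikit-learn ships no separate ID3 implementation.

```python
# 10-fold cross-validation comparison of the classifiers named in the abstract.
# Synthetic binary symptom data stand in for the Kaggle "COVID-19 Symptoms and
# Presence" file; replace X, y with the real dataset to reproduce the study.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.integers(0, 2, size=(1000, 20))          # 20 yes/no symptom indicators
y = (X[:, :5].sum(axis=1) + rng.integers(0, 2, 1000) > 3).astype(int)

models = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "K-NN": KNeighborsClassifier(n_neighbors=5),
    "Naive Bayes": BernoulliNB(),
}
for name, clf in models.items():
    scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
    print(f"{name:15s} mean accuracy = {scores.mean():.3f}")
```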
This special section of IEEE/ACM Transactions on Computational Biology and Bioinformatics presents extended versions of some of the best papers accepted at the Eighth International Conference on Algorithms for Computational Biology (AlCoB 2021), held online on November 9-11, 2021, due to the COVID-19 pandemic. The conference was organized by the Department of Computer Science at the University of Montana and the Institute for Research Development, Training and Advice (IRDTA), Brussels/London.
The increased need for data, combined with the emergence of powerful Internet of Things (IoT) devices, has resulted in major security concerns. Choosing an adequate cryptographic algorithm is one such concern, as the decision directly affects the performance of an implementation. Lightweight or tiny ciphers are considered the go-to algorithms for embedded systems and IoT devices. Such ciphers, when properly integrated, are expected to have a minimal effect on overall device utilization and thus provide effective performance. In this paper, we propose a unified analytical framework for lightweight ciphers as implemented within heterogeneous computing environments. The framework considers a carefully identified set of metrics that can adequately capture, rank, and classify the attained performance, so that a designer can make effective evaluations and exact adjustments to an implementation. It uses three decision-making approaches, namely the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), the Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE) II, and Fuzzy TOPSIS. These approaches take into account both hardware and software metrics when deciding on a suitable cryptographic algorithm to adopt. Validation entails a thorough examination and evaluation of several performance classification schemes. The results confirm that the framework is both valid and effective.
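Of the three decision-making approaches, TOPSIS is the most compact to illustrate. The sketch below ranks hypothetical cipher candidates on a mix of benefit and cost metrics; the metric set, weights, and scores are invented for illustration and are not the framework's calibrated values.

```python
# Minimal TOPSIS ranking sketch for cipher candidates scored on mixed hardware/
# software metrics. Metric names, weights, and scores are illustrative only.
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: alternatives x criteria; benefit[j] is True if larger is better."""
    norm = matrix / np.linalg.norm(matrix, axis=0)        # vector normalization
    v = norm * weights                                    # weighted normalized scores
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal, axis=1)            # distance to ideal solution
    d_worst = np.linalg.norm(v - anti, axis=1)            # distance to anti-ideal
    return d_worst / (d_best + d_worst)                   # closeness coefficient

# Columns: throughput (benefit), RAM use, code size, energy per byte (all costs)
scores = np.array([
    [12.0, 1.8, 3.2, 0.9],   # cipher A
    [ 9.5, 1.1, 2.1, 1.2],   # cipher B
    [15.0, 2.6, 4.0, 0.7],   # cipher C
])
weights = np.array([0.4, 0.2, 0.2, 0.2])
cc = topsis(scores, weights, benefit=np.array([True, False, False, False]))
print(np.argsort(cc)[::-1], cc)   # best-to-worst ranking of the three ciphers
```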
The article describes machine learning methods used in modern metallurgy and their role in processing the "big data" generated at metallurgical enterprises. The topic is relevant because artificial intelligence and machine learning are effective in solving problems aimed at improving production processes at various stages of metallurgical processing.
Minimum cut/maximum flow (min-cut/max-flow) algorithms solve a variety of problems in computer vision, and thus significant effort has been put into developing fast min-cut/max-flow algorithms. As a result, it is difficult to choose an ideal algorithm for a given problem. Furthermore, parallel algorithms have not been thoroughly compared. In this paper, we evaluate the state-of-the-art serial and parallel min-cut/max-flow algorithms on the largest set of computer vision problems yet. We focus on generic algorithms, i.e., for unstructured graphs, but also compare with the specialized GridCut implementation. When applicable, GridCut performs best. Otherwise, the two pseudoflow algorithms, Hochbaum pseudoflow and excesses incremental breadth first search, achieve the overall best performance. The most memory-efficient implementation tested is the Boykov-Kolmogorov algorithm. Amongst generic parallel algorithms, we find the bottom-up merging approach by Liu and Sun to be best, but no method is dominant. Of the generic parallel methods, only the parallel preflow push-relabel algorithm is able to scale efficiently with many processors across problem sizes, and no generic parallel method consistently outperforms the serial algorithms. Finally, we provide and evaluate strategies for algorithm selection to obtain good expected performance. We make our dataset and implementations publicly available for further research.
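As a small usage illustration (not the benchmark code), the following builds a tiny segmentation-style s-t graph and solves it with NetworkX's Boykov-Kolmogorov implementation, one of the serial algorithms compared above. The graph, capacities, and node labels are arbitrary.

```python
# Tiny min-cut/max-flow example with the Boykov-Kolmogorov algorithm via NetworkX.
import networkx as nx
from networkx.algorithms.flow import boykov_kolmogorov

# 2x2 pixel grid with a source "s" and sink "t", as in a binary segmentation graph
G = nx.DiGraph()
terminal_caps = {0: (9, 1), 1: (7, 3), 2: (2, 8), 3: (1, 9)}   # (s-link, t-link)
for pixel, (cap_s, cap_t) in terminal_caps.items():
    G.add_edge("s", pixel, capacity=cap_s)
    G.add_edge(pixel, "t", capacity=cap_t)
for u, v in [(0, 1), (1, 0), (0, 2), (2, 0), (1, 3), (3, 1), (2, 3), (3, 2)]:
    G.add_edge(u, v, capacity=2)                               # pairwise smoothness links

cut_value, (source_side, sink_side) = nx.minimum_cut(
    G, "s", "t", flow_func=boykov_kolmogorov)
print(cut_value, sorted(source_side - {"s"}), sorted(sink_side - {"t"}))
```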
To efficiently develop and utilize unconventional energy, it is essential to investigate the characteristics of 3D crack network expansion during sandstone hydraulic fracturing (HF). This paper presents the results of physical simulation experiments of sandstone HF under varying shear stress levels, in which CT image data were acquired to enable 3D reconstruction of the crack network and the development of algorithms for its identification. The graph-theory representation of the crack network was improved by using the intersections and extensions of crack branch elements as precursors for the two vertex sets U and V of a bipartite graph G = (U, V, E), extending the topological structure representation to the 3D crack network. Using topology and fractal theory, a quantitative characterization method for the crack network was developed that comprehensively considers topological structure parameters and fractal dimension. The findings showed that the fractal dimension of the cracks increased linearly with the shear stress level, while the parameters of the crack topological structure first increased and then decreased.
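To show the flavor of the bipartite representation G = (U, V, E), here is a schematic NetworkX sketch in which U holds crack branch elements and V holds intersection/extension points; this assignment and the toy incidence data are assumptions for illustration, not the paper's reconstruction pipeline.

```python
# Schematic bipartite crack-network graph G = (U, V, E): U = crack branch
# elements, V = their intersection/extension points (assumed reading, toy data).
import networkx as nx
from networkx.algorithms import bipartite

branches = ["b1", "b2", "b3", "b4"]                 # U: crack branch elements
points = ["i1", "i2", "e1"]                         # V: intersections / extension tips
incidence = [("b1", "i1"), ("b2", "i1"), ("b2", "i2"),
             ("b3", "i2"), ("b3", "e1"), ("b4", "i2")]

G = nx.Graph()
G.add_nodes_from(branches, bipartite=0)
G.add_nodes_from(points, bipartite=1)
G.add_edges_from(incidence)

# Simple topological descriptors of the toy network
print("branch / point counts:", len(branches), len(points))
print("edges:", G.number_of_edges())
print("mean branches per intersection point:",
      sum(G.degree(p) for p in points) / len(points))
print("is bipartite:", bipartite.is_bipartite_node_set(G, set(branches)))
```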
The use of peak-picking algorithms is an essential step in all nontarget analysis (NTA) workflows. However, algorithm choice may influence the reliability and reproducibility of results. Using a real-world data set, this study investigated how different peak-picking algorithms influence NTA results when exploring temporal and/or spatial trends. For this, drinking water catchment monitoring data, collected with passive samplers twice per year across Southeast Queensland, Australia (n = 18 sites) between 2014 and 2019, were investigated. Data were acquired using liquid chromatography coupled to high-resolution mass spectrometry. Peak picking was performed using five different programs/algorithms (SCIEX OS, MSDial, self-adjusting-feature-detection, and two algorithms within MarkerView), keeping parameters identical whenever possible. The resulting feature lists revealed low overlap: 7.2% of features were picked by >3 algorithms, while 74% of features were picked by only a single algorithm. Trend evaluation of the data, using principal component analysis, showed significant variability between the approaches, with only one temporal and no spatial trend being identified by all algorithms. Manual evaluation of features of interest (p-value <0.01, log fold change >2) for one sampling site revealed high rates of incorrectly picked peaks (>70%) for three algorithms. Lower rates (<30%) were observed for the other algorithms, but with the caveat that not all internal standards used as quality control were successfully picked. The choice is therefore currently between comprehensive and strict peak picking, resulting in either increased noise or missed peaks, respectively. Reproducibility of NTA results remains challenging when applied for regulatory frameworks.
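The overlap bookkeeping described above can be sketched in a few lines of Python: count how many algorithms picked each feature and report the shares picked by at least three versus only one. The feature IDs and lists are invented; real features would first require m/z and retention-time alignment across tools.

```python
# Sketch of cross-algorithm feature-overlap counting. Feature IDs are illustrative.
from collections import Counter

feature_lists = {
    "SCIEX OS":    {"F1", "F2", "F3", "F7"},
    "MSDial":      {"F2", "F3", "F8"},
    "SAFD":        {"F3", "F9"},
    "MarkerView1": {"F1", "F3", "F10"},
    "MarkerView2": {"F3", "F11", "F12"},
}

counts = Counter(f for feats in feature_lists.values() for f in feats)
n_total = len(counts)
picked_by_many = sum(1 for c in counts.values() if c >= 3)
picked_by_one = sum(1 for c in counts.values() if c == 1)
print(f"features picked by >=3 algorithms: {picked_by_many / n_total:.1%}")
print(f"features picked by a single algorithm: {picked_by_one / n_total:.1%}")
```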