At the Unit Laka Lantas of Polres Lhokseumawe, accident-prone areas on the roads of Lhokseumawe are still determined manually. This manual determination is less effective because the unit uses only the previous year's data and must repeatedly calculate accident-prone areas by hand from the number of human casualties. In this study, a system implementing the CUSUM (cumulative sum) method for determining accident-prone areas was designed using the web-based programming language PHP. The system comprises three processes for determining accident-prone areas: calculation of the accident rate based on weighted severity, a black-site calculation using the Z-score method to identify accident-prone areas, and a blackspot calculation using the CUSUM method to determine the critical points of accidents. The data used are secondary data obtained from the Unit Laka Lantas of Polres Lhokseumawe for the years 2009-2013.
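As a rough illustration of the two statistics named above (a minimal Python sketch, not the paper's PHP system; the yearly weighted accident counts are hypothetical, and the paper's thresholding conventions may differ):

```python
import statistics

def z_scores(values):
    """Standardize counts: z_i = (x_i - mean) / stdev; a high z flags a black site."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [(x - mean) / stdev for x in values]

def cusum(values):
    """Cumulative sum of deviations from the mean: S_0 = 0, S_i = S_{i-1} + (x_i - mean)."""
    mean = statistics.mean(values)
    s, out = 0.0, []
    for x in values:
        s += x - mean
        out.append(s)
    return out

# Hypothetical weighted-severity accident counts for one road segment, 2009-2013.
counts = [12, 18, 9, 22, 15]
print(z_scores(counts))  # values well above 0 mark above-average years
print(cusum(counts))     # a rising positive running sum marks an emerging blackspot
```

A segment whose running CUSUM crosses a chosen critical value would be flagged as a blackspot; the choice of that critical value is the paper's and is not reproduced here.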
This paper was retracted by IOP Publishing on 12 December 2018. This paper was published due to a technical error and was not intended to be included in this journal. Retraction published: 8 February 2019
The selection and prioritization of software requirements represent an area of interest in Search-Based Software Engineering (SBSE), whose main focus is finding and selecting a set of requirements that may be part o...
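The abstract is cut off above, but a common formulation of this problem in SBSE is requirement selection under a cost budget. A minimal sketch (the requirement names, values, and costs below are hypothetical, and real SBSE tools use metaheuristics such as genetic algorithms rather than exhaustive search):

```python
from itertools import combinations

# Hypothetical requirements: (name, stakeholder value, implementation cost).
reqs = [("R1", 30, 10), ("R2", 20, 8), ("R3", 45, 15), ("R4", 10, 4)]
budget = 25

# Exhaustive search over all subsets: fine for tiny n, exponential in general.
best_value, best_set = 0, ()
for r in range(len(reqs) + 1):
    for subset in combinations(reqs, r):
        cost = sum(c for _, _, c in subset)
        value = sum(v for _, v, _ in subset)
        if cost <= budget and value > best_value:
            best_value, best_set = value, subset

print([name for name, _, _ in best_set], best_value)  # ['R1', 'R3'] 75
```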
ISBN (print): 9781509003648
The enormous number of short reads generated by new DNA sequencing technologies calls for the development of fast and accurate read alignment programs. A first generation of hash-table-based methods has been developed, including Mapping and Assembly with Quality (MAQ), which is accurate, feature-rich, and fast enough to align short reads from a single individual. However, MAQ does not support gapped alignment for single-end reads, which makes it unsuitable for aligning longer reads where indels may occur frequently. The speed of MAQ is also a concern when alignment is scaled up to the resequencing of hundreds of individuals. We therefore carried out an in-depth performance analysis of BWA, a popular BWT-based aligner, and found that its performance is significantly better than MAQ's, although it has drawbacks regarding execution speed, time complexity, and accuracy. Based on those factors, we implemented an improved Burrows-Wheeler Alignment algorithm (BWA), a new read alignment package in which the original BWT is optimized with a Lempel-Ziv (LZ77) sliding-window technique and prefix-trie string matching, to efficiently search for exact and inexact matches on tandem repeats against a large reference genome. Our analysis shows that the search speed of the improved BWA is approximately 1.40× faster than MAQ-32 while achieving higher accuracy, with confidence levels of 96.7% and 93.0%. Moreover, it is more efficient at searching for exact and inexact matches, with error rates of 0.05% for single-end and 0.04% for paired-end reads, and more effective at searching for left- and right-overlap tandem repeats, at a confidence level of 88.9%.
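For readers unfamiliar with BWT-based alignment: the core primitive in BWA-style aligners is backward search over the Burrows-Wheeler transform (FM-index) of the reference. A minimal, unoptimized Python sketch of exact matching (this is the textbook algorithm, not the improved method described above):

```python
def bwt(text):
    """Burrows-Wheeler transform via sorted rotations; $ is the end sentinel."""
    text += "$"
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rot[-1] for rot in rotations)

def backward_search(bwt_str, pattern):
    """Count exact occurrences of pattern using FM-index backward search."""
    # C[c]: number of characters in the text lexicographically smaller than c.
    sorted_chars = sorted(bwt_str)
    C = {c: sorted_chars.index(c) for c in set(bwt_str)}

    def occ(c, i):  # occurrences of c in bwt_str[:i] (linear scan for clarity)
        return bwt_str[:i].count(c)

    lo, hi = 0, len(bwt_str)
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo = C[c] + occ(c, lo)
        hi = C[c] + occ(c, hi)
        if lo >= hi:
            return 0
    return hi - lo

ref = "GATTACATTAC"
print(backward_search(bwt(ref), "TTAC"))  # -> 2
```

A production aligner replaces the linear-time `occ` scan with sampled rank tables so that each query character costs O(1).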
Background: Experimental and epidemiological studies indicate an association between exposure to particulate matter (PM) air pollution and increased risk of type 2 diabetes. In view of the high and increasing prevalence of diabetes, we aimed to quantify the burden of type 2 diabetes attributable to PM2·5 originating from ambient and household air pollution. Methods: We systematically compiled all relevant cohort and case-control studies assessing the effect of exposure to household and ambient fine particulate matter (PM2·5) air pollution on type 2 diabetes incidence and mortality. We derived an exposure–response curve from the extracted relative risk estimates using the MR-BRT (meta-regression, Bayesian, regularised, trimmed) tool. The estimated curve was linked to ambient and household PM2·5 exposures from the Global Burden of Diseases, Injuries, and Risk Factors Study 2019, and estimates of the attributable burden (population attributable fractions and rates per 100 000 population of deaths and disability-adjusted life-years) for 204 countries from 1990 to 2019 were calculated. We also assessed the role of changes in exposure, population size, age, and type 2 diabetes incidence in the observed trend in PM2·5-attributable type 2 diabetes burden. All estimates are presented with 95% uncertainty intervals. Findings: In 2019, approximately a fifth of the global burden of type 2 diabetes was attributable to PM2·5 exposure, with an estimated 3·78 (95% uncertainty interval 2·68–4·83) deaths per 100 000 population and 167 (117–223) disability-adjusted life-years (DALYs) per 100 000 population. Approximately 13·4% (9·49–17·5) of deaths and 13·6% (9·73–17·9) of DALYs due to type 2 diabetes were contributed by ambient PM2·5, and 6·50% (4·22–9·53) of deaths and 5·92% (3·81–8·64) of DALYs by household air pollution. High burdens, in terms of numbers as well as rates, were estimated in Asia, sub-Saharan Africa, and South America.
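As a simplified illustration of how a population attributable fraction (PAF) converts relative risks and exposure prevalence into attributable burden (the exposure shares and relative risks below are hypothetical; GBD links a continuous exposure-response curve rather than discrete levels):

```python
def paf(prevalence_rr):
    """Population attributable fraction for categorical exposure levels:
    PAF = sum(p_i * (RR_i - 1)) / (1 + sum(p_i * (RR_i - 1)))."""
    excess = sum(p * (rr - 1.0) for p, rr in prevalence_rr)
    return excess / (1.0 + excess)

# Hypothetical exposure distribution: (share of population, relative risk).
levels = [(0.50, 1.10), (0.30, 1.25), (0.10, 1.40)]
print(f"PAF = {paf(levels):.1%}")  # ~14.2%
# Attributable deaths = PAF x total type 2 diabetes deaths in the population.
```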
ISBN (print): 9781509003648
The synthesis reaction used in esterification requires high energy consumption and a precise processing time to obtain the best yield of the target product. In this study, a model was formulated to optimize the glycerol esterification process by minimizing the time needed for the process and maximizing the yield of monoglycerides. This optimization has gained importance for boosting the esterification industry and improving production efficiency. Optimization through adaptive monitoring and control has provided significant advances in process efficiency, lower energy consumption, and better product quality. This paper presents optimization with a computational algorithm for real-time adaptive control (RTAC), compared with conventional (traditional) methods for monitoring and controlling glycerol esterification processes. Identification of esterification status based on temperature and time is evaluated to strengthen the optimization. An adaptive feature-selection method using the Relief algorithm to select IR sensor wavelengths at specified intervals, together with an adaptive Pillar K-means clustering method to set the control parameters, is proposed in this paper. Many combinations of real-time process conditions were evaluated to achieve the best optimization results. The experimental results demonstrate that real-time adaptive control can be achieved using three clusters: heating up, stabilizing, and finishing. In RTAC, each cluster has its own parameters for setting the control point via the servo motor attached to the magnetic stirrer-heater. Using optimized parameters for each cluster, the esterification process time can be shortened by 15-20 minutes with a higher yield (7% or more), a lower stirrer-rotation range (300-450 rpm), and a lower final temperature of 200-210 °C.
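As a rough sketch of the clustering step (plain k-means rather than the adaptive Pillar variant named above; the temperature/time samples are hypothetical):

```python
import random

def kmeans(points, k=3, iters=50, seed=0):
    """Plain k-means on (temperature, elapsed-minutes) samples."""
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: (p[0] - centroids[j][0]) ** 2
                                  + (p[1] - centroids[j][1]) ** 2)
            clusters[i].append(p)
        centroids = [
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            if c else centroids[j]
            for j, c in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical (temperature in degC, elapsed minutes) readings spanning the
# heating-up, stabilizing, and finishing phases of one esterification run.
samples = [(120, 5), (140, 10), (160, 15), (190, 30), (195, 40),
           (200, 50), (205, 70), (208, 80), (210, 90)]
centroids, clusters = kmeans(samples)
print(centroids)  # one centroid per phase -> per-phase control set-points
```

In the paper's scheme, each resulting cluster would carry its own control parameters (stirrer speed, heater set-point) applied by the servo motor.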
Background: In an era of shifting global agendas and expanded emphasis on non-communicable diseases and injuries alongside communicable diseases, sound evidence on trends by cause at the national level is essential. The Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) provides a systematic scientific assessment of published, publicly available, and contributed data on incidence, prevalence, and mortality for a mutually exclusive and collectively exhaustive list of diseases and injuries. Methods: GBD estimates incidence, prevalence, mortality, years of life lost (YLLs), years lived with disability (YLDs), and disability-adjusted life-years (DALYs) due to 369 diseases and injuries, for two sexes, and for 204 countries and territories. Input data were extracted from censuses, household surveys, civil registration and vital statistics, disease registries, health service use, air pollution monitors, satellite imaging, disease notifications, and other sources. Cause-specific death rates and cause fractions were calculated using the Cause of Death Ensemble model and spatiotemporal Gaussian process regression. Cause-specific deaths were adjusted to match the total all-cause deaths calculated as part of the GBD population, fertility, and mortality estimates. Deaths were multiplied by standard life expectancy at each age to calculate YLLs. A Bayesian meta-regression modelling tool, DisMod-MR 2.1, was used to ensure consistency between incidence, prevalence, remission, excess mortality, and cause-specific mortality for most causes. Prevalence estimates were multiplied by disability weights for mutually exclusive sequelae of diseases and injuries to calculate YLDs. We considered results in the context of the Socio-demographic Index (SDI), a composite indicator of income per capita, years of schooling, and fertility rate in females younger than 25 years. Uncertainty intervals (UIs) were generated for every metric using the 25th and 975th ordered 1000 draw values of the posterior distribution.
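As a back-of-the-envelope illustration of the YLL/YLD/DALY arithmetic described above (all numbers hypothetical):

```python
def yll(deaths, life_expectancy_at_death):
    """Years of life lost: deaths x standard life expectancy at age of death."""
    return deaths * life_expectancy_at_death

def yld(prevalent_cases, disability_weight):
    """Years lived with disability: prevalence x disability weight."""
    return prevalent_cases * disability_weight

# Hypothetical cause in one age group: 1,000 deaths at an age with 30 remaining
# years of standard life expectancy, plus 50,000 prevalent cases at weight 0.12.
dalys = yll(1_000, 30.0) + yld(50_000, 0.12)
print(dalys)  # 36,000 DALYs = 30,000 YLLs + 6,000 YLDs
```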
Hospital El Salvador: a novel paradigm of intensive care in response to COVID-19 in central America. Lancet Glob Health 2021; 9: e241-42. In this Comment, the conflict of interest statement should have included the following: "By virtue of their roles within a public hospital or the Ministry of Health, MB, LC, WH, and XS are government employees. The findings and conclusions in the Comment are only those of the authors." This correction has been made as of Feb 26, 2021.
Visual usability is now a must for mobile applications, and there is growing demand for designs that are dynamically personalized to both aesthetics and personal preference, easing use for people with high mobility. When designing a user interface, designers focus on visual usability, user needs, and user preferences. Each user has different preferences about interface design because of their personality. In this paper, we propose a visual usability design for mobile applications based on user personality. Our approach is based on psychological aspects that allow personal preferences to be mapped to the visual usability of preferred experiences. The objectives of this paper are to identify and formulate rules for mobile application interface design based on user personality. Kansei engineering is a method for capturing user needs and preferences based on the user's senses and cognition. Association rule mining with the bond measure is a method for finding relationships between user preferences and user personality.
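As a rough sketch of the rule-mining step (the transactions are hypothetical; the bond measure below uses one common definition, support of the conjunction over support of the disjunction, which may differ from the paper's):

```python
# Hypothetical transactions: one user's personality trait plus preferred design traits.
transactions = [
    {"extravert", "bright-colors", "large-icons"},
    {"extravert", "bright-colors"},
    {"introvert", "muted-colors", "large-icons"},
    {"introvert", "muted-colors"},
    {"extravert", "large-icons"},
]

def support(itemset):
    """Fraction of transactions containing every item in itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """How often the consequent holds given the antecedent."""
    return support(antecedent | consequent) / support(antecedent)

def bond(x, y):
    """Support of the conjunction over support of the disjunction (Jaccard-like)."""
    both = sum((x | y) <= t for t in transactions)
    either = sum(bool(x & t or y & t) for t in transactions)
    return both / either

rule = ({"extravert"}, {"bright-colors"})
print(support(rule[0] | rule[1]))  # 0.4
print(confidence(*rule))           # ~0.667: extraverts tend to prefer bright colors
print(bond(*rule))                 # ~0.667
```

Rules with high confidence and bond would then be formulated as interface design rules per personality type.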
The size of embedded software is increasing at a rapid pace. Designing hardware that supports the large amount of functionality required by software has become challenging and time-consuming. Code compression is one way to mitigate this problem. This paper presents an innovative and efficient approach to code compression. Our method reduces code size by up to 31% (including all extra costs). We performed simulations and analyses using applications from the MiBench benchmark on two embedded processors (ARM and MIPS). Our method is orthogonal to approaches that exploit the particularities of a given instruction set architecture, making it independent of any specific architecture. We implemented the decompressor in VHDL on an FPGA, and the decompression process takes only one clock cycle.
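The abstract does not spell out the encoding, so as a generic illustration only, here is a minimal dictionary-based code compression scheme of the kind often used in ISA-independent approaches (the instruction words are hypothetical):

```python
from collections import Counter

# Hypothetical stream of 32-bit instruction words (real code repeats heavily).
code = [0xE3A00001, 0xE3A00001, 0xE5901000, 0xE3A00001, 0xE5901000, 0xEAFFFFFE]

# Build a dictionary of the most frequent words; encode each word either as a
# short dictionary index or as an escaped full-width literal.
DICT_SIZE = 2
dictionary = [w for w, _ in Counter(code).most_common(DICT_SIZE)]

def compress(words):
    out = []
    for w in words:
        if w in dictionary:
            out.append(("idx", dictionary.index(w)))  # a few bits in hardware
        else:
            out.append(("lit", w))                    # escape bit + full word
    return out

def decompress(tokens):
    # In hardware this lookup is a small ROM read: one cycle per token.
    return [dictionary[v] if kind == "idx" else v for kind, v in tokens]

assert decompress(compress(code)) == code
```

Because the dictionary is built from word frequencies alone, the scheme needs no knowledge of the instruction set, which matches the architecture-independence claim above.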