Objective Despite the high risk of dementia (mainly AD) in aged populations and the medical burden of caring for and treating this disease, the accumulated evidence still cannot establish a direct and complete correlation between the related risk factors and the pathophysiological mechanisms of late-onset AD; this may be a key problem for the prevention and control of AD. The project was designed to characterize the risk factors and biochemical underpinnings of mild cognitive impairment (MCI) and dementia (mainly AD) in the first step of the survey, conducted among residents aged ≥ 65 in multiple communities of Hubei, China. By the end of the project, we will establish an AD risk early-warning model suitable for China and explore new blood polypeptide biomarkers of AD onset. Methods We are launching the Hubei Aging Project (HAP), a cohort study of non-demented people aged 65–85. Given the exploratory nature of this project, multidimensional and machine learning techniques are applied in addition to traditional multivariate statistical methods. Standardised assessments of mental and physical health and of cognitive function are carried out, including informant interviews. Besides health status, medical and family history, demographic and socio-cultural information are explored, as well as education, habitat network, and social behavior. Data on motor function, including balance, gait, limits of stability, and history of falls and accidents, are further detailed. Finally, biological examinations, including ApoE genetic polymorphism and amyloid peptide in the blood, are carried out in addition to standard blood parameters. Diagnoses of dementia and MCI are made using standard criteria via consensus diagnosis. Results A total of 3469 subjects were recruited between October 2014 and December 2016. Mean age was 74.4 years (SD 3.9), and 63.5% of the subjects were women. Cognitive diagnoses at inclusion were as follows: normal cognition 93.0% and mild
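The risk early-warning model described above could, in its simplest form, combine candidate risk factors in a logistic model. The sketch below is purely illustrative: the feature names and coefficient values are invented for the example and are not taken from the HAP study.

```python
import math

# Hypothetical sketch of a logistic risk early-warning model of the kind
# the project aims to build. All coefficients below are placeholders; a
# real model would be fit on the cohort data.
def ad_risk_score(age, years_education, apoe4_carrier, amyloid_level,
                  coefs=None):
    """Return an illustrative probability of progression to MCI/AD."""
    if coefs is None:
        coefs = {"intercept": -6.0, "age": 0.07, "education": -0.05,
                 "apoe4": 1.2, "amyloid": 0.8}
    z = (coefs["intercept"]
         + coefs["age"] * age
         + coefs["education"] * years_education
         + coefs["apoe4"] * (1 if apoe4_carrier else 0)
         + coefs["amyloid"] * amyloid_level)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link maps z to (0, 1)
```

With these placeholder weights, an ApoE4 carrier scores strictly higher than a non-carrier with otherwise identical covariates, which is the qualitative behavior such a model is meant to capture.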
Decision support in solving problems related to complex systems requires relevant computation models for the agents as well as methods for reasoning about the properties of computations performed by agents. Agents perform computations on complex objects [e.g., (behavioral) patterns, classifiers, clusters, structural objects, sets of rules, aggregation operations, (approximate) reasoning schemes]. In Granular Computing (GrC), all such constructed and/or induced objects are called granules. To model interactive computations performed by agents, crucial for complex systems, we extend the existing GrC approach to the Interactive Granular Computing (IGrC) approach by introducing complex granules (c-granules, or granules for short). Many advanced tasks concerning complex systems may be classified as control tasks performed by agents aiming at achieving high-quality computational trajectories relative to the considered quality measures defined over the trajectories. Here, the new challenges are to develop strategies to control, predict, and bound the behavior of the system. We propose to investigate these challenges using the IGrC framework. The reasoning that aims at controlling computations to achieve the required targets is called adaptive judgement. This reasoning deals with granules and computations over them. Adaptive judgement is more than a mixture of reasoning based on deduction, induction, and abduction. Due to uncertainty, the agents generally cannot predict exactly the results of actions (or plans). Moreover, the approximations of the complex vague concepts initiating actions (or plans) drift with time. Hence, adaptive strategies for evolving approximations of concepts are needed. In particular, adaptive judgement is much needed in the efficiency management of granular computations, carried out by agents, for risk assessment, risk treatment, and cost/benefit analysis. In the paper, we emphasize the role of the rough set-based metho
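The rough set-based methods the abstract emphasizes rest on approximating a vague concept from below and above by equivalence classes of indiscernible objects. A minimal sketch, using an invented toy universe:

```python
# Minimal sketch of rough-set lower/upper approximations. The universe,
# attribute values, and target concept below are toy data invented for
# the example.
def approximations(objects, indiscernibility, concept):
    """Lower/upper approximation of `concept` under the equivalence
    keyed by indiscernibility(x): equal keys mean indiscernible objects."""
    classes = {}
    for x in objects:
        classes.setdefault(indiscernibility(x), set()).add(x)
    lower, upper = set(), set()
    for cls in classes.values():
        if cls <= concept:
            lower |= cls   # class entirely inside the concept: certain
        if cls & concept:
            upper |= cls   # class overlapping the concept: possible
    return lower, upper

# Toy universe: four objects described by a single attribute.
attrs = {"a": 1, "b": 1, "c": 2, "d": 3}
concept = {"a", "c"}
low, up = approximations(attrs, lambda x: attrs[x], concept)
# low == {"c"}; up == {"a", "b", "c"}; the boundary {"a", "b"} is
# where the concept stays vague.
```

The gap between the two approximations (the boundary region) is exactly where the "complex vague concepts" discussed above cannot be decided with certainty, which is what motivates the adaptive strategies for evolving approximations.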
ISBN (print): 9781510806481
Big data is affecting our work, our lives, and even the economy and the development of society. The big data era has already arrived. This paper first sketches the basic concepts of big data, its typical "4V" characteristics, and related application fields. Next, the paper summarizes big data processing technologies. Finally, the paper presents important directions for future big data research and points out the challenges of the big data era.
ISBN (print): 9781479951581
Big data is becoming a new technology focus both in science and in industry and motivates a technology shift to data-centric architecture and operational models. There is a vital need to define the basic information/semantic models, architecture components, and operational models that together comprise a so-called Big Data Ecosystem. This paper discusses the nature of big data, which may originate from different scientific, industry, and social activity domains, and proposes an improved big data definition that includes the following parts: big data properties (also called the big data 5V: Volume, Velocity, Variety, Value, and Veracity), data models and structures, data analytics, infrastructure, and security. The paper discusses the paradigm change from traditional host- or service-based to data-centric architecture and operational models in big data. The Big Data Architecture Framework (BDAF) is proposed to address all aspects of the Big Data Ecosystem and includes the following components: Big Data Infrastructure, Big Data Analytics, data structures and models, Big Data Lifecycle Management, and Big Data Security. The paper analyses the requirements for the above components and provides suggestions for how they can address the main big data challenges. The presented work intends to provide a consolidated view of the big data phenomenon and the related challenges to modern technologies, and to initiate wide discussion.
ISBN (print): 9789663354125
The present paper concerns the analysis of Computer-Aided Software Engineering tools (such as IBM Rational, CA ERwin Modeling Suite, ARIS Toolset, BizAgi, Elma, Power Designer, BPsim, and Borland Together Designer) for automating the development process and change management of typical enterprise business processes. The enterprise information system development is based on big data technology. Simulation is used as a basis for enterprise business process improvement tools.
ISBN (digital): 9781118889398
ISBN (print): 9781118889060
Dig deep into the data with a hands-on guide to machine learning. Machine Learning: Hands-On for Developers and Technical Professionals provides hands-on instruction and fully coded working examples for the most common machine learning techniques used by developers and technical professionals. The book contains a breakdown of each ML variant, explaining how it works and how it is used within certain industries, allowing readers to incorporate the presented techniques into their own work as they follow along. A core tenet of machine learning is a strong focus on data preparation, and a full exploration of the various types of learning algorithms illustrates how the proper tools can help any developer extract information and insights from existing data. The book includes a full complement of instructor's materials to facilitate use in the classroom, making this resource useful for students and as a professional reference. At its core, machine learning is a mathematical, algorithm-based technology that forms the basis of historical data mining and modern big data science. Scientific analysis of big data requires a working knowledge of machine learning, which forms predictions based on known properties learned from training data. Machine Learning is an accessible, comprehensive guide for the non-mathematician, providing clear guidance that allows readers to: learn the languages of machine learning, including Hadoop, Mahout, and Weka; understand decision trees, Bayesian networks, and artificial neural networks; implement Association Rule, Real-Time, and Batch learning; and develop a strategic plan for safe, effective, and efficient machine learning. By learning to construct a system that can learn from data, readers can increase their utility across industries. Machine learning sits at the core of deep-dive data analysis and visualization, which is increasingly in demand as companies discover the goldmine hiding in their existing data. For the tech professional involved in data
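Of the techniques the book covers, association-rule learning is easy to illustrate in a few lines: rules are scored by support (how often the items co-occur) and confidence (how often the consequent follows the antecedent). The transactions below are toy data invented for the sketch, not an example from the book.

```python
# Illustrative association-rule metrics over toy market-basket data.
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"milk", "butter", "bread"},
    {"milk"},
]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(antecedent, consequent):
    """P(consequent | antecedent) estimated from the transactions."""
    return support(antecedent | consequent) / support(antecedent)

# Rule {bread} -> {milk}: {bread, milk} occurs in 2 of 4 baskets and
# {bread} in 3 of 4, so confidence is (2/4) / (3/4) = 2/3.
```

A rule miner such as Apriori enumerates frequent itemsets first and then keeps only rules whose support and confidence clear chosen thresholds; the two functions above are the scoring step of that pipeline.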