Interval-valued hesitant fuzzy preference relations (IVHFPRs) are useful tools that allow decision makers to use several intervals in [0, 1] to express their uncertain, hesitant preferences. To derive a reasonable ranking order from group decision making with preference relations, two issues must be addressed: consistency and consensus. This paper focuses on group decision making with IVHFPRs. First, a multiplicative consistency concept for IVHFPRs is defined. Then, programming models for judging the consistency of IVHFPRs are constructed. Meanwhile, an approach for deriving the interval fuzzy priority weight vector is introduced that takes the consistency probability distribution as its basis. Subsequently, several multiplicative-consistency-based programming models are built for estimating the missing values in incomplete IVHFPRs. A consensus index is introduced to measure the agreement among individual IVHFPRs, and a method for increasing the consensus level is presented. Finally, a multiplicative consistency- and consensus-based group decision-making method with IVHFPRs is offered, and a practical decision-making problem is used to show the application of the new method.
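As background for the consistency notion involved: a standard multiplicative consistency (transitivity) condition for a crisp fuzzy preference relation R = (r_ij), with entries in (0, 1), requires r_ij * r_jk * r_ki = r_ik * r_kj * r_ji for all i, j, k, and such a relation is induced by a priority weight vector w via r_ij = w_i / (w_i + w_j). The Python sketch below checks only this crisp condition; the paper's definition extends it to interval-valued hesitant elements, so treat this as illustrative background rather than the paper's model.

    import itertools

    def multiplicatively_consistent(R, tol=1e-9):
        # Crisp multiplicative transitivity:
        # r_ij * r_jk * r_ki == r_ik * r_kj * r_ji for all i, j, k.
        n = len(R)
        return all(
            abs(R[i][j] * R[j][k] * R[k][i] - R[i][k] * R[k][j] * R[j][i]) <= tol
            for i, j, k in itertools.product(range(n), repeat=3)
        )

    # A relation induced by priority weights is consistent by construction.
    w = [0.5, 0.3, 0.2]
    R = [[wi / (wi + wj) for wj in w] for wi in w]
    print(multiplicatively_consistent(R))  # True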
A new type of fuzzy set, called multiplicative linguistic intuitionistic fuzzy sets (MLIFSs), is introduced to denote the asymmetric qualitative preferred and non-preferred cognitions of decision makers. To support the application of MLIFSs, an order relationship is offered and several operational laws are defined. Then, multiplicative linguistic intuitionistic fuzzy preference relations (MLIFPRs) are introduced. A consistency concept is defined and several of its properties are studied. A programming model for judging the consistency of MLIFPRs is constructed, and an approach to derive consistent MLIFPRs is provided. Meanwhile, a programming model for ascertaining the missing values in incomplete MLIFPRs is built. A consensus index is defined to measure consensus, and a programming model to improve the consensus level is constructed that preserves the consistency of the individual MLIFPRs. Furthermore, a programming model to determine the weights of the decision makers is established. On the basis of the consistency and consensus analysis, an approach to group decision making with MLIFPRs is developed. Finally, an example of evaluating washing machines is used to show the application of the new method.
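A consensus index of this kind typically measures how far each individual preference relation deviates from the group aggregate. The crisp sketch below conveys only the idea; the paper's index operates on multiplicative linguistic intuitionistic terms, so the distance function and normalization here are illustrative assumptions.

    def consensus_index(relations):
        # Average element-wise closeness of each individual preference
        # relation to the group mean; 1.0 means complete agreement.
        m, n = len(relations), len(relations[0])
        mean = [[sum(R[i][j] for R in relations) / m for j in range(n)]
                for i in range(n)]
        deviation = sum(
            abs(R[i][j] - mean[i][j])
            for R in relations for i in range(n) for j in range(n)
        )
        return 1.0 - deviation / (m * n * n)

    # Two identical relations agree completely.
    R1 = [[0.5, 0.7], [0.3, 0.5]]
    print(consensus_index([R1, R1]))  # 1.0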
To exploit the advantages of optical computing and promote the application and popularization of the Ternary Optical Computer (TOC), this paper proposes a new computing-data model, presents the corresponding computing-data file in detail, and describes how to program applications using the model files, aiming to build a bridge between users and the TOC. Experimental results show that the proposed model is correct and the implementation method is feasible. The model simplifies the TOC's application process and allows users to employ the TOC for various computational applications without needing to understand any details of the optical implementation.
ISBN (print): 9781509000890
Hybrid Diversity-aware Collective Adaptive Systems (HDA-CAS) are a new generation of socio-technical systems in which human and machine peers complement each other and operate collectively to achieve their goals. These systems are characterized by the fundamental properties of hybridity and collectiveness, hiding from users the complexities associated with managing the collaboration and coordination of hybrid human/machine teams. In this paper we present the key programming elements of the SmartSociety HDA-CAS platform. We first describe the platform's overall architecture and functionality, then present the concrete programming model elements, Collective-based Tasks (CBTs) and Collectives, describe their properties, and show how they meet the hybridity and collectiveness requirements. We also describe the associated Java language constructs and show how concrete use cases can be encoded with them.
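The platform's actual constructs are Java; the hypothetical Python sketch below only mirrors the general shape of a collective-based task, in which a collective of human and machine peers is provisioned and then executed as a single unit. All names here (Collective, CollectiveBasedTask, execute) are illustrative assumptions, not the SmartSociety API.

    class Collective:
        # A set of human and/or machine peers treated as one unit (hybridity).
        def __init__(self, peers):
            self.peers = peers

    class CollectiveBasedTask:
        # Hypothetical lifecycle: hand the task to every peer in the
        # collective and gather their contributions (collectiveness).
        def __init__(self, description, collective):
            self.description = description
            self.collective = collective

        def execute(self):
            return [peer(self.description) for peer in self.collective.peers]

    human = lambda task: f"human answer to {task!r}"
    bot = lambda task: f"machine answer to {task!r}"
    cbt = CollectiveBasedTask("translate a document", Collective([human, bot]))
    print(cbt.execute())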
ISBN (print): 9781605583921
The client computing platform is moving toward a heterogeneous architecture consisting of a combination of cores focused on scalar performance and a set of throughput-oriented cores. The throughput-oriented cores (e.g., a GPU) may be connected over both coherent and non-coherent interconnects and may have different ISAs. This paper describes a programming model for such heterogeneous platforms. We discuss the language constructs, runtime implementation, and memory model of this programming environment. We implemented the programming environment in an x86 heterogeneous platform simulator, ported a number of workloads to it, and present its performance on those workloads.
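Programming models of this kind typically expose an offload-style construct that marks a function for execution on the throughput-oriented cores while the runtime handles dispatch and data movement. The decorator below is a hypothetical Python illustration of that general pattern, not the construct defined in the paper.

    def offload(device):
        # Hypothetical construct: tag a function for execution on a
        # throughput-oriented device; a real runtime would compile it for
        # the device's ISA and manage data transfer over the interconnect.
        def wrap(fn):
            def run(*args):
                print(f"dispatching {fn.__name__} to {device}")
                return fn(*args)
            return run
        return wrap

    @offload("gpu")
    def saxpy(a, x, y):
        return [a * xi + yi for xi, yi in zip(x, y)]

    print(saxpy(2.0, [1.0, 2.0], [3.0, 4.0]))  # [5.0, 8.0]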
ISBN (print): 9781509026258
Both the number and the diversity of computer-enabled physical objects in our surroundings are rapidly increasing. Such objects offer connectivity and are programmable, which forms the basis for new kinds of cyber-physical computing environments. This has inspired us to propose a programming model called Action-Oriented Programming (AcOP), which focuses on simplifying the creation of applications that build on data sharing and interactions between devices. However, such cooperative multi-device programs pose a huge challenge for security and privacy; therefore, fine-grained control over the publicity of event data and over the actions is required. Fortunately, much can be achieved on the framework side to facilitate developers' use of security- and privacy-enhancing mechanisms. In this paper, we address this challenge in the context of AcOP.
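In an event-driven model like AcOP, one framework-side mechanism is to interpose a policy check between an action publishing an event and the actions on other devices receiving it. The sketch below is a hypothetical illustration of such fine-grained publicity control; the names and policy format are assumptions, not the AcOP API.

    class EventBus:
        # Hypothetical bus that enforces a per-event publicity policy
        # before delivering events to actions subscribed on other devices.
        def __init__(self):
            self.subscribers = []  # (event_name, device, action)

        def subscribe(self, event_name, device, action):
            self.subscribers.append((event_name, device, action))

        def publish(self, event_name, payload, allowed_devices):
            for name, device, action in self.subscribers:
                if name == event_name and device in allowed_devices:
                    action(payload)

    bus = EventBus()
    bus.subscribe("motion", "hall_light", lambda p: print("light on:", p))
    bus.subscribe("motion", "cloud_logger", lambda p: print("logged:", p))
    # Fine-grained publicity: share the motion event with the light only.
    bus.publish("motion", {"room": "hall"}, allowed_devices={"hall_light"})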
Traditional High-Performance Computing (HPC) based big-data applications are usually constrained by having to move large amounts of data to compute facilities for real-time processing. Modern HPC systems, represented by High-Throughput Computing (HTC) and Many-Task Computing (MTC) platforms, instead aim to achieve the long-held goal of moving compute to data. This kind of data-aware scheduling, typified by Hadoop MapReduce, has been successfully implemented in the Map phase, whereby each Map task is sent to the compute node where the corresponding input data chunk is located. However, Hadoop MapReduce limits itself to a one-map-to-one-reduce framework, which makes it difficult to handle complex logic such as pipelines or workflows. It also lacks built-in support and optimization for input datasets that are shared among multiple applications and/or jobs; performance can improve significantly when knowledge of the shared, frequently accessed data is incorporated into scheduling decisions. To enhance workflow management in modern HPC systems, this paper presents CloudFlow, a Hadoop MapReduce based programming model for cloud workflow applications. CloudFlow is built on top of MapReduce and is designed to be not only data-aware but also shared-data-aware: it identifies the most frequently shared data, at both the task and job levels, and replicates it to each compute node for data locality. It also supports multiple user-defined Map and Reduce functions, allowing users to orchestrate the required data-flow logic. We prove the correctness of the whole scheduling framework through theoretical analysis. Furthermore, experimental evaluation shows an execution runtime speedup exceeding 4x over a traditional MapReduce implementation, with a manageable time overhead. (C) 2014 Elsevier B.V. All rights reserved.
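To see why multiple user-defined Map and Reduce functions matter for workflows, the sketch below chains two MapReduce stages in plain Python, with the output of the first reduce feeding the second map. This is a single-process illustration of the data-flow idea only; CloudFlow additionally replicates frequently shared data to each compute node, which a toy like this cannot show.

    from collections import defaultdict

    def map_reduce(records, map_fn, reduce_fn):
        # Minimal in-memory MapReduce: map, shuffle by key, reduce per key.
        groups = defaultdict(list)
        for record in records:
            for key, value in map_fn(record):
                groups[key].append(value)
        return [reduce_fn(key, values) for key, values in groups.items()]

    # Stage 1: word count.
    stage1 = map_reduce(
        ["a b a", "b c"],
        map_fn=lambda line: [(word, 1) for word in line.split()],
        reduce_fn=lambda word, ones: (word, sum(ones)),
    )
    # Stage 2: group words by count (a pipeline, not one-map-one-reduce).
    stage2 = map_reduce(
        stage1,
        map_fn=lambda pair: [(pair[1], pair[0])],
        reduce_fn=lambda count, words: (count, sorted(words)),
    )
    print(stage2)  # [(2, ['a', 'b']), (1, ['c'])]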
High-Performance Reconfigurable Computers (HPRCs) consist of one or more standard microprocessors tightly coupled with one or more reconfigurable FPGAs. HPRCs have been shown to provide good speedups and good cost/performance ratios, but not necessarily ease of use, which has led to slow acceptance of this technology. HPRCs introduce new design challenges, such as the lack of portability across platforms, incompatibilities with legacy code, user reluctance to change existing code bases, a prolonged learning curve, and the need for a system-level hardware/software co-design development flow. This article presents the evolution of and current work on TMD-MPI, which started as an MPI-based programming model for multiprocessor systems-on-chip implemented in FPGAs and has since evolved to include multiple x86 processors. TMD-MPI is shown to address current design challenges in HPRC usage, suggesting that the MPI standard has enough syntax and semantics to program these new types of parallel architectures. Also presented is the TMD-MPI Ecosystem, a set of research projects and tools developed around TMD-MPI to further improve HPRC usability. Finally, we present preliminary communication performance measurements.
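TMD-MPI's claim is that standard MPI rank-based messaging is expressive enough regardless of whether a rank is an x86 process or an FPGA compute engine. The sketch below shows that portable pattern using mpi4py; TMD-MPI itself targets C and FPGA hardware, and mpi4py is used here purely to illustrate the rank-transparent semantics.

    # Run with: mpiexec -n 2 python demo.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        # The sender does not need to know whether rank 1 is a CPU
        # process or, as in TMD-MPI, a compute engine inside an FPGA.
        comm.send([1.0, 2.0, 3.0], dest=1, tag=7)
    elif rank == 1:
        data = comm.recv(source=0, tag=7)
        print("rank 1 received:", data)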
With increasing data sizes in materials science, existing programming models no longer satisfy application requirements. MapReduce is a programming model that enables the easy development of scalable parallel applications to process big data on cloud computing systems. However, this model does not directly support the processing of multiple related datasets, and its processing performance does not reflect the advantages of cloud computing. To enhance the capability of workflow applications in material data processing, we defined MaMR, a programming model for material cloud applications that supports multiple different Map and Reduce functions running concurrently, based on a hybrid shared-memory BSP model. An optimized data-sharing strategy to supply shared data to the different Map and Reduce stages was also designed. We added a new merge phase to MapReduce that can efficiently merge data from the map and reduce modules. Experiments showed that the model and framework deliver substantial performance improvements over previous work. (C) 2016 Elsevier B.V. All rights reserved.
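The merge phase can be pictured as a third stage that joins the outputs of two independent reduce modules by key, instead of writing both to storage and re-reading them in a new job. The sketch below is a plain-Python illustration of that data flow under assumed inputs; MaMR's actual implementation runs the stages concurrently over hybrid shared memory.

    def merge(reduced_a, reduced_b):
        # Hypothetical merge phase: join two reduce outputs on shared keys.
        b_index = dict(reduced_b)
        return {key: (val, b_index[key])
                for key, val in reduced_a if key in b_index}

    # E.g., per-material properties produced by two reduce modules
    # over shared input datasets.
    density = [("Fe", 7.87), ("Cu", 8.96)]
    melting = [("Fe", 1811), ("Al", 933)]
    print(merge(density, melting))  # {'Fe': (7.87, 1811)}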