The authors aim to explore the hidden tag problem in real applications. [...] The authors introduce a general model to show that the results of theoretical analysis are far from real situations.
Consider a system of queuing stations in tandem having both flexible servers (who are capable of working at multiple stations) and dedicated servers (who can only work at the station to which they are dedicated). We study the dynamic assignment of servers to stations in such systems with the goal of maximizing the long-run average throughput. We also investigate how the number of flexible servers influences the throughput and compare the improvement that is obtained by cross-training another server (i.e., increasing flexibility) with the improvement obtained by adding a resource (i.e., a new server or a buffer space). Finally, we show that having only one flexible server is sufficient for achieving near-optimal throughput in certain systems with moderate to large buffer sizes (the optimal throughput is attained by having all servers flexible). Our focus is on systems with generalist servers who are equally skilled at all tasks, but we also consider systems with arbitrary service rates.
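The dynamic-assignment idea above can be illustrated with a toy simulation. This is a minimal sketch under assumed parameters (Bernoulli service at rate 0.5, buffer size 5, a longest-queue assignment policy for the flexible server); it is not the paper's model or its optimal policy:

```python
import random

def simulate(flexible, steps=100_000, buffer=5, seed=0):
    """Toy discrete-time tandem line: station 1 feeds station 2.

    One dedicated server works at each station; `flexible` adds a third
    server that is dynamically assigned to the station with more work.
    Each server completes a job with probability 0.5 per step; all
    rates and the policy are illustrative assumptions.
    """
    rng = random.Random(seed)
    q1, q2, done = buffer, 0, 0            # station 1 starts full
    for _ in range(steps):
        s1, s2 = 1, 1                      # dedicated servers
        if flexible:                       # flexible server joins the longer queue
            if q1 >= q2:
                s1 += 1
            else:
                s2 += 1
        moved = min(q1, sum(rng.random() < 0.5 for _ in range(s1)))
        moved = min(moved, buffer - q2)    # finite buffer at station 2
        q1 -= moved; q2 += moved
        served = min(q2, sum(rng.random() < 0.5 for _ in range(s2)))
        q2 -= served; done += served
        q1 = min(buffer, q1 + 1)           # one new job arrives if space
    return done / steps

print(simulate(flexible=False), simulate(flexible=True))
```

Comparing the two printed throughputs gives a rough feel for the value of cross-training a single server in a two-station line.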
Voluminous amounts of data have been produced over the past decade as the miniaturization of Internet of Things (IoT) devices has increased. However, such data are not useful without analytic power. Numerous big data, IoT, and analytics solutions have enabled people to obtain valuable insight into large data generated by IoT devices. However, these solutions are still in their infancy, and the domain lacks a comprehensive survey. This paper investigates the state-of-the-art research efforts directed toward big IoT data analytics. The relationship between big data analytics and IoT is explained. Moreover, this paper adds value by proposing a new architecture for big IoT data analytics. Furthermore, big IoT data analytic types, methods, and technologies for big data mining are discussed. Numerous notable use cases are also presented. Several opportunities brought by data analytics in the IoT paradigm are then discussed. Finally, open research challenges, such as privacy, big data mining, visualization, and integration, are presented as future research directions.
Synchronization is an important task in distributed computing since it allows asynchronous systems to simulate synchronous ones. Synchronization among distributed processes can be implemented using waves. A wave is a distributed execution, often made up of a broadcast phase followed by a feedback phase, requiring the participation of all the system processes before a particular event called decision is taken. Waves consisting of consecutive distinct broadcasts with corresponding feedbacks are referred to as a multi-wave. Solutions to a large number of fundamental problems in distributed computing such as distributed reset [1] and multiphase stabilization [10] require the completion of multi-waves. In this article, we propose a time and state-space optimal snap-stabilizing multi-wave algorithm implementing k distinct consecutive waves (k > 2) in a rooted tree, with O(kh) rounds of delay and at most k + 4 states per process, where h is the height of the tree. A system is said to be snap-stabilizing if it always behaves according to its specification [13]. One of the main advantages of the multi-wave algorithm being snap-stabilizing is that the arbitrary initial configuration has limited or no effect on the pace of the broadcast propagation.
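The broadcast-then-feedback structure of a wave can be sketched sequentially. The following is an illustrative, non-distributed Python sketch (no faults, no self-stabilization machinery); `Node`, `wave`, and `multi_wave` are hypothetical names, not the article's algorithm:

```python
class Node:
    """A process in a rooted tree, identified by its children."""
    def __init__(self, children=()):
        self.children = list(children)

def wave(node, value, log, depth=0):
    """One wave: broadcast `value` down the tree, then feed back an
    acknowledgement count to the caller (a sequential stand-in for
    the distributed broadcast/feedback phases)."""
    log.append(("bcast", depth, value))
    acks = 1                                  # this process participated
    for c in node.children:
        acks += wave(c, value, log, depth + 1)
    return acks

def multi_wave(root, k, n):
    """Run k distinct consecutive waves; the root 'decides' (here:
    records the wave index) only after all n processes fed back."""
    decisions = []
    for i in range(k):
        log = []
        acks = wave(root, i, log)
        assert acks == n                      # every process took part
        decisions.append(i)
    return decisions

# A 5-process tree of height 2: root, two internal/leaf children, two leaves.
tree = Node([Node([Node(), Node()]), Node()])
print(multi_wave(tree, k=3, n=5))             # [0, 1, 2]
```

The `assert acks == n` line captures the defining property of a wave: the decision event requires the participation of every process.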
This paper introduces two eigenvalue-based rules for estimating the number of signals impinging on an array of sensors in a spatially correlated noise field. The first rule, called S, is derived under the assumption that the noise spatial covariance is block diagonal or banded. The assumption underlying the second detection rule, named T, is that the temporal correlation of the noise has a shorter length than that of the signals. In both cases, a matrix is built from the array output data covariances, the smallest eigenvalue of which is equal to zero under the hypothesis that the source number is overestimated. The sample distribution of the aforementioned smallest eigenvalue is derived and used to formulate the detection rules S and T. Both of these rules are computationally quite simple. Additionally, they can be used with a noncalibrated array. The paper includes numerical examples that lend empirical support to the theoretical findings and illustrate the kind of performance that can be achieved by using the S and T detection rules.
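The general idea behind eigenvalue-based source-number detection, reading the source count off the point where the spectrum of a data covariance matrix drops to the noise floor, can be sketched as follows. This toy uses white noise and a simple eigenvalue-gap heuristic rather than the paper's S and T statistics and their sample distributions:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, d = 8, 2000, 2          # sensors, snapshots, true source count

# Synthetic array data: d sources mixed by a random matrix, plus white
# noise (white only for illustration; the S and T rules above target
# spatially correlated noise).
A = rng.standard_normal((m, d))            # steering-like mixing matrix
S = rng.standard_normal((d, n))            # source waveforms
X = A @ S + 0.1 * rng.standard_normal((m, n))

R = X @ X.T / n                            # sample covariance
eig = np.sort(np.linalg.eigvalsh(R))[::-1] # eigenvalues, descending

# Rank detection: the d "signal" eigenvalues sit far above the noise
# floor, so the largest ratio between consecutive eigenvalues marks
# the boundary of the signal subspace.
gaps = eig[:-1] / eig[1:]
est = int(np.argmax(gaps)) + 1
print(est)                                 # 2
```

The paper's contribution is to replace this ad hoc gap heuristic with statistically grounded tests on the smallest eigenvalue of a suitably constructed matrix.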
The reduction in cost of computer processing and storage that led to personal computers has stimulated a need for comparable reduction in telecommunication costs to connect distributed personal computers to central computer facilities. A reexamination of the underlying technical options open to terrestrial and satellite telecommunications alternatives illustrates the advantages of satellite networks for data communication. The principles are illustrated with a description of a particular satellite network technology utilizing user-premises micro Earth stations that are cost-effective for use as accessories to personal computers or equivalent work stations.
The problem of developing real-time embedded computing systems is addressed, and a simpler hierarchically partitioned layered model is suggested. It is recommended for use in the specification of system requirements a...
The article presents a speech by Leslie Lamport, a computer scientist and the 2013 recipient of the Association for Computing Machinery's (ACM's) A.M. Turing Award, entitled 'The Computer Science of Concurrency: The Early Years'. It discusses the history of concurrent algorithms in the 1960s and 1970s. Topics discussed include mutual exclusion, producer-consumer synchronization, and distributed algorithms.
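For readers unfamiliar with producer-consumer synchronization, here is a bounded-buffer sketch in modern Python; `queue.Queue` packages the blocking put/get discipline that the early algorithms discussed in the speech had to build from scratch:

```python
import threading
import queue

# A bounded buffer mediating one producer and one consumer thread.
buf = queue.Queue(maxsize=4)
out = []

def producer():
    for i in range(10):
        buf.put(i)        # blocks while the buffer is full
    buf.put(None)         # sentinel: no more items

def consumer():
    while (item := buf.get()) is not None:   # blocks while empty
        out.append(item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(out)                # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The bounded capacity forces the producer to wait when it runs ahead, which is precisely the synchronization problem the early literature analyzed.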
Healthcare artificial intelligence (AI) holds the potential to increase patient safety, augment efficiency and improve patient outcomes, yet research is often limited by data access, cohort curation, and tools for analysis. Collection and translation of electronic health record data, live data, and real-time high-resolution device data can be challenging and time-consuming. The development of clinically relevant AI tools requires overcoming challenges in data acquisition, scarce hospital resources, and requirements for data governance. These bottlenecks may result in resource-heavy needs and long delays in research and development of AI systems. We present a system and methodology to accelerate data acquisition, dataset development and analysis, and AI model development. We created an interactive platform that relies on a scalable microservice architecture. This system can ingest 15,000 patient records per hour, where each record represents thousands of multimodal measurements, text notes, and high-resolution data. Collectively, these records can approach a terabyte of data. The platform can further perform cohort generation and preliminary dataset analysis in 2-5 minutes. As a result, multiple users can collaborate simultaneously to iterate on datasets and models in real time. We anticipate that this approach will accelerate clinical AI model development, and, in the long run, meaningfully improve healthcare delivery.
As the number of objectives and/or the dimension of a given problem increases, or a real-world optimization problem is modeled in more detail, the optimization algorithm requires more computation time if the computational resources are fixed. Therefore, additional tools need to be developed for deploying these resources. Parallelization is one such tool, based on distributing the overall problem across different computational units. In this study, a distributed computing approach for multi-objective evolutionary optimization algorithms is proposed through a migration policy based on sharing information for inter-processor collaboration. This idea is reinforced by the crossover operator of the evolutionary algorithms: the migrated solutions take part in crossover so that the performance of the overall approach increases. In addition, a new metric is defined for evaluating the performance of the proposed distribution methodology. The performance of the proposed approaches is evaluated on well-known two- and three-objective test problems. (C) 2018 Elsevier Inc. All rights reserved.
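The migration-plus-crossover idea can be sketched on a toy problem. This illustration replaces the multi-objective Pareto machinery with a crude scalarized ranking and uses made-up parameters (two "islands", blend crossover, best-of-island migrants); it is not the paper's algorithm or its metric:

```python
import random

def evaluate(x):
    """Bi-objective toy (Schaffer-like): minimize both components."""
    return (x * x, (x - 2.0) ** 2)

def crossover(a, b, rng):
    w = rng.random()
    return w * a + (1 - w) * b             # blend crossover

def step(pop, migrants, rng):
    """One generation on one 'island': crossover pairs the local
    population with migrants (the inter-processor collaboration idea),
    then elitist truncation keeps the better half.  The scalar sum of
    objectives stands in for a proper Pareto ranking."""
    pool = pop + migrants
    children = [crossover(rng.choice(pop), rng.choice(pool), rng)
                for _ in range(len(pop))]
    merged = pop + children
    merged.sort(key=lambda x: sum(evaluate(x)))
    return merged[:len(pop)]

rng = random.Random(1)
islands = [[rng.uniform(-10, 10) for _ in range(20)] for _ in range(2)]
for gen in range(50):
    # Migration policy: each island broadcasts its current best solution.
    migrants = [min(isl, key=lambda x: sum(evaluate(x))) for isl in islands]
    islands = [step(isl, migrants, rng) for isl in islands]

best = min(islands[0], key=lambda x: sum(evaluate(x)))
print(round(best, 2))   # near 1.0, the optimum of the scalarized sum
```

Feeding migrants into the crossover pool, rather than merely inserting them into the population, is the mechanism the abstract highlights for strengthening inter-island collaboration.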