In machine translation, discriminative models have almost entirely supplanted the classical noisy-channel model, but are standardly trained using a method that is reliable only in low-dimensional spaces. Two strands of research have tried to adapt more scalable discriminative training methods to machine translation: the first uses log-linear probability models and either maximum likelihood or minimum risk, and the other uses linear models and large-margin methods. Here, we provide an overview of the latter. We compare several learning algorithms and describe in detail some novel extensions suited to properties of the translation task: no single correct output, a large space of structured outputs, and slow inference. We present experimental results on a large-scale Arabic-English translation task, demonstrating large gains in translation accuracy.
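As a rough illustration of what large-margin training of a linear translation model looks like, here is a minimal sketch in the spirit of a MIRA-style update; the feature names, the hope/fear derivations, and the step-size clipping are illustrative assumptions, not the paper's exact algorithm.

```python
# A minimal sketch (not the paper's exact algorithm) of a large-margin update
# for a linear translation model: given the feature vector of a "hope"
# derivation (good model score and close to the reference) and a "fear"
# derivation (good model score but high cost), the weights are nudged toward
# hope and away from fear until the margin covers the cost.
from collections import defaultdict

def margin_update(weights, hope_feats, fear_feats, cost, max_step=0.1):
    """One MIRA-style update: enforce score(hope) - score(fear) >= cost."""
    score = lambda f: sum(weights[k] * v for k, v in f.items())
    diff = defaultdict(float)
    for k, v in hope_feats.items():
        diff[k] += v
    for k, v in fear_feats.items():
        diff[k] -= v
    loss = cost - (score(hope_feats) - score(fear_feats))
    if loss <= 0:                          # margin already satisfied, no update
        return weights
    norm_sq = sum(v * v for v in diff.values()) or 1.0
    step = min(max_step, loss / norm_sq)   # clipping keeps the update conservative
    for k, v in diff.items():
        weights[k] += step * v
    return weights

# Toy usage with made-up features.
w = defaultdict(float)
hope = {"lm": 2.0, "phrase_penalty": -1.0}
fear = {"lm": 1.5, "phrase_penalty": -3.0}
w = margin_update(w, hope, fear, cost=0.4)
print(dict(w))
```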
We present a novel approach for the on-chip generation of a fault-tolerant clock. Our method is based on the hardware implementation of a tick synchronization algorithm from the distributed systems community. We discuss the selection of an appropriate algorithm, present the refinement steps necessary to facilitate its efficient mapping to hardware, and elaborate on the key challenges we had to overcome in our actual ASIC implementation. Our measurement results confirm that the approach is indeed capable of creating a globally synchronized clock in a distributed fashion that is tolerant to a (configurable) number of arbitrary faults. This property facilitates eliminating the clock as a single point of failure. Our solution is based on purely asynchronous design, obviating the need for crystal oscillators. It is capable of adapting to parameter variations as well as changes in temperature and power supply, properties that are considered highly desirable for future technology nodes.
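To make the tick-synchronization idea concrete, the following is a simplified, software-level sketch of a threshold-based tick generation rule of the kind such distributed algorithms use; the thresholds (f+1 to relay, 2f+1 to advance), the message handling, and the bootstrap are assumptions for illustration, whereas the actual design realizes the rules asynchronously in silicon.

```python
# Software sketch of a threshold-based tick rule tolerating f Byzantine faults
# among n nodes (n >= 3f + 1 assumed).  Not the ASIC's actual signalling.

class TickNode:
    def __init__(self, node_id, n, f):
        assert n >= 3 * f + 1, "tolerating f Byzantine faults needs n >= 3f + 1"
        self.node_id, self.n, self.f = node_id, n, f
        self.tick = 0            # local clock: number of ticks generated so far
        self.announced = {0}     # ticks this node has already broadcast
        self.seen = {}           # tick value -> set of nodes that announced it

    def on_announce(self, sender, tick):
        """Handle a 'tick k' announcement; return this node's own announcements."""
        self.seen.setdefault(tick, set()).add(sender)
        out = []
        nxt = self.tick + 1
        support = len(self.seen.get(nxt, ()))
        # Relay rule: f+1 announcements prove at least one correct node reached
        # tick `nxt`, so it is safe to echo it (helps slow nodes catch up).
        if support >= self.f + 1 and nxt not in self.announced:
            self.announced.add(nxt)
            out.append(nxt)
        # Advance rule: 2f+1 announcements mean enough correct nodes agree,
        # so the local clock may tick.
        if support >= 2 * self.f + 1:
            self.tick = nxt
        return out

# Tiny usage: node 0 in a 4-node system (f = 1) hears tick 1 from the others.
node = TickNode(node_id=0, n=4, f=1)
for sender in (1, 2, 3):
    print(node.on_announce(sender, tick=1), node.tick)
```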
ISBN (print): 9781424409907
Autonomous robotic wheelchairs have been widely discussed in recent years. Most autonomous robotic wheelchairs have been developed on computation-rich architectures, such as personal computers or embedded computers, to handle complicated navigation tasks. However, such rich computation architectures increase the cost of a robotic wheelchair compared with conventional powered wheelchairs. At the same time, the high energy consumption, lower reliability, and unwieldy size of personal computers reduce the practicality of robotic wheelchairs. In this paper, we present a low-cost solution to the complicated wheelchair navigation problem based on a distributed computing architecture. The solution is built on the programmable System-on-Chip (PSoC) computing and control architecture. The tasks within autonomous navigation are categorized into the human-machine interface, sensor collection, fuzzy-logic-based navigation functions, and closed-loop motion control, and each task is implemented on its own PSoC. Finally, the task-oriented PSoCs are integrated and fused to perform autonomous navigation of the robotic wheelchair. The proposed PSoC-based distributed computing architecture was successfully implemented and experimentally tested, and the navigation results were verified against those obtained from a personal-computer-based computing architecture.
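The abstract does not give the fuzzy rule base, so the following is only a generic illustration of the kind of fuzzy-logic steering rule one PSoC task could implement; the membership functions, rule table, and sensor layout are assumed for the example.

```python
# Generic fuzzy-logic steering sketch: two obstacle distances in, one steering
# command out.  All membership shapes and rules are illustrative assumptions.

def near(d, lo=20.0, hi=80.0):
    """Membership of distance d (cm) in 'near': 1 below lo, 0 above hi, linear between."""
    return max(0.0, min(1.0, (hi - d) / (hi - lo)))

def steer(left_cm, right_cm):
    """Map left/right obstacle distances to a steering command in [-1, 1]."""
    near_l, near_r = near(left_cm), near(right_cm)
    far_l, far_r = 1.0 - near_l, 1.0 - near_r
    rules = [
        (min(near_l, far_r), +0.8),   # near on the left, clear right -> steer right
        (min(near_r, far_l), -0.8),   # near on the right, clear left -> steer left
        (min(far_l, far_r),   0.0),   # both sides clear              -> go straight
        (min(near_l, near_r), 0.0),   # both sides near               -> straight (slow down elsewhere)
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules) or 1.0
    return num / den                  # weighted-average defuzzification

print(steer(30, 110))   # obstacle close on the left -> positive value (steer right)
```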
ISBN (print): 9780819469472
A service provisioning model for distributed computing in optical grids is proposed, in which grid application resources and optical network resources are regarded as peer entities that are abstracted and encapsulated to provide controllable and sharable services. Resource scheduling in the optical grid is investigated, and a minimum-cost algorithm that co-schedules grid application resources and optical network resources so as to minimize the cost of performing a job is presented. Simulation results show that the minimum-cost algorithm can co-optimize the usage of grid application and optical network resources and decrease the blocking probability of the grid system.
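The abstract does not spell out the algorithm, so the sketch below only illustrates the co-scheduling idea: choose the compute site and the optical path to it jointly, minimizing the combined cost, instead of optimizing each resource in isolation. The cost model, topology, and function names are made-up assumptions.

```python
# Sketch of joint (co-)scheduling: pick the compute site whose compute cost
# plus cheapest optical-path cost from the job source is minimal.
import heapq

def shortest_path_cost(graph, src, dst):
    """Dijkstra over link costs; returns float('inf') if dst is unreachable."""
    dist, heap = {src: 0.0}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

def co_schedule(job_src, compute_sites, graph):
    """Return the site minimizing compute cost + optical path cost from job_src."""
    return min(
        compute_sites.items(),
        key=lambda kv: kv[1] + shortest_path_cost(graph, job_src, kv[0]),
    )[0]

# Toy topology: node -> [(neighbour, link cost)], plus per-site compute cost.
net = {"A": [("B", 1.0), ("C", 4.0)], "B": [("C", 1.0)], "C": []}
sites = {"B": 5.0, "C": 2.0}           # compute cost at each candidate site
print(co_schedule("A", sites, net))    # picks the site with the lowest combined cost
```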
ISBN (print): 9780769540634
Cloud computing is fundamentally altering the expectations for how and when computing, storage and networking resources should be allocated, managed and consumed. End-users are increasingly sensitive to the latency of the services they consume. Service Developers want Service Providers to ensure or provide the capability to dynamically allocate and manage resources in response to changing demand patterns in real time. Ultimately, Service Providers are under pressure to architect their infrastructure to enable real-time, end-to-end visibility and dynamic resource management with fine-grained control, reducing total cost of ownership while also improving agility. The current approaches to enabling real-time, dynamic infrastructure are inadequate, expensive and not scalable enough to support consumer mass-market requirements. Over time, server-centric infrastructure management systems have evolved into a complex tangle of layered systems designed to automate systems-administration functions that are knowledge- and labor-intensive. This expensive and non-real-time paradigm is ill suited to a world where customers demand communication, collaboration and commerce at the speed of light. Thanks to hardware-assisted virtualization, and the resulting decoupling of infrastructure and application management, it is now possible to provide dynamic visibility and control of services management to meet the rapidly growing demand for cloud-based services. What is needed is a rethinking of the underlying operating system and management infrastructure to accommodate the ongoing transformation of the data center from the traditional server-centric architecture model to a cloud or network-centric model. This paper proposes and describes a reference model for a network-centric datacenter infrastructure management stack that borrows and applies key concepts that have enabled dynamism, scalability, reliability and security in the telecom industry, to the computing industry. Finally
ISBN (print): 9781424410934
Wideband Code Division Multiple Access (WCDMA) system simulators are commonly used as a design and research tool to aid the deployment of 3G as well as forthcoming wireless communication systems. High data rates, reliability, security and Bit Error Rate (BER) performance are key requirements of a 3G system, and they lead to unacceptably long computation times if realistic simulations are to be performed. Inclusion of a rake receiver module, which is mainly employed to address multipath interference, results in a further reduction in simulator speed. This paper presents a distributed computing approach using MATLAB to overcome the speed limitations of a WCDMA simulator with a rake receiver. We show that the simulation time can be reduced by about 80% through the use of distributed computing combined with code optimizations.
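The paper relies on MATLAB's distributed computing facilities; as a language-neutral illustration of the same principle, the sketch below farms independent Monte-Carlo BER trials of a toy BPSK-over-AWGN link out to worker processes. The channel model, trial counts, and worker layout are placeholders, not the WCDMA/rake-receiver model.

```python
# Embarrassingly parallel BER estimation: each worker simulates an independent
# batch of bits, and the results are summed.  Toy BPSK over AWGN only.
import random
from multiprocessing import Pool

def ber_trials(args):
    """Count bit errors over n_bits random BPSK bits through an AWGN channel."""
    n_bits, snr_linear, seed = args
    rng = random.Random(seed)
    sigma = (1.0 / (2.0 * snr_linear)) ** 0.5   # noise std for Eb/N0 = snr_linear
    errors = 0
    for _ in range(n_bits):
        bit = rng.choice((-1.0, 1.0))
        rx = bit + rng.gauss(0.0, sigma)
        errors += (rx > 0) != (bit > 0)
    return errors

if __name__ == "__main__":
    n_workers, bits_per_worker, snr = 4, 50_000, 2.0
    jobs = [(bits_per_worker, snr, seed) for seed in range(n_workers)]
    with Pool(n_workers) as pool:
        total_errors = sum(pool.map(ber_trials, jobs))
    print("BER =", total_errors / (n_workers * bits_per_worker))
```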
ISBN (print): 9781424464043
With the burgeoning of online services, power and thermal issues are becoming crucial in terms of both utility costs and environmental impact. In this paper, we motivate an approach that puts power/thermal issues at the heart of distributed computing and strives to alter the behavior of the computation in order to dynamically adjust to changing energy constraints. The paper discusses three distributed energy adaptation paradigms and shows why they are essential for sustainable computing. The paper also compares and contrasts two basic techniques for energy adaptation in servers, namely power capping and the use of sleep modes.
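As a toy illustration of the two server-side techniques the paper contrasts, the sketch below shows the shape of a power-capping policy (throttle every active server to fit a fleet budget) versus a sleep-mode policy (consolidate load and switch idle servers off); all numbers and function names are invented for the example.

```python
# Two caricatured energy-adaptation policies, for comparison only.

def power_cap(active_servers, per_server_peak_w, budget_w):
    """Power capping: share the fleet budget evenly across all active servers."""
    return min(per_server_peak_w, budget_w / active_servers)

def sleep_consolidate(total_load, per_server_capacity, idle_w, peak_w):
    """Sleep modes: keep only as many servers awake as the load needs.

    Returns (#servers kept awake, estimated fleet power) assuming power scales
    linearly between idle and peak with utilization.
    """
    needed = max(1, -(-total_load // per_server_capacity))   # ceiling division
    utilization = min(1.0, total_load / (needed * per_server_capacity))
    est_power = needed * (idle_w + (peak_w - idle_w) * utilization)
    return needed, est_power

print(power_cap(active_servers=10, per_server_peak_w=300, budget_w=2000))   # 200 W cap each
print(sleep_consolidate(total_load=45, per_server_capacity=10, idle_w=100, peak_w=300))
```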
ISBN (print): 9780819483805
This paper presents a configurable distributed high-performance computing (HPC) framework for TDI-CCD imaging simulation. It uses the strategy pattern to accommodate multiple algorithms, which helps decrease the simulation time at low expense. Imaging simulation for a satellite-mounted TDI-CCD consists of four processes: 1) degradation introduced by the atmosphere, 2) degradation introduced by the optical system, 3) degradation introduced by the TDI-CCD electronics together with re-sampling, and 4) data integration. Processes 1) to 3) use diverse data-intensive algorithms such as FFT, convolution and Lagrange interpolation, which require substantial CPU power. Even on an Intel Xeon X5550 processor, the conventional serial processing method takes more than 30 hours for a simulation whose result image size is 1500 × 1462. A literature survey found no mature distributed HPC framework in this field. Here we developed a distributed computing framework for TDI-CCD imaging simulation that is based on WCF [1], uses a client/server (C/S) architecture, and harnesses idle CPU resources on the LAN. The server pushes the tasks of processes 1) to 3) to this free computing capacity, delivering HPC at low cost. In a computing experiment with 4 symmetric nodes and 1 server, the framework reduced simulation time by about 74%; adding more asymmetric nodes to the computing network decreased the time further. In conclusion, this framework can provide effectively unlimited computation capacity, provided that the network and the task management server can sustain the load, and it offers a new HPC solution for TDI-CCD imaging simulation and similar applications.
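To illustrate the strategy-pattern idea the abstract mentions, the sketch below models each degradation stage as a swappable callable behind a single pipeline interface, so different algorithms can be configured without changing the pipeline; the class and function names are hypothetical, and the real framework is built on WCF rather than Python.

```python
# Strategy-pattern sketch: a configurable chain of degradation stages.
from typing import Callable, List, Sequence

Image = List[float]                      # stand-in for an image buffer
Strategy = Callable[[Image], Image]      # a degradation stage: image -> image

def atmosphere_blur(img: Image) -> Image:
    """Placeholder for an atmospheric degradation model."""
    return [0.9 * p for p in img]

def optics_blur(img: Image) -> Image:
    """Placeholder for optical-system degradation (e.g., PSF convolution)."""
    return [0.8 * p for p in img]

def ccd_resample(img: Image) -> Image:
    """Placeholder for TDI-CCD electronics degradation and re-sampling."""
    return img[::2]

class ImagingPipeline:
    def __init__(self, stages: Sequence[Strategy]):
        self.stages = list(stages)       # the configurable chain of strategies

    def run(self, img: Image) -> Image:
        for stage in self.stages:        # each stage is interchangeable
            img = stage(img)
        return img

pipeline = ImagingPipeline([atmosphere_blur, optics_blur, ccd_resample])
print(pipeline.run([1.0, 2.0, 3.0, 4.0]))
```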
A central theme in distributed network algorithms concerns understanding and coping with the issue of locality. Despite considerable progress, research efforts in this direction have not yet resulted in a solid basis in the form of a fundamental computational complexity theory for locality. Inspired by sequential complexity theory, we focus on a complexity theory for distributed decision problems. In the context of locality, solving a decision problem requires the processors to independently inspect their local neighborhoods and then collectively decide whether a given global input instance belongs to some specified language. We consider the standard LOCAL model of computation and define LD(t) (for local decision) as the class of decision problems that can be solved in t communication rounds. We first study the intriguing question of whether randomization helps in local distributed computing, and to what extent. Specifically, we define the corresponding randomized class BPLD(t,p,q), containing all languages for which there exists a randomized algorithm that runs in t rounds, accepts correct instances with probability at least p, and rejects incorrect ones with probability at least q. We show that p² + q = 1 is a threshold for the containment of LD(t) in BPLD(t,p,q). More precisely, we show that there exists a language that does not belong to LD(t) for any t = o(n) but does belong to BPLD(0,p,q) for any p,q ∈ (0,1] such that p² + q ≤ 1. On the other hand, we show that, restricted to hereditary languages, BPLD(t,p,q) = LD(O(t)) for any function t and any p,q ∈ (0,1] such that p² + q > 1. In addition, we investigate the impact of non-determinism on local decision, and establish some structural results inspired by classical computational complexity theory. Specifically, we show that non-determinism does help, but that this help is limited, as there exist languages that cannot be decided non-deterministically. Perhaps surprisingly, it turns out that it is the combination of r
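For readability, the two threshold claims above can be restated compactly; this is only a restatement in math notation of what the abstract already says, with the same classes LD and BPLD and the same parameters t, p, q:

```latex
% Restatement of the threshold around p^2 + q = 1 (no new claims):
\[
  p^{2} + q > 1 \;\Longrightarrow\; \mathrm{BPLD}(t,p,q) = \mathrm{LD}(O(t))
  \quad \text{for hereditary languages and any function } t,
\]
\[
  p^{2} + q \le 1 \;\Longrightarrow\; \exists L:\; L \notin \mathrm{LD}(t)
  \text{ for all } t = o(n), \ \text{yet } L \in \mathrm{BPLD}(0,p,q).
\]
```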