In this paper we present a signaling protocol for QoS differentiation suitable for optical burst switching networks. The proposed protocol is a two-way reservation scheme that employs delayed and in-advance reservation of resources. In this scheme, delayed reservations may be relaxed, introducing a reservation duration parameter that is negotiated during the call setup phase. This feature allows bursts to reserve resources beyond their actual size to increase their probability of successful forwarding, and it is used to provide QoS differentiation. The proposed signaling protocol offers a low blocking probability for bursts that can tolerate the round-trip delay required for the reservations. We present the main features of the protocol and describe in detail the timing considerations regarding the call setup and the reservation process. We also describe several methods for choosing the protocol parameters so as to optimize performance and present corresponding evaluation results. Furthermore, we compare the performance of the proposed protocol against that of two other typical reservation protocols, a Tell-and-Wait and a Tell-and-Go protocol.
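To make the reservation-duration idea concrete, here is a minimal sketch (not the paper's actual protocol machinery) of how an output wavelength might accept or reject an in-advance reservation whose duration is relaxed by a negotiated factor; the class and parameter names (ReservationTable, qosFactor) are hypothetical.

```java
// Illustrative sketch only: in-advance, delayed reservation of a wavelength
// channel with a negotiable reservation duration. Names and units are invented.
import java.util.ArrayList;
import java.util.List;

public class ReservationTable {
    // Existing reservations on one output wavelength: [start, end) in microseconds.
    private final List<double[]> reservations = new ArrayList<>();

    /** Try to reserve the channel from 'start' for 'burstLength * qosFactor'.
     *  A qosFactor >= 1 relaxes the reservation beyond the actual burst size,
     *  raising the forwarding probability of higher-priority bursts. */
    public boolean reserve(double start, double burstLength, double qosFactor) {
        double end = start + burstLength * qosFactor;
        for (double[] r : reservations) {
            boolean overlaps = start < r[1] && r[0] < end;
            if (overlaps) return false;          // blocked: setup request is rejected
        }
        reservations.add(new double[]{start, end});
        return true;                              // acknowledgment travels back to the source
    }

    public static void main(String[] args) {
        ReservationTable channel = new ReservationTable();
        // A best-effort burst reserves exactly its own size...
        System.out.println(channel.reserve(100.0, 10.0, 1.0));   // true
        // ...while a premium burst asks for a relaxed (longer) window.
        System.out.println(channel.reserve(105.0, 10.0, 1.5));   // false: overlap
    }
}
```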
The limited bandwidth and the user demand for advanced mobile services have led to research into new strategies for efficient channel assignment. The role of a channel assignment scheme is to allocate channels to cells or mobiles in such a way as to minimize the probability that incoming calls are blocked and the probability that ongoing calls are dropped. Advanced services require a significant part of the available bandwidth, so efficient bandwidth management through efficient channel assignment must be applied. Channel assignment strategies can be designed and evaluated using a simulation environment specialized in cellular communications. This paper thoroughly reviews distributed dynamic channel assignment (DCA) schemes and presents several novel, efficient variations (e.g. a hybrid variation) suitable for large-scale cellular systems. A comprehensive simulation system for evaluating channel assignment schemes at a detailed level is also presented.
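As a rough illustration of how a distributed DCA decision might look, the sketch below admits a call on the lowest-numbered channel that is unused both locally and in every interfering neighbour. The class and method names are invented, and the rule is far simpler than the variations reviewed in the paper.

```java
// Hedged sketch of a distributed DCA acquisition test, assuming each cell knows
// which channels its interfering neighbours currently use.
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DcaCell {
    private final Set<Integer> localChannels = new HashSet<>();
    private final List<DcaCell> interferingNeighbours = new ArrayList<>();

    public void addNeighbour(DcaCell c) { interferingNeighbours.add(c); }

    /** A new call is admitted on the lowest-numbered channel that is free
     *  both locally and in every cell within the reuse distance. */
    public int acquireChannel(int totalChannels) {
        for (int ch = 0; ch < totalChannels; ch++) {
            if (localChannels.contains(ch)) continue;
            boolean usedNearby = false;
            for (DcaCell n : interferingNeighbours) {
                if (n.localChannels.contains(ch)) { usedNearby = true; break; }
            }
            if (!usedNearby) { localChannels.add(ch); return ch; }
        }
        return -1; // call blocked: no channel satisfies the reuse constraint
    }

    public static void main(String[] args) {
        DcaCell a = new DcaCell();
        DcaCell b = new DcaCell();
        a.addNeighbour(b);
        b.addNeighbour(a);
        System.out.println(a.acquireChannel(3)); // 0
        System.out.println(b.acquireChannel(3)); // 1 (channel 0 is in use next door)
    }
}
```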
With the advent of the grid, task scheduling in heterogeneous environments becomes more and more important. Of particular interest is the fact that, especially in scientific experiments, a non-negligible amount of data must be transferred to the processing node before a task can commence execution. Given bandwidth constraints, scheduling both computations and data transfers is required. In this paper we first develop a suitable model that captures heterogeneity in the processing nodes while imposing communication constraints. We proceed by proposing scheduling heuristics with the aim of minimizing the total makespan of a set of independent tasks. Through a series of experiments we illustrate the potential of a particular heuristic that is based on backfilling.
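The communication constraint in such a model can be illustrated with a small estimate of a task's finish time on a candidate node: the task may start only after its input data has been transferred and the node is free. The sketch below uses this estimate to compare two hypothetical nodes; it is not the paper's heuristic (in particular, backfilling is not shown), and all numbers are illustrative.

```java
// Hedged sketch: earliest-finish-time comparison under a communication constraint.
public class GridScheduler {
    static double finishTime(double dataSizeMB, double linkMBps,
                             double nodeFreeAt, double workGflop, double nodeGflops) {
        double transferDone = dataSizeMB / linkMBps;          // input data must arrive first
        double start = Math.max(nodeFreeAt, transferDone);    // communication constraint
        return start + workGflop / nodeGflops;                // heterogeneous compute speed
    }

    public static void main(String[] args) {
        // One task, two candidate nodes: fast CPU / slow link vs. slow CPU / fast link.
        double fA = finishTime(800, 10, 0, 50, 5);   // 80 s transfer + 10 s compute = 90 s
        double fB = finishTime(800, 100, 0, 50, 2);  //  8 s transfer + 25 s compute = 33 s
        System.out.println(fA + " vs " + fB + " -> pick node B");
    }
}
```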
In this paper we work on the balance between hardware and software implementation of a machine learning algorithm that belongs to the area of statistical learning theory. We use system-on-chip technology to demonstrate the potential usefulness of moving the critical sections of an algorithm into hardware: the so-called hardware/software balance. Our experiments show that the approach can achieve speedups for a complex machine learning algorithm, the support vector machine. The experiments are conducted on a real-time Java virtual machine, the Java Optimized Processor (JOP).
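A plausible candidate for such a critical section is the kernel-evaluation loop of the SVM decision function, sketched below in plain Java. The RBF kernel and all values are illustrative assumptions and do not reproduce the paper's actual hardware/software partitioning.

```java
// Sketch of the compute-heavy part of SVM classification: the kernel loop is the
// kind of section one might consider moving into hardware on a system-on-chip.
public class SvmDecision {
    // decision(x) = sum_i alphaY_i * K(sv_i, x) + b, with an RBF kernel here.
    static double decide(double[][] sv, double[] alphaY, double b,
                         double gamma, double[] x) {
        double sum = b;
        for (int i = 0; i < sv.length; i++) {
            double d2 = 0.0;
            for (int j = 0; j < x.length; j++) {        // inner loop: hardware candidate
                double diff = sv[i][j] - x[j];
                d2 += diff * diff;
            }
            sum += alphaY[i] * Math.exp(-gamma * d2);    // RBF kernel evaluation
        }
        return sum;                                      // the sign gives the class label
    }

    public static void main(String[] args) {
        double[][] sv = {{0, 0}, {1, 1}};                // two support vectors (made up)
        double[] alphaY = {-1.0, 1.0};
        System.out.println(decide(sv, alphaY, 0.0, 0.5, new double[]{0.9, 1.1}) > 0);
    }
}
```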
For eventually providing terahertz science with compact and convenient devices, terahertz (1~10 THz) quantum-well photodetectors and quantum-cascade lasers are being developed. The detector design and projected performance are presented together with experimental results for several test devices, all working at photon energies below and around the optical phonons. Background-limited infrared performance (BLIP) operation is observed for all samples (three in total), designed for different peak frequencies. BLIP temperatures of 17, 13, and 12 K are achieved for peak detection frequencies of 9.7 THz (31 μm), 5.4 THz (56 μm), and 3.2 THz (93 μm), respectively. A set of THz quantum-cascade lasers with identical device parameters except for doping concentration is also studied. The δ-doping density for each period varies from 3.2×10¹⁰ to 4.8×10¹⁰ cm⁻². We observe that the lasing threshold current density increases monotonically with doping density and, from measurements on devices with different cavity lengths, find evidence that free-carrier absorption causes the waveguide loss to increase as well. The observed maximum lasing temperature is best at a doping density of 3.6×10¹⁰ cm⁻².
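As a quick arithmetic cross-check (not taken from the paper), the quoted detection wavelengths follow from λ = c/ν, and the corresponding photon energies hν are a few tens of meV:

```java
// Simple frequency-to-wavelength and photon-energy conversion for the quoted values.
public class ThzCheck {
    public static void main(String[] args) {
        double c = 2.998e8;            // speed of light, m/s
        double hEvPerThz = 4.1357e-3;  // photon energy in eV per THz (h/e * 1e12)
        double[] thz = {9.7, 5.4, 3.2};
        for (double f : thz) {
            double lambdaUm = c / (f * 1e12) * 1e6;   // wavelength in micrometres
            double meV = hEvPerThz * f * 1e3;         // photon energy in meV
            System.out.printf("%.1f THz -> %.0f um, %.1f meV%n", f, lambdaUm, meV);
        }
        // prints roughly: 9.7 THz -> 31 um, 40 meV; 5.4 THz -> 56 um, 22 meV; 3.2 THz -> 94 um, 13 meV
    }
}
```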
ISBN (print): 9783981080100
In this paper we introduce a new test-data compression method for IP cores of unknown structure. The proposed method encodes the test data provided by the core vendor using a new, very effective compression scheme based on multilevel Huffman coding. Specifically, three different kinds of information are compressed using the same Huffman code, and thus significant test-data reductions are achieved. A simple architecture is proposed for decoding the compressed data on-chip. Its hardware overhead is very low and comparable to that of the most efficient methods in the literature. Additionally, the proposed technique offers an increased probability of detecting unmodeled faults, since the majority of the unknown values of the test set are replaced by pseudorandom data generated by an LFSR.
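For orientation, the sketch below builds an ordinary Huffman code over hypothetical occurrence counts of test-data blocks; the paper's multilevel scheme, which reuses a single code for three kinds of information, is not reproduced here.

```java
// Standard Huffman construction over invented block frequencies (illustration only).
import java.util.Map;
import java.util.PriorityQueue;
import java.util.TreeMap;

public class TestDataHuffman {
    static class Node implements Comparable<Node> {
        final int freq; final String symbol; final Node left, right;
        Node(int f, String s, Node l, Node r) { freq = f; symbol = s; left = l; right = r; }
        public int compareTo(Node o) { return Integer.compare(freq, o.freq); }
    }

    static void codes(Node n, String prefix, Map<String, String> out) {
        if (n.left == null) { out.put(n.symbol, prefix.isEmpty() ? "0" : prefix); return; }
        codes(n.left, prefix + "0", out);
        codes(n.right, prefix + "1", out);
    }

    public static void main(String[] args) {
        // Hypothetical occurrence counts of 4-bit test-data blocks.
        Map<String, Integer> freq = Map.of("0000", 57, "1111", 21, "0101", 12, "0011", 10);
        PriorityQueue<Node> pq = new PriorityQueue<>();
        freq.forEach((s, f) -> pq.add(new Node(f, s, null, null)));
        while (pq.size() > 1) {                       // repeatedly merge the two rarest nodes
            Node a = pq.poll(), b = pq.poll();
            pq.add(new Node(a.freq + b.freq, null, a, b));
        }
        Map<String, String> code = new TreeMap<>();
        codes(pq.poll(), "", code);
        System.out.println(code);                     // frequent blocks get the shortest codewords
    }
}
```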
We consider a security problem on a distributed network. We assume a network whose nodes are vulnerable to infection by threats (e.g. viruses), the attackers. A system security software, the defender, is available in the system. However, due to the network’s size and for economic and performance reasons, it can provide safety, i.e. clean nodes of the possible presence of attackers, only for a limited part of the network. The objective of the defender is to place itself in such a way as to maximize the number of attackers caught, while each attacker aims not to be caught. In [7], a basic case of this problem was modeled as a non-cooperative game, called the Edge model, in which the defender could protect a single link of the network. Here, we consider a more general case of the problem where the defender is able to scan and protect a set of k links of the network, which we call the Tuple model. It is natural to expect that this increased power of the defender should result in a better quality of protection for the network. Ideally, this would be achieved at little expense to the existence and complexity of Nash equilibria (profiles where no entity can unilaterally improve its local objective by switching placements on the network). In this paper we study pure and mixed Nash equilibria in the model. In particular, we propose algorithms for computing such equilibria in polynomial time, and we provide a polynomial-time transformation of a special class of Nash equilibria, called matching equilibria, between the Edge model and the Tuple model, and vice versa. Finally, we establish that the increased power of the defender results in higher-quality protection of the network.
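To illustrate the payoffs in the Tuple model, the sketch below computes a defender best response on a tiny invented graph with k = 2: it enumerates all pairs of links and picks the pair whose endpoints cover the largest expected number of attackers. This is only a brute-force illustration, not one of the polynomial-time algorithms proposed in the paper.

```java
// Brute-force defender best response for k = 2 on a made-up 4-node graph.
public class TupleDefender {
    public static void main(String[] args) {
        int[][] edges = {{0, 1}, {1, 2}, {2, 3}, {3, 0}, {1, 3}}; // links of the network
        double[] attackerMass = {0.4, 0.1, 0.3, 0.2};             // expected attackers per node
        double best = -1;
        int bi = -1, bj = -1;
        for (int i = 0; i < edges.length; i++) {
            for (int j = i + 1; j < edges.length; j++) {
                boolean[] covered = new boolean[attackerMass.length];
                for (int v : edges[i]) covered[v] = true;         // defender catches attackers
                for (int v : edges[j]) covered[v] = true;         // located on protected endpoints
                double caught = 0;
                for (int v = 0; v < covered.length; v++)
                    if (covered[v]) caught += attackerMass[v];
                if (caught > best) { best = caught; bi = i; bj = j; }
            }
        }
        System.out.println("protect links " + bi + " and " + bj + ", catching " + best);
    }
}
```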
In this paper, we propose a new system that conveys a virtual contact feeling to people: virtually, we recreate the real sensation of touching an object with the tip of a finger. Since friction exists between the fingertip and the object, we generate this friction so as to complete the virtual contact feeling. Nowadays, only visual information about an object is conveyed to the user on a Web page; with this technique, not only the visual information but also a realistic contact feeling could be conveyed to the user in the near future. We also envisage its use as a training system for medical treatment and as a braille system that would allow visually impaired people to use the Internet. A realistic contact feeling for an object requires comparatively high frequencies; as a first step, we therefore focus on the contact feeling produced at low frequencies. At present, we study how to create the contact feeling without friction.
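One simple way to read the friction idea is a Coulomb model in which the tangential force opposes fingertip motion and scales with the normal contact force. The sketch below is an assumption of ours, with made-up values, and is not the authors' rendering method.

```java
// Hedged sketch: Coulomb friction force for haptic rendering of fingertip contact.
public class FrictionRender {
    static double frictionForce(double normalForce, double fingertipVelocity, double mu) {
        if (fingertipVelocity == 0.0) return 0.0;              // no sliding, no kinetic friction
        return -Math.signum(fingertipVelocity) * mu * normalForce; // opposes the motion
    }

    public static void main(String[] args) {
        // 2 N of contact pressure, slow sliding, friction coefficient 0.6 (all invented).
        System.out.println(frictionForce(2.0, 0.05, 0.6));     // -1.2 N opposing the motion
    }
}
```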
The control mode is the core concept for the prediction of human performance in CREAM. In this paper, we propose a probabilistic method for determining the control mode which is a substitute for the existing determini...
This paper proposes a new, efficient method for modeling and simulation of control systems using an adjacency matrix from graph theory. A Java simulation platform, which has been developed based on the proposed method, is introduced. It provides an interactive graphical environment for the modeling and simulation of control systems.
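A minimal sketch of the adjacency-matrix idea, assuming a three-node block diagram (reference, summing junction, integrator plant) whose edge weights act as gains; the diagram, gains, and update rule are illustrative and do not reflect the platform's internals.

```java
// Block diagram as a weighted adjacency matrix, stepped in discrete time
// (a unity-feedback integrator loop).
public class AdjacencySim {
    public static void main(String[] args) {
        // Nodes: 0 = reference, 1 = summing junction, 2 = integrator plant.
        // w[i][j] = gain on the edge from node i to node j (0 means no edge).
        double[][] w = {
            {0,  1, 0},   // reference -> summer
            {0,  0, 1},   // summer    -> plant
            {0, -1, 0},   // plant     -> summer (negative feedback)
        };
        double[] out = {1.0, 0.0, 0.0};    // current output of each node
        double dt = 0.1, k = 1.0;
        for (int step = 0; step < 50; step++) {
            double[] in = new double[3];
            for (int i = 0; i < 3; i++)                // propagate signals along the edges
                for (int j = 0; j < 3; j++) in[j] += w[i][j] * out[i];
            out[1] = in[1];                            // summer: error = r - y
            out[2] = out[2] + dt * k * in[2];          // integrator: y += dt * K * e
        }
        System.out.println("output after 5 s ~ " + out[2]); // approaches the reference 1.0
    }
}
```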