Efficient coverage algorithms are essential for information search or dispersal in all kinds of networks. We define an extended coverage problem which accounts for the constrained resources of consumed bandwidth B and time T. Our solution to this network challenge is studied here for regular grids only. Using methods from statistical mechanics, we develop a coverage algorithm with proliferating message packets and a temporally modulated proliferation rate. The algorithm performs as efficiently as a single random walker but is O(B^((d−2)/d)) times faster, resulting in a significant service speed-up on a regular grid of dimension d. The algorithm is numerically compared to a class of generalized proliferating random walk strategies and is shown to perform best on regular grids in terms of the product metric of speed and efficiency.
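The proliferation mechanism can be sketched in a few lines of Python. This is a toy illustration with a constant spawn probability rather than the temporally modulated rate described above; all names and parameters are our own assumptions, not the authors' algorithm.

```python
import random

def proliferating_walk(size=50, steps=500, p_spawn=0.01, seed=1):
    """Toy proliferating random walk on a size x size periodic grid.

    Each walker takes one random step per tick; with probability
    p_spawn it also spawns a copy at its current site.  Returns
    (coverage, bandwidth), where coverage is the fraction of grid
    sites visited and bandwidth counts total walker-steps consumed.
    """
    rng = random.Random(seed)
    walkers = [(size // 2, size // 2)]
    visited = {walkers[0]}
    bandwidth = 0
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(steps):
        new_walkers = []
        for (x, y) in walkers:
            dx, dy = rng.choice(moves)
            x, y = (x + dx) % size, (y + dy) % size
            visited.add((x, y))
            bandwidth += 1
            new_walkers.append((x, y))
            if rng.random() < p_spawn:  # proliferation event
                new_walkers.append((x, y))
        walkers = new_walkers
    return len(visited) / size**2, bandwidth
```

Tuning p_spawn trades consumed bandwidth (total walker-steps) against coverage time, which is exactly the trade-off the B- and T-constrained problem formalizes.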
Workflow technology has recently been introduced in pervasive and mobile computing due to its inherent capabilities of task coordination and interoperability among heterogeneous resources. Pervasive and mobile applications require context, i.e., situation information, for adaptation in order to achieve their goals efficiently. Current workflow designers are very useful for designing rigid or evolving process models but provide little or no support for designing context-aware workflows. CAWD is a tool that supports the design of context-aware workflows by providing means for the integration and inference of context in workflow models, along with strategies for the dynamic selection, addition, or deletion of activities or services. The tool is developed on top of Windows Workflow Foundation (.NET 3.5) and can be used as a standalone application for designing and verifying context-aware workflow models. In this paper we describe the architecture and implementation details of this tool, supported by an example scenario.
In this work, we develop and evaluate a theoretical model, which we then use to study the impact of the synchronization frequency on the performance of dynamic self-scheduling algorithms. These algorithms are used to ...
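One concrete member of the dynamic self-scheduling family is guided self-scheduling, in which each requesting worker grabs ceil(remaining/P) iterations, with one synchronization per grab. A minimal sketch follows; the function name is ours, and the specific schemes and synchronization model the paper studies may differ.

```python
def guided_chunks(n_iters, n_workers):
    """Guided self-scheduling chunk sizes: each idle worker takes
    ceil(remaining / n_workers) iterations, so chunks shrink toward
    the end of the loop.  Each chunk grab costs one synchronization,
    which is the frequency such models trade off against balance."""
    chunks, remaining = [], n_iters
    while remaining > 0:
        c = max(1, -(-remaining // n_workers))  # ceiling division
        chunks.append(c)
        remaining -= c
    return chunks
```

Fewer, larger chunks mean fewer synchronizations but worse load balance; smaller chunks invert the trade-off.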
We use web-scale N-grams in a base NP parser that correctly analyzes 95.4% of the base NPs in natural text. Web-scale data improves accuracy; that is, there is no data like more data. Accuracy scales log-linearly with the number of parameters in the model (the number of unique N-grams). The web-scale N-grams are particularly helpful in harder cases, such as NPs that contain conjunctions.
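The idea of letting N-gram counts guide base NP analysis can be illustrated with a toy left- versus right-branching decision for three-word NPs. The counts below are hypothetical stand-ins for web-scale statistics, and this is not the paper's actual parser.

```python
def bracket_np(w1, w2, w3, count):
    """Choose a branching for the three-word NP (w1 w2 w3) by comparing
    adjacency counts: a stronger (w1, w2) association suggests the
    left-branching ((w1 w2) w3), otherwise (w1 (w2 w3))."""
    left = count.get((w1, w2), 0)
    right = count.get((w2, w3), 0)
    return f"(({w1} {w2}) {w3})" if left >= right else f"({w1} ({w2} {w3}))"

# Hypothetical bigram counts standing in for a web-scale N-gram table.
counts = {("crude", "oil"): 1000, ("oil", "prices"): 400,
          ("retired", "science"): 3, ("science", "teacher"): 120}
```

With these counts, "crude oil prices" comes out left-branching and "retired science teacher" right-branching; real systems combine many such count features rather than a single comparison.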
Porting an application written for a personal computer to embedded devices requires converting floating-point numbers and operations into fixed-point ones. Testing the conversion hence requires that the latter be as close...
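The conversion in question can be illustrated with a signed Q16.16 fixed-point sketch in Python; the format choice and names are our assumptions, not the paper's method.

```python
FRAC_BITS = 16  # Q16.16: 16 integer bits, 16 fractional bits

def to_fixed(x, frac_bits=FRAC_BITS):
    """Quantize a float to a fixed-point integer (round to nearest)."""
    return int(round(x * (1 << frac_bits)))

def from_fixed(n, frac_bits=FRAC_BITS):
    """Recover the real value represented by a fixed-point integer."""
    return n / (1 << frac_bits)

def fixed_mul(a, b, frac_bits=FRAC_BITS):
    """Fixed-point multiply: the raw product carries 2*frac_bits
    fractional bits, so shift right to renormalize."""
    return (a * b) >> frac_bits
```

Quantization error is bounded by half an LSB (2^-17 here), so testing such a conversion amounts to checking that fixed-point results stay within bounds like these of their floating-point counterparts.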
Online video archives provide a large amount of multimedia presentation content through the Internet. However, it takes users a long time to find what they really want to watch among the many presentation videos. We have been d...
This paper describes a supervised reinforcement-learning-based model for discrete environment domains. The model was tested within the domain of the backgammon game. Our results show that a supervised actor-critic learning model is capable of improving the initial performance and eventually reaching performance levels similar to those obtained by TD-Gammon, an artificial neural network (ANN) player trained by temporal differences.
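The actor-critic core of such a model can be sketched on a toy 5-state chain instead of backgammon. This omits the supervisor component and TD-Gammon's neural network entirely; all names and parameters are our assumptions.

```python
import math
import random

def train_chain(episodes=2000, alpha=0.1, beta=0.1, gamma=0.95, seed=0):
    """Minimal tabular actor-critic on a 5-state chain: episodes start
    in state 0 and earn reward 1 on reaching state 4.  The critic is a
    state-value table V; the actor holds softmax action preferences."""
    rng = random.Random(seed)
    n_states, actions = 5, (-1, +1)
    V = [0.0] * n_states                          # critic
    pref = [[0.0, 0.0] for _ in range(n_states)]  # actor preferences

    def policy(s):
        m = max(pref[s])
        e = [math.exp(p - m) for p in pref[s]]
        z = sum(e)
        return [x / z for x in e]

    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            pi = policy(s)
            a = 0 if rng.random() < pi[0] else 1
            s2 = min(max(s + actions[a], 0), n_states - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # TD error; terminal state contributes no bootstrap value.
            delta = r + gamma * V[s2] * (s2 != n_states - 1) - V[s]
            V[s] += alpha * delta                      # critic update
            pref[s][a] += beta * delta * (1 - pi[a])   # actor update
            s = s2
    return V, pref
```

After training, the actor prefers the rightward action in every non-terminal state and the critic's values increase toward the goal, which is the qualitative behavior a supervised variant would accelerate.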
Most search engines answer user queries by matching keywords against web pages, filtering out irrelevant pages with advanced algorithms. With state-of-the-art algorithms they can answer topic-wise queries efficiently and effectively. However, because their results depend entirely on the information available in web pages, they are weak at answering intelligent queries from the user: they either return inaccurate results or accurate but potentially unreliable ones. For keyword-based searches they typically return results from blogs (if available) or other discussion boards, which cannot satisfy users due to a lack of trust in such sources. Obtaining trusted results requires search engines to find pages that maintain such information explicitly, which in turn requires embedding domain knowledge in web pages to help search engines answer intelligent queries. The layered model of the Semantic Web provides a solution to this problem through tools and technologies that enable machine-readable semantics in current web content.
Recently, various sensors such as GPS and compass devices can be cost-effectively manufactured, which allows their deployment in conjunction with mobile video cameras. Hence, recorded clips can automatically be ann...
Textual entailment has recently been proposed as a common solution for modeling language variability in different NLP tasks. Textual entailment is formally defined as a relationship between a coherent text T and a language expression, the hypothesis H. T is said to entail H (T → H) if the meaning of H can be inferred from the meaning of T. An entailment function e(T,H) thus maps an entailment pair T-H to a truth value (i.e., true if the relationship holds, false otherwise). Alternatively, e(T,H) can also be interpreted as a probabilistic function mapping the pair T-H to a real value between 0 and 1, expressing the confidence with which a human judge or an automatic system estimates that the relationship holds.
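A crude stand-in for such a probabilistic entailment function is lexical overlap: the fraction of hypothesis tokens covered by the text. This is purely illustrative of the e(T,H) interface; real entailment systems use far richer evidence.

```python
def entail_confidence(text, hypothesis):
    """Toy probabilistic e(T, H): the fraction of hypothesis tokens
    that also occur in the text, read as a confidence in [0, 1]."""
    t = set(text.lower().split())
    h = set(hypothesis.lower().split())
    return len(h & t) / len(h) if h else 0.0

def entails(text, hypothesis, threshold=0.7):
    """Boolean e(T, H): threshold the confidence score."""
    return entail_confidence(text, hypothesis) >= threshold
```

For example, "the cat sat on the mat" fully covers the hypothesis "the cat sat" (confidence 1.0) but covers only one token of "the dog ran", so only the first pair is judged an entailment.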