Increasingly, embedded devices are turning to two technologies to achieve high performance and to enable efficient programmability and product usability. The first is multi-core processing on FPGA devices, in which the multi-core architecture allows software to map application-level parallelism onto the inherently parallel fabric for better performance, while re-configurability leads to flexible and adaptive designs. The second is wireless communications, which allow sensors to be distributed flexibly across a structure, for example in the case of a body area network. This paper describes the ongoing design of a multi-RF-channel, multi-core embedded system that will serve as a generic FPGA solution to meet the requirements of both e-health applications and robotics applications.
This paper presents a novel approach for the classification of religious scriptures, the Hadith (sayings of the Prophet Muhammad; plural Ahadith). Muhadith is a distributed, Cloud-based expert system that uses the Hadith sciences to classify Ahadith among 24 types drawn from seven broad categories. Classification of the Hadith is a complex and sensitive task that can only be performed by an expert in the Hadith sciences. The Muhadith expert system is designed to imitate Hadith experts in Hadith classification, enabling a computer to behave like a Hadith expert and discriminate authentic Ahadith from unauthentic ones. This paper presents the relationship and mapping of expert system technology onto the Hadith sciences, and the technicalities involved in designing the Muhadith expert system. We also propose solutions for the communication and interoperability problems faced by legacy web-based distributed expert systems: we employ a service-oriented architecture to overcome the communication problem, and the system is a candidate for delivery as Software as a Service (SaaS) in Cloud computing. The expert system also provides a reasoning facility that enables the user to inspect the classification details. The Muhadith expert system has been designed by merging ideas from the domains of expert systems, Web technologies, and distributed computing systems. Efforts of this type are rare, and applying them in the domain of Hadith is our humble contribution.
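To make the expert-system idea concrete, here is a minimal rule-based classification sketch. Everything in it is an illustrative assumption: the record fields, the rules, and the two output labels are invented for this example and greatly oversimplify the Hadith sciences; the abstract does not disclose Muhadith's actual rule base or interfaces.

```python
from dataclasses import dataclass

# Hypothetical, drastically simplified Hadith record; these field names
# are assumptions, not the Muhadith system's actual data model.
@dataclass
class Hadith:
    text: str
    chain_unbroken: bool          # is the chain of narrators continuous?
    narrators_trustworthy: bool   # are all narrators deemed reliable?

def classify(h: Hadith) -> str:
    """Toy forward-chaining rule: only two of the 24 types are modeled."""
    if h.chain_unbroken and h.narrators_trustworthy:
        return "authentic"
    return "unauthentic"

print(classify(Hadith("...", True, True)))    # -> authentic
print(classify(Hadith("...", True, False)))   # -> unauthentic
```

A real expert system of this kind would encode many more rules and expose the fired rules through the reasoning facility the abstract mentions, so the user can trace how a classification was reached.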
An important asset in the skill set of any software project manager is the ability to estimate, with reasonable accuracy, the effort required to develop a software application. Acquiring this asset, however, requires a thorough understanding of the factors that may affect the accuracy of these estimates. This paper presents the results of an empirical study conducted to determine the causes of variation in the accuracy of effort estimates for different application and task types. A Pakistani software house that specializes in developing financial transaction processing applications is chosen for this empirical study. Actual and estimated values for software development effort are gathered and analyzed for four different types of applications - web-based, database, parallel processing, and telephony - each having six different types of tasks: business development, new features, usability, security, support, and performance. Over 1000 data points are considered. Analysis of the results reveals, for instance, that the effort for web-based applications is mostly underestimated, while the effort for telephony applications is mostly overestimated. The underestimation in web-based applications is usually due to a failure to account for the learning curve associated with rapidly changing web technologies, while the overestimation in telephony applications is usually due to a failure to account for the usage of third-party components.
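The abstract does not say which accuracy metric the study applies; a common choice for this kind of analysis is the signed relative error between estimated and actual effort, grouped by application type. The sketch below uses that metric with invented figures, purely to illustrate how under- and overestimation tendencies can be read off such data.

```python
from collections import defaultdict

# Illustrative records: (application_type, estimated_hours, actual_hours).
# The numbers are invented; the study's real data set has over 1000 points.
records = [
    ("web-based", 40, 55),
    ("web-based", 30, 38),
    ("telephony", 60, 45),
    ("telephony", 25, 20),
]

# Signed relative error: negative -> underestimated, positive -> overestimated.
by_type = defaultdict(list)
for app_type, estimated, actual in records:
    by_type[app_type].append((estimated - actual) / actual)

for app_type, errors in by_type.items():
    mean = sum(errors) / len(errors)
    tendency = "underestimated" if mean < 0 else "overestimated"
    print(f"{app_type}: mean signed error {mean:+.0%} ({tendency})")
```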
The new science gateway MoSGrid (Molecular Simulation Grid) enables users to submit and process molecular simulation studies on a large scale. A conformational analysis of guanidine zinc complexes, which are active ca...
Every day, we create 2.5 quintillion bytes of data - so much that 90% of the data in the world today has been created in the last two years alone. This data comes from everywhere: sensors used to gather climate information, posts to social media sites, digital pictures and videos, purchase transaction records, and cell phone GPS signals, to name a few. The IDC sizing of the digital universe - information that is either created or captured in digital form and then replicated - was 161 exabytes in 2006, growing to 988 exabytes in 2010, representing a compound annual growth rate (CAGR) of 57%. A variety of system architectures have been implemented for data-intensive computing and large-scale data analysis applications, including parallel and distributed relational database management systems, which have been available to run on shared-nothing clusters of processing nodes for more than two decades. However, most data growth is in unstructured form, and new processing paradigms with more flexible data models were needed. Several solutions have emerged, including the MapReduce architecture pioneered by Google and now available in an open-source implementation called Hadoop, used by Yahoo, Facebook, and others. 20% of the world's servers go into the huge data centers run by the “Big 5” - Google, Microsoft, Yahoo, Amazon, and eBay [1].
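As a concrete illustration of the MapReduce model mentioned above, the following sketch implements the canonical word-count example in plain Python. It mimics the map, shuffle, and reduce phases that a framework such as Hadoop would distribute across a cluster; this is a single-process illustration under that assumption, not Hadoop's actual API.

```python
from collections import defaultdict
from itertools import chain

def map_phase(document: str):
    # Map: emit a (key, value) pair for every word in the input split.
    return [(word.lower(), 1) for word in document.split()]

def shuffle(pairs):
    # Shuffle: group all values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Reduce: aggregate all values emitted for one key.
    return key, sum(values)

documents = ["big data needs parallel processing",
             "parallel processing of big data"]
pairs = chain.from_iterable(map_phase(d) for d in documents)
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts)  # {'big': 2, 'data': 2, 'parallel': 2, 'processing': 2, ...}
```

The appeal for unstructured data is that the mapper imposes whatever structure it needs at read time, rather than requiring a fixed schema up front as a relational system does.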
ISBN (print): 9781450313070
The purpose of this talk is to provide a comprehensive state of the art concerning the evolution of data management systems from uni-processor systems to large-scale distributed systems. We focus our study on query processing and optimization methods. For each environment, we recall the motivations and point out the main characteristics of the proposed methods: in particular, the nature of decision-making (centralized or decentralized control for a high level of scalability), the adaptivity level (intra-operator and/or inter-operator), the impact of parallelism (partitioned and pipelined parallelism), and the dynamicity (e.g., elasticity) of execution models.
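The distinction between partitioned and pipelined parallelism that the talk examines can be sketched in a few lines of Python. The hash partitioning and generator pipeline below are illustrative stand-ins for real query-engine machinery, not any particular system's execution model.

```python
# Partitioned parallelism: the SAME operator runs on disjoint partitions of
# the data. Hash partitioning into two "workers" is simulated here; each
# partial sum is computed sequentially but could run on a separate node.
rows = [("alice", 5), ("bob", 3), ("alice", 2), ("carol", 7)]
partitions = [[], []]
for row in rows:
    partitions[hash(row[0]) % 2].append(row)
partial_sums = [sum(v for _, v in p) for p in partitions]
print(sum(partial_sums))  # 17, same result as an unpartitioned scan

# Pipelined parallelism: DIFFERENT operators (scan -> filter -> project)
# are chained so tuples flow between them without materializing results.
def scan(table):              # producer
    yield from table
def filter_op(tuples):        # consumer/producer: keep values above 2
    return (t for t in tuples if t[1] > 2)
def project(tuples):          # keep only the key column
    return (t[0] for t in tuples)

print(list(project(filter_op(scan(rows)))))  # ['alice', 'bob', 'carol']
```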
The sizes of databases have seen exponential growth in the past, and such growth is expected to accelerate in the future, with the steady drop in storage cost accompanied by a rapid increase in storage capacity. Many years ago, a terabyte database was considered to be large, but nowadays such databases are sometimes regarded as small, and the daily volumes of data being added to some databases are measured in terabytes. In the future, petabyte and exabyte databases will be common. With such volumes of data, it is evident that the sequential processing paradigm will be unable to cope; for example, even assuming a data rate of 1 gigabyte per second, reading through a petabyte database will take over 10 days. To effectively manage such volumes of data, it is necessary to allocate multiple resources to it, very often massively so. The processing of databases of such astronomical proportions requires an understanding of how high-performance systems and parallelism work. Besides the massive volume of data in the database to be processed, some data is distributed across the globe in a Grid environment. These massive data centres are also part of the emergence of Cloud computing, where data access has shifted from local machines to powerful servers hosting web applications and services, making data access across the Internet using standard web browsers pervasive. This adds another dimension to such systems. This talk, based on our recently published book [1], discusses a fundamental understanding of parallelism in data-intensive applications, and demonstrates how to develop faster capabilities to support them. This includes the importance of indexing in parallel systems [2-4], specialized algorithms to support various kinds of query processing [5-9], as well as object-oriented schemes [10-12]. Parallelism in databases has been around since the early 1980s, when many researchers in this area aspired to build large special-purpose database machines -- databases employing dedicated specialized
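The scan-time claim is simple arithmetic. Assuming decimal units and the 1 GB/s sequential read rate above, the figures work out as follows:

```latex
\[
\frac{1\,\text{PB}}{1\,\text{GB/s}}
  = \frac{10^{15}\,\text{B}}{10^{9}\,\text{B/s}}
  = 10^{6}\,\text{s} \approx 11.6\,\text{days},
\qquad
\frac{1\,\text{EB}}{1\,\text{GB/s}}
  = 10^{9}\,\text{s} \approx 31.7\,\text{years}.
\]
```

Splitting the scan across $n$ nodes divides these times by roughly $n$, which is the core motivation for allocating massively parallel resources to such databases.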
ISBN (print): 9780889868649
This paper introduces invasive computing, a new paradigm for programming parallel architectures. The goals are to enable the development and execution of resource-aware programs that can dynamically allocate new resources in phases with more parallelism and free them again afterwards. To allocate more resources, applications use the invade operation; to free them, they use retreat. The research is conducted within the framework of the Transregional Collaborative Research Centre 89, funded by the German Science Foundation.
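The invade/retreat idiom can be illustrated with a small sketch. The InvasiveRuntime class and its method signatures below are hypothetical stand-ins invented for this example; the actual interfaces of the invasive computing project differ.

```python
from concurrent.futures import ThreadPoolExecutor

class InvasiveRuntime:
    """Hypothetical resource manager imitating invade/retreat."""
    def __init__(self, total_cores: int):
        self.free_cores = total_cores

    def invade(self, wanted: int) -> int:
        """Claim up to `wanted` cores; return how many were granted."""
        granted = min(wanted, self.free_cores)
        self.free_cores -= granted
        return granted

    def retreat(self, cores: int) -> None:
        """Return cores to the pool once the parallel phase is over."""
        self.free_cores += cores

rt = InvasiveRuntime(total_cores=8)
granted = rt.invade(4)                  # entering a highly parallel phase
with ThreadPoolExecutor(max_workers=granted) as pool:
    squares = list(pool.map(lambda x: x * x, range(8)))
rt.retreat(granted)                     # phase done: free the resources
print(squares, rt.free_cores)           # [0, 1, 4, ..., 49] 8
```

The point of the paradigm is that the application itself, knowing where its parallel phases are, negotiates resources with the runtime rather than receiving a fixed allocation for its whole lifetime.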
This paper discusses a parallel immune algorithm (IA) for the detection of lung cancer in chest X-ray images based on an object shared space. The template matching method is combined with the algorithm, and JavaSpaces is used a...
The paper presents SIGMA (Semantic Government Mash-up Application), a platform able to create mash-ups by providing access to open governmental data. The proposed solution is based on the existing semantic Web technol...