ISBN (print): 9780889867307
This paper presents the design and implementation of a microcode-based and an FSM-based memory built-in self-test (BIST) controller using a Xilinx Spartan XC3S500E FPGA. The controllers are written in Very High Speed Integrated Circuit Hardware Description Language (VHDL) code and verified using the Xilinx ISE design tools. Synthesis and implementation on the field-programmable gate array (FPGA) device are carried out for several memory sizes, based on the March C-, March C, March X, MATS+ and MATS test pattern algorithms. Logic area utilization and flexibility of the two controllers are evaluated. We show that the microcode-based controller has better flexibility but occupies a larger logic area compared to the FSM-based controller.
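As an illustration of the kind of test pattern such controllers execute, the following is a minimal software model of the March C- element sequence run against a small simulated RAM. It is a Python sketch for clarity only; the paper's controllers are implemented in VHDL on the FPGA, not in software.

```python
# Minimal software model of the March C- test applied to a small RAM array.
# Illustrative sketch only; not the paper's microcode- or FSM-based hardware.

def march_c_minus(memory):
    """Run March C- over `memory` (a list of 0/1 cells); return True if it passes."""
    n = len(memory)
    up, down = range(n), range(n - 1, -1, -1)

    def element(order, read_expect, write_val):
        for addr in order:
            if read_expect is not None and memory[addr] != read_expect:
                return False                      # fault detected
            if write_val is not None:
                memory[addr] = write_val
        return True

    steps = [
        (up,   None, 0),    # any order: write 0
        (up,   0,    1),    # ascending: read 0, write 1
        (up,   1,    0),    # ascending: read 1, write 0
        (down, 0,    1),    # descending: read 0, write 1
        (down, 1,    0),    # descending: read 1, write 0
        (down, 0,    None)  # any order: read 0
    ]
    return all(element(*s) for s in steps)

if __name__ == "__main__":
    print(march_c_minus([0] * 64))   # a fault-free memory should print True
```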
ISBN (print): 9780889867307
Scheduling can be defined as the allocation of resources over time to perform a collection of tasks. It is a decision-making process whose goal is to optimize one or more objective functions. In manufacturing, the purpose of scheduling is to minimize production time and costs by telling a production facility what to make, when, with which staff, and on which equipment. Production scheduling aims to maximize the efficiency of the operation and reduce costs. In this paper, we solve a scheduling-in-manufacturing problem using a problem-solving method (PSM). We use a knowledge modeling approach, namely an ontology, to model scheduling and the PSM. An ontology can be seen as an information model that explicitly describes the various entities and abstractions that exist in scheduling and problem solving, along with their properties, while a problem-solving method (PSM) provides the vocabulary necessary to characterize the search-based problem-solving behavior of the scheduling task.
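As a rough illustration of the modeling idea only (not the ontology or PSM actually used in the paper), the sketch below represents tasks and machines as explicit entities and applies a simple greedy earliest-available-machine rule as the problem-solving step; all class and field names are hypothetical.

```python
# Illustrative sketch: a toy "ontology-style" scheduling model with a simple
# problem-solving method (greedy earliest-available-machine rule).
# Names and structure are hypothetical, not taken from the paper.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    duration: int        # processing time units

@dataclass
class Machine:
    name: str
    free_at: int = 0     # next time the machine becomes available

def schedule(tasks, machines):
    """Assign each task to the machine that becomes free earliest."""
    plan = []
    for task in tasks:
        m = min(machines, key=lambda mc: mc.free_at)
        start = m.free_at
        m.free_at = start + task.duration
        plan.append((task.name, m.name, start, m.free_at))
    return plan

if __name__ == "__main__":
    tasks = [Task("cut", 3), Task("weld", 2), Task("paint", 4)]
    machines = [Machine("M1"), Machine("M2")]
    for row in schedule(tasks, machines):
        print(row)   # (task, machine, start, finish)
```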
ISBN (print): 9780889867307
Globalization has led to unlimited information exchange between geographically remote locations and the prospect of a global common market. When constructing website applications for use in various industries, developers need to deal with a wide range of users from different countries. Thus, a multilingual system is implemented in order to provide a multilingual environment in those applications. However, it is time-consuming to define all the possible languages for a multilingual system manually, so it would be desirable to automate language identification for text-based documents. To address this need, we introduce letter-frequency-based language identification of Arabic script documents. The techniques used for identification are fuzzy ARTMAP and default ARTMAP, which belong to a family of neural network architectures that perform incremental supervised learning. Arabic script documents in languages such as Arabic, Persian and Urdu were used for performing language identification. From the experiments, we found that fuzzy ARTMAP performed better than default ARTMAP in Arabic script language identification.
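To make the letter-frequency feature concrete, the sketch below extracts normalized letter frequencies and matches them against per-language reference profiles with a simple nearest-profile rule; this rule is only a stand-in, since the fuzzy ARTMAP and default ARTMAP networks used in the paper are not reproduced here.

```python
# Illustrative sketch: letter-frequency features for language identification.
# A nearest-profile rule stands in for the ARTMAP classifiers of the paper.
from collections import Counter

def letter_freq(text):
    """Return a normalized frequency vector over the letters seen in `text`."""
    counts = Counter(ch for ch in text if ch.isalpha())
    total = sum(counts.values()) or 1
    return {ch: n / total for ch, n in counts.items()}

def distance(p, q):
    """Squared Euclidean distance between two sparse frequency vectors."""
    keys = set(p) | set(q)
    return sum((p.get(k, 0.0) - q.get(k, 0.0)) ** 2 for k in keys)

def identify(text, profiles):
    """profiles: {language_name: reference frequency dict built from training text}."""
    f = letter_freq(text)
    return min(profiles, key=lambda lang: distance(f, profiles[lang]))
```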
ISBN (print): 9780889867307
Reliability in computer or engineering systems is undoubtedly a key requirement in the development process. Safety within critical control systems, and reliable data transfers, require tolerance to unexpected and unwanted phenomena. In biology, new cells can replace damaged cells [1], and DNA is able to repair and replicate with error control [1]. These processes are essential to maintain the overall organism. Biology has often been a successful inspiration in computation (artificial neural networks, genetic algorithms, ant colony optimisation, etc.), although conventional computation differs widely from natural computation. In this respect, [2] introduced systemic computation (SC), a model of interacting systems with natural characteristics, and suggested a new computer architecture. Following this work, [3] introduced a systemic computer as a virtual machine running on conventional computers. In this paper we show, using a genetic algorithm implementation running on this platform, how crash-proof programs following the SC paradigm have native fault tolerance and easily integrated self-maintenance.
ISBN (print): 9780889867307
Dynamic parallelization and optimization of loops is a crucial issue for enhancing the performance of sequential programs, as loops account for a large fraction of execution time. Loop-level parallelism can also be extracted efficiently due to its regular structure. Based on the observation that only a limited number of paths are executed frequently in hot loops, we propose a hardware hot loop path detector that identifies such hot loops and their hot paths accurately, so that a dynamic optimizer may utilize the detected information effectively. The detector consists of a stack-structured bit-tracing unit that identifies loop paths at the subroutine level, a hot loop detector that detects hot loops by utilizing loop path information, and a hot path accumulator of loop paths. Experiments using SPEC CINT2000 show that loop paths occupy a small fraction (14.46%) of Ball-Larus paths but are detected frequently (64.45% of Ball-Larus paths). A combined small-scale hot loop detector and hot path accumulator (32 entries each) attain a detection accuracy of 97.10% for the hottest loop path and 93.83% for the top 2 hottest loop paths and their order within hot loops.
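As a software analogue of the accumulate-and-rank idea (the paper's detector is a hardware unit, so this is an assumption-laden sketch), the snippet below records the sequence of branch outcomes taken in each loop iteration and reports the hottest loop paths.

```python
# Illustrative software analogue of a hot-loop-path profiler: count how often
# each distinct path (sequence of branch outcomes) through a loop body occurs,
# then report the hottest paths. Only mirrors the idea, not the hardware unit.
from collections import Counter

class LoopPathProfiler:
    def __init__(self, top_k=2):
        self.counts = Counter()
        self.current = []          # branch outcomes in the current iteration
        self.top_k = top_k

    def record_branch(self, branch_id, taken):
        self.current.append((branch_id, taken))

    def end_iteration(self):
        self.counts[tuple(self.current)] += 1
        self.current = []

    def hottest(self):
        return self.counts.most_common(self.top_k)

# Usage sketch: an instrumented loop with one rarely-taken branch
profiler = LoopPathProfiler()
for i in range(1000):
    profiler.record_branch("b0", i % 10 == 0)
    profiler.end_iteration()
print(profiler.hottest())
```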
ISBN (print): 9780889867307
This paper presents the application of a Support Vector Machine classifier to a security surveillance system. Recently, research in image processing has raised much interest in the security surveillance systems community. Weapon detection is one of the greatest challenges faced by the community. In order to address this issue, the widely used Support Vector Machine classifier is applied to the problem of detecting dangerous weapons. In this paper, we take advantage of the classifier to categorize image objects with the hope of detecting dangerous weapons effectively. In order to validate the effectiveness of the Support Vector Machine classifier, several other classifiers are used to compare the overall accuracy of the system. These classifiers include Neural Network, Decision Tree, Naïve Bayes and k-Nearest Neighbor methods. The final outcome of this research clearly indicates that the Support Vector Machine is able to improve classification accuracy using the extracted features.
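A minimal sketch of this kind of evaluation might look like the following, assuming pre-extracted image feature vectors and using scikit-learn (which the paper does not mention) to compare an SVM against one of the baseline classifiers; the data below are placeholders, not the paper's dataset.

```python
# Illustrative sketch: train an SVM on pre-extracted feature vectors and
# compare its accuracy with a baseline classifier. Placeholder random data
# stands in for real image features.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))            # placeholder feature vectors
y = rng.integers(0, 2, size=200)          # 1 = weapon present, 0 = absent

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("k-NN", KNeighborsClassifier(n_neighbors=5))]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```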
ISBN (print): 9780889866997
The advancement of technology has led to the conversion of traditional courses into Web courses. This conversion is becoming easier and occurring more systematically in higher education. A growing demand for continuing education is changing the characteristic structure of the tertiary student population, with more students working full time and carrying family responsibilities. In addition, education is increasingly embracing active learning models over the traditional transmission mode of instruction. These factors are making Web-based courses easier to implement, more desirable, and pedagogically relevant. However, as the medium is relatively new, it is not yet precisely known which variables contribute most to students' learning. With the prospect of more Web-based courses being taken by an increasing number of students, it is important to gain a better understanding of how Web-based courses influence students' learning [1]. Also, research is needed to obtain more understanding of the learning factors that influence student success in Web-based learning. Web-based learning has been suggested to be the future of all types of distance learning [2].
ISBN (print): 9780889867307
With more software architects and developers coding for parallel execution, how fairly tasks are scheduled by the operating system becomes an important criterion. Software code may comprise small sections that are parallelizable, and every possible performance gain should be exploited by the software developer. In order to exploit fine-grained parallelism, software developers need the confidence that the operating system is able to fairly schedule their parallelized tasks. Most schedulers attempt to allocate resources to tasks fairly based on each task's priority. However, this fairness cannot be achieved in an ideal manner and hence is only approximate. Actual experience with various schedulers varies and, currently, there is no tool to qualitatively measure and compare them. This paper presents a tool that measures fairness and provides an intuitive representation of the results through the comparison of two different kernel schedulers of the open-source Linux operating system.
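One common way to quantify fairness from measured per-task CPU times is Jain's fairness index; the sketch below computes it, though the paper's own tool and metric may well differ.

```python
# Illustrative sketch: Jain's fairness index over measured per-task CPU times.
# This is one standard fairness metric, not necessarily the one used by the
# tool described in the paper.

def jains_index(cpu_times):
    """Return a value in (0, 1]; 1.0 means perfectly equal CPU allocation."""
    n = len(cpu_times)
    total = sum(cpu_times)
    squares = sum(t * t for t in cpu_times)
    return (total * total) / (n * squares) if squares else 0.0

# Example: four equal-priority tasks, one of which was starved
print(jains_index([2.0, 2.1, 1.9, 0.4]))
```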
ISBN (print): 9780889867307
This paper explores a new approach that uses a multiobjective evolutionary algorithm (MOEA) to evolve robot controllers that perform a phototaxis task while avoiding obstacles or navigating through a maze in a simulated environment. The aim is to overcome problems involving more than one objective, where the objectives usually trade off against each other and are expressed in different units. Experiments were conducted in six sets within a 10% noise environment with different task environment complexities to investigate whether the MOEA is effective for controller synthesis. A simulated Khepera robot is evolved by a Pareto-frontier Differential Evolution (PDE) algorithm and controlled by a 3-layer feed-forward artificial neural network, attempting to simultaneously fulfill two conflicting objectives, maximizing the robot's phototaxis behavior while minimizing the number of the neural network's hidden neurons, by generating a Pareto optimal set of controllers. Results showed that robot controllers could be successfully developed using the MOEA.
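To illustrate how two conflicting objectives yield a Pareto optimal set, the sketch below applies a plain Pareto-dominance filter to hypothetical (phototaxis fitness, hidden-neuron count) pairs; the PDE algorithm and the neural controller themselves are not reproduced here.

```python
# Illustrative sketch: Pareto-dominance filtering for two objectives of the
# kind named in the paper (maximize phototaxis fitness, minimize hidden
# neurons). Candidate values below are hypothetical.

def dominates(a, b):
    """a, b: (fitness, hidden_neurons). a dominates b if it is no worse in
    both objectives and strictly better in at least one."""
    no_worse = a[0] >= b[0] and a[1] <= b[1]
    strictly_better = a[0] > b[0] or a[1] < b[1]
    return no_worse and strictly_better

def pareto_front(population):
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

candidates = [(0.92, 6), (0.88, 3), (0.95, 9), (0.80, 3)]
print(pareto_front(candidates))   # non-dominated (fitness, neurons) pairs
```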
ISBN (print): 9780889867307
Most of the existing summarization tools serve as general-purpose summarizers, rarely as domain-specific summarizers, e.g. summarizers for medical [14] and law [15] documents. This paper describes a framework for automatic summary generation in one specific domain, namely oil palm literature. In order to support the whole framework, an oil palm corpus is developed. The work is based on two different paradigms, extraction and abstraction. By incorporating these two important methods in one summarization framework, the quality of the produced summary is greatly improved. A Nearly-New IE (ANNIE) is used as the backbone of the extraction process. The sentences are then ranked for potential inclusion in the summary using a weighted word frequency measure known as Term Frequency-Inverse Document Frequency (TF-IDF). In the abstraction process, the oil palm corpus is used to support the summarization procedure. Using the training corpus, the output will be more precise and may gather all the important facts from the pre-determined information retrieval process.
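The extraction step's sentence ranking can be sketched as summed TF-IDF weights per sentence, as below; corpus handling, ANNIE and the abstraction stage are outside the scope of this illustration, and the exact weighting the paper uses may differ.

```python
# Illustrative sketch: rank sentences by summed TF-IDF weights of their words
# and keep the top-scoring ones, in document order, as an extractive summary.
import math
import re
from collections import Counter

def tfidf_rank(sentences, top_n=2):
    docs = [re.findall(r"[a-z]+", s.lower()) for s in sentences]
    n = len(docs)
    df = Counter(word for doc in docs for word in set(doc))
    scores = []
    for i, doc in enumerate(docs):
        if not doc:
            scores.append((0.0, i))
            continue
        tf = Counter(doc)
        score = sum((tf[w] / len(doc)) * math.log(n / df[w]) for w in tf)
        scores.append((score, i))
    keep = sorted(i for _, i in sorted(scores, reverse=True)[:top_n])
    return [sentences[i] for i in keep]
```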