ISBN (print): 159593409X
The conventional ways of building software are accepted to produce rigid systems that impede the processes of change typical for contemporary organisations. In this paper, we propose that software can be made more adaptable and tuned to the needs of changing organisations, if it is built using organisation-inspired principles and software structures such as Virtual Organisations, roles and norms. Agent-based software engineering is already using these principles, and we extend the state of the art in that domain by proposing an "open systems" approach, where agents can join and leave Virtual Organisations at will, taking on different roles as needed. Reasoning on organisational roles and norms is facilitated by formalised contract templates and automatic conflict resolution strategies. In terms of overall lifecycle, a system is initiated to satisfy a set of formalised requirements. Agents respond to bids for joining a Virtual Organisation, where each bid is for a contract-based coalition. In this paper, we describe our approach and outline a set of research challenges. Copyright 2006 ACM.
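The contract-based reasoning over roles and norms mentioned in the abstract can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's formalism: norms are reduced to (role, modality, action) triples, and a direct conflict is the same role being simultaneously obliged to and forbidden from the same action.

```python
from itertools import combinations

# A norm binds a role to an action under a deontic modality.
# Modalities here are limited to "obliged" and "forbidden" (an assumption).
def conflicts(contract):
    """Return (role, action) pairs where the contract both obliges and
    forbids the same role to perform the same action."""
    out = []
    for (r1, m1, a1), (r2, m2, a2) in combinations(contract, 2):
        if r1 == r2 and a1 == a2 and {m1, m2} == {"obliged", "forbidden"}:
            out.append((r1, a1))
    return out

contract = [
    ("seller", "obliged", "deliver_goods"),
    ("seller", "forbidden", "deliver_goods"),  # clashes with the norm above
    ("buyer", "obliged", "pay"),
]
print(conflicts(contract))  # [('seller', 'deliver_goods')]
```

A real conflict-resolution strategy would then rank the clashing norms (e.g. by contract priority) rather than merely report them.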
ISBN (print): 9781595933232
We study how to design experiments to measure the success rates of phishing attacks that are ethical and accurate, two requirements of contradictory forces. Namely, an ethical experiment must not expose the participants to any risk; it should be possible for the participants, or representatives thereof, to verify locally that this was the case. At the same time, an experiment is accurate if it is possible to argue why its success rate is not an upper or lower bound of that of a real attack - this may be difficult if the ethics considerations make the user perception of the experiment different from the user perception of the attack. We introduce several experimental techniques that allow us to achieve a balance between these two requirements, and demonstrate how to apply them using a context-aware phishing experiment on a popular online auction site which we call "rOnl". Our experiments exhibit a measured average yield of 11% per collection of unique users. This study was authorized by the Human Subjects Committee at Indiana University (Study #05-10306).
ISBN (print): 1860946232
One of the main issues in comparative genomics is the study of chromosomal gene order in one or more related species. Identifying sets of orthologous genes across several genomes has recently become increasingly important, since a cluster of similar genes helps us to predict the function of unknown genes. For this purpose, whole-genome alignment is usually used to determine horizontal gene transfer, gene duplication, and gene loss between two related genomes. It is also well known that a novel visualization tool for whole-genome alignment would be very useful for understanding genome organization and the evolutionary process. In this paper, we propose a method for identifying and visualizing whole-genome alignments, especially for detecting gene clusters between two aligned genomes. Since current rigorous algorithms for finding gene clusters impose strong and artificial constraints, they are not useful for coping with "noisy" alignments. We developed the system AlignScope to provide a simplified structure for genome alignment at any level, and also to help us find gene clusters. In our experiments, we have tested AlignScope on several microbial genomes.
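The notion of a gene cluster shared by two aligned genomes can be sketched in a simplified form. This is an illustrative heuristic, not AlignScope's algorithm: genes adjacent in one genome are grouped when their orthologs lie within a small positional gap in the other, which tolerates mildly "noisy" orderings.

```python
def gene_clusters(order_a, order_b, max_gap=1, min_len=2):
    """Group genes that are adjacent in genome A and whose orthologs sit
    within max_gap positions of each other in genome B."""
    pos_b = {g: i for i, g in enumerate(order_b)}
    clusters, run = [], []
    for g in order_a:
        if g in pos_b and run and abs(pos_b[g] - pos_b[run[-1]]) <= max_gap:
            run.append(g)                 # extend the current cluster
        else:
            if len(run) >= min_len:
                clusters.append(run)      # close a finished cluster
            run = [g] if g in pos_b else []
    if len(run) >= min_len:
        clusters.append(run)
    return clusters

# Orthologs a,b are adjacent in both genomes; so are d,e (order reversed).
print(gene_clusters(["a", "b", "c", "x", "d", "e"],
                    ["b", "a", "c", "e", "d", "y"]))  # [['a', 'b'], ['d', 'e']]
```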
Evaluation of fluid flow through micromachined nanoporous membranes using a custom-built automated system designed specifically for nanofluidic applications is presented. The nanoporous membranes are fabricated by a p...
ISBN (print): 1595933530
Most ontology development methodologies and tools for ontology management deal with ontology snapshots, i.e. they model and manage only the most recent version of ontologies, which is inadequate for contexts where the history of the ontology is of interest, such as historical archives. This work presents a model of entity and relationship timelines in the Protégé tool, complemented with a visualization plug-in that enables users to examine entity evolution along the timeline. Copyright 2006 ACM.
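The timeline idea can be sketched as a minimal data structure. The class and its fields are hypothetical, not the Protégé plug-in's representation: each entity keeps a chronologically ordered list of dated states, and a point query returns the state in force at a given year.

```python
from bisect import bisect_right
from dataclasses import dataclass, field

@dataclass
class EntityTimeline:
    """Versioned entity: each change is recorded with the year it took effect."""
    name: str
    _years: list = field(default_factory=list)   # sorted change years
    _states: list = field(default_factory=list)  # state after each change

    def record(self, year, state):
        # Assumes changes are appended in chronological order.
        self._years.append(year)
        self._states.append(state)

    def state_at(self, year):
        """State as of the given year, or None before the entity existed."""
        i = bisect_right(self._years, year) - 1
        return self._states[i] if i >= 0 else None

# Example from a historical-archive setting: a city renamed over time.
city = EntityTimeline("city")
city.record(1703, {"label": "Saint Petersburg"})
city.record(1914, {"label": "Petrograd"})
city.record(1924, {"label": "Leningrad"})
city.record(1991, {"label": "Saint Petersburg"})
print(city.state_at(1920)["label"])  # Petrograd
```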
ISBN (print): 1595933832
Reinforcement learning (RL) was originally proposed as a framework to allow agents to learn in an online fashion as they interact with their environment. Existing RL algorithms fall short of achieving this goal because the amount of exploration required is often too costly and/or too time consuming for online learning. As a result, RL is mostly used for offline learning in simulated environments. We propose a new algorithm, called BEETLE, for effective online learning that is computationally efficient while minimizing the amount of exploration. We take a Bayesian model-based approach, framing RL as a partially observable Markov decision process. Our two main contributions are the analytical derivation showing that the optimal value function is the upper envelope of a set of multivariate polynomials, and an efficient point-based value iteration algorithm that exploits this simple parameterization.
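The Bayesian model-based setting can be sketched at its simplest. This is not BEETLE itself (which characterizes the optimal value function and applies point-based value iteration); it only shows the underlying posterior bookkeeping, assuming a Dirichlet prior over transition probabilities for each state-action pair.

```python
import random
from collections import defaultdict

class BayesDirichletModel:
    """Bayesian model of an unknown MDP's transitions: one Dirichlet
    posterior per (state, action), starting from a uniform prior."""
    def __init__(self, n_states, prior=1.0):
        self.counts = defaultdict(lambda: [prior] * n_states)

    def update(self, s, a, s_next):
        # Observing a transition adds one pseudo-count to the posterior.
        self.counts[(s, a)][s_next] += 1

    def mean(self, s, a):
        """Posterior mean transition distribution for (s, a)."""
        c = self.counts[(s, a)]
        tot = sum(c)
        return [x / tot for x in c]

    def sample(self, s, a):
        """One posterior draw (Thompson-style exploration)."""
        draws = [random.gammavariate(x, 1.0) for x in self.counts[(s, a)]]
        tot = sum(draws)
        return [d / tot for d in draws]

m = BayesDirichletModel(n_states=2)
for _ in range(8):
    m.update(0, 0, 1)      # action 0 in state 0 keeps leading to state 1
print(m.mean(0, 0))        # [0.1, 0.9]
```

Planning with the full posterior, rather than a point estimate, is what makes the exact problem a partially observable MDP.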
ISBN (print): 1595933395
Maximum margin discriminant analysis (MMDA) uses the margin idea for feature extraction, and often outperforms traditional methods like kernel principal component analysis (KPCA) and kernel Fisher discriminant analysis (KFD). However, as in other kernel methods, its time complexity is cubic in the number of training points m, and it is thus computationally inefficient on massive data sets. In this paper, we propose a (1 + ε)²-approximation algorithm for obtaining the MMDA features by extending the core vector machines. The resultant time complexity is only linear in m, while its space complexity is independent of m. Extensive comparisons with the original MMDA, KPCA, and KFD on a number of large data sets show that the proposed feature extractor can improve classification accuracy, and is also faster than these kernel-based methods by more than an order of magnitude. Copyright 2006 ACM.
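Core vector machines rest on a (1 + ε)-approximate minimum enclosing ball computed from a small core-set, which is what makes the linear-in-m running time possible. As a rough illustration of that primitive only (not of the MMDA extension itself), the Badoiu-Clarkson iteration can be sketched as:

```python
import numpy as np

def meb_approx(points, eps=0.1):
    """Badoiu-Clarkson (1+eps)-approximation of the minimum enclosing
    ball: repeatedly nudge the center toward the farthest point."""
    c = points[0].astype(float)            # start at an arbitrary point
    for i in range(1, int(np.ceil(1.0 / eps ** 2)) + 1):
        far = points[np.argmax(np.linalg.norm(points - c, axis=1))]
        c += (far - c) / (i + 1)           # shrinking step toward farthest point
    r = np.linalg.norm(points - c, axis=1).max()
    return c, r

# Optimal ball for these three points has center (1, 0) and radius 1.
pts = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.0]])
c, r = meb_approx(pts, eps=0.05)
# r is within a (1 + eps) factor of the optimal radius 1
```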
ISBN (print): 9781424400546
This paper proposes a novel strategy that uses hypergraph partitioning and K-way iterative mapping-refinement heuristics for scheduling a batch of data-intensive tasks with batch-shared I/O behavior on heterogeneous collections of storage and compute clusters. The strategy formulates file sharing among tasks as a hypergraph to minimize the I/O overheads due to duplicate file transfers and employs a K-way iterative mapping-refinement scheme to adapt to the heterogeneity of compute clusters and storage networks in the system. We evaluate the proposed approach through real experiments and simulations on application scenarios from two application domains: satellite data processing and biomedical imaging. Our experimental results show that our approach can achieve significant performance improvement over algorithms such as HPS, Shortest Job First, MinMin, MaxMin and Sufferage for workloads with a high degree of shared I/O among tasks.
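The hypergraph formulation of batch-shared I/O can be sketched directly. The helper names and cost model below are assumptions for illustration: tasks are vertices, each file is a hyperedge over the tasks that read it, and a hyperedge cut across clusters costs one extra transfer of that file per additional cluster.

```python
def shared_io_hypergraph(task_files):
    """Tasks are vertices; each file becomes a hyperedge connecting
    every task that reads it."""
    hyperedges = {}
    for task, files in task_files.items():
        for f in files:
            hyperedges.setdefault(f, set()).add(task)
    return hyperedges

def duplicate_transfer_cost(hyperedges, placement, sizes):
    """Bytes transferred more than once: each file is fetched once per
    cluster hosting at least one task that reads it."""
    total = 0
    for f, tasks in hyperedges.items():
        clusters = {placement[t] for t in tasks}
        total += (len(clusters) - 1) * sizes[f]
    return total

tasks = {"t1": ["a", "b"], "t2": ["b", "c"], "t3": ["c"]}
sizes = {"a": 10, "b": 40, "c": 20}
h = shared_io_hypergraph(tasks)
# Splitting t1/t2 across clusters re-transfers the shared file b:
print(duplicate_transfer_cost(h, {"t1": 0, "t2": 1, "t3": 1}, sizes))  # 40
```

A partitioner then searches for a placement minimizing this cut cost, subject to load balance across the clusters.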
Although message passing using MPI is the dominant model for parallel programming today, the significant effort required to develop high-performance MPI applications has prompted the development of several parallel pr...
Breast ultrasound (US) in conjunction with digital mammography has come to be regarded as the gold standard for breast cancer diagnosis. While breast US has certain advantages over digital mammography, it suffers from...