FluidDB is a new structured storage system, available online for a limited alpha test, designed to easily store objects and the relations among them (using tags). It is accessible through a simple REST interface, which is usually wrapped in a high-level language library. These features make it an ideal candidate for acting as the substrate of a persistent, pool-based evolutionary algorithm, the Fluid Evolutionary Algorithm, presented for the first time in this paper. Our objective is to present a proof of concept and also to show how design decisions (for instance, how and how often to use the pool) affect running time and algorithmic performance; we also show how FluidDB features positively affect algorithm design. These measures are mainly intended as a baseline, which can be improved on as FluidDB and fluid evolutionary algorithms (co-)evolve.
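To make the pool interaction pattern behind such an algorithm concrete, the following is a minimal sketch in which a plain Python dictionary stands in for FluidDB's tag store; it does not use the real REST API, and the tag names, IDs, and selection scheme are illustrative assumptions rather than the paper's implementation.

# Sketch of a pool-based EA step over a tag store.  A dictionary stands in
# for the remote object/tag service; names and selection are illustrative only.
import random
import uuid

pool = {}   # object id -> {"genome": ..., "fitness": ...}

def push_individual(genome, fitness):
    """Store an individual as a tagged object in the shared pool."""
    oid = str(uuid.uuid4())
    pool[oid] = {"genome": genome, "fitness": fitness}
    return oid

def sample_parents(k=2):
    """Pull k individuals from the pool, biased towards higher fitness.

    Assumes non-negative fitness values.
    """
    candidates = list(pool.values())
    weights = [max(c["fitness"], 1e-9) for c in candidates]
    return random.choices(candidates, weights=weights, k=k)

In the actual system these push/sample operations would become REST calls wrapped by a client library, and how often workers touch the shared pool is exactly the kind of design decision whose effect on running time and performance the paper studies.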
Ns-2 and its successor ns-3 are discrete-event simulators. Ns-3 is still under development, but offers some interesting characteristics for developers while ns-2 still has a big user base. This paper remarks current d...
Molecular communication is a promising paradigm to implement nanonetworks, the interconnection of nanomachines. Catalytic nanomotors constitute one of the techniques that have been proposed for medium-range molecular ...
ISBN: (Print) 9780784411148
In this research paper, the frequency analysis of point rainfall data is examined. The emphasis has been placed on the statistical analysis of storm events. The design of urban stormwater management systems based on the analytical probabilistic modeling approach depends on the statistical analysis of the input meteorology, rainfall. The long-term rainfall record is discretized into independent storm events by defining an inter-event time definition (IETD), and each event is characterized by four event characteristics (rainfall event volume, duration, intensity, and interevent time). The time series of discretized storm events representing each of the four characteristics are fitted with probability density functions (PDFs). The parameters of the PDFs of the rainfall characteristics constitute the input to the analytical probabilistic models. As a complement to continuous simulation models (e.g., US EPA SWMM), the computationally efficient analytical probabilistic models can use the PDF parameters as model input for urban stormwater analysis. The objective of this research is to design and build a software utility for engineers and professionals that performs the aforementioned statistical rainfall analysis. The research methodology is as follows: (1) various sources of freely accessible rainfall records were explored (such as the NOAA and NCDC web sites); (2) class and relationship diagrams were developed for the overall system architecture (note that the system is separated into a back-end parsing engine and a series of front-end applications); (3) the Python programming language was selected for development, and the back-end system architecture was implemented; (4) a series of tests were performed to assess the proper functionality of the system; (5) the front-end systems, including a plotting application and a web-based interface, were developed. This tool can be useful for any location in the United States that has a viable rainfall record, and could be used to generate a
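The event discretization and PDF-fitting steps described above can be illustrated with a short sketch. The 6 h IETD, the hourly wet-step input format, and the single-parameter exponential fits below are assumptions made for the example, not the paper's calibrated choices, and the actual utility's interfaces are not shown here.

# Minimal sketch of IETD-based storm-event discretization and PDF fitting.
# Assumes a non-empty record of wet time steps given as hour indices and depths.
import numpy as np
from scipy import stats

def discretize_events(times_h, depths_mm, ietd_h=6.0):
    """Split a rainfall record into independent storm events.

    A new event starts whenever the dry gap between wet steps exceeds the IETD.
    Returns per-event volumes, durations, and mean intensities; interevent
    times would be derived from the gaps between consecutive events.
    """
    events, current = [], [0]
    for i in range(1, len(times_h)):
        if times_h[i] - times_h[i - 1] > ietd_h:
            events.append(current)
            current = []
        current.append(i)
    events.append(current)

    volumes = np.array([sum(depths_mm[i] for i in ev) for ev in events])
    durations = np.array([times_h[ev[-1]] - times_h[ev[0]] + 1.0 for ev in events])
    intensities = volumes / durations
    return volumes, durations, intensities

def fit_exponential(sample):
    """Fit an exponential PDF with location fixed at zero; return its scale."""
    _, scale = stats.expon.fit(sample, floc=0)
    return scale

The fitted scale parameters (mean event volume, duration, intensity, and interevent time) are the kind of PDF parameters that serve as input to the analytical probabilistic models mentioned above.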
Two families of the point-symmetric linear coupled consolidation models - differing in one boundary condition - are treated. The models are with space dimension one (oedometric), two (cylindrical) or three (spherical)...
Every day, hundreds or even thousands of computers are infected with financial malware (e.g., Zeus) that forces them to become zombies or drones, capable of joining massive financial botnets that can be hired by well-o...
One of the most important reasons for the high rate of accidents is the ineffective data collection and evaluation process, since the necessary information cannot be obtained effectively from traffic accident reports (TAR). Discord and the handling of non-relevant data may appear at four levels: (1) country and cultural, (2) institutional and organizational, (3) data collection, and (4) data analysis and evaluation. The case findings are consistent with the knowledge put forward in the literature: there is a transparency problem in coordination between the institutions, as well as inefficient TAR data that are open to manipulation; the problems of under-reporting and inappropriate data storage arise even before flawed statistical evaluation methods. The old-fashioned data management structure causes incompatibility with novel technologies, preventing timely interventions to reduce accidents and alleviate fatalities. Transmission of the data to the interested agencies for evaluation and for effective operation of ITS-based systems should be considered. The problem areas were explored through diagnoses at the institutional, data collection, and evaluation steps, and solutions were determined accordingly for the case city of Izmir.
The quality of a classic software engineering process depends on the completeness of the project documents and on inter-phase consistency. In this paper, a method for passing from the requirement specification to the class model is proposed. First, a developer browses the text of the requirements, extracts word sequences, and places them as terms into a glossary. Next, the internal ontology logic for the glossary needs to be elaborated. External ontology sources, such as Wikipedia or domain ontology services, may be used to support this stage. At the end, the newly built ontology is transformed into the class model. The whole process may be supported with semi-automated, interactive tools. The result should be a class model with better completeness and consistency than one obtained using traditional methods.
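As an illustration of the final transformation step only, the sketch below turns glossary terms and simple ontology relations into crude class-model stubs; the example terms, attributes, and relation format are invented for the illustration and are not taken from the paper.

# Illustrative sketch: glossary terms plus ontology relations -> class stubs.
terms = {"Customer": {"attributes": ["name", "address"]},
         "Order": {"attributes": ["date", "total"]}}
relations = [("Customer", "places", "Order")]   # (subject, verb, object)

def ontology_to_classes(terms, relations):
    """Emit one class stub per glossary term, with named associations."""
    classes = {term: {"attributes": list(info["attributes"]),
                      "associations": []}
               for term, info in terms.items()}
    for subject, verb, obj in relations:
        classes[subject]["associations"].append((verb, obj))
    return classes

for name, spec in ontology_to_classes(terms, relations).items():
    print(name, spec)

In a semi-automated tool of the kind described above, the developer would review and correct these stubs interactively rather than accept them verbatim.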
This paper presents a new tuning method based on model parameters identified in closed loop. For classical controllers such as PI(D) controllers, a large number of simple tuning methods for various application areas exist. However, when it comes to designing a generalised predictive controller (GPC), four parameters have to be specified. Choosing those parameters is not a trivial task since they are not directly related to control or regulation performance. The presented tuning method exploits the model parameters to select suitable controller parameters. Additionally, a Rhinehart filter is incorporated in the design to decrease the impact of noise; therefore, a fifth parameter has to be optimised. The proposed method has been tested in simulation and on a real system.
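For context, in the standard GPC formulation the four parameters referred to above are the minimum and maximum costing horizons N_1 and N_2, the control horizon N_u, and the control-effort weighting \lambda, which enter the usual quadratic cost (the paper's exact notation may differ):

J = \sum_{j=N_1}^{N_2} \big[\hat{y}(t+j \mid t) - w(t+j)\big]^2 + \lambda \sum_{j=1}^{N_u} \big[\Delta u(t+j-1)\big]^2

The presented method selects these four quantities from the closed-loop identified model parameters, while the Rhinehart noise filter contributes the fifth parameter to be optimised.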
The worldwide Digital Signage market has been getting increasingly popular in recent years. Nevertheless, for service providers, the Digital Signage business is still not easy to manage and time-consuming to operate. ...