Allocating jobs to resources is a central concept in distributed control systems. Given a fair mechanism for decision-making and job management, this concept yields appropriate job allocation in industrial manufacturing environments. In retrospect, many attempts have been made and substantial results achieved, especially for distributed control systems (DCSs) based on parallel machines. In this regard, the Pigeonhole Principle can serve as a suitable tool for job allocation in DCSs. We also compare our approach with other work with respect to job assignment. Finally, a state-of-the-art extension algorithm based on the Pigeonhole Principle is presented for assigning jobs to machines across an entire distributed control system. This paper presents new algorithms for assigning jobs to appropriate machines; these algorithms are particularly applicable when an industrial manufacturing system encounters unpredicted bottlenecks. The Pigeonhole Principle can thus be extended and applied throughout the industrial manufacturing system (factory) by means of an induction lemma.
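As an illustration of the allocation bound this abstract invokes, the following minimal sketch (hypothetical job and machine names, not the paper's algorithm) shows how the pigeonhole principle constrains any assignment of n jobs to m machines: some machine must receive at least ceil(n/m) jobs, and a round-robin allocator meets that bound exactly.

```python
import math

def allocate_jobs(jobs, machines):
    # Round-robin allocation. By the pigeonhole principle, with n jobs and
    # m machines some machine must receive at least ceil(n/m) jobs;
    # round-robin attains that lower bound, so the load is as even as possible.
    assignment = {m: [] for m in machines}
    for i, job in enumerate(jobs):
        assignment[machines[i % len(machines)]].append(job)
    return assignment

jobs = [f"job{i}" for i in range(7)]        # hypothetical job identifiers
machines = ["M1", "M2", "M3"]               # hypothetical machine identifiers
alloc = allocate_jobs(jobs, machines)
max_load = max(len(v) for v in alloc.values())
min_load = min(len(v) for v in alloc.values())
```

With 7 jobs on 3 machines, the heaviest machine carries ceil(7/3) = 3 jobs and no machine can do better, which is the bound an allocator facing an unpredicted bottleneck would reason about.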
One cannot imagine today's life without mechatronic systems, which have to be developed jointly by teams of mechanical, electrical, control, and software engineers. These systems are often applied in safety-critical environments such as cars or aircraft. This requires systems that function correctly and do not cause hazardous situations. However, random errors due to wear or external influences cannot be completely excluded. Consequently, we have to perform a hazard analysis for the system. Further, the union of four disciplines in one system requires the development and analysis of the system as a whole. We present a component-based hazard analysis that considers the entire mechatronic system, including hardware, i.e., mechanical and electrical components, and software components. Our approach considers the physical properties of the different types of flow in mechatronic systems. We have identified reusable patterns for failure behavior that can be generated automatically, which reduces the effort for the developer. As cycles, e.g., control cycles, are an integral part of every mechatronic system, our approach is able to handle cycles. The presented approach has been applied to a real-life case study.
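A much-simplified sketch of the component-based idea (component names, failure modes, and propagation rules here are hypothetical, not the paper's patterns): each component maps failure modes arriving on its input flows to failure modes on its output flows, and the analysis propagates them through the component graph.

```python
def propagate(components, order, initial):
    # Propagate failure modes along the flows of an acyclic component graph.
    # `components` maps name -> (input flows, output flows, propagation rule);
    # `order` is a topological order of the components.
    modes = dict(initial)                    # flow name -> set of failure modes
    for name in order:
        inputs, outputs, rule = components[name]
        incoming = set()
        for f in inputs:
            incoming |= modes.get(f, set())
        for f in outputs:
            modes[f] = rule(incoming)
    return modes

# Hypothetical two-component example: a sensor can emit a wrong value,
# and the controller passes incoming value errors through to its command.
components = {
    "sensor":     ([],        ["speed"], lambda _inc: {"value_error"}),
    "controller": (["speed"], ["cmd"],   lambda inc: set(inc)),
}
hazards = propagate(components, ["sensor", "controller"], {})
```

The paper's approach additionally handles cycles (e.g. control loops), which this acyclic sketch deliberately omits.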
Dataflow-based application specifications are widely used in model-based design methodologies for signal processing systems. In this paper, we develop a new model called the dataflow schedule graph (DSG) for representing a broad class of dataflow graph schedules. The DSG provides a graphical representation of schedules based on dataflow semantics. In conventional approaches, applications are represented using dataflow graphs, whereas schedules for the graphs are represented using specialized notations, such as various kinds of sequences or looping constructs. In contrast, the DSG approach employs dataflow graphs for representing both application models and schedules that are derived from them. Our DSG approach provides a precise, formal framework for unambiguously representing, analyzing, manipulating, and interchanging schedules. We develop detailed formulations of the DSG representation, and present examples and experimental results that demonstrate the utility of DSGs in the context of heterogeneous signal processing system design.
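To make the dataflow semantics concrete, here is a minimal sketch of the standard dataflow firing rule that schedules (and DSGs) build on — a toy two-actor graph with token counts on a FIFO edge; the actor names and rates are illustrative, not the paper's formulation.

```python
# Toy dataflow graph: actor A produces 1 token per firing on edge (A, B);
# actor B consumes 1 token per firing from that edge.
edges = {("A", "B"): 0}                      # token count on each FIFO edge
produces = {"A": {("A", "B"): 1}}
consumes = {"B": {("A", "B"): 1}}

def fire(actor, produces, consumes, edges):
    # Standard dataflow firing rule: check that enough input tokens are
    # present, consume them, then produce tokens on the output edges.
    for e, n in consumes.get(actor, {}).items():
        if edges[e] < n:
            raise RuntimeError(f"{actor} is not fireable")
        edges[e] -= n
    for e, n in produces.get(actor, {}).items():
        edges[e] += n

# A valid single-iteration schedule "A B" returns the buffer to zero tokens,
# so the schedule can repeat indefinitely in bounded memory.
for actor in ["A", "B"]:
    fire(actor, produces, consumes, edges)
```

The DSG contribution is to express the schedule itself (the order "A B" hard-coded above) as another dataflow graph with the same semantics, rather than as a sequence or looping notation.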
Particle filters (or PFs) are widely used for the tracking problem in dynamic systems. Despite their remarkable tracking performance and flexibility, PFs require intensive computation and communication, which are strictly constrained in wireless sensor networks (or WSNs). Thus, distributed particle filters (or DPFs) have been studied to distribute the computational workload onto multiple nodes while minimizing the communication among them. However, weight normalization and resampling in generic PFs cause significant challenges in the distributed implementation. Few existing DPF efforts can be implemented in a completely distributed manner. In this paper, we design a completely distributed particle filter (or CDPF) for target tracking in sensor networks, and further improve it with neighborhood estimation toward minimizing the communication cost. First, we describe the particle maintenance and propagation mechanism, by which particles are maintained on different sensor nodes and propagated along the target trajectory. Then, we design the CDPF algorithm by adjusting the order of the PF's four steps and leveraging data aggregation during particle propagation. Finally, we develop a neighborhood estimation method to replace the measurement broadcasting and the calculation of likelihood functions. With this approximate estimation, the communication cost of DPFs can be minimized. Our experimental evaluations show that although CDPF incurs about 50% more estimation error than the semi-distributed particle filter (or SDPF), its communication cost is lower than that of SDPF by as much as 90%.
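For reference, the generic (centralized) particle filter whose four steps the CDPF reorders can be sketched in a few lines — a 1-D toy with Gaussian motion and measurement models; the noise parameters and target values are illustrative assumptions, not the paper's setup.

```python
import math
import random

random.seed(1)

def pf_step(particles, measurement, motion_noise=0.5, meas_noise=1.0):
    # One generic particle-filter step: predict, weight, normalize, resample.
    # Weight normalization and resampling are exactly the global operations
    # that make a fully distributed implementation hard.
    particles = [p + random.gauss(0.0, motion_noise) for p in particles]
    weights = [math.exp(-((p - measurement) ** 2) / (2 * meas_noise ** 2))
               for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]              # normalize (global sum)
    return random.choices(particles, weights=weights,   # multinomial resampling
                          k=len(particles))

particles = [random.uniform(-10.0, 10.0) for _ in range(500)]
for z in [0.0, 0.5, 1.0]:                               # measurements near the target
    particles = pf_step(particles, z)
estimate = sum(particles) / len(particles)              # posterior mean estimate
```

Note how both the normalization (`total = sum(weights)`) and the resampling draw over the whole population require global knowledge — the motivation for the CDPF's reordering and neighborhood estimation.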
To generate a 3D GIS urban noise map, a great deal of computing power is needed. To meet this requirement, this research used MPI together with our own distributed and parallel processing model. MPI is known for its portability and for reducing the map's processing time. The distributed and parallel model is used when the text file containing the noise values of a designated area is processed into JPEG files. We experimented with the distributed and parallel model and found that the processing time increased linearly as the file size increased. Thus, we show that the model works well for generating a 3D GIS urban noise map.
Cyber-physical systems have many non-functional requirements, which typically crosscut whole system modules. This can cause code tangling and scattering, make the systems hard to design, reuse, and maintain, and badly affect system performance. Aspect-oriented programming (AOP) is a software development paradigm that attains a higher level of separation of concerns in both functional and non-functional matters by introducing aspects for the implementation of crosscutting concerns. Different aspects can be designed separately and woven into the system. In this paper, we propose a four-stage aspect-oriented MDA method for the non-functional properties of cyber-physical systems. Model-based development, the aspect-oriented approach, and formal methods are integrated effectively for developing the non-functional properties of distributed cyber-physical systems. A case study illustrates the aspect-oriented MDA development of cyber-physical systems.
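The separation AOP provides can be illustrated with a Python decorator acting as a woven aspect (a common lightweight analog, not the paper's MDA tooling): a non-functional concern — here, execution timing — is advised onto a functional unit without touching its code. The controller and gain below are hypothetical.

```python
import functools
import time

def timing_aspect(func):
    # A performance-monitoring "aspect" woven in as a decorator: the
    # crosscutting non-functional concern stays out of the function it advises.
    @functools.wraps(func)
    def advice(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        advice.last_elapsed = time.perf_counter() - start
        return result
    return advice

@timing_aspect          # weaving point: no timing code inside control_step
def control_step(setpoint, measured):
    # Core functional logic: a proportional controller, free of logging code.
    return 0.8 * (setpoint - measured)

u = control_step(10.0, 7.5)
```

Removing the decorator removes the concern entirely — the tangling and scattering the abstract warns about never enter the functional module.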
General-purpose middleware must often be specialized for resource-constrained, real-time, and embedded systems to improve their response times, reliability, memory footprint, and even power consumption. Software engineering techniques such as aspect-oriented programming (AOP), feature-oriented programming (FOP), and reflection make the specialization task simpler, albeit still requiring the system developer to manually identify the system invariants and the sources of performance and memory-footprint bottlenecks that determine the required specializations. Specialization reuse is also hampered by the lack of a common taxonomy for documenting recurring specializations. This paper presents the GeMS (Generative Middleware Specialization) framework to address these challenges. We present results of applying GeMS to a distributed real-time and embedded (DRE) system case study, depicting a 21-35% reduction in footprint and a ~36% improvement in performance, while simultaneously alleviating ~97% of the developer effort in specializing middleware.
Petascale simulations compute at resolutions ranging into billions of cells and write terabytes of data for visualization and analysis. Interactive visualization of this time series is a desired step before starting a new run. The I/O subsystem and associated network often are a significant impediment to interactive visualization of time-varying data, as they are not configured or provisioned to provide the necessary I/O read rates. In this paper, we propose a new I/O library for visualization applications: VisIO. Visualization applications commonly use N-to-N reads within their parallel-enabled readers, which provides an incentive for a shared-nothing approach to I/O, similar to other data-intensive approaches such as Hadoop. However, unlike other data-intensive applications, visualization requires: (1) interactive performance for large data volumes, (2) compatibility with MPI and POSIX file system semantics for compatibility with existing infrastructure, and (3) use of existing file formats and their stipulated data partitioning rules. VisIO provides a mechanism for using a non-POSIX distributed file system to provide linear scaling of I/O bandwidth. In addition, we introduce a novel scheduling algorithm that helps to co-locate visualization processes on nodes with the requested data. Testing with VisIO integrated into ParaView was conducted using the Hadoop Distributed File System (HDFS) on TACC's Longhorn cluster. A representative dataset, VPIC, across 128 nodes showed a 64.4% read performance improvement compared to the provided Lustre installation. We also tested a dataset representing a global ocean salinity simulation, which showed a 51.4% improvement in read performance over Lustre when using our VisIO system. VisIO provides powerful high-performance I/O services to visualization applications, allowing for interactive performance with ultra-scale, time-series data.
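The co-location idea — placing a reader process on a node that already holds its data block, as a distributed file system like HDFS can report — can be sketched as a greedy placement; the node names, block map, and capacity model below are illustrative assumptions, not VisIO's actual scheduler.

```python
def co_locate(requests, block_locations, capacity):
    # Greedy data-local placement: put each reader on a node that already
    # stores its block (if one has capacity left), otherwise fall back to
    # the least-loaded node with room, trading locality for balance.
    load = {n: 0 for n in capacity}
    placement = {}
    for proc, block in requests.items():
        local = [n for n in block_locations[block] if load[n] < capacity[n]]
        candidates = local or [n for n in capacity if load[n] < capacity[n]]
        node = min(candidates, key=lambda n: load[n])
        load[node] += 1
        placement[proc] = node
    return placement

# Hypothetical example: blk1 lives only on nodeA; blk2 is replicated.
requests = {"reader1": "blk1", "reader2": "blk2"}
block_locations = {"blk1": ["nodeA"], "blk2": ["nodeA", "nodeB"]}
capacity = {"nodeA": 1, "nodeB": 1}
placement = co_locate(requests, block_locations, capacity)
```

Here reader1 claims nodeA for its only replica, so reader2 is steered to nodeB's replica — both reads stay local, which is the bandwidth-scaling effect the abstract measures.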
The rapid growth of mobile applications has imposed new threats to privacy: users often find it challenging to ensure that their privacy policies are consistent with the requirements of a diverse range of mobile applications that access personal information under different contexts. This problem is exacerbated when applications depend on each other and therefore share permissions to access resources in ways that are opaque to an end-user. To meet the needs of representing privacy requirements and of resolving dependency issues in privacy policies, we propose an extension to the P-RBAC model for reasoning about plausible scenarios that can exploit such weaknesses of mobile systems. This work has been evaluated using case studies of several Android mobile applications.
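The opaque sharing the abstract describes can be made concrete with a toy dependency graph (app and permission names are hypothetical, and the model is a simplification, not the proposed P-RBAC extension): a component embedded, directly or transitively, in a host app effectively runs with the host's grants.

```python
def reachable(app, deps):
    # All components transitively reachable via "embeds/depends-on" edges.
    seen, stack = set(), [app]
    while stack:
        cur = stack.pop()
        if cur not in seen:
            seen.add(cur)
            stack.extend(deps.get(cur, []))
    return seen

def shared_perms(app, deps, perms):
    # Every component embedded in `app` runs with the host app's permissions,
    # so the host's grants flow to all reachable components -- the kind of
    # opaque sharing a policy-consistency check must reason about.
    return {dep: set(perms[app]) for dep in reachable(app, deps)}

# Hypothetical chain: a weather app embeds an ads library, which embeds
# an analytics component that never requested LOCATION itself.
deps = {"weather": ["ads_lib"], "ads_lib": ["analytics"], "analytics": []}
perms = {"weather": {"LOCATION"}, "ads_lib": set(), "analytics": set()}
leaks = shared_perms("weather", deps, perms)
```

The analytics component ends up with LOCATION access despite declaring no permissions — exactly the plausible exploitation scenario the extended model is built to surface.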
Participants in modern automation engineering projects typically work in a distributed and parallel fashion. Therefore, advanced approaches exist for integrating the data of specific engineering tools and systems. However, these projects also involve participants who do not work with the specific engineering tool set but provide important data updates, e.g., customer representatives. A major challenge is to provide systematic and efficient quality assurance for these inputs. In this paper we describe an approach for efficient quality assurance when importing data from general-purpose tools such as Excel into an integrated engineering data set. We report on experiences with initial prototypes and compare the improved data import process with a traditional one to discuss advantages, risks, and further improvements.
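A minimal sketch of such a quality-assurance gate (field names, types, and rows are hypothetical, not the paper's schema): rows pulled from a general-purpose tool are checked against an expected schema before they are allowed into the integrated engineering data set.

```python
def validate_rows(rows, schema):
    # QA gate for imported data: verify that each row carries every required
    # field with the expected type, and collect violations instead of
    # silently admitting bad data into the integrated data set.
    errors = []
    for i, row in enumerate(rows):
        for field, ftype in schema.items():
            if field not in row:
                errors.append((i, field, "missing"))
            elif not isinstance(row[field], ftype):
                errors.append((i, field, "wrong type"))
    return errors

# Hypothetical engineering fields a spreadsheet import might carry.
schema = {"signal_name": str, "voltage": float}
rows = [{"signal_name": "S1", "voltage": 24.0},
        {"signal_name": "S2", "voltage": "24V"}]   # second row fails the check
errors = validate_rows(rows, schema)
```

Collecting all violations, rather than rejecting the whole file on the first one, lets a customer representative fix every flagged cell in one round trip.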