Mediaprocessors provide high performance by using both instruction- and data-level parallelism. Because of the increased computing power, transferring data between off- and on-chip memories without slowing down the core processor's performance is challenging. Two methods, data cache and direct memory access, address this problem in different ways.
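As a concrete illustration of how DMA can hide transfer latency, here is a minimal sketch (not from the paper) of the ping-pong (double) buffering pattern: the DMA controller fills one on-chip buffer while the core processes the other. The transfer is modeled as a plain copy here; on real hardware it would run concurrently with the compute step.

```python
# Minimal sketch (not from the paper): ping-pong (double) buffering, a common way
# a DMA controller hides off-chip transfer latency behind computation.

def process_block(block):
    # Placeholder kernel: e.g., scale each sample.
    return [2 * x for x in block]

def stream_with_double_buffering(off_chip_data, block_size):
    buffers = [[], []]                      # two on-chip buffers
    results = []
    blocks = [off_chip_data[i:i + block_size]
              for i in range(0, len(off_chip_data), block_size)]
    if not blocks:
        return results
    buffers[0] = list(blocks[0])            # "DMA in" the first block
    for n in range(len(blocks)):
        if n + 1 < len(blocks):
            buffers[(n + 1) % 2] = list(blocks[n + 1])     # DMA fills the idle buffer...
        results.extend(process_block(buffers[n % 2]))      # ...while the core works on the other
    return results

print(stream_with_double_buffering(list(range(10)), block_size=4))
```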
A distribution system emergency dispatch technique based on outage conditions is proposed that searches a database of precalculated solutions. Load flow solutions for possible outage cases under various conditions are simulated, calculated, and kept in the database. When a fault occurs, a suitable solution is retrieved from this database and used to restore the nonfaulted out-of-service area. The method, which is implemented on an AT&T 3B2/300 supermicrocomputer, finds the appropriate solution very efficiently. A real-time computation for load-flow solutions is also suggested to improve the database for better system operation. Tests to support the technique are discussed.
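The following is a minimal sketch of the precompute-then-lookup idea; the key structure (faulted section plus a coarse load level) and the switching-action entries are hypothetical stand-ins, not the paper's data layout.

```python
# Minimal sketch (hypothetical data layout, not the paper's implementation):
# precalculated restoration plans are stored keyed by the faulted feeder section
# and a coarse load level, then looked up when a fault is reported.

PRECALCULATED_PLANS = {
    # (faulted_section, load_level) -> switching actions restoring the out-of-service area
    ("section_12", "peak"):    ["open SW-12", "close TIE-3"],
    ("section_12", "offpeak"): ["open SW-12", "close TIE-7"],
    ("section_27", "peak"):    ["open SW-27", "close TIE-3", "shed LD-9"],
}

def emergency_dispatch(faulted_section, load_level):
    """Return the precomputed switching plan, or None if no such case was stored."""
    return PRECALCULATED_PLANS.get((faulted_section, load_level))

print(emergency_dispatch("section_12", "peak"))
```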
Full use of the parallel computation capabilities of present and expected CPUs and GPUs requires use of vector extensions. Yet many actors in dataflow systems for digital signal processing have internal state (or, equivalently, an edge that loops from the actor back to itself), which imposes serial dependencies between actor invocations and makes vectorizing across invocations impossible. Ideally, the inter-thread coordination required by serial data dependencies should be handled by code written by parallel programming experts that is kept separate from the code specifying signal processing operations. The purpose of this paper is to present one approach for doing so in the case of actors that maintain state. We propose a methodology for using the parallel scan (also known as prefix sum) pattern to create algorithms for multiple simultaneous invocations of such an actor that result in vectorizable code. Two examples of applying this methodology are given: (1) infinite impulse response (IIR) filters and (2) finite state machines (FSMs). The correctness and performance of the resulting IIR filters and one class of FSMs are studied.
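As a concrete illustration of the scan idea (a minimal sketch, not the paper's implementation), a first-order IIR filter y[n] = a·y[n-1] + x[n] can be rewritten as a prefix scan over affine maps; the composition operator is associative, so a parallel scan can replace the serial loop shown here.

```python
# Minimal sketch (not the paper's code): a first-order IIR filter
#   y[n] = a * y[n-1] + x[n]
# expressed as a prefix scan over affine maps (A, B) meaning y -> A*y + B.
# Composing step (A, B) followed by step (C, D) gives (A*C, C*B + D); the
# operator is associative, so a parallel (e.g., Blelloch) scan can replace
# the sequential inclusive scan used here as a stand-in.

def compose(first, second):
    A, B = first
    C, D = second
    return (A * C, C * B + D)

def iir_first_order(a, x, y0=0.0):
    steps = [(a, xn) for xn in x]            # one affine map per input sample
    outputs, acc = [], (1.0, 0.0)            # identity map
    for step in steps:                       # inclusive scan (serial stand-in)
        acc = compose(acc, step)
        A, B = acc
        outputs.append(A * y0 + B)           # y[n] = A*y[0] + B
    return outputs

# Cross-check against the direct recurrence.
a, x = 0.5, [1.0, 2.0, 3.0, 4.0]
print(iir_first_order(a, x))                 # [1.0, 2.5, 4.25, 6.125]
```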
A system called net optimization and resource allocation (NORA) is introduced for the evaluation and programming of parallel signal processors, based on a dataflow representation of the signal processing application. The main feature of this approach is that the scheduling and resource allocation can be done at compile time. It is made possible by the fact that most signal processing algorithms have constant dataflow. The resulting hardware is much simpler because no overhead is needed for the real-time scheduling, as in usual dataflow systems. Therefore a realization can easily be obtained using either commercially available components or VLSI technology. The proposed system comprises four main components: (1) a vector-oriented dataflow compiler for the translation of a high-level language description of algorithms into a dataflow graph; (2) a critical path analysis for the evaluation of the minimal computation time of the algorithm, where block scheduling is assumed; (3) a schedule optimization for the determination of the minimal computation time under limited resources, not taking into account limitations imposed by the interconnection structure and temporary storage; and (4) a combined schedule optimization and resource allocation that maps a signal processing application onto a given hardware configuration and generates a formal microprogram.
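A minimal sketch of the critical-path step (component 2) follows; it assumes node weights are operation latencies in cycles and that, under block scheduling, the minimal computation time of a constant-dataflow graph is the longest weighted path through its DAG. The node names and latencies are illustrative, not NORA's.

```python
# Minimal sketch of critical path analysis over a dataflow DAG (not NORA's code):
# the minimal computation time under block scheduling is the longest weighted path.

from collections import defaultdict

def critical_path_length(latency, edges):
    """latency: {node: cycles}; edges: iterable of (producer, consumer) pairs."""
    succ, pred = defaultdict(list), defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
        pred[v].append(u)
    indeg = {n: len(pred[n]) for n in latency}
    ready = [n for n in latency if indeg[n] == 0]
    finish = {}                              # earliest finish time per node
    while ready:
        n = ready.pop()
        finish[n] = max((finish[p] for p in pred[n]), default=0) + latency[n]
        for v in succ[n]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return max(finish.values())

latency = {"mul1": 2, "mul2": 2, "add": 1}
print(critical_path_length(latency, [("mul1", "add"), ("mul2", "add")]))  # 3
```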
Data modeling is a critical component in the development of next-generation consumer electronics (CE), as it provides massive data support for smart services related to CE products, such as complex correlation analysis of the CE market, technology development, and consumer behavior. However, existing works still face the following challenges: (1) manual data processing is time-consuming and labor-intensive when analyzing massive data with domain knowledge; (2) insufficient attention is given to the analysis of relationships among data collections; and (3) redundant data is not eliminated, leading to excess computational burden. To address these challenges, we propose an efficient automatic data modeling method that employs domain-aware knowledge, significantly reducing the cost of data modeling. Our approach starts by leveraging a knowledge-based classifier to extract domain-related resources from open common single-document summarization datasets (OC-SDS), thus reducing data acquisition expenses. Then, after collecting related documents using the original summaries, a two-stage filtering process is applied to eliminate redundant and unrelated documents. Finally, the original summaries are iteratively updated until a threshold is reached, enhancing informativeness and introducing novel expressions without the need for manual marking. As a practical application, we take emergency news as an example and build a typical dataset based on our method. After extensive analysis, the empirical results show that our method is superior in scale and quality to existing methods, with over 13.7% more related data, providing a valuable contribution to the field of CE product development.
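The sketch below illustrates one plausible shape for the two-stage filter; the Jaccard similarity measure and the thresholds are assumptions for illustration, not the paper's actual criteria.

```python
# Minimal sketch of a two-stage document filter (hypothetical similarity measure
# and thresholds; the paper's criteria may differ): stage 1 drops documents
# unrelated to the original summary, stage 2 drops near-duplicate (redundant) ones.

def tokens(text):
    return set(text.lower().split())

def jaccard(a, b):
    a, b = tokens(a), tokens(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def two_stage_filter(summary, documents, rel_threshold=0.2, dup_threshold=0.8):
    # Stage 1: keep only documents sufficiently related to the summary.
    related = [d for d in documents if jaccard(summary, d) >= rel_threshold]
    # Stage 2: remove redundant documents that nearly duplicate one already kept.
    kept = []
    for d in related:
        if all(jaccard(d, k) < dup_threshold for k in kept):
            kept.append(d)
    return kept

docs = ["flood warning issued for the river valley",
        "flood warning issued for the valley river",   # near-duplicate
        "new smartphone released this week"]            # unrelated
print(two_stage_filter("severe flood warning in the valley", docs))
```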
The existence of a programming error is often indicated by the occurrence of a dataflow anomaly. The detection of such anomalies can be used for error detection and for improving software quality. A new, efficient algorithm is proposed that is capable of detecting anomalous dataflow patterns in a program represented by a graph. The algorithm, based on static analysis, scans the paths entering and leaving each node of the graph, thus revealing anomalous data-action combinations. Fosdick and Osterweil (1976) proposed an algorithm implementing this type of approach. The proposed approach presents a general framework that not only fills a gap in the previous approach, but also offers both time and space improvements.
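For readers unfamiliar with the anomaly classes involved, here is an illustrative sketch (not the paper's algorithm) of detecting the classic per-variable action-pair anomalies in the Fosdick/Osterweil sense along a single path: dd (defined twice with no intervening use), du (defined then undefined), and ur (undefined then referenced).

```python
# Illustrative sketch only: classic dataflow anomalies are adjacent action pairs
# on a path, per variable, with actions d = define, r = reference, u = undefine.

ANOMALIES = {"dd", "du", "ur"}

def anomalies_on_path(actions):
    """actions: list of (variable, action) with action in {'d', 'r', 'u'}."""
    last, found = {}, []
    for var, act in actions:
        pair = last.get(var, "u") + act      # a variable starts out undefined
        if pair in ANOMALIES:
            found.append((var, pair))
        last[var] = act
    return found

path = [("x", "d"), ("x", "d"),              # x defined twice -> dd
        ("y", "r"),                          # y referenced before definition -> ur
        ("x", "u")]                          # x's last 'd' followed by 'u' -> du
print(anomalies_on_path(path))
```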
In a typical data pipeline, the dataflow starts from the first node, where the data is initiated, and moves to the last node in the pipeline, where the processed data is stored. Due to the sheer number of involved participants, it is crucial to protect the dataflow integrity of the pipeline. While previous studies have outlined solutions to this matter, the case of an untrusted data pipeline is still left unexplored, which motivates us to propose SIGNORA. Our proposal combines the concept of a chain of signatures with blockchain receipts to provide dataflow integrity. The chain of signatures provides a non-repudiation guarantee from participants, while the hash of the data and signatures is anchored in the blockchain for a non-tampering guarantee through a blockchain receipt. Aside from that, SIGNORA also satisfies essential requirements for running data pipeline processing in an open and untrusted environment, such as (i) providing reliable identity management, (ii) solving trust and accountability issues through a reputation system, (iii) supporting various devices through multiple cryptographic algorithms (i.e., ECDSA, EdDSA, RSA, and HMAC), and (iv) off-chain processing. Our experimental results show that SIGNORA provides dataflow integrity provisioning across multiple data payload sizes with reasonable overhead. Furthermore, the cost of smart contract methods has been analyzed, and several off-chain solutions have been proposed to reduce transaction costs. Finally, the reputation system adapts to the history of nodes' activities, increasing their scores when they actively behave honestly and reducing their scores when they become inactive. Therefore, SIGNORA can provide a high degree of accountability for participants collaborating in an untrusted environment.
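The sketch below shows the chain-of-signatures idea in its simplest form; HMAC stands in for the ECDSA/EdDSA/RSA options the abstract mentions, and the message layout (data hash concatenated with the previous signature) is an assumption rather than SIGNORA's exact format.

```python
# Minimal sketch of a chain of signatures (HMAC used for illustration; message
# layout assumed). Each node signs over the data hash and the previous node's
# signature, so tampering anywhere breaks every later link; the final digest is
# what would be anchored on-chain to obtain a blockchain receipt.

import hashlib, hmac

def sign_step(node_key: bytes, data: bytes, prev_sig: bytes) -> bytes:
    digest = hashlib.sha256(data).digest()
    return hmac.new(node_key, digest + prev_sig, hashlib.sha256).digest()

def build_chain(node_keys, payloads):
    sig = b""                                # genesis: no previous signature
    chain = []
    for key, data in zip(node_keys, payloads):
        sig = sign_step(key, data, sig)
        chain.append(sig)
    return chain                             # chain[-1] is the anchor candidate

keys = [b"node-A-key", b"node-B-key", b"node-C-key"]
payloads = [b"raw sensor batch", b"cleaned batch", b"aggregated batch"]
print(build_chain(keys, payloads)[-1].hex())
```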
Managing privacy in the IoT presents a significant challenge. We make the case that information obtained by auditing the flows of data can assist in demonstrating that the systems handling personal data satisfy regulatory and user requirements. Thus, components handling personal data should be audited to demonstrate that their actions comply with all such policies and requirements. A valuable side-effect of this approach is that such an auditing process will highlight areas where technical enforcement has been incompletely or incorrectly specified. There is a clear role for technical assistance in aligning privacy policy enforcement mechanisms with data protection regulations. The first step necessary in producing technology to accomplish this alignment is to gather evidence of dataflows. We describe our work producing, representing and querying audit data and discuss outstanding challenges.
NASA's Goddard Earth Sciences Data and Information Services Center has developed the Goddard Interactive Online Visualization ANd aNalysis Infrastructure, or "Giovanni," an asynchronous Web-service-based workflow management system for Earth science data. Giovanni provides scientists and other users with an intuitive and responsive interface for visualizing, analyzing, and intercomparing multisensor data using only a Web browser. Giovanni supports many types of single- and multiparameter visualizations and statistical analyses. The interface also provides users with capabilities for downloading images and data in multiple formats. Giovanni supports open and standard data protocols and formats. Finally, Giovanni provides users with a data lineage that describes, in detail, the algorithms used in processing the data, including caveats and other scientifically pertinent information.
Rate-optimal scheduling of iterative data-flow graphs requires the computation of the iteration period bound. According to the formal definition, the total computational delay in each directed loop in the graph has to be calculated in order to determine that bound. As the number of loops cannot be expressed as a polynomial function of the number of nodes in the graph, this definition cannot be the basis of an efficient algorithm. This paper presents a polynomial-time algorithm for the computation of the iteration period bound based on longest path matrices and their multiplications.
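The following is a simplified sketch of the longest-path-matrix idea, under the assumption that M[i][j] already holds the longest delay-free computation time from delay i to delay j (None where no such path exists); it is not the paper's exact algorithm, but it shows how max-plus matrix powers expose the iteration period bound on their diagonals.

```python
# Simplified sketch (not the paper's exact algorithm). M[i][j] = longest delay-free
# computation time from delay i to delay j, or None. The diagonal of the m-th
# max-plus power gives the heaviest loop passing through m delays, so the bound is
#   max over m, i of M^(m)[i][i] / m.

def maxplus_mul(A, B):
    n = len(A)
    C = [[None] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            if A[i][k] is None:
                continue
            for j in range(n):
                if B[k][j] is None:
                    continue
                cand = A[i][k] + B[k][j]
                if C[i][j] is None or cand > C[i][j]:
                    C[i][j] = cand
    return C

def iteration_period_bound(M):
    n, P, bound = len(M), M, 0.0
    for m in range(1, n + 1):
        for i in range(n):
            if P[i][i] is not None:
                bound = max(bound, P[i][i] / m)
        P = maxplus_mul(P, M)
    return bound

# Two delays: a loop through delay 0 alone costing 4 time units, and a loop
# through both delays costing 3 + 2 = 5 units over 2 delays.
M = [[4, 3],
     [2, None]]
print(iteration_period_bound(M))   # max(4/1, 5/2) = 4.0
```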