This paper investigates the properties of highly thinned ultrawideband (UWB) arrays. The design aim is high resolution and very low side radiation levels (SL). One- and two-dimensional ultrasparse UWB arrays can be designed to achieve both. The minimum available pulse-echo SL is shown to approach N^-4, where N is the number of elements in the transmit and receive arrays. Periodic thinning is shown to be superior to random thinning, and amplitude taper is shown to raise the SL. Two-dimensional curvilinear deployments of elements are shown to outperform rectilinear designs, and different transmit and receive arrays in pulse-echo systems are shown to outperform systems that use the same array for transmit and receive. Very low SL is achievable in an ultrasparse UWB system with so few elements that echo signal-to-noise ratio (SNR), rather than SL, becomes the constraint on the minimum number of elements required for the array to be useful for imaging. For example, in ultrasonic pulse-echo breast imaging, an SL of approximately -70 dB is desired to distinguish small cysts from tumors. A 2-D randomly thinned array requires about 10,000 elements. A 2-D ultrasparse UWB periodic array requires fewer than 100 elements to satisfy the SL requirement, a reduction of 100:1, but provides insufficient SNR. A 500-element, 7.5 MHz array operating with 4 cm penetration depth satisfies both. Experimental results demonstrate the theory.
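The quoted N^-4 pulse-echo side-lobe floor can be checked against the -70 dB breast-imaging target with a quick back-of-envelope calculation. The sketch below assumes N^-4 is a power ratio, so SL_dB = -40*log10(N); the function name is illustrative, not from the paper.

```python
import math

def min_elements_for_sl(sl_db: float) -> int:
    """Smallest N whose N**-4 side-lobe floor (power ratio) meets sl_db.

    Assumes SL_dB = 10*log10(N**-4) = -40*log10(N), i.e. the abstract's
    pulse-echo scaling interpreted as a power ratio.
    """
    sl_ratio = 10 ** (sl_db / 10)           # dB -> power ratio
    return math.ceil(sl_ratio ** (-1 / 4))  # invert N**-4 = sl_ratio

# For the -70 dB breast-imaging target:
n = min_elements_for_sl(-70.0)  # about 57, consistent with "fewer than 100"
```

This agrees with the abstract's claim that under 100 periodically thinned elements satisfy the SL requirement, versus roughly 10,000 for random thinning.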
This paper presents a crowd evacuation simulation approach based on navigation knowledge and a two-layer control mechanism. In this approach, using the multi-population cultural algorithm framework, the control mechanism of the crowd evacuation simulation is divided into two parts, namely, the belief and population spaces. The population space is divided into groups (sub-populations), and a leader is selected in each group according to a fitness value. The belief space comprises multiple agents and a knowledge base. Each navigation agent corresponds to a group leader. A navigation agent obtains a leader's position through the acceptance function and later passes the information to the knowledge base. On the basis of the position, the obstacles, and the congestion situation provided by the navigation agent, the knowledge base management agent dynamically plans the path and provides the navigation agent the next position along the path. The navigation agent later passes the information to the leader through the affection function. The individuals in the group follow the leader through the social force model in moving to the location provided by the navigation agent. The entire process is repeated until the exit is reached. Paths that successfully reach the exit are recorded, and the knowledge base is updated. This method establishes the relationship between the population and the navigation agent with knowledge and transforms a blindly moving crowd into a guided evacuation, as the mass evacuation simulation problem is decomposed into sub-problems of moving blocks. This approach mitigates the slow-speed problem of microscopic models, in which each individual must compute its own path. The simulation results illustrate the effectiveness of this method. (C) 2018 Elsevier Inc. All rights reserved.
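The leader-follower step described above can be sketched as a single update: the leader heads for the waypoint supplied by its navigation agent, while the rest of the group heads for the leader. This is a deliberately simplified stand-in for the social force model (the full Helbing model also includes pedestrian-pedestrian and wall repulsion, which are omitted here); all parameter values are illustrative.

```python
import numpy as np

def follow_leader_step(positions, leader_idx, waypoint,
                       dt=0.1, desired_speed=1.3):
    """One simplified leader-following update (a sketch, not the paper's
    full social force model): the leader moves toward the navigation
    agent's waypoint; every other individual moves toward the leader."""
    positions = np.asarray(positions, dtype=float)
    targets = np.tile(positions[leader_idx], (len(positions), 1))
    targets[leader_idx] = waypoint
    direction = targets - positions
    norm = np.linalg.norm(direction, axis=1, keepdims=True)
    norm[norm == 0] = 1.0                       # avoid division by zero
    velocity = desired_speed * direction / norm  # unit direction * speed
    return positions + dt * velocity
```

In the paper's scheme this step would be repeated, with the waypoint refreshed by the knowledge base management agent as congestion changes, until the group reaches the exit.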
The problem of creating a practical computational tool to perform transient contingency analysis in a power system control center is considered. The computer program developed may be used in a control center computer where a state estimation program or a consistent set of load flow data is available. It can be run on a periodic basis over a previously specified contingency list, or it can be called by the system operator to analyze a particular contingency.
Data transfers in computing systems with memory hierarchies usually prolong computing time and, consequently, cause degradation of system performance. A method to determine data processing rates and the relative utilization of memories for various system configurations under a variety of program loads is presented. According to this method, a program-independent ultimate data processing rate is derived from characteristics of the processor and the fastest random access memory of the system, and degradation factors are determined by combining statistics of the dataflow of actual programs and hardware parameters of the processor and all memories. The statistics of dataflow in the memory hierarchy are obtained by analyzing a number of recorded address traces of executed programs. The method presented permits quick evaluation of system performance for arbitrary time periods and for maximum and minimum concurrence of operation of processors and memories.
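The structure of the method, an ultimate rate reduced by multiplicative degradation factors, can be sketched as follows. The factor names and numbers are illustrative assumptions, not values from the paper.

```python
def effective_rate(ultimate_rate: float, degradation_factors: dict) -> float:
    """Effective data processing rate: the program-independent ultimate
    rate reduced by multiplicative degradation factors derived from
    dataflow statistics (a sketch of the paper's scheme; the factor
    names below are hypothetical)."""
    rate = ultimate_rate
    for factor in degradation_factors.values():
        rate *= factor
    return rate

# Hypothetical figures: a 10 M-operations/s ultimate rate, with 20% lost
# to hierarchy misses and 10% lost to reduced processor/memory overlap.
r = effective_rate(10e6, {"hierarchy_miss": 0.8, "overlap_loss": 0.9})
```

Separating the program-independent ceiling from program-derived degradation is what lets the method evaluate arbitrary workloads quickly: only the factors, not the ceiling, change per address trace.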
Arrows are a popular form of abstract computation. Being more general than monads, they are more broadly applicable, and, in particular, are a good abstraction for signal processing and dataflow computations. Most notably, arrows form the basis for a domain-specific language called Yampa, which has been used in a variety of concrete applications, including animation, robotics, sound synthesis, control systems, and graphical user interfaces. Our primary interest is in better understanding the class of abstract computations captured by Yampa. Unfortunately, arrows are not concrete enough to do this with precision. To remedy this situation, we introduce the concept of commutative arrows that capture a noninterference property of concurrent computations. We also add an init operator that captures the causal nature of arrow effects, and identify its associated law. To study this class of computations in more detail, we define an extension to arrows called causal commutative arrows (CCA), and study its properties. Our key contribution is the identification of a normal form for CCA called causal commutative normal form (CCNF). By defining a normalization procedure, we have developed an optimization strategy that yields dramatic improvements in performance over conventional implementations of arrows. We have implemented this technique in Haskell, and conducted benchmarks that validate the effectiveness of our approach. When compiled with the Glasgow Haskell Compiler (GHC), the overall methodology can result in significant speedups.
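The basic arrow interface the paper builds on (Haskell's `arr`, `>>>`, and `first`) can be illustrated with plain function arrows. The sketch below is in Python for consistency with the other examples here, using `>>` in place of Haskell's `>>>`; it shows only the core combinators, not the `init` operator or the CCNF normalization the paper contributes.

```python
from typing import Callable

class Arrow:
    """Minimal function-arrow sketch (standing in for Haskell's Arrow
    class): `arr` lifts a function, `>>` composes left-to-right, and
    `first` applies an arrow to the first component of a pair."""
    def __init__(self, f: Callable):
        self.f = f
    def __rshift__(self, other: "Arrow") -> "Arrow":
        # f >> g pipes the output of f into g (Haskell's f >>> g)
        return Arrow(lambda x: other.f(self.f(x)))
    def first(self) -> "Arrow":
        # run this arrow on the first half of a pair, pass the rest through
        return Arrow(lambda pair: (self.f(pair[0]), pair[1]))
    def __call__(self, x):
        return self.f(x)

arr = Arrow  # lift a plain function into an arrow

inc = arr(lambda x: x + 1)
dbl = arr(lambda x: x * 2)
pipeline = inc >> dbl        # computes (x + 1) * 2
```

Commutative arrows add the law that independent effects composed via `first` can be reordered without changing the result, which is what licenses the CCNF normalization and its optimizations.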
Information influences the decisions that investors make in the markets. Whether this information is true or false can be quantified and distinguished by markets. To study how information propagates through markets, we propose an information flow game based on an evolutionary game approach. In reality, investors transmit profits or losses when they transmit information, because there are values associated with information in the market. In the information flow game, information is represented by its value. Investors in the game can choose to be sharers or silencers. Sharers share their information with their neighbors according to a sharing rate alpha, which is a key quantity in the model. In the evolutionary process, we show that more sharers emerge when the market is full of rumors, especially as the sharing rate increases. Higher values of the sharing rate reduce the standard deviation of the information value in such markets, whereas the opposite occurs in markets that largely consist of true information. The reactions of the investors are asymmetric, which indicates that investors are more sensitive to losses than to profits. Furthermore, as the network becomes more random, a higher sharing rate becomes more beneficial for the stability of the emergence of sharers if information is generally false, whereas a lower sharing rate is helpful for the stability of the emergence of sharers if information is generally true.
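One round of value transmission in a sharer/silencer network can be sketched as below. This is a simplifying illustration of the mechanism described, not the paper's exact payoff or strategy-update scheme: sharers pass a fraction `alpha` of their information value to each neighbor, silencers pass nothing, and rumors carry negative value while true information carries positive value.

```python
def share_round(values, neighbors, strategies, alpha):
    """One round of value transmission in an information flow game
    (a sketch under simplifying assumptions, not the paper's model):
    each sharer sends alpha * (its value) to every neighbor."""
    received = [0.0] * len(values)
    for i, strategy in enumerate(strategies):
        if strategy == "sharer":
            for j in neighbors[i]:
                received[j] += alpha * values[i]
    return [v + r for v, r in zip(values, received)]

# Two investors: node 0 holds true information (value +1.0) and shares;
# node 1 holds a rumor (value -0.5) and stays silent.
out = share_round([1.0, -0.5], {0: [1], 1: [0]}, ["sharer", "silencer"], 0.5)
```

Iterating such rounds over an evolving strategy population, with alpha varied, is the setting in which the abstract's asymmetry between losses and profits emerges.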
The analysis of power system performance has always been enhanced by close interaction between engineer and computation, and recent developments in high-speed displays, minicomputer capacity, and computational technology are encouraging major revisions of computing practices for power system analysis. A comprehensive power system analysis package, PSS/2, is described which has been developed specifically for use with dedicated computers. This dedicated computer approach allows load flow, short-circuit, and dynamic simulation work, data base maintenance, and printed report preparation to be handled in the interactive mode at lower cost than could be possible by alternative batch or time-shared computing methods. The subjects covered are data organization, computational techniques, user interface, and operational experience.
Discusses reconfigurable computing (RC), a revolutionary approach to data processing. Comparison between RC and standard microprocessor-based computing; problems with microprocessors; the promise of RC; how reconfigurable computers can change both their hardware and software; programming problems with RC. INSETS: Part imitates life; untitled (wealth of resources provided by the World Wide Web).
Whether programming PLCs in an IEC 61131-3 language or developing embedded control in C or C++, an abundance of tools aids and abets powerful and well-written code.
The initial personalization of multilayer ceramic substrates begins with the formation of vias in the individual green ceramic sheets. A fully automatic computer-controlled tool to perform this function has been developed by IBM and is in operation on its manufacturing lines. The auto punch tool creates more than 40 000 vias of two different sizes (0.14 mm and 0.15 mm) in green ceramic sheets of two different thicknesses (0.20 mm and 0.28 mm). The positional distribution of the vias is on a 0.3 mm grid. Other than the grid constraint, the vias may be randomly distributed. Their positional accuracy is within 0.03 mm at 3σ. The vias are created at a rate of 1440 vias per second. All data needed to operate the tool and to create the vias flows automatically from a host computer (IBM 3081) through an area controller (IBM Series I) to the tool controller (IBM Series I). Manufacturing data concerning each individual green ceramic sheet are returned to the factory host computer.
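The quoted figures imply a per-sheet punching time that is easy to check; the sketch below is a back-of-envelope calculation from the abstract's numbers, ignoring sheet handling and setup time.

```python
def punch_time_seconds(via_count: int = 40_000,
                       vias_per_second: int = 1440) -> float:
    """Punching time for one green sheet at the quoted tool rate
    (40 000+ vias at 1440 vias/s); handling time is not included."""
    return via_count / vias_per_second

t = punch_time_seconds()  # roughly 28 seconds per sheet
```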