ISBN:
(Print) 9781605586762
As a typical social medium of Web 2.0, blogs have attracted a surge of research. Unlike those in traditional studies, the social networks mined from the Internet are very large, which makes many social network analysis algorithms intractable. Motivated by this, this paper addresses the novel problem of efficient social network analysis on blogs. The paper exploits the structural characteristics of real large-scale complex networks and proposes a novel approximate shortest-path algorithm to calculate the distance and shortest path between nodes efficiently. The approximate algorithm is then incorporated into social network analysis algorithms and measurements for large-scale social network analysis. We illustrate the advantages of the approximate analysis through centrality measurements and community mining algorithms. The experiments demonstrate the effectiveness of the proposed algorithms on blogs, which indicates the necessity of taking the structural characteristics of complex networks into account when optimizing analysis algorithms on large-scale social networks. Copyright 2009 ACM.
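The abstract does not specify the paper's approximation scheme, so as an illustration only, a common landmark-based technique for estimating shortest-path distances in large graphs can be sketched as follows (the function names and graph representation here are assumptions, not the authors' design):

```python
from collections import deque

def bfs_distances(adj, source):
    """Single-source BFS distances on an unweighted graph given as an
    adjacency dict {node: [neighbors]}."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def approx_distance(adj, landmarks, u, v):
    """Upper-bound estimate of d(u, v) via the triangle inequality:
    d(u, v) <= min over landmarks l of d(u, l) + d(l, v).
    For repeated queries the per-landmark BFS tables would be
    precomputed once rather than rebuilt on every call."""
    tables = (bfs_distances(adj, l) for l in landmarks)
    return min(t[u] + t[v] for t in tables if u in t and v in t)
```

On scale-free networks like blog graphs, choosing high-degree hubs as landmarks tends to tighten the estimate, which is the kind of structural characteristic the abstract alludes to.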
It is found that stable proton acceleration from a thin foil irradiated by a linearly polarized ultraintense laser can be realized for appropriate foil thickness and laser intensity. A dual-peaked electrostatic field, originating from the oscillating and nonoscillating components of the laser ponderomotive force, is formed around the foil surfaces. This field combines radiation-pressure acceleration and target normal sheath acceleration to produce a single quasimonoenergetic ion bunch. A criterion for this mechanism to be operative is obtained and verified by two-dimensional particle-in-cell simulation. At a laser intensity of ∼5.5×10²² W/cm², quasimonoenergetic GeV proton bunches are obtained with ∼100 MeV energy spread, less than 4° spatial divergence, and ∼50% energy conversion efficiency from the laser.
The performance of IO-intensive applications is determined by the hit ratio of local disk cache and the IO latency of missed disk accesses. To improve the IO performance, in this paper we propose PIB, a peer-to-peer I...
详细信息
When building application-level multicast (ALM) for media streaming, it is desirable to keep the source-to-end delay (S2EDelay) of every node on the tree invariant, so as to avoid delay jitter. In this paper, we propose an aggregation-based ALM architecture, which aggregates the unstable nodes within a local area into a relatively stable aggregation node (Anode) and then builds the tree from Anodes. An Anode shields node dynamicity internally and behaves as a stable node externally. To build the architecture, we first propose an Anode structure based on an n-dimensional anti-cube, and then a distributed algorithm that binds the logic view and physical view of an Anode dynamically through differentiation and assimilation operations, which maintain the integrity of the logic view while the physical view changes dynamically. Experimental results indicate that our algorithm effectively keeps the S2EDelay of the built ALM tree invariant.
Consistency and responsiveness are two important factors in providing a sense of reality in distributed virtual environments (DVEs). However, it is not easy to optimize both aspects because of the trade-off between them. As a result, most existing consistency maintenance methods ignore responsiveness requirements, or assume a simple responsiveness requirement model that cannot meet the real needs of DVE systems. In this paper, we first present a new responsiveness requirement model, which describes how well the requirement of each node is satisfied. Based on this model, we propose a responsiveness-requirement-based consistency method. The method adjusts the utilization of the time resource according to the requirements of different nodes and improves overall responsiveness performance by at least 20%. It therefore provides good support for increasing the applicability of DVE systems.
The performance gap for high performance applications has been widening over time. High level program transformations are critical to improve applications' performance, many of which concern the determination of o...
详细信息
Many FPGA implementations of QR decomposition have been studied for small-scale matrices, and all of them are presented individually. To the best of our knowledge, however, there is no FPGA-based accelerator for large-scale QR decomposition. In this paper, we propose a unified FPGA accelerator structure for large-scale QR decomposition. To exploit the computational potential of the FPGA, we introduce a fine-grained parallel algorithm for QR decomposition. A scalable linear array of processing elements (PEs), the core component of the FPGA accelerator, is proposed to implement this algorithm. A total of 15 PEs can be integrated into an Altera Stratix II EP2S130F1020C5 on our self-designed board. Experimental results show that a speedup factor of 4 and a maximum power-performance of 60.9 can be achieved compared to a Pentium dual-core CPU with two SSE threads.
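The abstract does not say which factorization the PE array implements; Givens rotations are a standard choice for linear-array hardware because rotations acting on disjoint row pairs are independent and can proceed concurrently. A minimal software sketch of Givens-rotation QR, in plain Python rather than HDL and purely illustrative of the numerical scheme:

```python
import math

def givens_qr(a):
    """QR decomposition of a dense m x n matrix (list of lists) by
    Givens rotations.  Returns (q, r) with q orthogonal (m x m) and
    r upper triangular (m x n).  Each rotation zeroes one subdiagonal
    entry; rotations on disjoint row pairs are independent, which is
    what makes the algorithm map naturally onto a linear PE array."""
    m, n = len(a), len(a[0])
    r = [row[:] for row in a]
    q = [[float(i == j) for j in range(m)] for i in range(m)]
    for j in range(n):
        for i in range(m - 1, j, -1):      # zero r[i][j] using row i-1
            x, y = r[i - 1][j], r[i][j]
            if y == 0.0:
                continue
            h = math.hypot(x, y)
            c, s = x / h, y / h
            for k in range(n):             # apply G to rows i-1, i of R
                r[i - 1][k], r[i][k] = (c * r[i - 1][k] + s * r[i][k],
                                        -s * r[i - 1][k] + c * r[i][k])
            for k in range(m):             # accumulate Q = Q * G^T
                q[k][i - 1], q[k][i] = (c * q[k][i - 1] + s * q[k][i],
                                        -s * q[k][i - 1] + c * q[k][i])
    return q, r
```

In hardware, the division and square root in the rotation setup are typically replaced by CORDIC units, one per PE.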
This paper introduces a new method, based on parallel failure recovery, for fault tolerance in parallel programs. When a process fails, the surviving processes compute the task of the failed one in parallel, so that the overhead of fault tolerance is reduced. The paper presents the design and implementation of a parallel FFT using the new approach, and investigates the optimal number of processes that participate in parallel failure recovery. Finally, experiments show that parallel failure recovery outperforms checkpointing, and confirm the effectiveness of our solution for choosing the best number of processes participating in parallel failure recovery.
Peer-to-peer media streaming has become an important service on the Internet in recent years. The data-driven (or mesh-based) structure is adopted by most working systems, in which data scheduling is one of the important problems. However, the frequently used scheduling algorithms often face the following case: a neighbor peer uses up its bandwidth delivering packets that other neighbors can also supply, while some packets held only by it are not scheduled. Such packets cannot be delivered in the current scheduling cycle, even though the other neighbors have surplus bandwidth. This wastes bandwidth and decreases transmission throughput. In this paper we propose a new scheduling algorithm aiming at optimal throughput: the Bipartite-matching based Block Scheduling algorithm (BBS). We convert the original data scheduling problem into the problem of finding a maximum match on the corresponding bipartite graph, and then assign data packets to neighbors according to the maximum match. We evaluate the performance of BBS with extensive experiments, and the results show that BBS improves throughput and provides better streaming quality than the frequently used scheduling algorithms.
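The maximum-matching step at the heart of BBS can be sketched with Kuhn's augmenting-path algorithm. The data model below, where packets are matched to per-neighbor bandwidth "slots" (a neighbor with bandwidth b contributes b slots), is an assumption for illustration, since the abstract does not define the exact bipartite-graph construction:

```python
def max_bipartite_match(supply):
    """Kuhn's augmenting-path maximum bipartite matching.
    `supply` maps each packet to the list of neighbor slots able to
    deliver it.  Returns a dict {packet: assigned slot}; unmatched
    packets are absent and must wait for the next scheduling cycle."""
    match = {}  # slot -> packet currently assigned to it

    def try_assign(p, seen):
        # Try to place packet p on a free slot, or displace the
        # current occupant along an augmenting path.
        for s in supply[p]:
            if s in seen:
                continue
            seen.add(s)
            if s not in match or try_assign(match[s], seen):
                match[s] = p
                return True
        return False

    for p in supply:
        try_assign(p, set())
    return {p: s for s, p in match.items()}
```

A maximum match corresponds to the largest number of distinct packets deliverable in one cycle, which is exactly the throughput-waste case described above: a packet held by only one neighbor is never starved while that neighbor serves packets others could supply.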
There are a lot of important and sensitive data in databases, which need to be protected from attacks. To secure the data, Cryptography support is an effective mechanism. However, a tradeoff must be made between the p...