ISBN (digital): 9798350368741
ISBN (print): 9798350368758
Self-supervised time series anomaly detection (TSAD) achieves remarkable performance improvements by extracting high-level data semantics through proxy tasks. Nonetheless, most existing self-supervised TSAD techniques rely on manual or neural transformations when designing proxy tasks, overlooking the intrinsic temporal patterns of time series. This paper proposes local temporal pattern learning-based time series anomaly detection (LTPAD). LTPAD first generates sub-sequences. Pairwise sub-sequences naturally manifest proximity relationships along the time axis, and such correlations can be used to construct supervision and train neural networks to facilitate the learning of temporal patterns. The time interval between two sub-sequences serves as the label for each sub-sequence pair. By classifying these labeled data pairs, our model captures the local temporal patterns of time series, thereby modeling temporal pattern-aware "normality". Anomaly scores for test data are obtained by evaluating their conformity to the learned patterns shared by the training data. Extensive experiments show that LTPAD significantly outperforms state-of-the-art competitors.
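The proxy-task construction described in the abstract can be sketched in a few lines; the windowing parameters and the interval cap below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def make_interval_labeled_pairs(series, win=16, stride=8, max_gap=4):
    """Slice a 1-D series into overlapping sub-sequences and label each
    nearby pair by the number of strides separating them; classifying
    these labels is the kind of proxy task the abstract describes."""
    windows = [series[i:i + win]
               for i in range(0, len(series) - win + 1, stride)]
    pairs, labels = [], []
    for i in range(len(windows)):
        for j in range(i + 1, min(i + 1 + max_gap, len(windows))):
            pairs.append((windows[i], windows[j]))
            labels.append(j - i)  # time-interval class label in {1..max_gap}
    return pairs, labels

pairs, labels = make_interval_labeled_pairs(np.arange(100.0))
```

A classifier trained on such pairs never sees anomaly labels; at test time, poor interval-classification confidence signals a departure from the learned temporal patterns.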
Temporal Language Grounding (TLG) aims to localize moments in untrimmed videos that are most relevant to natural language queries. While existing weakly-supervised methods have achieved significant success in exploring cross-modal relationships, they still face a critical bottleneck: the interference of task-irrelevant information in query embeddings. To address this issue, we propose TLG Frequency Spiking (TFS), a dimensional mask derived from the frequency domain that models the varying importance specific to different queries. By enhancing the understanding of queries, TFS effectively optimizes the cross-modal alignment of visual and textual modalities. Experimental results show that TFS significantly outperforms state-of-the-art baselines on both the Charades-STA and ActivityNet-Captions datasets.
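As a rough illustration of a frequency-derived dimensional mask (the function name, scoring rule, and keep ratio below are assumptions for the sketch, not the paper's exact TFS formulation):

```python
import numpy as np

def frequency_spiking_mask(query_emb, keep_ratio=0.5):
    """Score each embedding dimension by its peak magnitude in the
    frequency domain (FFT over the token axis) and zero out the
    weakest dimensions, suppressing task-irrelevant information."""
    spectrum = np.abs(np.fft.rfft(query_emb, axis=0))  # (freqs, dim)
    energy = spectrum.max(axis=0)                      # per-dimension peak
    k = max(1, int(keep_ratio * query_emb.shape[1]))
    mask = np.zeros(query_emb.shape[1])
    mask[np.argsort(energy)[-k:]] = 1.0                # keep top-k dimensions
    return query_emb * mask

rng = np.random.default_rng(0)
masked = frequency_spiking_mask(rng.normal(size=(8, 16)), keep_ratio=0.25)
```

The masked query embedding would then feed the usual cross-modal alignment losses unchanged.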
Distributed software systems are becoming more and more complex. It is easy to find a huge number of computing nodes in a nationwide or global information system. For example, WeChat (Weixin), a well-known mobile application in China, reached a record of 650 million monthly active users in the third quarter of 2015. At the same time, researchers are starting to talk about software systems that have billions of lines of code [1] or can last one hundred years.
Virtual machine (VM) allocation for multiple tenants is an important and challenging problem in providing efficient infrastructure services in cloud data centers. Tenants run applications on their allocated VMs, and the network distance between a tenant's VMs may considerably impact the tenant's quality of service (QoS). In this study, we define and formulate the multi-tenant VM allocation problem in cloud data centers, considering the VM requirements of different tenants and introducing the allocation goal of minimizing the sum of all tenants' VM network diameters. We then propose a Layered Progressive resource allocation algorithm for multi-tenant cloud data centers based on the Multiple Knapsack Problem (LP-MKP). The LP-MKP algorithm uses a multi-stage layered progressive method for multi-tenant VM allocation and efficiently handles unprocessed tenants at each stage. This reduces resource fragmentation in cloud data centers, decreases the differences in QoS among tenants, and improves tenants' overall QoS. We perform experiments to evaluate the LP-MKP algorithm and demonstrate that it provides significant gains over other allocation algorithms.
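A toy, centralized rendition of the diameter-aware placement idea (not the LP-MKP algorithm itself; the rack model, best-fit preference, and spill-over policy are simplifying assumptions):

```python
def allocate_tenants(tenants, racks):
    """Try to fit all of a tenant's VMs into a single rack (smallest
    possible network diameter) before spilling across racks, processing
    larger requests first as in a greedy knapsack heuristic.
    `tenants` maps tenant -> number of VMs; `racks` lists free VM slots
    per rack. Unmet demand is silently dropped in this toy version."""
    placement = {}
    for tenant, need in sorted(tenants.items(), key=lambda kv: -kv[1]):
        fits = [i for i, cap in enumerate(racks) if cap >= need]
        if fits:
            best = min(fits, key=lambda i: racks[i])  # tightest fit
            racks[best] -= need
            placement[tenant] = {best: need}
        else:
            placement[tenant], rest = {}, need
            for i in sorted(range(len(racks)), key=lambda i: -racks[i]):
                take = min(racks[i], rest)
                if take:
                    racks[i] -= take
                    placement[tenant][i] = take
                    rest -= take
                if rest == 0:
                    break
    return placement

placement = allocate_tenants({"a": 3, "b": 2, "c": 4}, racks=[4, 4])
```

LP-MKP's multi-stage layered progression is considerably more refined; this sketch only shows why keeping a tenant within one rack minimizes its network diameter term.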
In data center networks, workload-based resource allocation is an effective way to assign infrastructure resources to diverse cloud applications and satisfy users' quality of service; it amounts to mapping a large number of workloads provided by cloud users/tenants onto the substrate network provided by cloud providers. Although existing heuristic approaches can find a feasible solution, the quality of that solution is not guaranteed. To address this issue, this paper models resource allocation as a distributed constraint optimization problem with the objective of minimizing the mapping cost. An efficient approach is then proposed that both finds a feasible solution and ensures its optimality. Finally, theoretical analysis and extensive experiments demonstrate the effectiveness and efficiency of the proposed approach.
The publish/subscribe (pub/sub) paradigm is a popular communication model for data dissemination in large-scale distributed systems. However, scalability comes with a contradiction between delivery latency and memory cost. On one hand, constructing a separate overlay per topic guarantees real-time dissemination, but node degree rapidly increases with the number of topics. On the other hand, maintaining a bounded number of connections per node guarantees small memory cost, but each message has to traverse a large number of uninterested nodes before reaching the subscribers. In this paper, we propose Feverfew, a coverage-based hybrid overlay that disseminates messages to all subscribers without involving uninterested nodes, and whose average number of node connections grows slowly with the number of subscribers and topics. The major novelty of Feverfew lies in its heuristic coverage mechanism, implemented by combining a gossip-based sampling protocol with a probabilistic searching scheme. Based on a practical workload, our experimental results show that Feverfew significantly outperforms existing coverage-based and DHT-based overlays in various dynamic network environments.
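A minimal sketch of the kind of gossip-based peer sampling such a coverage mechanism could build on (push-style with bounded partial views; all names and parameters are illustrative, not Feverfew's actual protocol):

```python
import random

def gossip_round(views, fanout=3, view_size=8):
    """One round of push gossip: each node sends a random slice of its
    partial view (plus its own id) to one random neighbor, which merges
    the slice and truncates back to a bounded view, capping memory."""
    for node in list(views):
        view = views[node]
        if not view:
            continue
        peer = random.choice(sorted(view))
        sent = random.sample(sorted(view | {node}),
                             min(fanout, len(view) + 1))
        merged = (views[peer] | set(sent)) - {peer}  # no self-loops
        views[peer] = set(random.sample(sorted(merged),
                                        min(view_size, len(merged))))
    return views

random.seed(0)
views = {i: {(i + 1) % 6} for i in range(6)}  # start from a ring
for _ in range(5):
    views = gossip_round(views)
```

The bounded `view_size` is exactly the memory-cost side of the trade-off the abstract describes; the coverage mechanism then has to route around the resulting partial knowledge.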
To reduce the access latencies of end hosts, latency-sensitive applications need to choose suitably close service machines to answer access requests from end hosts. K nearest neighbor search locates the K service machines closest to end hosts, which can efficiently optimize access latencies for end hosts. Previous work has weaknesses in terms of accuracy and scalability. To address the scalable and accurate K nearest neighbor search problem, we propose a distributed K nearest neighbor search method called DKNNS in this paper. Service machines are organized into a locality-aware multilevel structure. DKNNS first locates a service machine that starts the search process based on a farthest neighbor search scheme, then discovers the K nearest service machines based on a backtracking approach within the proximity region containing the target in the latency space. Theoretical analysis, simulation results and deployment experiments on PlanetLab show that DKNNS can determine K approximately optimal service machines with modest completion time and query cost. Furthermore, DKNNS is also quite stable, which makes it possible to reduce frequent searches by caching found nearest neighbors.
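For contrast with the distributed method, the underlying problem has a simple centralized baseline once latency-space coordinates are available (e.g. from a network coordinate system; the coordinates and names below are assumptions for illustration):

```python
import heapq
import math

def k_nearest_services(host, services, k=3):
    """Return the k services with the smallest predicted latency to
    `host`, treating Euclidean distance in a latency coordinate space
    as the predicted RTT."""
    return heapq.nsmallest(k, services, key=lambda s: math.dist(host, s))

nearest = k_nearest_services(
    (0.0, 0.5),
    [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0), (2.0, 2.0)],
    k=2,
)
```

DKNNS's contribution is doing this without any node holding all coordinates, via the farthest-neighbor start and backtracking search the abstract describes.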
Broadcast authentication is a critical security service in wireless sensor networks. A protocol named μTESLA [1] has been proposed to provide an efficient authentication service for such networks. However, when applied to applications such as time synchronization and fire alarms, in which broadcast messages are sent infrequently, μTESLA suffers from wasted key resources and slow message verification. This paper presents a new protocol named GBA (Generalized Broadcast Authentication) for efficient broadcast authentication in these applications. GBA utilizes the one-way key chain mechanism of μTESLA, but modifies the association between keys and time intervals, and changes the key disclosure mechanism according to the message transmission model of these applications. The proposed technique makes full use of key resources and shortens the message verification time to an acceptable level. Analysis and experiments show that GBA is more efficient and practical than μTESLA in applications with various message transmission models.
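The one-way key chain that GBA inherits from μTESLA can be sketched directly (the hash function choice and chain length are illustrative; GBA's modified key/interval association is not shown):

```python
import hashlib

def h(key: bytes) -> bytes:
    """One-way function for the key chain (SHA-256 here for illustration)."""
    return hashlib.sha256(key).digest()

def build_key_chain(seed: bytes, n: int):
    """Generate keys backwards, K_i = H(K_{i+1}) with K_n = seed, and
    return [K_0, ..., K_n]; K_0 is the public commitment distributed
    first, and keys are disclosed forwards over time."""
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    chain.reverse()  # chain[0] = K_0 (commitment), chain[n] = seed
    return chain

def verify_disclosed_key(k_j: bytes, j: int, k_i: bytes, i: int) -> bool:
    """A receiver that trusts K_i authenticates a later-disclosed K_j
    by hashing it (j - i) times and comparing."""
    for _ in range(j - i):
        k_j = h(k_j)
    return k_j == k_i

chain = build_key_chain(b"seed", 5)
```

The slow-verification problem the abstract mentions is visible here: a receiver that last verified K_i must apply H once per elapsed interval, which grows large when messages are infrequent.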
Wireless sensor networks constitute the platform of a broad range of applications related to national security, surveillance, military, health care, and environmental monitoring. The coverage of WSN has answered the q...
Recently correlation filter based trackers have attracted considerable attention for their high computational efficiency. However, they cannot handle occlusion and scale variation well enough. This paper aims at preventing the tracker from failure in these two situations by integrating the depth information into a correlation filter based tracker. By using RGB-D data, we construct a depth context model to reveal the spatial correlation between the target and its surrounding regions. Furthermore, we adopt a region growing method to make our tracker robust to occlusion and scale variation. Additional optimizations such as a model updating scheme are applied to improve the performance for longer video sequences. Both qualitative and quantitative evaluations on challenging benchmark image sequences demonstrate that the proposed tracker performs favourably against state-of-the-art algorithms.