Tables, typically two-dimensional and structured to store large amounts of data, are essential in daily activities like database queries, spreadsheet manipulations, web table question answering, and image table information extraction. Automating these table-centric tasks with Large Language Models (LLMs) or Visual Language Models (VLMs) offers significant public benefits, garnering interest from academia and industry. This survey provides a comprehensive overview of table-related tasks, examining both user scenarios and technical aspects. It covers traditional tasks like table question answering as well as emerging fields such as spreadsheet manipulation and table data analysis. We summarize the training techniques for LLMs and VLMs tailored for table processing. Additionally, we discuss prompt engineering, particularly the use of LLM-powered agents, for various table-related tasks. Finally, we highlight several challenges, including diverse user input when serving and slow thinking using chain-of-thought.
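As a toy illustration of the prompt-engineering side of such table tasks (the function names and prompt wording here are our own, not from the survey), a table can be serialized into markdown and wrapped into a question-answering prompt for an LLM:

```python
def table_to_markdown(rows):
    """Serialize a list of dict rows into a markdown table string."""
    headers = list(rows[0].keys())
    lines = ["| " + " | ".join(headers) + " |",
             "| " + " | ".join("---" for _ in headers) + " |"]
    for row in rows:
        lines.append("| " + " | ".join(str(row[h]) for h in headers) + " |")
    return "\n".join(lines)

def build_table_qa_prompt(rows, question):
    """Wrap a serialized table and a question into a table-QA prompt."""
    return ("Answer the question using only the table below.\n\n"
            + table_to_markdown(rows)
            + f"\n\nQuestion: {question}\nAnswer:")

prompt = build_table_qa_prompt(
    [{"city": "London", "population_m": 8.8},
     {"city": "Paris", "population_m": 2.1}],
    "Which city has the larger population?",
)
print(prompt)
```

Markdown serialization is only one option; real systems also experiment with CSV, HTML, or JSON encodings of the same table.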
Data partitioning techniques are pivotal for optimal data placement across storage devices, thereby enhancing resource utilization and overall system performance. However, the design of effective partition schemes faces multiple challenges, including considerations of the cluster environment, storage device characteristics, optimization objectives, and the balance between partition quality and computational efficiency. Furthermore, dynamic environments necessitate robust partition detection mechanisms. This paper presents a comprehensive survey structured around partition deployment environments, outlining the distinguishing features and applicability of various partitioning strategies while delving into how these challenges are addressed. We discuss partitioning features pertaining to database schema, table data, workload, and runtime behavior. We then delve into the partition generation process, segmenting it into initialization and optimization stages. A comparative analysis of partition generation and update algorithms is provided, emphasizing their suitability for different scenarios and optimization objectives. Finally, we illustrate the applications of partitioning in prevalent database products and suggest potential future research directions and challenges. This survey aims to foster the implementation, deployment, and updating of high-quality partitions for specific system scenarios.
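The initialization stage of partition generation can be illustrated with a minimal sketch (a toy equi-depth range-partitioning scheme of our own, not any specific system's algorithm): boundaries are picked at quantiles of sampled keys, and lookups route each key by binary search.

```python
import bisect

def make_range_partitions(keys, num_partitions):
    """Initialize range-partition boundaries from sampled keys (equi-depth)."""
    sorted_keys = sorted(keys)
    step = len(sorted_keys) / num_partitions
    # Boundary keys at equally spaced quantiles of the sample.
    return [sorted_keys[int(i * step)] for i in range(1, num_partitions)]

def assign_partition(boundaries, key):
    """Route a key to a partition via binary search over the boundaries."""
    return bisect.bisect_right(boundaries, key)

keys = [3, 17, 42, 8, 99, 23, 56, 71, 12, 64]
bounds = make_range_partitions(keys, 4)
counts = [0, 0, 0, 0]
for k in keys:
    counts[assign_partition(bounds, k)] += 1
print(bounds, counts)
```

A later optimization stage would then adjust such boundaries against the workload, which is where the surveyed algorithms differ most.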
The cross-domain knowledge diffusion from science to policy is a prevalent phenomenon that demands academic attention. To investigate the characteristics of cross-domain knowledge diffusion from science to policy, this study suggests using the citation of policies to scientific articles as a basis for quantifying the diffusion strength, breadth, and speed. The study reveals that the strength and breadth of cross-domain knowledge diffusion from scientific papers to policies conform to a power-law distribution, while the speed follows a logarithmic normal distribution. Moreover, the papers with the highest diffusion strength, breadth, and fastest diffusion speed are predominantly from world-renowned universities, scholars, and top journals. The papers with the highest diffusion strength and breadth are mostly from the social sciences, especially economics, while those with the fastest diffusion speed are mainly from the medical and life sciences, followed by the social sciences. The findings indicate that cross-domain knowledge diffusion from science to policy follows the Matthew effect, whereby individuals or institutions with high academic achievements are more likely to achieve successful cross-domain knowledge diffusion. Furthermore, papers in the field of economics tend to have higher cross-domain knowledge diffusion strength and breadth, while those in the medical and life sciences diffuse faster. 86th Annual Meeting of the Association for Information Science & Technology | Oct. 27 – 31, 2023 | London, United Kingdom. Author(s) retain copyright, but ASIS&T receives an exclusive publication license.
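The log-normal claim about diffusion speed can be illustrated by the standard fitting procedure: take logs and fit a normal distribution by maximum likelihood. The data below are synthetic, and the parameter values are arbitrary placeholders, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "diffusion speed" sample, assumed log-normal per the abstract.
mu, sigma = 1.5, 0.6
speeds = rng.lognormal(mean=mu, sigma=sigma, size=50_000)

# MLE for a log-normal: fit a normal distribution to the log-speeds.
log_speeds = np.log(speeds)
mu_hat, sigma_hat = log_speeds.mean(), log_speeds.std()
print(mu_hat, sigma_hat)
```

A power-law fit for strength and breadth would proceed analogously on log-log scale, though heavy tails usually call for more careful estimators than least squares.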
Local differential privacy (LDP) approaches to collecting sensitive information for frequent itemset mining (FIM) can reliably guarantee privacy. Most current approaches to FIM under LDP add "padding and sampling" steps to obtain frequent itemsets and their frequencies because each user transaction represents a set of items. The current state-of-the-art approach, namely set-value itemset mining (SVSM), must balance variance and bias to achieve accurate estimations. Thus, an unbiased FIM approach with lower variance is highly desirable. To narrow this gap, we propose an item-level LDP frequency oracle approach, named the Integrated-with-Hadamard-Transform-Based Frequency Oracle (IHFO). For the first time, Hadamard encoding is introduced to a set of values to encode all items into a fixed vector, and perturbation can be subsequently applied to the vector. An FIM approach, called optimized united itemset mining (O-UISM), is proposed to combine the padding-and-sampling-based frequency oracle (PSFO) and the IHFO into a framework for acquiring accurate frequent itemsets with their frequencies. Finally, we theoretically and experimentally demonstrate that O-UISM significantly outperforms the extant approaches in finding frequent itemsets and estimating their frequencies under the same privacy guarantee.
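A much-simplified sketch of the Hadamard-encoding idea (single items, no perturbation step, and not the paper's actual IHFO construction): each item maps to a column of the Walsh-Hadamard matrix, whose orthogonality lets an aggregator recover item frequencies even after the coordinates are later randomized. Only the noiseless encode/decode round trip is shown here.

```python
def hadamard_entry(i, j):
    """Entry of the 2^k x 2^k Walsh-Hadamard matrix: (-1)^popcount(i AND j)."""
    return 1 - 2 * (bin(i & j).count("1") % 2)

def encode_item(v, d):
    """Encode item v (0 <= v < d, d a power of two) as its Hadamard column."""
    return [hadamard_entry(j, v) for j in range(d)]

def decode_item(vec, d):
    """Recover the item by matching against Hadamard columns (inner product)."""
    scores = [sum(hadamard_entry(j, x) * vec[j] for j in range(d))
              for x in range(d)]
    return max(range(d), key=lambda x: scores[x])

d = 8
for v in range(d):
    assert decode_item(encode_item(v, d), d) == v
print("round-trip ok for all", d, "items")
```

In an actual LDP mechanism each reported coordinate would be sign-flipped with a probability set by the privacy budget, and the orthogonality above makes the resulting frequency estimates unbiased after rescaling.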
While deep learning techniques have shown promising performance in the Major Depressive Disorder (MDD) detection task, they still face limitations in real-world scenarios. Specifically, given the data scarcity, some e...
Text-to-image synthesis refers to generating visually realistic and semantically consistent images from given textual descriptions. Previous approaches generate an initial low-resolution image and then refine it to be high-resolution. Despite the remarkable progress, these methods are limited in fully utilizing the given texts and could generate text-mismatched images, especially when the text description is complex. We propose a novel fine-grained text-image fusion based generative adversarial network (FF-GAN), which consists of two modules: a fine-grained text-image fusion block (FF-Block) and global semantic refinement (GSR). The proposed FF-Block integrates an attention block and several convolution layers to effectively fuse the fine-grained word-context features into the corresponding visual features, in which the text information is fully used to refine the initial image with more details. The GSR is proposed to improve the global semantic consistency between linguistic and visual features during the refinement process. Extensive experiments on the CUB-200 and COCO datasets demonstrate the superiority of FF-GAN over other state-of-the-art approaches in generating images with semantic consistency to the given texts.
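The word-context fusion step can be sketched as single-head cross-attention in NumPy (a simplification of our own; the actual FF-Block also stacks convolution layers and learned projections):

```python
import numpy as np

def word_context_fusion(visual, words):
    """Fuse word features into visual features via cross-attention.

    visual: (N, D) array of N region features.
    words:  (T, D) array of T word features.
    Returns (N, D): each region feature plus its attended word context.
    """
    scores = visual @ words.T                    # (N, T) region-word affinity
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax over words
    context = attn @ words                       # (N, D) per-region word context
    return visual + context                      # residual fusion

rng = np.random.default_rng(0)
fused = word_context_fusion(rng.normal(size=(16, 32)), rng.normal(size=(5, 32)))
print(fused.shape)
```

Because each region attends to every word, complex descriptions contribute detail to exactly the regions they describe, which is the intuition behind the fine-grained fusion.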
Person re-identification is a prevalent technology deployed in intelligent surveillance. There have been remarkable achievements in person re-identification methods based on the assumption that all person images have a sufficiently high resolution, yet such models are not applicable to the open world. In the real world, the changing distance between pedestrians and the camera renders the resolution of pedestrians captured by the camera variable. When low-resolution (LR) images in the query set are matched with high-resolution (HR) images in the gallery set, the performance of the pedestrian matching task degrades because critical pedestrian information is absent from the LR images. To address these issues, we present a dual-stream coupling network with wavelet transform (DSCWT) for the cross-resolution person re-identification task. Firstly, we use the multi-resolution analysis principle of the wavelet transform to separately process the low-frequency and high-frequency regions of LR images, which restores the lost detail information of LR images. Secondly, we devise a residual knowledge constrained loss function that transfers knowledge between the two streams of LR images and HR images to access pedestrian-invariant features at various resolutions. Extensive qualitative and quantitative experiments across four benchmark datasets verify the superiority of the proposed approach.
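The decomposition into low- and high-frequency regions can be sketched with a one-level 2D Haar transform (an illustrative assumption; the paper's wavelet basis may differ). The low-frequency subband carries the coarse appearance, while the three detail subbands carry the fine structure that is lost in LR images.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar transform: split an image into four subbands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2   # low-frequency approximation
    lh = (a - b + c - d) / 2   # horizontal detail
    hl = (a + b - c - d) / 2   # vertical detail
    hh = (a - b - c + d) / 2   # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Invert the one-level Haar transform (perfect reconstruction)."""
    h, w = ll.shape
    img = np.empty((2 * h, 2 * w))
    img[0::2, 0::2] = (ll + lh + hl + hh) / 2
    img[0::2, 1::2] = (ll - lh + hl - hh) / 2
    img[1::2, 0::2] = (ll + lh - hl - hh) / 2
    img[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return img

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8))
subbands = haar_dwt2(img)
print(subbands[0].shape)
```

Since the transform is invertible, a network can enhance the detail subbands of an LR image and reconstruct a sharper image without disturbing the low-frequency content.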
The video grounding (VG) task aims to locate the queried action or event in an untrimmed video based on rich linguistic descriptions. Existing proposal-free methods are trapped in the complex interaction between video and query, overemphasizing cross-modal feature fusion and feature correlation for VG. In this paper, we propose a novel boundary regression paradigm that performs regression token learning in a transformer. In particular, we present a simple but effective proposal-free framework, namely the video grounding transformer (ViGT), which predicts the temporal boundary using a learnable regression token rather than multi-modal or cross-modal features. In ViGT, the benefits of a learnable token are manifested as follows. (1) The token is unrelated to the video or the query and avoids data bias toward the original video and query. (2) The token simultaneously performs global context aggregation from the video and query features. Moreover, we employed a shared feature encoder to project both video and query into a joint feature space before performing cross-modal co-attention (i.e., video-to-query attention and query-to-video attention) to highlight discriminative features in each modality. Furthermore, we concatenated a learnable regression token [REG] with the video and query features as the input of a vision-language transformer. Finally, we utilized the token [REG] to predict the target moment and visual features to constrain the foreground and background probabilities at each timestamp. The proposed ViGT performed well on three public datasets: ANet-Captions, TACoS, and YouCookⅡ. Extensive ablation studies and qualitative analysis further validated the interpretability of ViGT.
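The role of the [REG] token can be sketched with a single self-attention pass in NumPy (learned projections, multiple heads, and the regression head are omitted; the token here is simply randomly initialized):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 32
video = rng.normal(size=(20, D))     # 20 video clip features
query = rng.normal(size=(7, D))      # 7 query word features
reg_token = rng.normal(size=(1, D))  # learnable [REG] token (random init here)

# Concatenate [REG] with the video and query features, as in the ViGT input.
x = np.concatenate([reg_token, video, query], axis=0)

def self_attention(x):
    """Single-head self-attention without learned projections."""
    scores = x @ x.T / np.sqrt(x.shape[1])
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax over positions
    return attn @ x

out = self_attention(x)
# The updated [REG] representation aggregates context from every video clip
# and query word; a small head would then regress (start, end) from it.
reg_out = out[0]
print(reg_out.shape)
```

Because the token attends to all positions, its output mixes global video and query context while starting from parameters tied to neither modality, which is the bias-avoidance argument in the abstract.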
Domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled target domain that follows a similar but different distribution. Recently, adversarial-based methods have achieved remarkable success due to the excellent performance of domain-invariant feature representation learning. However, adversarial methods learn transferability at the expense of discriminability in the feature representation, leading to low generalization to the target domain. To this end, we propose a Multi-view Feature Learning method for the over-penalty in adversarial domain adaptation. Specifically, multi-view representation learning is proposed to enrich the discriminative information contained in the domain-invariant feature representation, which counters the over-penalty for discriminability in adversarial training. Then, the class distribution in the intra-domain is proposed to replace that in the inter-domain to capture more discriminative information in the learning of transferable features. Extensive experiments show that our method improves discriminability while maintaining transferability and exceeds the most advanced methods on domain adaptation benchmark datasets.
Previous methods on knowledge base question generation (KBQG) primarily focus on refining the quality of a single generated question. However, considering the remarkable paraphrasing ability of humans, we believe that...