In recent years, the advancement of generative AI has profoundly influenced its application across various industries. One such industry is e-commerce, where this technology can enhance both customer experience and merchants’ productivity and profitability. In this paper, our objective is to review some of the potential use cases of generative AI technology in different areas of an online store. Specifically, we focus on the use cases of product description generation, sentiment analysis of product reviews, and product tagging and categorization. We utilize various prompt engineering techniques to suggest exemplary implementations of these applications using large language models (LLMs). Lastly, the paper discusses the possible risks and challenges that come with using generative AI in these contexts.
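To make the prompt engineering concrete, below is a minimal sketch of LLM-based product description generation. The client library, model name, prompt wording, and product fields are assumptions chosen for illustration, not the paper's actual implementation.

# A minimal illustration of prompt-engineered product description
# generation with an LLM. Model name and product data are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

product = {
    "name": "TrailLite 45L Backpack",  # hypothetical store item
    "features": ["water-resistant nylon", "padded straps", "45 L capacity"],
}

# Instruction-style prompt: role, output constraints, structured input.
prompt = (
    "You are a copywriter for an online store. Write a two-sentence "
    "product description in a friendly tone, without superlatives.\n"
    f"Product: {product['name']}\n"
    f"Features: {', '.join(product['features'])}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)

The same pattern extends to the other use cases by swapping the instruction, e.g. "Classify the sentiment of this review as positive, negative, or neutral" for review analysis, or "Return a JSON list of category tags" for tagging.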
As one kind of Event-Related Potential (ERP), the P300 plays an important role in studying neural activities and cognitive processes, and lays the foundation for the P300 speller, a Brain-Computer Interface (BCI) working...
The bilinear mixture model (BMM) is often used in hyperspectral unmixing (HU) for incorporating nonlinear effects, and is considered more general than the widely used linear mixture model (LMM). Existing BMM-based HU methods often lack identifiability guarantees of the endmembers and their abundances unless some stringent conditions are met. This work puts forth a new framework for BMM-based HU. Our method models the hyperspectral image as a latent factor-structured block-term tensor decomposition model with multilinear rank $(L_{r},\ L_{r},\ 1)$ terms (“LL1” model for short). This way, the HU task boils down to finding the block terms of the tensor model. The LL1 model's essential uniqueness naturally provides identifiability guarantees of the endmembers/abundances under reasonably mild conditions. An alternating gradient projection (GP) algorithm is proposed to tackle the formulated tensor decomposition-based BMM-HU problem. Simulations on semi-real and real datasets show the high-quality unmixing performance of the proposed GP algorithm compared to state-of-the-art methods.
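For reference, the generic LL1 decomposition of a hyperspectral cube $\mathcal{Y} \in \mathbb{R}^{I \times J \times K}$ into $R$ multilinear rank-$(L_r, L_r, 1)$ block terms reads (standard LL1 notation, not reproduced from the paper):

$$\mathcal{Y} \approx \sum_{r=1}^{R} \left( \mathbf{A}_r \mathbf{B}_r^{\top} \right) \circ \mathbf{c}_r, \qquad \mathbf{A}_r \in \mathbb{R}^{I \times L_r},\ \mathbf{B}_r \in \mathbb{R}^{J \times L_r},\ \mathbf{c}_r \in \mathbb{R}^{K},$$

where each low-rank matrix $\mathbf{S}_r = \mathbf{A}_r \mathbf{B}_r^{\top}$ plays the role of an $r$-th spatial (abundance-related) map and $\mathbf{c}_r$ of the $r$-th spectral signature; the exact latent factor structure that the paper ties to the BMM may differ from this generic form.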
In this paper, a clustering approach called CATRSO is proposed. Cluster heads (CHs) are selected according to the trust values of the nodes, so that the most trustworthy nodes serve as CHs, and Rat ...
A simple graph G with p vertices is said to be vertex-Euclidean if there exists a bijection f : V(G) → {1, 2, ..., p} such that f(v1) + f(v2) > f(v3) for each C3-subgraph with vertex set {v1, v2, v3}, where f(v1...
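For concreteness, a brute-force checker for such a labeling might look as follows. The abstract is cut off mid-definition, so the code assumes the usual reading that for every triangle the two smaller labels must sum to more than the largest (i.e. f(v1) + f(v2) > f(v3) whenever f(v1) ≤ f(v2) ≤ f(v3)); this assumption is not from the source.

from itertools import combinations

def is_vertex_euclidean_labeling(vertices, edges, f):
    # Assumed condition (abstract truncated): for every triangle, the
    # two smaller labels must sum to more than the largest label.
    E = {frozenset(e) for e in edges}
    for u, v, w in combinations(vertices, 3):
        # A C3-subgraph is a triangle: all three edges must be present.
        if {frozenset((u, v)), frozenset((v, w)), frozenset((u, w))} <= E:
            a, b, c = sorted((f[u], f[v], f[w]))
            if a + b <= c:
                return False
    return True

# Toy usage: K3 labeled 1, 2, 3 fails, since 1 + 2 = 3 is not > 3.
V = ["x", "y", "z"]
E = [("x", "y"), ("y", "z"), ("x", "z")]
print(is_vertex_euclidean_labeling(V, E, {"x": 1, "y": 2, "z": 3}))  # False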
The main goal of this research is to numerically calculate triple integrals of continuous functions by two methods, RO(SuMM) and RO(SuMM), which are obtained by Romberg acceleration with two composite rules. The f...
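Since the abstract is truncated before the rules are defined, the following is a minimal sketch of classical one-dimensional Romberg acceleration of the composite trapezoidal rule, shown only to illustrate the acceleration idea behind such methods; the paper's RO(SuMM) rules for triple integrals are not reproduced here.

import math

def romberg(f, a, b, n_levels=5):
    # Romberg table: R[i][0] is the composite trapezoid estimate with
    # 2**i panels; R[i][j] applies j levels of Richardson extrapolation.
    R = [[0.0] * n_levels for _ in range(n_levels)]
    h = b - a
    R[0][0] = 0.5 * h * (f(a) + f(b))
    for i in range(1, n_levels):
        h *= 0.5
        # Refine the trapezoid rule by adding only the new midpoints.
        new = sum(f(a + (2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
        R[i][0] = 0.5 * R[i - 1][0] + h * new
        # Richardson extrapolation across the row.
        for j in range(1, i + 1):
            R[i][j] = R[i][j - 1] + (R[i][j - 1] - R[i - 1][j - 1]) / (4 ** j - 1)
    return R[n_levels - 1][n_levels - 1]

print(romberg(math.sin, 0.0, math.pi))  # ≈ 2.0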
ISBN (digital): 9798350388572
ISBN (print): 9798350388589
Transformers have significantly impacted machine learning, particularly in natural language processing and computer vision, due to their robust attention mechanisms and scalability. Although the Transformer is very successful, and its attention mechanism offers a partial explanation of its behavior, it remains a black box in general. This lack of transparency hinders both practical applications and the development of more advanced models. We systematically analyze current methodologies for interpreting the attention patterns, hidden representations, and decision-making processes within Transformers. Additionally, we investigate how these insights aid in refining Transformer architectures and inspire the creation of innovative models that extend beyond the conventional Transformer frameworks. Owing to its strength in modeling long-term dependencies and its data scalability, the Transformer has clear advantages in foundation language models, in achieving multimodality, and in processing discrete data, but it is inferior to Mamba in terms of efficiency, especially when processing continuous data. It is also inferior to diffusion models when processing image data, because it is less adept at capturing the distribution of high-dimensional data and is therefore less suitable as an image generation model. By integrating achievements in both explaining and advancing Transformer-like architectures, this paper serves as a valuable resource for researchers aiming to enhance the performance, transparency, and efficiency of Transformers in various applications and to develop models that surpass current Transformer paradigms.
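As background for the attention patterns discussed above, here is a minimal scaled dot-product attention in Python/NumPy. This is the generic textbook formulation (Vaswani et al., 2017), not any specific model or interpretability method from the survey.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V have shape (seq_len, d). The returned weight matrix is
    # the "attention pattern" that interpretability methods inspect.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V, weights

# Toy usage: self-attention over 4 tokens with 8-dim embeddings.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))
out, attn = scaled_dot_product_attention(X, X, X)
print(attn.sum(axis=-1))  # each row of the attention weights sums to 1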