ISBN (Print): 9781538621653
To improve the efficiency of face recognition with high-dimensional features extracted from a deep model, a fast recognition method based on hash coding is proposed. Unlike previous approaches, the hash coding and the cascade network are designed for two-stage face recognition. First, low-dimensional and high-dimensional features are extracted by different models. Second, the low-dimensional features are quantized into hash codes by a piecewise function. The first identification stage is then completed by computing the Hamming distance between hash codes. Finally, the second identification stage is completed by computing the cosine distance between the high-dimensional features of the face images that pass the first stage. Experimental results show that the proposed method improves Rank-1 recognition efficiency by up to 64% while keeping the same accuracy as VGG.
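A minimal sketch of the two-stage matching pipeline described above. The sign-based quantizer, feature dimensions, shortlist size, and toy gallery are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def to_hash_code(low_dim_feat, threshold=0.0):
    """Quantize a low-dimensional feature into a binary code with a simple
    piecewise (sign) function -- an assumed stand-in for the paper's quantizer."""
    return (low_dim_feat > threshold).astype(np.uint8)

def hamming_distance(a, b):
    return int(np.count_nonzero(a != b))

def cosine_distance(a, b):
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def two_stage_identify(query_low, query_high, gallery, top_k=10):
    """Stage 1: shortlist gallery faces by Hamming distance on hash codes.
    Stage 2: re-rank the shortlist by cosine distance on deep features."""
    q_code = to_hash_code(query_low)
    shortlist = sorted(
        gallery,
        key=lambda g: hamming_distance(q_code, to_hash_code(g["low"]))
    )[:top_k]
    return min(shortlist, key=lambda g: cosine_distance(query_high, g["high"]))

# Usage with toy data (dimensions are arbitrary assumptions)
rng = np.random.default_rng(0)
gallery = [{"id": i, "low": rng.normal(size=64), "high": rng.normal(size=4096)}
           for i in range(1000)]
query = gallery[42]
print(two_stage_identify(query["low"], query["high"], gallery)["id"])  # 42
```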
ISBN (Print): 9783031402913; 9783031402920
In image retrieval tasks, effective hash coding methods achieve better similarity preservation and therefore more efficient retrieval. Existing methods are usually divided into those that construct global center constraints and those that exploit local feature constraints. They mainly treat the category labels of images as numeric IDs and do not fully exploit the value of classification labels as textual knowledge. In this paper, we propose a novel and efficient hash coding framework. The generation of hash centers is improved so that each image can correspond to multiple hash centers; on this basis, a loss function for the local similarity constraint is proposed and a joint loss function for the deep model is designed. Furthermore, using a text-image pre-trained large model, we extract the text of the category label from the image as knowledge information, which is used to construct better hash codes for the retrieval query and to improve retrieval performance. Extensive experiments on several large-scale datasets verify that the proposed method outperforms state-of-the-art hash coding methods.
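A rough sketch of the "multiple hash centers per image" idea, assuming random ±1 centers and a simple squared-distance pull toward the assigned centers; the paper's actual center generation, joint loss, and text-knowledge branch are not reproduced here:

```python
import torch

def hash_center_loss(codes, center_ids, centers):
    """Pull each relaxed hash code toward every hash center assigned to its
    image (mean squared distance). Illustrative only, not the paper's loss."""
    loss = 0.0
    for code, ids in zip(codes, center_ids):
        assigned = centers[ids]                              # (k, n_bits)
        loss = loss + ((code.unsqueeze(0) - assigned) ** 2).mean()
    return loss / len(codes)

n_bits, n_centers = 32, 10
centers = torch.sign(torch.randn(n_centers, n_bits))         # assumed +/-1 centers
codes = torch.tanh(torch.randn(4, n_bits, requires_grad=True))
center_ids = [torch.tensor([0, 3]), torch.tensor([1]),
              torch.tensor([2, 5, 7]), torch.tensor([9])]     # several centers per image
print(hash_center_loss(codes, center_ids, centers))
```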
ISBN (Print): 9781728173078
In grant-free ultra-reliable low-latency communications (URLLC), pilot transmission at the transmitter and pilot retrieval at the receiver are two basic operations of uplink access over the air, supporting user activity detection (UAD), channel estimation and data transmission. The overall process requires distinguishable pilots or reference symbols but is vulnerable to pilot-aware attacks, which cause high latency and low reliability. In this paper we propose a feature-embedded hash (FEH) coding method that embeds wireless signal features into hash coding so that the transmission and retrieval of pilots (TRP) can be protected by probabilistically encoding time-frequency resources. In particular, pilot values are hashed at the transmitters into many distinguishable subcarrier activation patterns (SAPs), and the receiver retrieves the pilots of interest from the observed SAPs. This process requires the joint design of attack mode detection (AMD) and UAD, after which channel estimation and data transmission can be secured. Novel expressions for the failure probability of the grant-free URLLC system are derived to characterize its reliability under short-packet transmission. Simulations demonstrate the effectiveness of FEH coding against attack.
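The following toy sketch only illustrates the general idea of hashing a pilot value onto a subcarrier activation pattern and retrieving it at the receiver; the SHA-256 mapping, the per-user key, and the slot index are assumptions, and the paper's FEH construction (which embeds wireless signal features and handles AMD/UAD) is not modelled:

```python
import hashlib
import random

def pilot_to_sap(pilot_value, user_key, slot_index, n_subcarriers=64, n_active=8):
    """Deterministically map a pilot value (plus an assumed per-user key and
    time slot) to a subcarrier activation pattern: the activated indices."""
    digest = hashlib.sha256(f"{pilot_value}|{user_key}|{slot_index}".encode()).digest()
    rng = random.Random(digest)                     # seeded by the hash digest
    return sorted(rng.sample(range(n_subcarriers), n_active))

def retrieve_pilot(observed_sap, candidate_pilots, user_key, slot_index):
    """Receiver side (single-user, attack-free toy case): find which candidate
    pilot value explains the observed SAP."""
    for p in candidate_pilots:
        if pilot_to_sap(p, user_key, slot_index) == observed_sap:
            return p
    return None

sap = pilot_to_sap(pilot_value=17, user_key="user-7", slot_index=3)
print(sap, retrieve_pilot(sap, range(64), "user-7", 3))
```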
ISBN (Print): 0780342534
In this paper, the unfavourable effects of hash coding on the convergence of CMAC learning are investigated in detail, based on the fact that CMAC learning is equivalent to Gauss-Seidel iteration for solving a linear system of equations. A set of theoretical results concerning the convergence of CMAC learning is obtained in Section II. It is pointed out that hash coding may give rise to divergence, or at least deteriorate the convergence behaviour, and the causes of these phenomena are revealed through a matrix-theoretic approach. We propose a compensatory measure that is shown by simulation to be effective in minimizing the unfavourable effects of hash coding.
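For reference, a plain Gauss-Seidel iteration is sketched below on an assumed toy system; CMAC weight updates correspond to this iteration on the association-matrix equations, which is why the properties of that matrix (degraded by hash-induced collisions) govern convergence. The example matrix is not a CMAC instance:

```python
import numpy as np

def gauss_seidel(A, b, n_iter=200):
    """Plain Gauss-Seidel iteration for A w = b, updating one component at a
    time using the most recent values of the others."""
    w = np.zeros_like(b, dtype=float)
    for _ in range(n_iter):
        for i in range(len(b)):
            w[i] = (b[i] - A[i, :i] @ w[:i] - A[i, i + 1:] @ w[i + 1:]) / A[i, i]
    return w

# Toy diagonally dominant system (assumed example), compared with a direct solve
A = np.array([[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
print(gauss_seidel(A, b), np.linalg.solve(A, b))
```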
The proliferation of medical imaging in clinical diagnostics has led to an overwhelming volume of image data, presenting a challenge for efficient storage, management, and retrieval. Specifically, the rapid growth in the use of imaging modalities such as Computed Tomography (CT) and X-rays has outpaced the capabilities of conventional retrieval systems, necessitating more sophisticated approaches to assist in clinical decision-making and research. Our study introduces a novel deep hash coding-based Content-Based Medical Image Retrieval (CBMIR) framework that uses a convolutional neural network (CNN) combined with hash coding for efficient and accurate retrieval. The model integrates a Dense block-based feature learning network, a hash learning block, and a spatial attention block to enhance feature extraction specific to medical imaging. We reduce dimensionality by applying the Reconstruction Independent Component Analysis (RICA) algorithm while preserving diagnostic information. The framework achieves a mean average precision (mAP) of 0.85 on ChestX-ray8, 0.82 on TCIA-CT, 0.84 on MIMIC-CXR, and 0.82 on LIDC-IDRI, with retrieval times of 675 ms, 663 ms, 735 ms, and 748 ms, respectively. Comparisons with ResNet and DenseNet confirm the effectiveness of our model, significantly enhancing medical image retrieval for clinical decision-making and research.
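A schematic of such a feature-plus-attention-plus-hash pipeline is sketched below. The small convolutional backbone, layer sizes, and 48-bit code length are placeholders rather than the paper's Dense-block architecture or RICA step:

```python
import torch
import torch.nn as nn

class DeepHashCBMIR(nn.Module):
    """Schematic pipeline: convolutional feature extractor, spatial-attention
    re-weighting, and a hash layer producing relaxed codes in (-1, 1)."""
    def __init__(self, n_bits=48):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        self.attention = nn.Sequential(nn.Conv2d(64, 1, 1), nn.Sigmoid())
        self.hash_layer = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 8 * 8, n_bits), nn.Tanh())

    def forward(self, x):
        f = self.features(x)
        f = f * self.attention(f)      # spatial attention re-weighting
        return self.hash_layer(f)      # binarize with sign() at query time

codes = DeepHashCBMIR()(torch.randn(2, 1, 128, 128))
print(codes.shape)  # torch.Size([2, 48])
```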
Nowadays, with the rapid development of commerce, how to effectively improve the performance of a recommendation system has aroused great concern. However, traditional recommendation systems require users to log in to their accounts, which leads to a poor user experience. This paper presents a novel recommendation system that uses face recognition technologies to automatically extract face attribute information as input. The system first obtains user identity, gender, and age, and then obtains feedback through expression analysis. Based on the acquired face attributes, we propose extracting compact binary user features by integrating a denoising autoencoder (DAE) and hash coding, which effectively improves computing efficiency. The hash features from DAE-H-Face and DAE are further combined to enhance representation ability. Finally, Hamming similarity-based collaborative filtering is used for recommendation. Experimental results on the MovieLens database show that the proposed recommendation method has better effectiveness and robustness. Moreover, the results also demonstrate its advantages for the cold-start problem.
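A minimal sketch of Hamming similarity-based collaborative filtering over binary user codes; the 48-bit codes, random ratings matrix, and neighbour count are assumptions, and the DAE-based code extraction itself is not shown:

```python
import numpy as np

def hamming_similarity(a, b):
    """Similarity between two binary user codes: fraction of matching bits."""
    return np.mean(a == b)

def recommend(query_code, user_codes, ratings, top_n=5, k=10):
    """Find the k users whose codes are closest to the query user's code in
    Hamming similarity, then rank items by similarity-weighted ratings."""
    sims = np.array([hamming_similarity(query_code, c) for c in user_codes])
    neighbours = np.argsort(sims)[::-1][:k]
    scores = sims[neighbours] @ ratings[neighbours]
    return np.argsort(scores)[::-1][:top_n]

rng = np.random.default_rng(1)
user_codes = rng.integers(0, 2, size=(100, 48))   # assumed 48-bit user codes
ratings = rng.random((100, 500))                  # assumed user-item ratings
print(recommend(user_codes[0], user_codes, ratings))
```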
In this paper, trade-offs among certain computational factors in hash coding are analyzed. The paradigm problem considered is that of testing a series of messages one-by-one for membership in a given set of messages. Two new hash-coding methods are examined and compared with a particular conventional hash-coding method. The computational factors considered are the size of the hash area (space), the time required to identify a message as a nonmember of the given set (reject time), and an allowable error frequency. The new methods are intended to reduce the amount of space required to contain the hash-coded information compared with conventional methods. The reduction in space is accomplished by exploiting the possibility that a small fraction of errors of commission may be tolerable in some applications, in particular applications in which a large amount of data is involved and a core-resident hash area is consequently not feasible using conventional methods. In such applications, it is envisaged that overall performance could be improved by using a smaller core-resident hash area in conjunction with the new methods and, when necessary, by using some secondary and perhaps time-consuming test to "catch" the small fraction of errors associated with the new methods. An example is discussed which illustrates possible areas of application for the new methods. Analysis of the paradigm problem demonstrates that allowing a small number of test messages to be falsely identified as members of the given set will permit a much smaller hash area to be used without increasing reject time.
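A minimal sketch of the bit-array-plus-multiple-hash-functions structure this trade-off leads to (the familiar Bloom filter): membership tests never miss a stored message but may report a false positive with small probability. The bit-array size and hash count below are illustrative assumptions:

```python
import hashlib

class BloomFilter:
    """Bit array plus k hash functions; space is traded for a small,
    controllable rate of errors of commission (false positives)."""
    def __init__(self, n_bits=1 << 16, n_hashes=4):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, message):
        for i in range(self.n_hashes):
            h = hashlib.sha256(f"{i}|{message}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.n_bits

    def add(self, message):
        for p in self._positions(message):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, message):
        return all(self.bits[p // 8] >> (p % 8) & 1 for p in self._positions(message))

bf = BloomFilter()
bf.add("alice@example.com")
print("alice@example.com" in bf, "bob@example.com" in bf)  # True, (almost surely) False
```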
Let S be the set of rational exponential expressions with complex rational coefficients, single level exponentiation, and no division or element i in the exponents. Let p be a prime integer of the form 4q ~ 1, where q...
Finite precision leads to many problems in geometric methods from CAD or Computational Geometry. Until now, using exact rational arithmetic was a simple solution, yet much too slow to be of any practical use in real-scale applications. A recent optimization, the lazy rational arithmetic [5], seems promising: it defers exact computations until they become either unnecessary (in most cases) or unavoidable; in such a context, only indispensable computations are performed exactly, that is, those without which a given decision cannot be reached safely using only floating-point arithmetic. This paper takes stock of the lazy arithmetic paradigm: principles, functionalities and limits, speed, possible variants and extensions, difficulties, and problems solved or left unresolved.
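A toy illustration of the lazy idea: keep a cheap floating-point approximation alongside a thunk for the exact rational value, and evaluate exactly only when a decision (here, a sign test) cannot be settled from the approximation. The crude epsilon bound is an assumption; reference [5] handles error bounds rigorously:

```python
from fractions import Fraction

class Lazy:
    """Pair of (float approximation, thunk returning the exact Fraction)."""
    def __init__(self, approx, exact_thunk):
        self.approx = approx
        self.exact_thunk = exact_thunk

    @staticmethod
    def of(value):
        return Lazy(float(value), lambda: Fraction(value))

    def __add__(self, other):
        return Lazy(self.approx + other.approx,
                    lambda: self.exact_thunk() + other.exact_thunk())

    def __mul__(self, other):
        return Lazy(self.approx * other.approx,
                    lambda: self.exact_thunk() * other.exact_thunk())

    def sign(self, eps=1e-9):
        if abs(self.approx) > eps:        # decision is safe in floating point
            return 1 if self.approx > 0 else -1
        exact = self.exact_thunk()        # unavoidable: fall back to exact arithmetic
        return (exact > 0) - (exact < 0)

a = Lazy.of(Fraction(1, 3)) * Lazy.of(3) + Lazy.of(-1)
print(a.sign())  # 0: floating point is ambiguous, exact arithmetic decides
```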
Hashing algorithms have attracted great attention in recent years. In order to improve query speed and retrieval accuracy, this paper proposes an adaptive and asymmetric residual hash (AASH) algorithm based on residual hashing, integrated learning, and an asymmetric pairwise loss. The AASH algorithm is described as follows: 1) an integrated learning model based on transfer learning and a multi-feature fusion strategy is proposed to learn the database hash codes; 2) a residual hash model based on ResNet-50 is proposed to learn the query image hash codes; 3) an asymmetric pairwise loss is proposed, and the parameters of the residual hash model are optimized with respect to the database hash codes; 4) the algorithm learns the database hash codes and the query image hash codes in an asymmetric manner, and integrates the feature learning part and the hash coding part in one framework. Experimental results on three different datasets fully demonstrate that the proposed AASH method performs better than most symmetric and asymmetric deep hash algorithms. Specifically, the optimal result of the AASH algorithm is 0.971 on Cifar10 when the hyperparameter is 100 and the hash code length is 32, 0.945 on ceil images when the hyperparameter is 10 and the hash code length is 24, and 0.945 on FD-XJ when the hyperparameter is 15 and the hash code length is 32. In addition, the convergence, time cost, and effectiveness of the algorithm are verified.
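A rough sketch of an asymmetric pairwise loss of the kind described in step 3), assuming relaxed (tanh) query codes, fixed binary database codes, and a squared-error penalty pushing their inner product toward ±(code length) for similar/dissimilar pairs; the exact AASH loss and hyperparameters are not reproduced:

```python
import torch

def asymmetric_pairwise_loss(query_codes, db_codes, similarity, n_bits):
    """Asymmetric pairwise loss sketch: query codes come from the query
    network, database codes are fixed binary codes learned separately."""
    inner = query_codes @ db_codes.t()            # (n_query, n_db) inner products
    target = n_bits * (2.0 * similarity - 1.0)    # +n_bits similar, -n_bits dissimilar
    return ((inner - target) ** 2).mean()

n_bits = 32
query_codes = torch.tanh(torch.randn(8, n_bits, requires_grad=True))
db_codes = torch.sign(torch.randn(100, n_bits))        # fixed database hash codes
similarity = torch.randint(0, 2, (8, 100)).float()     # 1 = same class, 0 = different
loss = asymmetric_pairwise_loss(query_codes, db_codes, similarity, n_bits)
loss.backward()
print(loss.item())
```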