Single-photon sensors are novel devices with extremely high single-photon sensitivity and temporal resolution. However, these advantages also make them highly susceptible to noise, and single-photon cameras face severe quantization as low as 1 bit/***. These factors make it a daunting task to recover high-quality scene information from noisy single-photon data. Most current image reconstruction methods for single-photon data are mathematical approaches, which limits information utilization and algorithm performance. In this work, we propose a hybrid information enhancement model which can significantly enhance the efficiency of information utilization by leveraging attention mechanisms from both the spatial and channel dimensions. Additionally, we introduce a structural feature enhancement module for the FFN of the transformer, which explicitly improves the model's ability to extract and enhance high-frequency structural information through two symmetric convolution branches. Furthermore, we propose a single-photon data simulation pipeline based on RAW images to address the challenge of the lack of single-photon datasets. Experimental results show that the proposed method outperforms state-of-the-art methods at various noise levels and exhibits a more efficient capability for recovering high-frequency structures and extracting information.
Category-level pose estimation methods have received widespread attention as they can be generalized to intra-class unseen objects. Although RGB-D-based category-level methods have made significant progress, reliance ...
This article investigates the adaptive resource allocation scheme for digital twin (DT) synchronization optimization over dynamic wireless networks. In our considered model, a base station (BS) continuously collects f...
The next generation of wireless communications seeks to deeply integrate artificial intelligence (AI) with user-centric communication networks, with the goal of developing AI-native networks that more accurately address user requirements. The rapid development of large language models (LLMs) offers significant potential for realizing these goals. However, existing efforts that leverage LLMs for wireless communication often overlook the considerable gap between human natural language and the intricacies of real-world communication systems, thus failing to fully exploit the capabilities of LLMs. To address this gap, we propose a novel LLM-driven paradigm for wireless communication that innovatively incorporates a natural language to structured query language (NL2SQL) tool. In this paradigm, the user's personal requirements are the primary focus. Upon receiving a user request, LLMs first analyze the user intent in terms of relevant communication metrics and system parameters. Subsequently, a structured query language (SQL) statement is generated to retrieve the specific parameter values from a high-performance real-time database. We further utilize LLMs to formulate and solve an optimization problem based on the user request and the retrieved parameters. The solution to this optimization problem then drives adjustments in the communication system to fulfill the user's requirements. To validate the feasibility of the proposed paradigm, we present a prototype system. In this prototype, we consider a user-request-centric semantic communication (URC-SC) system in which a dynamic semantic representation network at the physical layer adapts its encoding depth to meet user requirements, such as improved data transmission quality or reduced data transmission latency. Additionally, two LLMs are employed to analyze user requests and generate SQL statements, respectively. Simulation results demonstrate the effectiveness of the prototype in personalizing and addressing user requests.
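The request-to-SQL-to-adjustment flow described in this abstract can be sketched minimally. The schema, metric names, intent-mapping rules, and the toy "optimization" below are all assumptions for illustration; the paper itself uses LLMs for the intent-analysis and SQL-generation steps rather than the keyword lookup shown here.

```python
import sqlite3

# Hypothetical link-state table standing in for the real-time database.
SCHEMA = ("CREATE TABLE link_state ("
          "user_id INTEGER, snr_db REAL, latency_ms REAL, encoding_depth INTEGER)")

# Keyword -> SQL mapping; in the paper an LLM generates the SQL instead.
INTENT_TO_SQL = {
    "quality": "SELECT snr_db, encoding_depth FROM link_state WHERE user_id = ?",
    "latency": "SELECT latency_ms, encoding_depth FROM link_state WHERE user_id = ?",
}

def handle_request(conn, user_id, request):
    """Map a natural-language request to SQL, fetch parameters, propose an adjustment."""
    intent = next((k for k in INTENT_TO_SQL if k in request.lower()), None)
    if intent is None:
        raise ValueError("unrecognized request")
    metric, depth = conn.execute(INTENT_TO_SQL[intent], (user_id,)).fetchone()
    # Toy adjustment rule: deeper semantic encoding for quality, shallower for latency.
    new_depth = depth + 1 if intent == "quality" else max(1, depth - 1)
    return {"intent": intent, "metric": metric, "encoding_depth": new_depth}

conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
conn.execute("INSERT INTO link_state VALUES (1, 18.5, 42.0, 4)")
print(handle_request(conn, 1, "Please improve my transmission quality"))
# → {'intent': 'quality', 'metric': 18.5, 'encoding_depth': 5}
```

The point of the sketch is the separation of concerns the paradigm proposes: intent analysis, parameter retrieval via SQL, and a parameter adjustment step are three independent stages.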
The objective of research on the MIDAS project is to demonstrate the viability of a multiprocessor approach to computing and to develop a general-purpose, extensible architecture which can be used to address the growing computational requirements of the scientific community. To be successful in this endeavor, however, requires more than simply designing, or even constructing, new hardware structures. Achieving high performance on future systems will require software approaches which can exploit parallel architectures as fully as possible. A critical issue, therefore, is to understand the functional requirements of a large class of applications. These requirements must be critically examined and, in many cases, new approaches to old algorithms investigated. For highly parallel systems to be effectively utilized, they must be flexible and adaptable to specific application requirements. In particular, they will probably need a variety of control and communication mechanisms to facilitate load balancing of a problem and to minimize bottlenecks in performance. The effective utilization of highly parallel architectures, however, raises many new questions. Traditional operating system structures must, for example, be re-examined. Fault-tolerant environments, with error recovery capability, become increasingly important as the number of components and processors increases. Language extensions to support parallel operations are obviously required, and ultimately new languages are needed to explicitly exploit parallel constructs. A new generation of development and debugging tools will be necessary in order to examine program performance and operation in an asynchronous parallel environment.
Aggressive design using level-sensitive latches and wave pipelining has been proposed to meet the increasing need for higher-performance digital systems. The optimal clocking problem for such designs has been formulated using an accurate timing model. However, this problem has been difficult to solve because of its nonconvex solution space. The best algorithms to date employ linear programs to solve an overconstrained case that has a convex solution space, yielding suboptimal solutions to the general problem. A new efficient (cubic complexity) algorithm, Gpipe, exploits the geometric characteristics of the full nonconvex solution space to determine the maximum single-phase clocking rate for a closed pipeline with a specified degree of wave pipelining. Introducing or increasing wave pipelining by permanently enabling some latches is also investigated. Sufficient conditions have been found to identify which latches can be removed in this fashion so as to guarantee no decrease and permit a possible increase in the clock rate. Although increasing the degree of wave pipelining can result in faster clocking, wave pipelining is often avoided in design due to difficulties in stopping and restarting the pipeline under stall conditions without losing data, or in reduced-rate testing of the circuit. To solve this problem, which has not previously been addressed, we present conditions and implementation methods that ensure the stoppability and restartability of a wave pipeline.
In this paper we apply a recently formulated general timing model of synchronous operation to the special case of latch-controlled pipelined circuits. The model accounts for multiphase synchronous clocking, correctly captures the behavior of level-sensitive latches, handles both short- and long-path delays, accommodates wave pipelining, and leads to a comprehensive set of timing constraints. Pipeline circuits are important because of their frequent use in computer systems. We define their concurrency as a function of the clock schedule and degree of wave pipelining. We then identify a special class of clock schedules, coincident multiphase clocks, which provide a lower bound on the value of the optimum cycle time. We show that the region of feasible solutions for single-phase clocking can be nonconvex or even disjoint, and derive a closed-form expression for the minimum cycle time of a restricted but practical form of single-phase clocking. We compare these forms of clocking on three pipeline examples and highlight some of the issues in pipeline synchronization.
A general theory for designing minimum-area layouts of static series-parallel CMOS functional cells (also called complex gates) in a standard cell layout style is presented. T. Uehara and W.M. vanCleemput (1981) originally formulated this as the graph optimization problem of finding the minimum number of dual trails that cover a multigraph model M of a cell. The present theory provides a formalism for the analysis of series-parallel graphs and identifies the mathematical structures that underlie the layout problem. It also leads to two efficient algorithms for designing minimum-area functional cell layouts. The first algorithm, TrailTrace, accepts an ordering of M that is fixed, typically for performance reasons, and produces the minimum-area layout for that ordering. Its time complexity is linear in the number of transistors in the cell. The second algorithm, R-TrailTrace, reorders M and produces the best layout area that can be achieved for any reordering that preserves the cell's functionality.
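The graph criterion underlying the Uehara–vanCleemput formulation can be illustrated with a small sketch. For a single connected multigraph, the minimum number of edge-disjoint trails covering all edges is max(1, o/2), where o is the number of odd-degree vertices; each trail corresponds to an unbroken diffusion strip in the cell layout. Note this handles only one graph, not the dual-trail optimization over the NMOS and PMOS networks simultaneously that the cited papers solve; the netlist below (edges = transistors between diffusion nodes) is a made-up example.

```python
from collections import Counter

def min_trail_cover(edges):
    """Minimum number of edge-disjoint trails covering a connected multigraph.

    Classic result: a connected multigraph with o odd-degree vertices can be
    covered by exactly max(1, o/2) edge-disjoint trails (o is always even).
    """
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    odd = sum(1 for d in degree.values() if d % 2 == 1)
    return max(1, odd // 2)

# NAND2 pull-down network: two series NMOS transistors between out and gnd.
nand2_nmos = [("out", "x"), ("x", "gnd")]
print(min_trail_cover(nand2_nmos))  # → 1  (a single unbroken diffusion strip)
```

A count greater than one means the layout needs diffusion breaks, which is exactly the area cost the TrailTrace algorithms minimize.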
Recent studies have focused on leveraging large-scale artificial intelligence (LAI) models to improve semantic representation and compression capabilities. However, the substantial computational demands of LAI models ...