ISBN (print): 0201142376
Over forty years ago, Vannevar Bush articulated his vision of a “Memex” machine: “associative indexing, … whereby any item may be caused at will to select immediately and automatically another” [Bush 45]. In the sixties, Engelbart [Engelbart, English 68] built collaborative systems to provide idea structuring and sharing. Nelson [Nelson 81] coined “hypertext” and proposed world-wide networks for publishing, linking, annotating and indexing multiple versions of documents. With increasing numbers of research projects, papers, panels and conferences, and commercially available systems (e.g. NoteCards by Xerox, Guide by OWL and HyperCard by Apple) in recent years, hypertext may be an idea whose time has finally come — or at least a phenomenon not to be ignored.

The goal of this panel is not to define hypertext or hypermedia (at its simplest: non-linearly arranged and accessed information), debate its uniqueness, explain implementation issues, or survey the many applications and contributions in the field (see [Conklin 87] for an excellent survey of hypertext, and the Proceedings of the Hypertext '87 Workshop at the University of North Carolina, Chapel Hill). Rather, we intend to approach it from the perspective of the information user: reader, searcher, author. The panel will address the following issues:

Are the processes of authoring and understanding helped or hindered by the non-linear structure of hypertext, and for which kinds of tasks and users? What is the difference between a hypertext writer and a knowledge engineer?

In searching for information, what is the difference between browsing and querying? What experiments need to be done? What tools, environments or interfaces can improve the process of information creation and access? Can the overhead of creating or interpreting structure be reduced?

When will hypertext replace paper, or should it? How do the functions of author and reader co-evolve? Could this revolutionize society like the printing press? Why didn't the panelists cre
Hardware description language (HDL) code design is a critical component of the chip design process, requiring substantial engineering and time resources. Recent advances in large language models (LLMs), such as the GPT series, have shown promise in automating HDL code generation. However, current LLM-based approaches face significant challenges in meeting real-world hardware design requirements, particularly in handling complex designs and ensuring code correctness. Our evaluations reveal that the functional correctness rate of LLM-generated HDL code decreases significantly as design complexity increases. In this paper, we propose AutoSilicon, a framework that aims to scale up the hardware design capability of LLMs. AutoSilicon incorporates an agent system that 1) decomposes large-scale, complex code design tasks into smaller, simpler tasks; 2) provides a compilation and simulation environment in which the LLM can compile and test each piece of code it generates; and 3) introduces a series of optimization strategies. Experimental results demonstrate that AutoSilicon can scale hardware designs to projects whose code exceeds 10,000 tokens. In terms of design quality, it further improves the syntax and functional correctness rates compared with approaches that do not employ any extensions. For example, compared to directly generating HDL code with GPT-4-turbo, AutoSilicon improves the syntax correctness rate by an average of 35.8% and the functional correctness rate by an average of 35.6%.
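The decompose-then-verify loop the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: `llm_generate` and `compile_and_simulate` are hypothetical stand-ins for an LLM call and a real HDL toolchain (e.g. a Verilog compiler and simulator), and the retry logic only hints at the paper's broader set of optimization strategies.

```python
def llm_generate(subtask: str, feedback: str = "") -> str:
    # Stand-in for an LLM call: returns placeholder HDL for one subtask,
    # optionally conditioned on compiler feedback from a failed attempt.
    header = f"// fixed after: {feedback}\n" if feedback else f"// {subtask}\n"
    return header + f"module {subtask}();\nendmodule"

def compile_and_simulate(code: str) -> tuple[bool, str]:
    # Stand-in for the compilation/simulation environment: here we only
    # check that the snippet declares a complete module.
    ok = "module" in code and "endmodule" in code
    return ok, "" if ok else "syntax error: missing module/endmodule"

def design(task: str, subtasks: list[str], max_retries: int = 2) -> dict:
    """Run the agent loop: decompose, generate, compile/test, repair."""
    results = {}
    for sub in subtasks:                          # 1) task decomposition
        code = llm_generate(sub)
        ok, err = compile_and_simulate(code)      # 2) compile & test each piece
        retries = 0
        while not ok and retries < max_retries:   # 3) error-driven repair
            code = llm_generate(sub, feedback=err)
            ok, err = compile_and_simulate(code)
            retries += 1
        results[sub] = (ok, code)
    return results

out = design("alu", ["adder", "shifter"])
print(all(ok for ok, _ in out.values()))
```

In a real pipeline the stubbed checker would be replaced by an actual toolchain invocation, so that the pass/fail signal and error text fed back to the model reflect true syntax and functional correctness.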
Connected Autonomous Vehicle (CAV) driving, as a data-driven intelligent driving technology within the Internet of Vehicles (IoV), presents significant challenges to the efficiency and security of real-time data management. The combination of Web3.0 and edge content caching holds promise for providing low-latency data access for CAVs' real-time applications. Web3.0 enables the reliable pre-migration of frequently requested content from content providers to edge nodes. However, identifying optimal edge-node peers for joint content caching and replacement remains challenging due to the dynamic nature of traffic flow in the IoV. To address these challenges, this article introduces GAMA-Cache, an edge content caching methodology that leverages Graph Attention Networks (GAT) and Multi-Agent Reinforcement Learning (MARL). GAMA-Cache formulates cooperative edge content caching as a constrained Markov decision process. It employs a MARL technique based on cooperation effectiveness to learn caching decisions, with a GAT augmenting the information extracted from adjacent nodes. A distinct collaborator selection mechanism is also developed to streamline communication between agents, filtering out of the policy network's vector input those peers with minimal correlation. Experimental results demonstrate that, in terms of service latency and delivery failure, GAMA-Cache outperforms other state-of-the-art MARL solutions for edge content caching in the IoV.
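The collaborator selection idea can be illustrated with a small sketch. This is not the paper's implementation: the paper learns attention weights with a GAT inside a MARL policy, whereas here a fixed cosine-similarity score and an illustrative threshold stand in for that learned correlation, and the node names and request vectors are invented for the example.

```python
def correlation(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two content-request frequency vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def select_collaborators(me: list[float], neighbours: dict, threshold: float = 0.5) -> dict:
    # Keep only neighbours whose request pattern correlates with ours;
    # weakly correlated peers are filtered out of the policy input,
    # reducing agent-to-agent communication.
    return {name: vec for name, vec in neighbours.items()
            if correlation(me, vec) >= threshold}

requests_me = [5.0, 1.0, 0.0, 2.0]          # this node's content demand
neighbours = {
    "edge_a": [4.0, 2.0, 0.0, 1.0],         # similar demand -> kept
    "edge_b": [0.0, 0.0, 9.0, 0.0],         # disjoint demand -> filtered out
}
kept = select_collaborators(requests_me, neighbours)
print(sorted(kept))  # expected: ['edge_a']
```

The filtering step is what keeps the policy network's input compact as the number of edge nodes grows: only peers whose cached content is actually relevant to this node's traffic contribute to its caching decision.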