A great challenge for web site designers is how to ensure that users can access important web pages efficiently. In this paper we present a clustering-based approach to address this problem. Our approach to this c...
By introducing a discrete Frenet frame, this paper first proposes 3D discrete clothoid splines to extend the planar discrete clothoid splines of Schneider and Kobbelt. On the basis of 3D discrete clothoid spline curve...
Accurate prediction of sea surface temperature (SST) is of high importance in marine science, benefiting applications ranging from ecosystem protection to extreme weather forecasting and climate analysis. Wide-area SST usually shows diverse SST patterns in different sea areas due to changes of temperature zones and the dynamics of ocean currents. However, existing studies on SST prediction often focus on small-area predictions and lack consideration of diverse SST patterns. Furthermore, SST shows an annual periodicity, but the periodicity does not strictly adhere to an annual cycle. Existing SST prediction methods struggle to adapt to this non-strict periodicity. To address these two issues, we propose the Cross-Region Graph Convolutional Network with Periodicity Shift Adaptation (RGCN-PSA) model, which is equipped with a Cross-Region Graph Convolutional Network module and a Periodicity Shift Adaptation module. The Cross-Region Graph Convolutional Network module enhances wide-area SST prediction by learning and incorporating diverse SST patterns. Meanwhile, the Periodicity Shift Adaptation module accounts for the annual periodicity and enables the model to adapt automatically to possible temporal shifts. We conduct experiments on two real-world SST datasets, and the results demonstrate that our RGCN-PSA model clearly outperforms baseline models in terms of prediction accuracy. The code of the RGCN-PSA model is available at https://***/ADMIS-TONGJI/RGCN-PSA/.
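As a rough illustration of the cross-region graph-convolution idea (not the authors' implementation — the affinity matrix, feature sizes, and activation here are all assumptions), one propagation step over sea-region nodes might look like:

```python
import numpy as np

def graph_conv(X, A, W):
    """One hypothetical propagation step: aggregate neighbouring
    regions' SST features (A_norm @ X), then apply a learned linear
    transform (@ W) and a non-linearity."""
    deg = A.sum(axis=1, keepdims=True)      # row degree of each region node
    A_norm = A / np.maximum(deg, 1e-8)      # row-normalised cross-region affinities
    return np.tanh(A_norm @ X @ W)          # updated per-region features

rng = np.random.default_rng(0)
n_regions, n_feat = 5, 8
X = rng.standard_normal((n_regions, n_feat))  # per-region SST feature vectors
A = rng.random((n_regions, n_regions))        # assumed dense region-affinity matrix
W = rng.standard_normal((n_feat, n_feat))     # learned weights (random stand-in)
H = graph_conv(X, A, W)
print(H.shape)  # (5, 8): one updated feature vector per region
```

In a trained model, A and W would be learned so that regions with similar SST patterns exchange information even when geographically distant.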
Hardware description language (HDL) code designing is a critical component of the chip design process, requiring substantial engineering and time resources. Recent advancements in large language models (LLMs), such as the GPT series, have shown promise in automating HDL code generation. However, current LLM-based approaches face significant challenges in meeting real-world hardware design requirements, particularly in handling complex designs and ensuring code correctness. Our evaluations reveal that the functional correctness rate of LLM-generated HDL code significantly decreases as design complexity increases. In this paper, we propose the AutoSilicon framework, which aims to scale up the hardware design capability of LLMs. AutoSilicon incorporates an agent system, which 1) allows for the decomposition of large-scale, complex code design tasks into smaller, simpler tasks; 2) provides a compilation and simulation environment that enables LLMs to compile and test each piece of code they generate; and 3) introduces a series of optimization strategies. Experimental results demonstrate that AutoSilicon can scale hardware designs to projects with code equivalent to over 10,000 tokens. In terms of design quality, it further improves the syntax correctness rate and functional correctness rate compared with approaches that do not employ any extensions. For example, compared to directly generating HDL code using GPT-4-turbo, AutoSilicon enhances the syntax correctness rate by an average of 35.8% and improves functional correctness by an average of 35.6%.
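The decompose → generate → compile/simulate → retry workflow described above might be sketched as follows. All names here (`decompose`, `generate_hdl`, `compile_and_simulate`) are illustrative stand-ins, not AutoSilicon's actual API; the LLM and simulator are replaced by toy stubs:

```python
def decompose(spec):
    # Assumption: a planner splits the design spec into module-level subtasks.
    return [f"{spec}:module{i}" for i in range(3)]

def generate_hdl(subtask, feedback=None):
    # Stand-in for an LLM call; a real system would prompt a model here,
    # feeding back compiler diagnostics on retries.
    return f"// HDL for {subtask} (feedback={feedback})"

def compile_and_simulate(code):
    # Stand-in for a compiler/simulator; returns (ok, diagnostics).
    # Here we arbitrarily make anything mentioning "module2" fail.
    ok = "module2" not in code
    return ok, "" if ok else "syntax error"

def design(spec, max_retries=2):
    """Generate each subtask's code, verifying and retrying with feedback."""
    results = {}
    for sub in decompose(spec):
        code, ok = generate_hdl(sub), False
        for _ in range(max_retries + 1):
            ok, diag = compile_and_simulate(code)
            if ok:
                break
            code = generate_hdl(sub, feedback=diag)  # retry with diagnostics
        results[sub] = (ok, code)
    return results

results = design("alu")
```

The point of the loop is that each small piece is verified in isolation before the design grows, which is what keeps correctness from collapsing as complexity increases.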
ISBN (digital): 9783540322771
ISBN (print): 9783540305101
This volume presents the accepted papers for the 4th International Conference on Grid and Cooperative Computing (GCC 2005), held in Beijing, China, during November 30 – December 3, 2005. The conference series of GCC aims to provide an international forum for the presentation and discussion of research trends on the theory, method, and design of Grid and cooperative computing as well as their scientific, engineering and commercial applications. It has become a major annual event in this area. The First International Conference on Grid and Cooperative Computing (GCC 2002) *** GCC 2003 received 550 submissions, from which 176 regular papers and 173 short papers were accepted. The acceptance rate of regular papers was 32%, and the total acceptance rate was 64%. GCC 2004 received 427 main-conference submissions and 154 workshop submissions. The main conference accepted 96 regular papers and 62 short papers. The acceptance rate of the regular papers was 23%. The total acceptance rate of the main conference was 37%. For this conference, we received 576 submissions. Each was reviewed by two independent members of the International Program Committee. After carefully evaluating their originality and quality, we accepted 57 regular papers and 84 short papers. The acceptance rate of regular papers was 10%. The total acceptance rate was 25%.
Code generation models based on the pre-training and fine-tuning paradigm have been increasingly attempted by both academia and industry, resulting in well-known industrial models such as Codex, CodeGen, and PanGu-Coder. After being pre-trained on a large-scale corpus of code, a model is further fine-tuned with datasets specifically for the target downstream task, e.g., generating code from natural language description. The target code being generated can be classified into two types: a standalone function, i.e., a function that invokes or accesses only built-in functions and standard libraries, and a non-standalone function, i.e., a function that invokes or accesses user-defined functions or third-party libraries. To effectively generate code, especially non-standalone functions (largely ignored by existing work), in this article we present Wenwang, an approach to improving the capability of a pre-trained model on generating code beyond standalone functions. Wenwang consists of two components: a fine-tuning dataset named WenwangData and a fine-tuned model named WenwangCoder. Compared with existing fine-tuning datasets, WenwangData additionally covers non-standalone functions. Besides the docstring and code snippet for a function, WenwangData also includes its contextual information collected via program analysis. Based on PanGu-Coder, we produce WenwangCoder by fine-tuning PanGu-Coder on WenwangData with our context-aware fine-tuning technique so that the contextual information can be fully leveraged during code generation. On CoderEval and HumanEval, WenwangCoder outperforms three state-of-the-art models with similar parameter sizes (at the scale of around 300M), namely CodeGen, PanGu-Coder, and PanGu-FT. Although WenwangCoder does not outperform ChatGPT on HumanEval, WenwangCoder, with a much smaller parameter size, can achieve effects similar to ChatGPT on CoderEval. Our experimental results also shed light on a number of promising optimization directions based on
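A minimal sketch of what a context-aware training sample might look like (the abstract does not specify WenwangData's exact format, so the field names and prompt layout below are assumptions): the idea is to prepend project context, such as signatures of user-defined helpers recovered by program analysis, to the docstring so the model can learn to call them in non-standalone functions.

```python
def build_sample(context_sigs, docstring, target_code):
    """Assemble one hypothetical prompt/completion fine-tuning pair:
    project context lines first, then the target function's docstring."""
    context = "\n".join(f"# context: {sig}" for sig in context_sigs)
    prompt = f'{context}\n"""{docstring}"""\n'
    return {"prompt": prompt, "completion": target_code}

# Example: the target is non-standalone because it calls the
# user-defined helper load_user (a made-up name for illustration).
sample = build_sample(
    ["def load_user(uid: int) -> dict"],
    "Return the e-mail address of the given user id.",
    "def user_email(uid):\n    return load_user(uid)['email']",
)
```

Without the context lines, a model fine-tuned only on standalone functions would have no way to know that `load_user` exists, which is precisely the gap non-standalone generation has to close.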