Computer Graphics Forum is the premier journal for in-depth technical articles on computer graphics. Published jointly by Wiley and Eurographics, we enable our readers to keep pace with this fast-moving field through rapid publication times and coverage of major international computer graphics events, including the annual Eurographics Conference proceedings. We welcome original research papers on a wide range of topics, including image synthesis, rendering, and perception.
Concurrency control protocols play a vital role in ensuring the correctness of databases when transactions are processed in parallel. Plor is a non-real-time concurrency control protocol based on the two-phase locking protocol. Plor uses the Wound-Wait scheme, a timestamp-based deadlock-prevention scheme, because it provides lower tail latency. One problem with Plor is that it requires transactions to fetch timestamps from a single centralized atomic counter. This paper evaluates an implementation of the Fair Thread ID method (FairTID), a decentralized approach in which transactions use their thread IDs as timestamps instead. The FairTID method improves throughput by up to 1.5x while reducing latency by 1.6x and the deadline-miss ratio by 1.67x over the baseline protocol. The method is also formally verified with respect to fairness.
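The Wound-Wait rule that the abstract refers to can be sketched as follows. This is an illustrative Python sketch under stated assumptions, not the paper's implementation; the names `Txn` and `wound_wait` are hypothetical. On a lock conflict, an older transaction (smaller timestamp) "wounds" (aborts) a younger lock holder, while a younger requester simply waits; FairTID's idea is that the timestamp slot can be filled by a thread ID instead of a value drawn from a central atomic counter.

```python
# Illustrative sketch of the Wound-Wait deadlock-prevention rule.
# Not the paper's code; Txn and wound_wait are hypothetical names.

class Txn:
    def __init__(self, ts):
        # Timestamp used for ordering; under FairTID this would be
        # the transaction's thread ID rather than a counter value.
        self.ts = ts
        self.aborted = False

def wound_wait(requester, holder):
    """Resolve a lock conflict: return 'wound' if the holder is aborted
    so the older requester can proceed, or 'wait' if the younger
    requester must wait for the holder to finish."""
    if requester.ts < holder.ts:   # requester is older: wound the holder
        holder.aborted = True
        return "wound"
    return "wait"                  # requester is younger: wait

old, young = Txn(ts=1), Txn(ts=9)
assert wound_wait(old, young) == "wound" and young.aborted
assert wound_wait(Txn(ts=5), Txn(ts=2)) == "wait"
```

Because the older transaction never waits on a younger one, no wait-for cycle can form, which is what makes the scheme deadlock-free.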
In the cloud computing environment, workflow scheduling presents a significant challenge due to the unpredictable and dynamic nature of user demands and cloud resources. To address the complexities of workflow scheduling, this paper introduces a dynamic multi-objective workflow scheduling model that comprehensively considers task completion time and load balancing, as well as dynamic changes in power consumption and cost in real-world scenarios. To effectively solve this model and better adapt to dynamic multi-objective optimization problems, we propose a dynamic reference vector guided evolutionary algorithm (DRVEA). The proposed algorithm incorporates an adaptive random mutation strategy, which dynamically adjusts the evolutionary process based on changing optimization goals, thereby enhancing convergence and solution diversity. Experimental results, obtained from both workflow scheduling simulations and standard multi-objective test environments, demonstrate that the proposed algorithm outperforms existing methods in both solution quality and adaptability.
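The comparison underlying any such multi-objective scheduler is Pareto dominance: one schedule dominates another if it is no worse on every objective and strictly better on at least one. The sketch below is illustrative only (the objective tuple and its ordering are assumptions, not the paper's model), with all four objectives treated as minimized.

```python
# Illustrative Pareto-dominance test for candidate workflow schedules.
# The objective tuple (makespan, load imbalance, power, cost) is an
# assumed example, not the paper's actual model; all are minimized.

def dominates(a, b):
    """True if schedule a is no worse than b on every objective
    and strictly better on at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

s1 = (120.0, 0.10, 50.0, 3.2)   # makespan, load imbalance, power, cost
s2 = (130.0, 0.12, 55.0, 3.2)
assert dominates(s1, s2)
assert not dominates(s2, s1)
```

An evolutionary algorithm such as DRVEA keeps the set of mutually non-dominated schedules found so far and uses its reference vectors to spread that set across the objective space.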
While formal models of concurrency tend to focus on synchronous communication, asynchronous communication is relevant in practice. In this paper, we discuss asynchronous communication in the context of session-based concurrency, the model of computation in which session types specify the structure of the two-party protocols implemented by the channels of a communicating process. We overview recent work on addressing the challenge of ensuring the deadlock-freedom property for message-passing processes that communicate asynchronously in cyclic process networks governed by session types. We offer a gradual presentation of three typed process frameworks and outline how they may be used to guarantee deadlock freedom for a concurrent functional language with sessions.
Mixed-precision computation is a promising method for substantially improving high-performance computing applications. However, using mixed-precision data is a double-edged sword. While it can improve computational performance, the reduction in precision introduces more uncertainties and errors. As a result, precision tuning is necessary to determine the optimal mixed-precision configurations. Much effort is therefore spent on selecting appropriate variables while balancing execution time and numerical accuracy. Auto-tuning (AT) is one of the technologies that can assist in alleviating this intensive task. In recent years, ppOpen-AT, an AT language, introduced a directive for mixed-precision tuning called "Blocks." In this study, we investigated an AT strategy for the "Blocks" directive for multi-region tuning of a program. The non-hydrostatic icosahedral atmospheric model (NICAM), a global cloud-resolving model, was used as a benchmark program to evaluate the effectiveness of the AT strategy. Experimental results indicated that when a single region of the program performed well in mixed-precision computation, combining these regions resulted in better performance. When tested on the supercomputer "Flow" Type I (Fujitsu PRIMEHPC FX1000) and Type II (Fujitsu PRIMEHPC CX1000) subsystems, the mixed-precision NICAM benchmark program tuned by the AT strategy achieved a speedup of nearly 1.31x on the Type I subsystem compared to the original double-precision program, and a 1.12x speedup on the Type II subsystem.
Over the past decade, as demand for cloud services has surged, the strategic selection of these services has become increasingly crucial. The growing complexity within the cloud industry underscores the urgent need for a robust model for choosing cloud services effectively. Users often struggle to make informed decisions due to the dynamic nature and varying quality of available cloud services. In response, this paper introduces a novel decision-making approach aimed at optimizing the selection process by identifying the most suitable combination of cloud services. The focus is on integrating these services into a cohesive ensemble to better fulfill user requirements. In contrast to existing methodologies, our approach evaluates cloud services on a continuous scale, taking into account critical tasks such as workload balancing, storage management, and network resource handling. We propose a model for selecting optimal composite cloud services, which supports real-time optimization and handles null values for Quality of Service (QoS) attributes (e.g., response time, cost, availability, and reliability) in the dataset, a factor overlooked by the current literature. The proposed algorithm is inspired by computational intelligence and driven by an evolutionary algorithm-based approach that is evaluated across multiple datasets. The results illustrate its superiority, outperforming existing optimization-based methods in terms of execution time.
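One simple way to tolerate null QoS values when scoring a candidate service, in the spirit of the null-value handling the abstract highlights, is mean imputation: a missing attribute falls back to the attribute's dataset mean rather than disqualifying the service or biasing its score. The sketch below is hypothetical (the function and attribute names are illustrative, not the paper's model), with weights negative for attributes to be minimized.

```python
# Hypothetical sketch of QoS scoring with null-value imputation.
# Names and the weighting scheme are illustrative assumptions,
# not the paper's actual model.

def qos_score(service, weights, means):
    """Weighted QoS score; None entries are imputed with the
    dataset mean for that attribute so missing data does not
    bias the comparison."""
    score = 0.0
    for attr, w in weights.items():
        value = service.get(attr)
        if value is None:           # null QoS entry in the dataset
            value = means[attr]     # fall back to the observed mean
        score += w * value
    return score

means   = {"response_time": 0.3, "cost": 2.0}
weights = {"response_time": -1.0, "cost": -0.5}  # minimize both

full   = {"response_time": 0.2, "cost": 2.0}
sparse = {"response_time": None, "cost": 2.0}    # null response time
assert qos_score(full, weights, means) > qos_score(sparse, weights, means)
```

Higher scores are better here because both weights are negative; the service with the measured (and better-than-average) response time outranks the one whose value had to be imputed.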
Quantum computers are capable of performing large-scale calculations in a shorter time than conventional classical computers. Because quantum computers are realized in microscopic physical systems, unintended changes in the quantum state are unavoidable due to interaction with the environment, and these lead to errors in computation. Therefore, quantum error correction is needed to detect and correct errors that have occurred. In this paper, we propose a quantum computer architecture for quantum error correction, taking into account that the components of a quantum computer with quantum dots in silicon are divided into multiple temperature layers inside and outside the dilution refrigerator. Based on the required performance and available processing capacity, the components are distributed across the temperature layers: the chip with qubits and the chip that generates the precise analog signals to control the qubits are placed on the 100 mK and 4 K stages inside the dilution refrigerator, respectively, while real-time digital processing is performed outside the dilution refrigerator. We then experimentally demonstrate the digital control sequence for quantum error correction combined with a simulator that simulates quantum states based on control commands from the digital processing system. The simulator enables a proof-of-principle experiment on the system architecture independent of the development of the chips. The real-time processing, including determination of the feed-forward operation and transmission of the feed-forward operation commands, is carried out by a field-programmable gate array (FPGA) outside the dilution refrigerator within 0.01 ms for bit-flip or phase-flip error correction. This is sufficiently short compared to the assumed relaxation time, which is the approximate time for which the quantum state can be preserved, meaning that our proposed architecture is applicable to quantum error correction.
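The feed-forward decision the FPGA must make in real time can be illustrated with the classical decoding step of the three-qubit repetition (bit-flip) code: two measured parity checks, the syndrome bits, identify which qubit, if any, to flip. This is a generic textbook sketch, not the paper's control logic; `decode_bit_flip` is a hypothetical name.

```python
# Generic sketch of syndrome decoding for the three-qubit bit-flip
# code (not the paper's FPGA logic; the name is hypothetical).

def decode_bit_flip(s1, s2):
    """s1 = parity of qubits 0 and 1; s2 = parity of qubits 1 and 2.
    Returns the index of the qubit to flip, or None if no single
    bit-flip error is indicated."""
    return {
        (0, 0): None,  # both parities agree: no correction needed
        (1, 0): 0,     # qubit 0 disagrees with qubits 1 and 2
        (1, 1): 1,     # qubit 1 disagrees with both neighbors
        (0, 1): 2,     # qubit 2 disagrees with qubits 0 and 1
    }[(s1, s2)]

assert decode_bit_flip(0, 0) is None
assert decode_bit_flip(1, 1) == 1
```

In the proposed architecture this lookup-style decision, plus transmission of the resulting feed-forward command, is what must complete within 0.01 ms, well inside the assumed relaxation time.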