Intelligent Transportation Systems (ITS) have significantly improved transportation quality by using applications capable of monitoring, managing, and improving the transportation system. However, the large number of devices required to provide data to ITS applications has become a challenge in recent years, particularly because high installation and maintenance costs have made broad deployment impracticable. Despite several advances in smart city research and the Internet of Things (IoT), research on ITS is still in the early stages. To improve data collection and maintenance strategies for ITS, this article proposes a virtual infrastructure model based on data reuse, mainly autonomous vehicle (AV) data, to support ITS applications. It presents design choices and challenges for deploying a virtual infrastructure based on Beyond 5G (B5G) communication and data reuse, followed by a proof of concept of an AV data acquisition system evaluated through simulation. The results show that the extra data collection module causes a 1.1% increase in total memory usage with direct sensor collection and a 2.6% increase with application performance management (APM) data collection on the reference hardware. This data reuse setup can substantially mitigate ITS data challenges with minimal impact on the current technology stack of autonomous vehicles already in circulation.
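To make the deployment concrete, the sketch below illustrates the kind of lightweight collection module the abstract alludes to: it taps data the AV stack already produces (sensor or APM readings) and batches it for an ITS backend, rather than adding new sensing. The class name, batch threshold, and message format are hypothetical illustrations, not the paper's implementation.

```python
# Minimal sketch (not the paper's implementation): a lightweight collection
# module that reuses data the AV stack already produces, batching it before
# it is forwarded to an ITS backend. Names and thresholds are hypothetical.
import json
import queue
import time


class DataReuseCollector:
    """Buffers sensor/APM readings already emitted by the vehicle stack."""

    def __init__(self, batch_size=32):
        self.batch_size = batch_size
        self._buffer = queue.Queue()

    def on_reading(self, source, payload):
        # Called by the existing pipeline; no extra sensor polling is added.
        self._buffer.put({"ts": time.time(), "source": source, "data": payload})

    def drain(self):
        # Serialize a batch for upload to the ITS backend once enough
        # readings have accumulated; otherwise do nothing.
        if self._buffer.qsize() < self.batch_size:
            return None
        batch = [self._buffer.get() for _ in range(self.batch_size)]
        return json.dumps(batch)


collector = DataReuseCollector(batch_size=2)
collector.on_reading("gnss", {"lat": -23.55, "lon": -46.63})
collector.on_reading("speed", {"kmh": 42.0})
print(collector.drain())
```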
Purpose: Identifying data reuse is challenging, due to technical reasons and, in particular, incorrect citation practices among scholars. This paper aims to propose an automatic method to track the reuse of data deposited in the archives joined to the CESSDA (Consortium of European Social Science Data Archives) infrastructure. The paper also offers an overview of the identified data to understand the characteristics of the most reused data sets. Design/methodology/approach: The reuse of data sets stored in the GESIS data archive, the biggest CESSDA data archive, and cited in publications indexed by Scopus, is tracked. Metadata of publications, and those of data sets, allow us to understand the characteristics and circumstances in which data reuse happens. Findings: This contribution demonstrates the possibility of tracking data reuse automatically, despite the technical difficulties in doing so. Evidence about the most reused data is shown, highlighting some limits in the tracking practices of reuse. Finally, some suggestions to the actors involved in data sharing are provided. Originality/value: The originality of this work is the provision of an automatic procedure to investigate and measure data reuse, providing information on how it happens. This is uncommon in the social science literature and archives, which usually adopt inaccurate metrics to measure data reuse.
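As a rough illustration of how such tracking can be automated, the sketch below scans publication reference text for dataset identifiers and links each publication to the data sets it mentions. The record layout, identifier patterns, and example values are assumptions for illustration, not the authors' actual pipeline over Scopus and GESIS metadata.

```python
# Illustrative sketch only: scan publication records for dataset identifiers
# (e.g., dataset DOIs or archive study numbers) to link publications to the
# data sets they reuse. The record format and regexes are assumptions, not
# the authors' actual pipeline.
import re

# Hypothetical patterns: a DOI and a "ZA"-style study number.
DOI_RE = re.compile(r"\b10\.\d{4,9}/[^\s\"<>]+", re.IGNORECASE)
STUDY_RE = re.compile(r"\bZA\d{4}\b")

publications = [
    {"eid": "2-s2.0-0001", "refs": "... data from doi:10.4232/1.13209 ..."},
    {"eid": "2-s2.0-0002", "refs": "... reuses study ZA5665 (ALLBUS) ..."},
]

def extract_dataset_mentions(pub):
    """Return all dataset identifiers mentioned in a publication's references."""
    text = pub["refs"]
    return DOI_RE.findall(text) + STUDY_RE.findall(text)

for pub in publications:
    print(pub["eid"], "->", extract_dataset_mentions(pub))
```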
Policy and scholarly discourse emphasizing the panacea of Open (research) data shapes expectations, and directs and legitimizes investments in data technologies and infrastructures. This is driven by the hope that Open data will quicken the pace of research and innovation through data reuse, and that they do so more effectively than other access regimes, such as stewarded and proprietary data. Drawing on Leonelli's relational framework and Gadamer's hermeneutical conceptualization of a horizon of meanings, data reuse can be understood as a fitting process. In the latter, a researcher engages in a hermeneutical dialogical interaction with the data's affordances with the goal of making a scientific contribution. Moreover, the fitting process takes place within a researcher's bounded individual horizon (BIH), defined as an intentional orientation towards the future; it is made up of the relations and circumstances that modulate each researcher's unique situation. Seen thus, data reuse is likely to result from the persistence of a researcher's desire or need to make a scientific contribution, independently of the data access regime. What is more, the necessary interaction between potential reusers and data curators or owners can open up the interpretive affordances of data in the context of proprietary and stewarded data, making data more mutable compared to the relative immutability of data in open repositories. Accordingly, stewarded data, with the proper curation and digital preservation services, might provide a more sustainable form of sharing and reusing data where privacy is at stake.
Data access usually accounts for more than 50% of the power cost in a modern signal processing system. To realize a low-power design, reducing memory access power is a critical issue. Data reuse (DR) is a technique that recycles data read from memory and can be used to reduce memory access power. In this paper, a systematic method of DR exploration for low-power architecture design is presented. First, the signal processing algorithms are formulated as nested loop structures, and data locality is explored through loop analysis. Then, corresponding DR techniques are applied to reduce memory access power. The proposed design methodology is applied to the motion estimation (ME) algorithms of the H.264 video coding standard. After analyzing the ME algorithms, suitable parallel architectures and processing flows for the integer ME (IME) and fractional ME (FME) are proposed to achieve efficient DR. The amount of memory access is reduced to 0.91% and 4.37% in the proposed IME and FME designs, respectively, so a large amount of memory access power is saved. Finally, the design methodology is also beneficial for other signal processing systems with low-power considerations.
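The loop-level reuse the paper exploits can be illustrated with a back-of-the-envelope count: for integer ME, the search windows of horizontally adjacent blocks overlap, so only the newly exposed columns need to be fetched. The sketch below, with illustrative block and search-range sizes rather than the paper's configuration, compares off-chip pixel reads with and without this reuse.

```python
# A minimal sketch of the nested-loop data-reuse idea for integer ME:
# horizontally adjacent blocks have overlapping search windows, so only the
# newly exposed columns need to be fetched from off-chip memory. Sizes below
# are illustrative, not the paper's configuration.
BLOCK = 16                 # current block width/height
SEARCH = 16                # search range (+/- SEARCH pixels)
WIN = BLOCK + 2 * SEARCH   # search-window width/height in pixels
BLOCKS_PER_ROW = 40        # e.g., a 640-pixel-wide frame

# Without reuse: every block reloads its full search window.
reads_no_reuse = BLOCKS_PER_ROW * WIN * WIN

# With reuse: the first block loads the full window; each following block
# only loads the BLOCK new columns that slide into the window.
reads_with_reuse = WIN * WIN + (BLOCKS_PER_ROW - 1) * BLOCK * WIN

print("no reuse  :", reads_no_reuse, "pixel reads per block row")
print("with reuse:", reads_with_reuse, "pixel reads per block row")
print("ratio     : %.2f%%" % (100.0 * reads_with_reuse / reads_no_reuse))
```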
A data reuse-based fast subpixel motion estimation (SME) method for High Efficiency Video Coding (HEVC) is proposed. Since SME is one of the most computation-intensive tools in the encoding process, conventional research on SME has focused on reducing its computational complexity. The applied data-reuse architecture for the design of fast SME substantially reduces computational complexity at the cost of a reasonable increase in memory bandwidth. The core of the proposed data-reuse method is the replacement of redundant computations in SME with memory access operations that retrieve previously computed values. The proposed method was tested in the latest video coding standard, HEVC, with experimental results showing a reduction in operational complexity of approximately 64.14% and a reduction in encoding time of approximately 56.13%, compared to the SME in the HEVC reference encoder. (C) 2014 Society of Photo-Optical Instrumentation Engineers (SPIE)
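The essence of the method, trading computation for memory accesses, can be shown with a small memoization sketch: previously computed subpixel samples are stored and looked up when an overlapping candidate position needs them again. The 2-tap average below is a stand-in for HEVC's actual interpolation filters, so the numbers are only illustrative.

```python
# Hedged sketch of the core idea: replace redundant subpixel computations with
# lookups of previously computed values. The interpolation filter below is a
# simple 2-tap average standing in for HEVC's real interpolation filters.
cache = {}
compute_calls = 0

def half_pel(ref, x, y):
    """Return the half-pel sample between (x, y) and (x+1, y), with memoization."""
    global compute_calls
    key = (x, y)
    if key in cache:                  # data reuse: memory access instead of math
        return cache[key]
    compute_calls += 1
    value = (ref[y][x] + ref[y][x + 1] + 1) // 2   # stand-in interpolation
    cache[key] = value
    return value

ref = [[10, 20, 30, 40], [50, 60, 70, 80]]
# Two overlapping candidate positions request some of the same half-pel samples.
for start in (0, 1):
    _ = [half_pel(ref, x, 0) for x in range(start, start + 2)]
print("interpolations actually computed:", compute_calls)  # 3 instead of 4
```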
Vast amounts of valuable historical tunnelling site investigation data remain underutilized due to inefficient content-based archiving and searching tools. This study introduces a novel data-driven framework that integrates transfer learning with reverse image search to revolutionize the utilization of historical data in tunnelling projects. The method indexes excavated tunnel sections with corresponding tunnel face images and identifies similarities between projects based on geological features. Transfer learning with pre-trained deep learning models is employed to compress tunnel face images into compact, lower-dimensional vectors, enabling efficient similarity searches. This transformation converts geological information into comparable vectors, enhancing the efficiency and speed of data searches. An online cloud service is developed to allow engineers to access similar historical projects in real-time. To enhance the quality of the compressed vectors, this study developed a multi-level feature extraction method. This method markedly improves the deep learning models’ ability to accurately identify major features from rock images. When applied to a diverse range of tunnel excavation projects in China, the model exhibited an impressive accuracy of over 90% in retrieving projects with similar geological features. This underscores the model’s potential as a robust tool for enhancing data management and decision-making in tunnelling engineering.
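The indexing-and-retrieval step can be sketched as follows: each tunnel-face image is compressed into a compact, normalized vector, and a query is answered by ranking archived vectors by cosine similarity. A coarse intensity histogram stands in here for the pre-trained deep feature extractor used in the study, and the images are synthetic, so this is a conceptual sketch only.

```python
# Conceptual sketch of the index-and-search step: each tunnel-face image is
# compressed to a compact vector and queries are ranked by cosine similarity.
# A coarse intensity histogram stands in for the pre-trained deep feature
# extractor used in the study; the data below are synthetic.
import numpy as np

rng = np.random.default_rng(0)

def embed(image, bins=16):
    """Compress an image into a small, L2-normalized feature vector."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 255))
    vec = hist.astype(np.float64)
    return vec / (np.linalg.norm(vec) + 1e-12)

# Build an index of historical tunnel-face images (synthetic stand-ins).
archive = {f"section_{i:03d}": rng.integers(0, 256, size=(64, 64)) for i in range(5)}
index = {name: embed(img) for name, img in archive.items()}

# Query with a new image and return the most similar archived sections.
query_vec = embed(rng.integers(0, 256, size=(64, 64)))
scores = {name: float(vec @ query_vec) for name, vec in index.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1])[:3]:
    print(name, round(score, 3))
```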
Motion estimation (ME) is a kernel algorithm in many video applications. Full search integer ME (FSIME) can find the best result but is usually very time-consuming. Traditionally, only intra-frame data reuse is considered for FSIME. In this paper, a new inter-frame data reuse method is proposed to further exploit data reuse across frames for true ME. ME in frame rate up-conversion (FRUC-ME), a kind of true ME, is used as a case study. For FRUC-ME with the new inter-frame data reuse method, a frame is loaded into the on-chip buffer only once instead of twice and is used for two interpolated frames. Two levels of the new method are proposed, Inter-D and new Inter-E, which provide a good tradeoff between off-chip memory bandwidth and on-chip buffer size. A new data access order is used to implement the new data reuse method. The proposed data reuse method (Inter-D) demands less off-chip memory bandwidth than its intra-frame counterpart (Intra-D), and off-chip memory traffic and power consumption are both reduced by 37.5% for FRUC-ME.
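The bandwidth saving from inter-frame reuse can be seen with a simple count: each interpolated frame needs the two original frames around it, so a naive flow fetches most originals twice, whereas keeping the previous original in the on-chip buffer loads each one only once. The sketch below uses illustrative frame counts in the spirit of Inter-D, not the paper's exact schedule.

```python
# Sketch of the inter-frame reuse idea: each interpolated frame in FRUC needs
# the two original frames around it, so a naive flow loads most originals
# twice, while keeping the previous original in the on-chip buffer loads each
# one only once. The counts are illustrative.
N_ORIGINALS = 9            # original frames F0..F8 -> 8 interpolated frames
pairs = [(i, i + 1) for i in range(N_ORIGINALS - 1)]

# Naive: both originals are fetched from off-chip memory for every pair.
loads_naive = 2 * len(pairs)

# Reuse: the second frame of each pair stays in the buffer and is reused as
# the first frame of the next pair, so only one new frame is loaded per pair.
loads_reuse = 1 + len(pairs)   # F0 once, then one new frame per pair

print("naive frame loads :", loads_naive)   # 16
print("reused frame loads:", loads_reuse)   # 9
```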
Purpose - The purpose of this paper is to quantitatively examine factors of trust in data reuse from the reusers' perspectives. Design/methodology/approach - This study utilized a survey method to test the proposed hypotheses and to empirically evaluate the research model, which was developed to examine the relationship each factor of trust has with reusers' actual trust during data reuse. Findings - This study found that the data producer (H1) and data quality (H2) were significant, as predicted, while scholarly community (H3) and data intermediary (H4) were not significantly related to reusers' trust in data. Research limitations/implications - Further discipline-specific examinations should be conducted to complement and fully generalize the study findings. Practical implications - The study findings present the need for engaging data producers in the process of data curation, preferably beginning in the early stages and encouraging them to work with curation professionals to ensure data management quality. The study findings also suggest the need for re-defining the boundaries of current curation work or collaborating with other professionals who can perform data quality assessment related to scientific and methodological rigor. Originality/value - By analyzing theoretical concepts in empirical research and validating the factors of trust, this study fills a gap in the data reuse literature.
Although recent advances in resistive random access memory (ReRAM)-based accelerator designs for deep convolutional neural networks (CNNs) offer energy-efficiency improvements over CMOS-based accelerators, they involve a large number of energy-consuming data transactions. In this paper, we propose MAX², a multi-tile ReRAM accelerator framework supporting multiple CNN topologies that maximizes on-chip data reuse and reduces on-chip bandwidth to minimize the energy consumption due to data movement. Building upon the fact that a large filter can be built from a stack of smaller (3 x 3) filters, we design every tile with nine processing elements (PEs). Each PE consists of multiple ReRAM subarrays to compute the dot product. The PEs operate in a systolic fashion, thereby maximizing input feature map reuse and minimizing interconnection cost. MAX² chooses the data size granularity in the systolic array in conjunction with weight duplication to achieve very high area utilization without requiring additional peripheral circuits. We provide a detailed energy and area breakdown of each component at the PE level, tile level, and system level. The system-level evaluation in a 32-nm node on several VGG-network benchmarks shows that MAX² can improve computation efficiency (TOPs/s/mm²) by 2.5x and energy efficiency (TOPs/s/W) by 5.2x compared with a state-of-the-art ReRAM-based accelerator.
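The filter-stacking fact the tile design builds on can be checked numerically: two stacked 3 x 3 linear filters cover a 5 x 5 receptive field, and their composition equals a single 5 x 5 filter whose kernel is the full convolution of the two 3 x 3 kernels. The sketch below verifies this identity with random kernels and no nonlinearity; it models only the arithmetic, not the MAX² hardware.

```python
# Numerical check of the decomposition the tile design relies on: two stacked
# 3x3 (linear) filters are equivalent to one 5x5 filter whose kernel is the
# full convolution of the two 3x3 kernels. Random kernels, no nonlinearity.
import numpy as np

def corr2d_valid(x, k):
    """Plain valid-mode 2D cross-correlation (what CNN layers compute)."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def conv2d_full(k1, k2):
    """Full 2D convolution of two small kernels (the composed kernel)."""
    out = np.zeros((k1.shape[0] + k2.shape[0] - 1, k1.shape[1] + k2.shape[1] - 1))
    for i in range(k1.shape[0]):
        for j in range(k1.shape[1]):
            out[i:i + k2.shape[0], j:j + k2.shape[1]] += k1[i, j] * k2
    return out

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 8))
k1, k2 = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

stacked = corr2d_valid(corr2d_valid(x, k1), k2)    # two 3x3 passes
single = corr2d_valid(x, conv2d_full(k1, k2))      # one equivalent 5x5 pass
print(np.allclose(stacked, single))                # True
```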
Due to the increasing diversity and complexity of applications in embedded systems, accelerator designs that trade off area/energy efficiency against design productivity are becoming an increasingly crucial issue. Targeting applications in the category of Recognition, Mining, and Synthesis (RMS), this study proposes a novel accelerator design that achieves a good trade-off between efficiency and design productivity (or reusability) by introducing a new computing paradigm called "approximate computing" (AC). Leveraging the facts that frequently executed parts of applications (i.e., hotspots) are conventionally the target of acceleration and that RMS applications are error-tolerant and often process similar input data repeatedly, our proposed accelerator reuses previous computational results of sufficiently similar data to reduce computations. The proposed accelerator is composed of a simple controller and a dedicated memory that stores limited sets of previous input data with the corresponding computational results in a hotspot. Therefore, this accelerator can be applied to different and/or multiple hotspots/applications through only a small extension of the controller, achieving an efficient accelerator design and resolving the design-productivity issue. We conducted quantitative evaluations using a representative RMS application (image compression) to demonstrate the effectiveness of our method over conventional ones with precise computing. Moreover, we provide important findings on parameter exploration for our accelerator design, offering wider applicability of our accelerator to other applications.
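The computation-reuse idea can be sketched in a few lines: keep a small table of recent inputs and results, and when a new input is similar enough to a stored one, return the stored result instead of recomputing. The hotspot function, table capacity, and similarity threshold below are illustrative stand-ins, not the accelerator's actual parameters.

```python
# Minimal sketch of the computation-reuse idea: keep a small table of recent
# inputs and their results, and when a new input is "similar enough" to a
# stored one, return the stored result instead of recomputing. The hotspot
# function and similarity threshold below are illustrative stand-ins.
import math

class ReuseTable:
    def __init__(self, capacity=8, tolerance=0.05):
        self.capacity = capacity
        self.tolerance = tolerance
        self.entries = []          # list of (input, result), oldest first

    def lookup(self, x):
        for stored_x, result in self.entries:
            if abs(stored_x - x) <= self.tolerance:   # approximate match
                return result
        return None

    def insert(self, x, result):
        if len(self.entries) >= self.capacity:
            self.entries.pop(0)    # evict the oldest entry
        self.entries.append((x, result))

def hotspot(x):
    return math.sin(x) * math.exp(-x * x)   # stand-in for an expensive kernel

table, exact_calls = ReuseTable(), 0
for x in [0.10, 0.12, 0.50, 0.11, 0.52]:    # error-tolerant, repetitive inputs
    result = table.lookup(x)
    if result is None:
        result, exact_calls = hotspot(x), exact_calls + 1
        table.insert(x, result)
print("exact computations:", exact_calls)   # 2 of 5 inputs computed exactly
```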