In this work we investigate the problem of detection and location of single and unlinked multiple k-coupling faults in n x 1 random-access memories (RAMs). This fault model covers all crosstalk between any k cells in an n x 1 RAM. The problem of memory testing is reduced to the problem of generating (n, k - 1)-exhaustive backgrounds. We obtain practical test lengths, for a memory size around 1 M, for detecting up to 6-couplings by exhaustive tests and up to 9-couplings by near-exhaustive tests. The best test algorithms known to date provide for the detection of only 5-couplings in a 1 M memory using exhaustive tests; beyond these parameters, test lengths were impractical. Furthermore, our method for generating (n, k - 1)-exhaustive backgrounds yields short test lengths, giving rise to considerably shorter testing times than the currently most efficient tests for large n and for k greater than 3. Our test lengths are 50% shorter than those of other methods for detecting up to 5-couplings in a 1 Mbit RAM. The systematic nature of both kinds of tests enables a built-in self-test (BIST) scheme for RAMs with low hardware overhead. For a 1 Mbit memory, the BIST area overhead for the detection of 5-couplings is less than 1% for an SRAM and 6.8% for a DRAM. For the detection of 9-couplings with 99% or higher probability, the BIST area overhead is less than 0.2% for an SRAM and 1.5% for a DRAM.
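The (n, k)-exhaustive property underlying this reduction can be stated concretely: for every choice of k cell positions, the set of backgrounds restricted to those positions must exhibit all 2^k bit patterns. The following brute-force checker is an illustrative sketch of that definition only (it is not the paper's generation algorithm, and all names here are hypothetical):

```python
from itertools import combinations, product

def is_exhaustive(backgrounds, n, k):
    """Return True iff the length-n binary backgrounds are (n, k)-exhaustive:
    every k-subset of cell positions sees all 2**k bit patterns."""
    all_patterns = set(product((0, 1), repeat=k))
    for cells in combinations(range(n), k):
        seen = {tuple(bg[c] for c in cells) for bg in backgrounds}
        if seen != all_patterns:
            return False
    return True

# Example: the full set of 2**n backgrounds is trivially (n, k)-exhaustive,
# while two complementary backgrounds alone are not (mixed patterns missing).
n, k = 3, 2
full = [tuple(int(b) for b in format(i, "03b")) for i in range(2 ** n)]
print(is_exhaustive(full, n, k))                    # True
print(is_exhaustive([(0, 0, 0), (1, 1, 1)], n, k))  # False
```

The point of the paper is to achieve this coverage with far fewer backgrounds than the 2^n trivial set, which is what makes the resulting test lengths practical.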
Advancements in artificial intelligence (AI) and low-earth orbit (LEO) satellites have promoted the application of large remote sensing foundation models for various downstream tasks. However, direct downloading of th...
Hadoop Yarn is an open-source cluster manager responsible for resource management and job scheduling. However, data-driven applications are typically organized into workflows consisting of a series of jobs with dependencies. Yarn does not manage users' workflows and considers only the current job, rather than the entire workflow, when scheduling. In practice, multiple workflows share the same Yarn cluster and are pre-assigned separate Yarn resource queues to avoid mutual interference. This coarse-grained resource division, however, can result in low resource utilization and increased pending time for jobs on the Yarn queues. For instance, one resource queue may have exhausted its quota while still having pending jobs, while other queues may have available resources but cannot begin executing any jobs due to unfulfilled data dependencies. To address this problem, we propose a deep reinforcement learning (DRL)-based workflow scheduling scheme that takes into account job dependencies, job priorities, and dynamic resource usage. The proposed approach can intelligently identify and utilize free windows of different resource queues. Our simulation results demonstrate that the proposed DRL-based workflow scheduling scheme can significantly reduce average job latency compared to existing approaches.
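The core scheduling decision described above can be sketched in a few lines: among jobs whose dependencies are already finished, pick one and place it into any queue that currently has a free window, rather than only its pre-assigned queue. This is a minimal illustrative sketch; the learned DRL policy is replaced here by a simple highest-priority-first heuristic, and all class and field names are assumptions, not Yarn APIs:

```python
from dataclasses import dataclass, field

@dataclass
class Job:
    name: str
    queue: str                      # pre-assigned Yarn resource queue
    demand: int                     # resource units required
    priority: int
    deps: list = field(default_factory=list)

def schedule_step(jobs, done, capacity, used):
    """One scheduling decision: among dependency-ready jobs, take the
    highest-priority one that fits a free window of ANY queue
    (cross-queue borrowing instead of coarse per-queue division)."""
    ready = [j for j in jobs
             if j.name not in done and all(d in done for d in j.deps)]
    for job in sorted(ready, key=lambda j: -j.priority):
        for q, cap in capacity.items():
            if cap - used.get(q, 0) >= job.demand:
                return job, q
    return None, None               # nothing runnable right now

# qA's quota is exhausted but qB has a free window: the dependency-ready
# job pre-assigned to qA is placed on qB instead of pending.
capacity = {"qA": 10, "qB": 10}
used = {"qA": 10, "qB": 2}
jobs = [Job("etl", "qA", 4, 5),
        Job("train", "qA", 6, 9, deps=["etl"])]
job, q = schedule_step(jobs, set(), capacity, used)
print(job.name, q)  # etl qB
```

In the paper's scheme, a DRL agent would replace the `sorted(...)` heuristic, learning which ready job and queue to pick from the observed dependency structure, priorities, and dynamic queue usage.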