DNA probe arrays have emerged as a core genomic technology that enables cost-effective gene expression monitoring, mutation detection, single nucleotide polymorphism analysis and other genomic analyses. DNA chips are manufactured through a highly scalable process, Very Large-Scale Immobilized Polymer Synthesis (VLSIPS), that combines photolithographic technologies adapted from the semiconductor industry with combinatorial chemistry. Commercially available DNA chips contain more than a half million probes and are expected to exceed one hundred million probes in the next generation. This paper is one of the first attempts to apply VLSI CAD methods to the problem of probe placement in DNA chips, where the main objective is to minimize total border cost (i.e., the number of nucleotide mismatches between adjacent sites). We make the following contributions. First, we propose several partitioning-based algorithms for DNA probe placement that improve solution quality by over 4% compared to best previously known methods. Second, we give a simple in-place probe reembedding algorithm with solution quality better than previous "chessboard" and batched greedy algorithms. Third, we experimentally evaluate scalability and suboptimality of existing and newly proposed probe placement algorithms. Interestingly, we find that DNA placement algorithms appear to have better suboptimality properties than those recently reported for VLSI placement algorithms.
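The border-cost objective above can be illustrated with a small sketch. Assuming a rectangular grid of sites and scoring adjacent probes by plain Hamming distance (a simplification for illustration; the paper also considers how probes are embedded in the synthesis schedule), the total border cost of a toy placement might be computed as:

```python
# Hypothetical sketch of the border-cost objective: the grid, the 4-mer
# probes, and the plain Hamming-distance model are illustrative assumptions.

def hamming(a: str, b: str) -> int:
    """Number of nucleotide mismatches between two equal-length probes."""
    return sum(x != y for x, y in zip(a, b))

def total_border_cost(grid):
    """Sum of mismatches over all horizontally and vertically adjacent sites."""
    rows, cols = len(grid), len(grid[0])
    cost = 0
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:                       # right neighbor
                cost += hamming(grid[r][c], grid[r][c + 1])
            if r + 1 < rows:                       # bottom neighbor
                cost += hamming(grid[r][c], grid[r + 1][c])
    return cost

grid = [["ACGT", "ACGA"],
        ["ACTT", "ACGA"]]
print(total_border_cost(grid))  # → 4
```

A placement algorithm would permute probes across sites to drive this sum down.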
We propose a new stable multiway partitioning algorithm, where stability is defined as an additional quality of a partitioning solution. The stability of a partitioning algorithm is an important criterion for partitioning-based placement to achieve timing closure through repetition of the placement procedure. Given a previous partitioning result P* on an original netlist hypergraph H* and a partially modified netlist hypergraph H, a new cost function with a similarity factor is defined to produce a new partition P on H that is similar to the original partition P*. The proposed algorithm is the first approach that quantifies the degree of similarity of a current partition to the original partition using a similarity cost. Our goal is to build a new partition in a relatively short run time whose cut quality is not much degraded from that of the original partition P*, while preserving as much of the previous grouping in P* as possible. The proposed partitioner is especially beneficial to engineering change order (ECO) applications, where partial modifications of a netlist are handled by an incremental methodology in a design iteration cycle. Our approach helps ECO placers maximize incremental capability, since the portions to be re-placed are minimized. Experimental results show that the proposed algorithm achieves a partition quality comparable to the state-of-the-art multilevel partitioner hMetis, while many of the groupings in the previous partition are preserved in the current partition. The tradeoff between similarity and cut quality with respect to a varying similarity coefficient is also shown.
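The idea of a cut cost augmented with a similarity term can be sketched as follows. The linear combination, the coefficient `alpha`, and the "moved vertices" measure of dissimilarity are illustrative assumptions, not the paper's exact cost function:

```python
# Sketch of a partitioning objective with a similarity term:
#   cost = cut_size + alpha * (vertices that left their P* block).
# The names and the linear form are assumptions for illustration.

def cut_size(hyperedges, part):
    """Number of hyperedges spanning more than one block."""
    return sum(len({part[v] for v in e}) > 1 for e in hyperedges)

def dissimilarity(part, part_ref):
    """Vertices whose block differs from the reference partition P*."""
    return sum(part[v] != part_ref[v] for v in part)

def combined_cost(hyperedges, part, part_ref, alpha=0.5):
    return cut_size(hyperedges, part) + alpha * dissimilarity(part, part_ref)

hyperedges = [("a", "b"), ("b", "c"), ("c", "d")]
p_star = {"a": 0, "b": 0, "c": 1, "d": 1}   # previous partition P*
p_new  = {"a": 0, "b": 1, "c": 1, "d": 1}   # candidate: only "b" moved
print(combined_cost(hyperedges, p_new, p_star))  # → 1.5
```

Raising `alpha` biases the partitioner toward preserving the previous grouping, which is the tradeoff the abstract describes.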
Memory-intensive applications present unique challenges to an ASIC designer in terms of the choice of memory organization, memory size requirements, bandwidth and access latencies, etc. The high potential of single-chip distributed logic-memory architectures in addressing many of these issues has been recognized in general-purpose computing, and more recently in ASIC design. However, such architectures will be adopted widely by designers only when general techniques and tools for efficient high-level synthesis (HLS) of multi-partitioned ASICs become available. The techniques presented in this paper are motivated by the fact that many memory-intensive applications exhibit irregular array data access patterns (due to conditionals in loop nests, etc.). Synthesis should, therefore, be capable of determining a partitioned architecture wherein array data and computations may have to be heterogeneously distributed to achieve the best performance speedup. Furthermore, the synthesis methodology should not be restricted by the nature of array index functions (affine or otherwise) in a behavior. Therefore, our methodology employs simulation to provide information about the access patterns of array data references in a behavior, which is used by the rest of our analysis. We use a combination of clustering and min-cut style partitioning techniques to partition the behavior into sub-behaviors while considering various factors including data access locality, balanced workloads, and inter-partition communication. Finally, we also employ an iterative improvement strategy to determine the best way of distributing array data into physical memory in each partition. Our experiments with several benchmark applications show that the proposed techniques can yield partitioned architectures that achieve up to 2.2X performance speedup over conventional HLS solutions, and up to 1.6X speedup over the best feasible homogeneous partitioning solution.
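The simulation-driven profiling step can be caricatured in a few lines: run (or simulate) the behavior, record which array block each reference touches, and hand the histogram to the partitioner. The toy loop below, with its conditional and non-affine index expression, is entirely invented for illustration:

```python
# Illustrative sketch: profiling array accesses by simulation so that data
# can later be distributed heterogeneously across memory partitions.
# The behavior (loop, condition, index function) is invented.

from collections import Counter

def profile_accesses(n, block_size=4):
    """Count accesses falling into each array block during simulation."""
    counts = Counter()
    for i in range(n):
        if i % 3 != 0:           # conditional inside the loop nest makes
            idx = (i * i) % n    # the access pattern irregular / non-affine
            counts[idx // block_size] += 1
    return counts

counts = profile_accesses(16)
# blocks 0, 1, 2 receive 4, 3, 3 accesses respectively
```

A histogram like this is exactly the kind of locality information a clustering or min-cut pass can use to co-locate hot data with the computation that touches it.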
Y2002-63377-279 0318039 Efficient conflict driven learning in a Boolean satisfiability solver [Conference paper, English] / Zhang, L. T. & Madigan, C. F. // 2001 IEEE/ACM International Conference on Computer-Aided Design, digest. 279-285 (E)
Y2002-63422-128 0318040 ANN microcell electric field level prediction model [Conference paper, English] / Neskovic, A. & Neskovic, N. // IEEE Region 8 EU-
ISBN:
(Print) 0780376072
In this survey, we outline basic SAT and ATPG procedures as well as their applications in formal hardware verification. We attempt to give the reader a trail through the literature and provide a basic orientation concerning the problem formulations and known approaches in this active field of research.
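As a minimal example of the kind of basic SAT procedure such a survey covers, here is a plain DPLL search with unit propagation, deliberately without the conflict-driven learning and watched-literal refinements of modern solvers:

```python
# Teaching-only DPLL sketch. A CNF formula is a list of clauses; a literal
# is a nonzero int (negative = negated variable). Not a production solver.

def unit_propagate(clauses, assignment):
    """Extend assignment by unit clauses; return None on conflict."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assignment for l in clause):
                continue  # clause already satisfied
            unassigned = [l for l in clause
                          if l not in assignment and -l not in assignment]
            if not unassigned:
                return None  # every literal false: conflict
            if len(unassigned) == 1:
                assignment.add(unassigned[0])  # forced (unit) literal
                changed = True
    return assignment

def dpll(clauses, assignment=frozenset()):
    assignment = unit_propagate(clauses, set(assignment))
    if assignment is None:
        return None
    free = {abs(l) for c in clauses for l in c} - {abs(l) for l in assignment}
    if not free:
        return assignment  # all variables assigned, no conflict: model found
    v = min(free)
    for lit in (v, -v):  # branch on both polarities
        result = dpll(clauses, assignment | {lit})
        if result is not None:
            return result
    return None

cnf = [[1, 2], [-1, 3], [-3, -2]]
print(sorted(dpll(cnf)))  # → [-2, 1, 3]
```

Conflict-driven learning, the subject of the Zhang and Madigan entry above, extends this skeleton by recording a new clause at each conflict so the same bad sub-assignment is never revisited.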
ISBN:
(Print) 0780376072
We discuss the simplification of non-deterministic MV networks and their internal nodes using internal flexibilities. Given the network structure and its external specification, the flexibility at a node is derived as a non-deterministic MV relation. This flexibility is used to simplify the node representation and enhance the effect of Boolean resubstitution. We show that the flexibility derived is maximum. The proposed approach has been implemented and tested in MVSIS [16]. Experimental results show that it performs well on a variety of MV and binary benchmarks.
ISBN:
(Print) 0780376072
In this paper, we present several novel techniques that make the recently published multilevel routing scheme [19] more effective and complete. Our contributions include: (1) resource reservation for local nets during the coarsening process, (2) congestion-driven, graph-based Steiner tree construction during the initial routing and the refinement process, and (3) multi-iteration refinement considering the congestion history. The experiments show that each of these techniques helps to improve the completion rate considerably. Compared to [19], the new routing system reduces the number of failed nets by 2x to 18x, with less than a 50% increase in runtime in most cases.
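The history-aware congestion cost of contribution (3) can be sketched as follows; the specific weights and the linear-plus-penalty form are assumptions for illustration, not the paper's exact cost function:

```python
# Sketch of a history-based congestion cost for a routing-graph edge:
# the price of routing one more net grows with both the present overflow
# and how often the edge overflowed in earlier refinement iterations.
# base, h_factor, and p_factor are invented tuning knobs.

def edge_cost(usage, capacity, history,
              base=1.0, h_factor=0.5, p_factor=2.0):
    """Cost of pushing one more net through an edge."""
    overflow_penalty = p_factor * max(0, usage + 1 - capacity)
    return base + h_factor * history + overflow_penalty

# An edge at capacity that failed in 3 earlier iterations is priced far
# above a fresh, uncongested one, steering nets away from hot spots.
print(edge_cost(usage=4, capacity=4, history=3))  # → 4.5
print(edge_cost(usage=0, capacity=4, history=0))  # → 1.0
```

Accumulating `history` across iterations is what lets later refinement passes escape congestion patterns that a purely present-usage cost would keep reproducing.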