High-level synthesis (HLS) is a key step in digital VLSI circuit design. In the present study, HLS is viewed as the problem of intelligently mapping a Data Flow Graph (DFG) specification of digital filters onto an FPGA. In this article, the powerful features of the Harris hawks optimization (HHO) and Cuckoo search (CS) algorithms are combined to improve the performance of the proposed method, referred to as the GChO algorithm. In GChO, the CS phase helps force HHO out of stagnation, avoiding local optima and premature convergence. In addition, three retiming-based models are proposed for HLS. Model 1 employs standard pipelining. Model 2 incorporates modified MinPeriod retiming on multiplier-less digital filters, which leads to a significant improvement in clock frequency. Model 3 adopts modified MinArea retiming on digital filters and reduces circuit complexity; however, these designs offer only single-parameter heuristic solutions. This limitation can be resolved by a meta-heuristic approach focused on retiming: meta-heuristic methods and swarm intelligence are highly desirable choices when searching the digital-block solution space for higher frequency and reduced complexity in HLS problems. The robustness of the GChO method is verified on the IEEE CEC'2020 standard benchmark test suites. The analysis shows that the proposed GChO algorithm outperforms the Harris hawks optimization (HHO), Moth Flame optimization, Particle Swarm optimization, and Chimp optimization algorithms on HLS of digital filters. The experimental results show that the proposed Models 2 and 3 improve the retimed PDR-FIR filter by 7.38% in MUF and 37.77% in utilized slices, the retimed PRO-DFII IIR filter by 10.54% with complexity reduced by 29.69%, PRO-LAT-ARF by 41.61% with slices by 11.17%, and PRO-LATLAD-IIR improved ...
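The GChO update rules themselves are given in the paper; purely as an illustration of the hybridization idea (a Cuckoo-search-style Levy-flight kick rescuing a stagnating Harris-hawks-style population), a generic sketch might look as follows. The sphere objective, step sizes, and stagnation window are illustrative assumptions, not the authors' formulation.

import numpy as np
from math import gamma, sin, pi

def levy_flight(dim, beta=1.5, rng=None):
    # Mantegna's algorithm for Levy-stable steps, as used in Cuckoo search.
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def hybrid_search(f, dim=10, pop=30, iters=500, stall_limit=20, seed=0):
    # Generic skeleton: best-guided exploitation plus a Levy-flight escape
    # phase on stagnation. NOT the GChO equations from the paper.
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, (pop, dim))
    fit = np.array([f(x) for x in X])
    best = X[fit.argmin()].copy()
    best_f = float(fit.min())
    stall = 0
    for _ in range(iters):
        for i in range(pop):
            # Exploitation: jump toward the current best with random jitter.
            cand = best + rng.uniform(-1.0, 1.0, dim) * np.abs(best - X[i])
            cf = f(cand)
            if cf < fit[i]:
                X[i], fit[i] = cand, cf
        if fit.min() < best_f:
            best_f = float(fit.min())
            best = X[fit.argmin()].copy()
            stall = 0
        else:
            stall += 1
        if stall >= stall_limit:
            # CS phase: Levy kicks on the worst quarter to escape local optima.
            for i in fit.argsort()[-pop // 4:]:
                X[i] = best + 0.01 * levy_flight(dim, rng=rng) * (X[i] - best)
                fit[i] = f(X[i])
            stall = 0
    return best, best_f

sphere = lambda x: float(np.sum(x * x))
print(hybrid_search(sphere)[1])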
ISBN (print): 9781509016662, 9781509016679
VLSI technology demands three main factors: high speed, low power, and small chip area. Speed depends directly on clock frequency. The computation time of a digital circuit can be reduced by applying a delay transformation known as retiming to digital signal processing blocks. This transformation requires critical-path and shortest-path computation algorithms. Clock-period minimization retiming is used to minimize the clock period of circuits such as infinite impulse response (IIR) and finite impulse response (FIR) filters. We compute the critical path before applying retiming to the circuit, which gives an estimate of the computation time. Shortest-path algorithms are needed to solve the shortest-path problem on the circuit's graph. We explain the clock-period minimization technique of retiming to enhance speed and propose new shortest-path choices. Existing methods use the Floyd-Warshall (all-pairs shortest path) and Bellman-Ford (single-source shortest path) algorithms in retiming. We substitute Dijkstra's algorithm (single-source shortest path) for Bellman-Ford because it has lower run-time complexity and higher speed. We also observed that most filter data flow graphs are sparse, so we chose Johnson's algorithm (all-pairs shortest path), whose run-time complexity on sparse graphs is lower than that of the existing Floyd-Warshall algorithm. A CAD tool was used to measure the run-time complexity of the overall algorithm.
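Johnson's all-pairs algorithm mentioned above combines one Bellman-Ford pass (to reweight edges, which also handles the negative weights that arise in retiming constraint graphs) with one Dijkstra run per vertex. A compact sketch, on an illustrative sparse graph:

import heapq

def bellman_ford(n, edges, src):
    # Single-source shortest paths; tolerates negative edge weights.
    dist = [float("inf")] * n
    dist[src] = 0
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

def dijkstra(n, adj, src):
    # Single-source shortest paths; requires non-negative weights.
    dist = [float("inf")] * n
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def johnson(n, edges):
    # All-pairs shortest paths on a sparse graph, possibly with negative edges.
    # 1. Add a virtual source with 0-weight edges to every vertex, then one
    #    Bellman-Ford pass yields the potentials h(v).
    aug = edges + [(n, v, 0) for v in range(n)]
    h = bellman_ford(n + 1, aug, n)
    # 2. Reweight: w'(u,v) = w(u,v) + h(u) - h(v) >= 0.
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w + h[u] - h[v]))
    # 3. One Dijkstra per source, then undo the reweighting.
    return [[d - h[u] + h[v] for v, d in enumerate(dijkstra(n, adj, u))]
            for u in range(n)]

# A small sparse graph with a negative edge, as in retiming constraint graphs.
edges = [(0, 1, 3), (1, 2, -2), (2, 3, 4), (0, 3, 10), (3, 0, 1)]
print(johnson(4, edges))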
ISBN (print): 9781479949816
Cloud computing offers a striking option for businesses to pay only for the resources they consume. A prime challenge is to scale MapReduce clusters so as to minimize cost. MapReduce is a widely used parallel computing framework for large-scale data processing, and the major concerns of the MapReduce programming model are job execution time and cluster throughput. Multiple speculative-execution strategies have been proposed, but all fail to address DAG communication and cluster utilization. In this paper we develop a new strategy, OTA (Optimal Time Algorithm), which significantly improves the effectiveness of speculative execution. Because OTA does not consider the difference between the execution times of tasks on the same processors, it may form clusters of tasks that are not similar to each other. The proposed strategy efficiently exploits the characteristics and properties of the MapReduce jobs in a given workload to construct an optimal job schedule. This addresses the problem of minimizing the makespan of workloads that additionally include workflows (DAGs) of MapReduce jobs.
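Viewing each MapReduce job abstractly as a map stage followed by a reduce stage (a simplifying two-machine flow-shop assumption, not the paper's OTA model), the makespan of a candidate job order can be evaluated with the standard recurrence:

def two_stage_makespan(order, map_time, reduce_time):
    # Makespan of jobs run in 'order' through a map stage then a reduce stage,
    # each stage processing one job at a time (two-machine flow-shop view).
    map_done = 0.0    # when the map stage frees up
    red_done = 0.0    # when the reduce stage frees up
    for j in order:
        map_done += map_time[j]                              # maps run back to back
        red_done = max(red_done, map_done) + reduce_time[j]  # reduce waits for its map
    return red_done

# Hypothetical per-job stage durations (seconds); ordering changes the makespan.
map_time    = {"j1": 4, "j2": 2, "j3": 6}
reduce_time = {"j1": 3, "j2": 5, "j3": 1}
print(two_stage_makespan(["j2", "j1", "j3"], map_time, reduce_time))  # 13
print(two_stage_makespan(["j3", "j1", "j2"], map_time, reduce_time))  # 18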
ISBN (print): 9781467358255, 9781467356107
Rough set theory provides principles used for data classification and knowledge reduction. The reduct is one of its main concepts and can be used for feature-set reduction and for data classification. Finding the reduct set is computationally expensive for data sets with a large number of attributes, so several heuristic approaches have been proposed to extract reduct sets, some of which use the Discernibility Matrix (DM) concept to perform the reduct computation. In this paper, the Johnson reduction algorithm and the Object Reduct using Attribute Weighting (ORAW) algorithm for reduct computation are evaluated. Both approaches aim at reducing the number of features in a dataset. Several standard UCI datasets were used in the experiments. The results showed that the ORAW approach gives better classification accuracy: its average accuracy over eight data sets was 85.6%, while the Johnson approach achieved 78.8%. For further evaluation, the two approaches were also compared with other well-known classification techniques.
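The Johnson reduction algorithm evaluated here is, at its core, a greedy heuristic over the discernibility matrix: repeatedly pick the attribute that occurs in the most remaining entries. A toy sketch (the matrix below is hypothetical, not from the paper's datasets):

from collections import Counter

def johnson_reduct(dm_entries):
    # Greedy Johnson-style heuristic: repeatedly pick the attribute that
    # appears in the most remaining discernibility-matrix entries.
    entries = [set(e) for e in dm_entries if e]
    reduct = []
    while entries:
        counts = Counter(a for e in entries for a in e)
        best = counts.most_common(1)[0][0]
        reduct.append(best)
        entries = [e for e in entries if best not in e]
    return reduct

# Toy discernibility matrix: each entry lists the attributes that
# distinguish one pair of objects (hypothetical data).
dm = [{"a", "b"}, {"b", "c"}, {"a", "c"}, {"b"}, {"c", "d"}]
print(johnson_reduct(dm))   # e.g. ['b', 'c']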
ISBN (print): 9783642330124
A reasonable arrangement of the arrival times of inbound trucks at a distribution center is very important for ensuring the smooth circulation of goods and for cost reduction. The unloading and sorting stages are modeled as a two-stage flow shop in a cross-docking (CD) environment with only one door and one conveyor, and the Johnson algorithm is applied to minimize the total processing time of the goods (work pieces). Finally, using a distribution center's order data, the minimum time required to sort all batches of goods is calculated and the optimal sequence of inbound trucks is obtained, which can provide guidance for CD practice.
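Johnson's rule for the two-machine flow shop (here: door = unloading, conveyor = sorting) is short enough to sketch in full; the truck data below are illustrative:

def johnson_rule(jobs):
    # Johnson's rule for a two-machine flow shop.
    # jobs: {name: (stage1_time, stage2_time)} -> optimal processing order.
    front = sorted((j for j, (t1, t2) in jobs.items() if t1 <= t2),
                   key=lambda j: jobs[j][0])               # ascending stage-1 time
    back = sorted((j for j, (t1, t2) in jobs.items() if t1 > t2),
                  key=lambda j: jobs[j][1], reverse=True)  # descending stage-2 time
    return front + back

# Illustrative trucks: (unloading time at the door, sorting time on the conveyor).
trucks = {"T1": (3, 6), "T2": (5, 2), "T3": (1, 2), "T4": (6, 6), "T5": (7, 5)}
print(johnson_rule(trucks))   # ['T3', 'T1', 'T4', 'T5', 'T2']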
Scheduling consists mainly of allocating resources to jobs over time under the necessary constraints. In the past, the processing time of each job was usually assigned or estimated as a fixed value, but in many real-world applications job processing times vary dynamically. McCahon and Lee proposed a fuzzy Johnson algorithm for managing such uncertain scheduling; however, their procedure has problems in the calculation of each job's starting time. In this paper we modify McCahon and Lee's algorithm and propose a new, reasonable procedure for eliminating start-time uncertainties. A half-inverse operator is defined, and 24 cases are analyzed to verify the procedure. Analytical and experimental results showing the effectiveness of our method are also presented. (C) 1999 Elsevier Science Ltd. All rights reserved.
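Fuzzy flow-shop methods of this kind typically encode each uncertain processing time as a triangular fuzzy number (TFN). A minimal TFN helper is sketched below; the centroid ranking and componentwise max are common conventions assumed here, not necessarily McCahon and Lee's exact operators:

from dataclasses import dataclass

@dataclass(frozen=True)
class TFN:
    # Triangular fuzzy number (a, b, c): support [a, c], peak at b.
    a: float
    b: float
    c: float

    def __add__(self, other):
        # Fuzzy addition of TFNs is exact and componentwise.
        return TFN(self.a + other.a, self.b + other.b, self.c + other.c)

    def centroid(self):
        # One common defuzzification used for ranking.
        return (self.a + self.b + self.c) / 3.0

def fuzzy_max(x, y):
    # Componentwise approximation of the fuzzy max (a common simplification).
    return TFN(max(x.a, y.a), max(x.b, y.b), max(x.c, y.c))

# Uncertain processing times for two jobs (illustrative values).
p1, p2 = TFN(2, 3, 5), TFN(1, 4, 6)
print((p1 + p2).centroid(), fuzzy_max(p1, p2))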
The Frobenius problem is to find a method (an algorithm) for calculating the largest "sum of money" that cannot be formed with coins whose values b_0, b_1, ..., b_w are coprime integers. As admissible solutions (algorithms), it is common practice to study polynomial algorithms, so named for the form of the dependence of their time expenditure on the length of the input. The difficulty of the Frobenius problem is apparent from the fact that already for w = 3 the existence of a polynomial solution remains an open problem. In the present paper, we distinguish some classes of input data for which the problem can be solved polynomially; nevertheless, argumentation in the spirit of the complexity theory of algorithms is kept to a minimum.
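For small instances the Frobenius number can be computed exactly by a shortest-path construction over residues modulo the smallest coin value (a classic Nijenhuis-style approach, sketched here as background; it is not the polynomial input classes studied in the paper):

import heapq
from math import gcd
from functools import reduce

def frobenius(coins):
    # Largest integer not representable as a non-negative combination of coins.
    # Requires gcd over all coins == 1. Shortest paths over residues mod the
    # smallest coin a: dist[r] = smallest representable number congruent to r.
    assert reduce(gcd, coins) == 1, "coin values must be coprime overall"
    a = min(coins)
    dist = [float("inf")] * a
    dist[0] = 0
    pq = [(0, 0)]
    while pq:
        d, r = heapq.heappop(pq)
        if d > dist[r]:
            continue
        for c in coins:
            nd, nr = d + c, (r + c) % a
            if nd < dist[nr]:
                dist[nr] = nd
                heapq.heappush(pq, (nd, nr))
    return max(dist) - a

# Classic example: with coins 6, 9, 20 the largest unreachable value is 43.
print(frobenius([6, 9, 20]))   # 43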
Flexible flow shops can be thought of as generalizations of simple flow shops. In the past, the processing time of each job was usually assumed to be known exactly, but in many real-world applications processing times may vary dynamically due to human factors or operating faults. In T.P. Hong, W.C. Chen [Journal of Advanced Computing Intelligence 2 (4) (1998) 142-149], we demonstrated how discrete fuzzy concepts could easily be used in Sriskandarajah and Sethi's algorithm for managing uncertain flexible-flow-shop scheduling. In this paper we generalize that work to continuous fuzzy domains. We use triangular membership functions for flexible flow shops with two machine centers to model processing-time uncertainty and to make scheduling more suitable for real applications. We first use a triangular fuzzy LPT algorithm to allocate jobs, and then a triangular fuzzy Johnson algorithm to sequence the tasks. The proposed method thus provides a more flexible way of scheduling jobs than conventional scheduling methods. (C) 2000 Elsevier Science Inc. All rights reserved.
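As a rough illustration of the first step only, here is a sketch of LPT allocation over triangular fuzzy processing times, defuzzified by centroid; the job data and two-center setup are assumptions for the example, and the paper's triangular fuzzy operators are richer than this:

def centroid(t):
    # Defuzzify a triangular fuzzy number (a, b, c) by its centroid.
    return sum(t) / 3.0

def tfn_add(x, y):
    # Fuzzy addition of triangular fuzzy numbers is componentwise.
    return tuple(a + b for a, b in zip(x, y))

def fuzzy_lpt(jobs, n_centers=2):
    # Fuzzy LPT: sort jobs by decreasing defuzzified processing time and
    # assign each to the currently least-loaded machine center.
    # jobs: {name: (a, b, c)} -> list of job-name lists, one per center.
    loads = [(0.0, 0.0, 0.0)] * n_centers
    assignment = [[] for _ in range(n_centers)]
    for name in sorted(jobs, key=lambda j: centroid(jobs[j]), reverse=True):
        k = min(range(n_centers), key=lambda i: centroid(loads[i]))
        assignment[k].append(name)
        loads[k] = tfn_add(loads[k], jobs[name])
    return assignment

jobs = {"A": (2, 3, 4), "B": (5, 6, 8), "C": (1, 2, 3), "D": (4, 5, 6)}
print(fuzzy_lpt(jobs))   # [['B', 'C'], ['D', 'A']]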