Recent advances in digital microfluidics have led to the promise of miniaturized laboratories, with the associated advantages of high sensitivity and fewer human-induced errors. Front-end operations such as sample preparation play a pivotal role in biochemical laboratories and in biomedical engineering and life-science applications. For fast, high-throughput biochemical applications, preparing samples of multiple target concentrations sequentially is inefficient and time-consuming; it is therefore critical to prepare samples of multiple target concentrations concurrently. In addition, since the reagents used in biochemical reactions are expensive, reagent saving has become an important consideration in sample preparation. Prior work in this area does not address reagent saving and concurrent sample preparation for multiple target concentrations together. In this paper, we propose the first reagent-saving mixing algorithm for biochemical samples of multiple target concentrations. The proposed algorithm not only minimizes reagent consumption but also reduces the number of waste droplets and the sample preparation time by preparing the target concentrations concurrently. The algorithm is evaluated on both real biochemical experiments and synthetic test cases to demonstrate its effectiveness and efficiency. Compared to prior work, it achieves up to a 41% reduction in the number of reagent and waste droplets, and up to a 50% reduction in sample preparation time.
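To make the droplet-mixing model concrete, here is a minimal sketch of the standard bit-serial (1:1) dilution scheme that underlies this line of work; it illustrates the basic mixing model only, not the paper's reagent-saving algorithm. The function name, the depth parameter d, and the droplet labels are hypothetical.

```python
def one_to_one_mix_sequence(target, d=8):
    """Approximate `target` (in [0, 1]) to d bits via a chain of 1:1 mixes."""
    k = round(target * (1 << d))             # nearest d-bit approximation
    conc = 0.0                               # start from a pure buffer droplet
    steps = []
    for i in range(d):                       # consume bits of k, LSB first
        bit = (k >> i) & 1
        # A 1:1 mix with a pure reagent (1.0) or buffer (0.0) droplet
        # averages the two concentrations.
        conc = (conc + (1.0 if bit else 0.0)) / 2.0
        steps.append(("reagent" if bit else "buffer", conc))
    return steps, conc

steps, achieved = one_to_one_mix_sequence(0.3, d=8)
print(f"achieved {achieved:.5f} in {len(steps)} mixes")  # 0.30078 in 8 mixes
```

Each 1:1 mix of two unit droplets yields two result droplets, only one of which is carried forward, so every step also emits a potential waste droplet; sharing such intermediate droplets across several concurrently prepared targets is what makes reagent and waste savings possible.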
Advances in computer and communication technologies have stimulated the integration of digital video and audio with computing, leading to the development of computer-assisted multimedia conferencing. We address the problem of media mixing, which arises in teleconferencing applications such as teleorchestra. We present a mixing algorithm that minimizes the difference between the generation times of the media packets being mixed together, in the absence of globally synchronized clocks but in the presence of jitter in communication delays on packet-switched networks. The algorithm is shown to be complete: given that there are no message exchanges other than media data between mixers and media sources, no other algorithm can succeed when ours fails. Mixing can be accomplished by several different communication architectures. To support applications such as teleorchestra, which involve a large number of participants, we propose hierarchical mixing architectures and show that they are an order of magnitude more scalable than purely centralized or distributed architectures. Furthermore, we present mechanisms for minimizing the delays incurred by mixing in the various communication architectures. We have implemented the mixing algorithms on a network of workstations connected by Ethernets and experimentally evaluated the performance of the various mixing architectures. These experiments revealed results such as the maximum number of participants that can be supported in a conference.
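As a rough illustration of the buffering-and-alignment idea (not the completeness-proved algorithm described above), the sketch below anchors each source's local clock to its first packet and sums the samples whose aligned timestamps fall in the same mixing window; the Mixer class, WINDOW_MS, and the first-packet offset heuristic are all assumptions made for illustration.

```python
from collections import defaultdict

WINDOW_MS = 20  # assumed packetization period / mixing interval

class Mixer:
    def __init__(self):
        self.offset = {}                  # per-source local-clock anchor
        self.buffers = defaultdict(dict)  # source -> {aligned slot: samples}

    def receive(self, source, local_ts_ms, samples):
        # Anchor each source's clock at its first packet: a crude stand-in
        # for estimating relative generation times without global clocks.
        self.offset.setdefault(source, local_ts_ms)
        slot = (local_ts_ms - self.offset[source]) // WINDOW_MS
        self.buffers[source][slot] = samples

    def mix(self, slot):
        # Sum the samples each source produced for the same aligned slot.
        parts = [buf[slot] for buf in self.buffers.values() if slot in buf]
        if not parts:
            return None
        n = min(len(p) for p in parts)
        return [sum(p[i] for p in parts) for i in range(n)]

m = Mixer()
m.receive("alice", 1000, [1, 2, 3])   # alice's local clock starts near 1000 ms
m.receive("bob",   5000, [4, 5, 6])   # bob's local clock origin is different
print(m.mix(0))                        # [5, 7, 9]
```

In a hierarchical architecture, each mixer's output stream would simply feed a parent mixer as one more source, which is what would keep per-node fan-in, and hence mixing delay, bounded as the number of participants grows.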
Although Q-learning (QL) is widely used in reinforcement learning, it suffers from overestimation bias, which can lead to poor performance in stochastic environments due to its susceptibility to maximization bias. To address this problem, various bias-correction mechanisms have been proposed. However, while these mechanisms may reduce overestimation bias, some of them introduce underestimation bias, which is undesirable in some environments. To leverage both overestimation and underestimation biases, we introduce an underestimation mechanism called the min estimator, followed by our proposed Max_Mix_Min Q-learning (M3QL) method, which incorporates a balance parameter β. Our method also considers the number N of Q-functions. We first theoretically analyze why our method benefits from both overestimation and underestimation bias under the assumption of different bias distributions, and how the sample size N affects its performance. Additionally, we visualize the theoretical analysis results on a Meta-chain MDP example. The theoretical analysis demonstrates that M3QL achieves bias reduction compared to QL and to the underestimation mechanism alone. Furthermore, we theoretically prove that M3QL is unbiased for certain values of β. In experimental comparisons with state-of-the-art methods on Atari benchmark problems, our method consistently outperforms them. We also compare M3QL with the underestimation mechanism and Deep Q-Network (DQN) on these benchmark problems, revealing that M3QL...
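The abstract does not spell out the M3QL update rule, so the following tabular sketch is one plausible reading: blend the overestimating target (max over an ensemble of N Q-tables, as in plain QL) with the underestimating min-estimator target via the balance parameter β. The function name, hyperparameter defaults, and the random-table update are assumptions.

```python
import random
from collections import defaultdict

def m3ql_update(Q, s, a, r, s_next, actions,
                beta=0.5, alpha=0.1, gamma=0.99):
    """One hypothetical M3QL step; Q is a list of N (state, action) tables."""
    # Overestimating bootstrap: greedy value of the pointwise-max ensemble.
    max_est = max(max(q[(s_next, b)] for q in Q) for b in actions)
    # Underestimating bootstrap ("min estimator"): pointwise-min ensemble.
    min_est = max(min(q[(s_next, b)] for q in Q) for b in actions)
    # beta = 1 recovers the max estimator, beta = 0 the min estimator.
    target = r + gamma * (beta * max_est + (1 - beta) * min_est)
    # Update one randomly chosen table, as ensemble Q-methods typically do.
    i = random.randrange(len(Q))
    Q[i][(s, a)] += alpha * (target - Q[i][(s, a)])

N = 4
Q = [defaultdict(float) for _ in range(N)]
m3ql_update(Q, s=0, a=1, r=1.0, s_next=1, actions=[0, 1])
```

Under this reading, an intermediate β trades the two biases against each other, which is consistent with the abstract's claim that M3QL is unbiased for certain values of β.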