There is a growing body of experimental evidence suggesting that Ca2+ signaling in ventricular myocytes is characterized by a steep gradient near the cell membrane and a more uniform Ca2+ distribution in the cell interior [1]--[7]. An important reason for this phenomenon might be that in these cells the t-tubular system forms a network of extracellular space extending deep into the cell interior. This allows the electrical signal, which propagates rapidly along the cell membrane, to reach the vicinity of the sarcoplasmic reticulum (SR), where the intracellular Ca2+ required for myofilament activation is stored [1], [8]--[11]. Early studies of cardiac muscle showed that the t-tubules occur at intervals of about 2 μm along the longitudinal cell axis, in close proximity to the Z-disks of the sarcomeres [12]. Subsequent studies have demonstrated that the t-tubular system also has longitudinal extensions [9]--[11], [13].
ISBN (print): 9780769531403
In this paper we present the architecture for the Personal Autonomic Desktop Manager, a self-managing application designed to act on behalf of the user in several aspects: protection, healing, optimization, and configuration. The overall goal of this research is to improve the correlation of the autonomic self-* properties and, in doing so, to enhance the overall self-management capacity of the desktop (autonomicity). We introduce the Circulatory Computing (CC) model, a self-managing system initiative based on the biological metaphor of the cardiovascular system, and use its concepts in the design and implementation of the architecture.
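The abstract does not detail how the four self-* concerns are composed; as a rough sketch only, the code below models each concern as a handler polled by a central "heartbeat" loop, loosely echoing the cardiovascular metaphor. All class and method names are hypothetical and are not taken from the paper.

```python
from abc import ABC, abstractmethod

class SelfStarProperty(ABC):
    """One autonomic concern (hypothetical interface, not from the paper)."""

    @abstractmethod
    def sense(self) -> dict:
        """Collect raw observations about the desktop."""

    @abstractmethod
    def react(self, observations: dict) -> None:
        """Adjust the desktop based on the observations."""

class SelfProtection(SelfStarProperty):
    def sense(self) -> dict:
        return {"suspicious_processes": 0}  # placeholder probe

    def react(self, observations: dict) -> None:
        if observations["suspicious_processes"] > 0:
            print("quarantining processes")

class SelfHealing(SelfStarProperty):
    def sense(self) -> dict:
        return {"crashed_services": []}  # placeholder probe

    def react(self, observations: dict) -> None:
        for service in observations["crashed_services"]:
            print(f"restarting {service}")

class DesktopManager:
    """Central loop circulating observations through the registered
    concerns, one 'pulse' at a time."""

    def __init__(self, properties: list[SelfStarProperty]):
        self.properties = properties

    def heartbeat(self) -> None:
        # One pulse: every concern senses, then reacts.
        for prop in self.properties:
            prop.react(prop.sense())

if __name__ == "__main__":
    DesktopManager([SelfProtection(), SelfHealing()]).heartbeat()
```

Self-optimization and self-configuration handlers would slot into the same list, which is the point of the shared interface: the circulation loop stays unchanged as concerns are added.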
Tradeoffs between time complexity and solution optimality are important when selecting algorithms for an NP-hard problem in different applications. The distinction between the theoretical upper bound and the actual solution optimality on realistic instances of an NP-hard problem is also a factor in selecting algorithms in practice. We consider the problem of partitioning a sequence of n distinct numbers into a minimum number of monotone (increasing or decreasing) subsequences. This problem is NP-hard, and the number of monotone subsequences can reach ⌈√(2n+1/4) − 1/2⌉ in the worst case. We introduce a new algorithm, a modified version of the Yehuda-Fogel algorithm, that computes a solution of no more than ⌈√(2n+1/4) − 1/2⌉ monotone subsequences in O(n^1.5) time. We then perform a comparative experimental study of three algorithms: a known approximation algorithm with approximation ratio 1.71 and time complexity O(n^3), a known greedy algorithm with time complexity O(n^1.5 log n), and our new modified Yehuda-Fogel algorithm. Our results show that the solutions computed by the greedy algorithm and the modified Yehuda-Fogel algorithm are close to those computed by the approximation algorithm, even though the theoretical worst-case error bounds of these two algorithms have not been proved to be within a constant factor of the optimal solution. Our study indicates that for practical use the greedy algorithm and the modified Yehuda-Fogel algorithm can be good choices when running time is a major concern.
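The abstract does not spell out the greedy algorithm it evaluates; as an illustration only, the sketch below implements one natural greedy strategy for the problem: scan the sequence once and append each number to the first compatible monotone subsequence, opening a new one when none fits. This simple version runs in O(nk) time for k subsequences, rather than the O(n^1.5 log n) of the greedy algorithm studied, and it carries no worst-case guarantee.

```python
import math
import random

def greedy_monotone_partition(seq):
    """Partition seq into monotone subsequences by a single greedy scan.

    Each subsequence is stored as [direction, elements], where direction
    is None until a second element fixes it as "inc" or "dec".
    """
    subs = []
    for x in seq:
        for s in subs:
            direction, elems = s
            tail = elems[-1]
            if direction in (None, "inc") and x > tail:
                elems.append(x)
                s[0] = "inc"
                break
            if direction in (None, "dec") and x < tail:
                elems.append(x)
                s[0] = "dec"
                break
        else:
            subs.append([None, [x]])  # no compatible subsequence: open a new one
    return [elems for _, elems in subs]

if __name__ == "__main__":
    random.seed(1)
    seq = random.sample(range(1000), 50)
    parts = greedy_monotone_partition(seq)
    # Worst-case bound guaranteed by the modified Yehuda-Fogel algorithm
    # (the greedy sketch above has no such guarantee): ceil(sqrt(2n + 1/4) - 1/2).
    # For n = 50: ceil(sqrt(100.25) - 0.5) = ceil(9.51...) = 10.
    bound = math.ceil(math.sqrt(2 * len(seq) + 0.25) - 0.5)
    print(len(parts), "subsequences; worst-case bound:", bound)
```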
ISBN (print): 9781605580302
Software technologies, such as model-based testing approaches, have specific characteristics and limitations that can affect their use in software projects. Making knowledge about such technologies available is important for supporting decisions about their use in software projects. In particular, the choice of a model-based testing approach can influence testing success or failure. Therefore, this paper describes knowledge acquired from a systematic review of model-based testing approaches and proposes an infrastructure to support their selection for software projects.
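The abstract does not describe the proposed infrastructure's internals; as one plausible shape for such decision support, the sketch below filters a catalog of approach characterizations against project requirements. Every field and function name here is invented for illustration and is not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class MbtApproach:
    """Characterization of one model-based testing approach.
    Attributes are illustrative, not the ones used in the review."""
    name: str
    modeling_language: str
    test_level: str            # e.g. "unit", "system"
    tool_support: bool
    limitations: list[str] = field(default_factory=list)

def select_approaches(catalog, *, language, level, needs_tool):
    """Return the approaches compatible with a project's requirements."""
    return [
        a for a in catalog
        if a.modeling_language == language
        and a.test_level == level
        and (a.tool_support or not needs_tool)
    ]

catalog = [
    MbtApproach("A", "UML", "system", True),
    MbtApproach("B", "FSM", "unit", False, ["no concurrency support"]),
]
print([a.name for a in select_approaches(
    catalog, language="UML", level="system", needs_tool=True)])
```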
Experimental studies have been used as a mechanism to acquire knowledge through a scientific approach based on the measurement of phenomena in different areas. However, it is hard to run such studies when they require models (simulation), produce large amounts of information, and explore science at scale. In this case a computerized infrastructure is necessary, and it constitutes a complex system to build. In this paper we discuss an experimentation environment that is being built to support large-scale experimentation and scientific knowledge management in software engineering.
We consider a scenario in which users share an access point and are mainly interested in VoIP applications. Each user is allowed to adapt to varying network conditions by choosing the transmission rate at which VoIP t...
This report summarizes the proceedings of the second workshop of the 'Minimum Information for Biological and Biomedical Investigations' (MIBBI) consortium held on Dec 1-2, 2010 in Rüdesheim, Germany through the sponsorship of the Beilstein-Institute. MIBBI is an umbrella organization uniting communities developing Minimum Information (MI) checklists to standardize the description of data sets, the workflows by which they were generated and the scientific context for the work. This workshop brought together representatives of more than twenty communities to present the status of their MI checklists and plans for future development. Shared challenges and solutions were identified and the role of MIBBI in MI checklist development was discussed. The meeting featured some thirty presentations, wide-ranging discussions and breakout groups. The top outcomes of the two-day workshop as defined by the participants were: 1) the chance to share best practices and to identify areas of synergy; 2) defining a series of tasks for updating the MIBBI Portal; 3) reemphasizing the need to maintain independent MI checklists for various communities while leveraging common terms and workflow elements contained in multiple checklists; and 4) revision of the concept of the MIBBI Foundry to focus on the creation of a core set of MIBBI modules intended for reuse by individual MI checklist projects while maintaining the integrity of each MI project. Further information about MIBBI and its range of activities can be found at http://***/.
Secure provenance techniques are essential in generating trustworthy provenance records, where one is interested in protecting their integrity, confidentiality, and availability. In this work, we suggest an architecture to protect authorship and temporal information in grid-enabled provenance systems. It can be used in the resolution of conflicting intellectual property claims and in the reliable chronological reconstitution of scientific experiments. We observe that some techniques from public key infrastructures can be readily applied for this purpose. We discuss the issues involved in implementing such an architecture and describe some experiments performed with the proposed techniques.
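The abstract does not specify which public-key techniques are applied; as a minimal sketch of the general idea, the code below signs a provenance record (author, timestamp, payload digest) with an Ed25519 key via the third-party `cryptography` package, so that later tampering with authorship or temporal information invalidates the signature. The record layout is an assumption for illustration, not the paper's format.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
import json, time, hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

def make_record(author: str, payload: bytes) -> bytes:
    """Serialize a provenance record; fields are illustrative."""
    record = {
        "author": author,                                    # authorship claim
        "timestamp": time.time(),                            # temporal claim
        "payload_sha256": hashlib.sha256(payload).hexdigest(),
    }
    return json.dumps(record, sort_keys=True).encode()

signing_key = ed25519.Ed25519PrivateKey.generate()
record = make_record("alice", b"experiment output")
signature = signing_key.sign(record)

# Verification: any change to author, timestamp, or payload digest
# makes the signature check fail.
verify_key = signing_key.public_key()
try:
    verify_key.verify(signature, record)
    print("record authentic")
except InvalidSignature:
    print("record tampered")
```

A production design would add a trusted timestamping authority rather than the signer's own clock, which is the usual PKI answer to disputed temporal claims.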