This paper addresses the development of an automatic segmentation technique for detecting cell nuclei. The technique uses a new approach for segmenting nuclei in images taken from tissues with colon carcinoma. The segmentation problems encountered in these images and solved by the proposed technique relate to non-uniform background illumination, out-of-focus nuclei, the physical structure of the cells in the tissue section, the activity status of the cells, and clustered cell nuclei. First, the region growing method is used for accurate background detection. The separation regions between grouped cell nuclei are detected using the cross-correlation method and validated based on their connection with the background. Then, the nuclei boundaries are identified by applying the watershed algorithm to the complemented distance transform of the binary image containing the selected separation lines.
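As a rough illustration of the final step, here is a minimal sketch (not the paper's code) of a watershed applied to the complemented distance transform, assuming SciPy and scikit-image; the toy mask and all parameters are invented for the example:

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Toy binary mask: two touching disks standing in for clustered nuclei.
yy, xx = np.mgrid[:80, :80]
binary = ((yy - 40) ** 2 + (xx - 28) ** 2 < 15 ** 2) | \
         ((yy - 40) ** 2 + (xx - 52) ** 2 < 15 ** 2)

# Distance to the background: nucleus centres get the largest values.
dist = ndimage.distance_transform_edt(binary)

# One marker per local maximum of the distance map.
coords = peak_local_max(dist, labels=binary.astype(int), min_distance=10)
markers = np.zeros(dist.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

# Watershed on the complemented (negated) distance transform: each
# nucleus centre becomes a catchment basin, splitting the cluster.
labels = watershed(-dist, markers, mask=binary)
print(labels.max())  # expected: 2 separated nuclei
```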
Summary form only given. It is well known that, despite all of its advantages, the digital revolution also leads to a large variety of new risks. One principal issue in this context is the growing dependence of our modern information society on the availability and correct (proven) functioning of modern communication services. First, I'll give a short overview of threats in communication networks (grids, clouds, etc.), protocols, and secure personal devices. Then I'll discuss current network security approaches based on anonymous message exchanges within communicating systems. Cryptography was first used to ensure data confidentiality; it has since been “democratized” by securing telecommunications services, thereby extending its scope to the authentication of a person, device, or message, to non-repudiation and integrity, but also to the anonymity of transactions. Anonymity is sometimes quite important in new telecommunication and mobile network services, much more so than mere message confidentiality. The talk will focus on some examples and new approaches developed in our research laboratory to deal with anonymity in routing protocols for mobile communicating systems.
One of the most important steps in precise optical flow computation is using the L1 norm for mathematical modeling. For discrete signals, the L1 norm gives better results than L2. Another useful ingredient is combining local and global methods in order to exploit the advantages of both. We present a combined local-global approach that uses the L1 norm for both the data fidelity term and the regularization term. Our approach is robust to noise and occlusions and preserves motion boundaries. Additionally, a version using L1 only for the regularization term and another using only L2 are presented. We show that combined local-global estimators have benefits in real scenarios. All resulting numerical schemes are highly parallelizable, being designed to run on graphics processing units.
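For reference, one plausible way to write such an energy, following the combined local-global literature (our assumption; the paper's exact functional may differ):

```latex
% w = (u, v, 1)^T is the flow, J_\rho = K_\rho * (\nabla_3 I \, \nabla_3 I^T)
% the structure tensor smoothed with a Gaussian K_\rho (the local ingredient),
% and |\nabla u| + |\nabla v| the L1 (total variation) regularizer.
E(w) = \int_\Omega \left( \sqrt{w^\top J_\rho \, w}
       \;+\; \alpha \left( |\nabla u| + |\nabla v| \right) \right) dx \, dy
```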
In research, grid computing is an established way of providing computer resources for information retrieval. However, e-science grids also contain, process, and produce documents, thereby acting as digital libraries and requiring means for information discovery. In this paper, we discuss how distributed information retrieval can be integrated into the Open Grid Services Architecture (OGSA) to efficiently provide image retrieval for e-science grids. We identify two fundamental ways of performing information retrieval on the grid, as a batch job or as a distributed activity, and argue the case for the latter for reasons of efficiency. We give an analysis of the theoretical communication and computation complexity and demonstrate that bandwidth limitations provide a decisive argument in support of our case. We describe further design decisions for our system architecture and give a brief comparison with other designs reported in the literature. Lastly, we describe how the statelessness and isolation of web services impede data-intensive, distributed, cross-site activities in OGSA grids, and how to work around these limitations.
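A back-of-envelope version of the bandwidth argument (the symbols are ours, introduced for illustration, not taken from the paper): with collection size C held at the data's site, query descriptor size q, result-list size k, and link bandwidth B,

```latex
T_{\text{batch}} \sim \frac{C}{B}, \qquad
T_{\text{distributed}} \sim \frac{q + k}{B}, \qquad q + k \ll C
```

so evaluating queries where the data resides moves orders of magnitude fewer bytes than shipping an image collection to a central index.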
GPU computing follows the trend of GPGPU, driven by innovations in both hardware and the programming languages made available to non-graphics programmers. Since some problems require significant time to solve or involve data quantities that do not fit on a single GPU, the logical continuation was to make use of multiple GPUs. In order to use a multi-GPU environment in a general way, our paper presents an approach where each card is driven by either a heavyweight MPI process or a lightweight OpenMP thread. We compare the two models in terms of performance, implementation complexity, and particularities, as well as the overhead implied by the mixed code. We show that the best performance is obtained when we use OpenMP. We also note that using “pinned memory” further improves the execution time. The next objective will be to create a three-level multi-GPU environment with inter-node communication (processes, distributed memory), intra-node GPU management (threads, shared memory), and computation inside the GPU cards.
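As an illustration of the lightweight-thread model only (a sketch with a stubbed solver; the paper's implementation uses OpenMP and real GPU cards), one thread per card sharing the host address space might look like this:

```python
import queue
import threading

NUM_GPUS = 4  # hypothetical number of cards

def solve_on_gpu(device_id, chunk):
    """Stub for the per-card solver; a real version would bind the calling
    thread to card `device_id` (e.g. via cudaSetDevice) and launch kernels
    on its chunk of the data."""
    return sum(chunk)  # stand-in computation

def worker(device_id, tasks, results):
    # One lightweight thread per card, mirroring the OpenMP model: all
    # threads share host memory, so partial results need no explicit
    # message passing (unlike the heavyweight MPI-process variant).
    while True:
        try:
            idx, chunk = tasks.get_nowait()
        except queue.Empty:
            return
        results[idx] = solve_on_gpu(device_id, chunk)

chunks = [list(range(i * 100, (i + 1) * 100)) for i in range(8)]
tasks = queue.Queue()
for idx, c in enumerate(chunks):
    tasks.put((idx, c))
results = [None] * len(chunks)

threads = [threading.Thread(target=worker, args=(d, tasks, results))
           for d in range(NUM_GPUS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results[:2])
```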
Nowadays, with the rapid development of Web services technologies as a solution for achieving SOA, the number of Web services on the web that offer similar functions is increasing. Therefore, discovering and selecting the best Web services based solely on functional requirements and user preferences is not sufficient; to achieve more accurate results, the non-functional properties (i.e., QoS) should also be taken into consideration. In this paper, a new QoS-aware, broker-based framework that uses ontology concepts to improve semantic Web service discovery is proposed. To obtain real-time values of the QoS attributes of Web services, a composite monitoring mechanism continuously collects various WS-related reports. These reports are compiled from a WS monitoring agent, user feedback, and provider advertisements.
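A minimal sketch of how the three report streams named above might be merged into one QoS record (attribute names, weights, and values are invented for illustration; the paper does not specify its aggregation rule):

```python
# Hypothetical QoS reports for one service from the framework's three
# sources: monitoring agent, user feedback, provider advertisement.
reports = {
    "monitoring_agent": {"response_time_ms": 120.0, "availability": 0.99},
    "user_feedback":    {"response_time_ms": 150.0, "availability": 0.97},
    "provider_advert":  {"response_time_ms": 100.0, "availability": 0.999},
}
# Illustrative trust weights: measured data counts most, advertisements least.
weights = {"monitoring_agent": 0.5, "user_feedback": 0.3, "provider_advert": 0.2}

def composite_qos(reports, weights):
    """Weighted merge of the report streams into a single QoS record."""
    attrs = next(iter(reports.values())).keys()
    return {a: sum(weights[s] * r[a] for s, r in reports.items())
            for a in attrs}

print(composite_qos(reports, weights))
# -> {'response_time_ms': 125.0, 'availability': 0.9858}
```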
GPU hardware architectures have evolved into a suitable platform for the hardware acceleration of complex computing tasks. Stereo vision is one such task, where acceleration is desirable for robotic and automotive systems. Much research has been invested in developing stereo vision algorithms of increased quality, but real-time implementations are still lacking. In this work we focus on creating a real-time dense stereo reconstruction system. We selected the Semi-Global Matching method as the basis of our system due to its high quality and reduced computational complexity. The Census transform is selected as the matching metric because our results show that it can reduce matching errors for traffic images compared to classical solutions. We also present two modifications to the original Semi-Global Matching algorithm that improve the sub-pixel accuracy and the execution time. The system was implemented and evaluated on a current-generation GPU, with a running time of 19 ms for images with a resolution of 512×383.
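A minimal NumPy sketch of the Census transform and the resulting Hamming matching cost (window size, helper names, and the toy scene are our own; this is not the paper's GPU code):

```python
import numpy as np

def census_transform(img, win=2):
    """5x5 Census (win=2): each pixel becomes a 24-bit code recording
    whether each neighbour is darker than the centre pixel."""
    h, w = img.shape
    pad = np.pad(img, win, mode='edge')
    bits = np.zeros((h, w), dtype=np.uint32)
    for dy in range(-win, win + 1):
        for dx in range(-win, win + 1):
            if dy == 0 and dx == 0:
                continue
            neigh = pad[win + dy:win + dy + h, win + dx:win + dx + w]
            bits = (bits << 1) | (neigh < img)
    return bits

def hamming_cost(census_l, census_r, d):
    """Matching cost at disparity d: Hamming distance of Census codes."""
    shifted = np.zeros_like(census_r)
    shifted[:, d:] = census_r[:, :census_r.shape[1] - d]
    xor = census_l ^ shifted
    bytes_view = xor.view(np.uint8).reshape(*xor.shape, 4)
    return np.unpackbits(bytes_view, axis=-1).sum(axis=-1)  # popcount

rng = np.random.default_rng(0)
left = rng.random((64, 64))
right = np.roll(left, -3, axis=1)  # toy scene at uniform disparity 3
cost = hamming_cost(census_transform(left), census_transform(right), 3)
print(cost.mean())  # near zero away from the wrapped border columns
```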
Detecting the parts of a vehicle represents a topic of major interest for computer vision applications, especially for pre-crash systems. This paper proposes an artificial-vision-based technique that identifies the pillars of side-viewed cars. The novelty of the approach resides in the multi-layer classification scheme applied within the context of a stereo-based object detection system. From all the objects detected by stereovision, the side-viewed cars are recognized, and for them the pillars are identified. This process of pillar identification is the result of a multi-layer classification that comprises a rough object hypothesis refinement, which selects only those objects that are likely to have one or two wheels, followed by an adaptive boosting classifier built using histogram of oriented gradients features. The boosted classifier performs a fine selection of the wheel-based hypotheses and discriminates between side-viewed vehicles and other objects in a traffic scene. The last step consists of constructing a geometrical model of the pillars' region of interest for the identified side-viewed vehicles.
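A toy sketch of the boosted-classifier layer (random patches and labels stand in for the real wheel-based hypotheses; this is not the paper's training setup), assuming scikit-image and scikit-learn:

```python
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
# Synthetic stand-ins for image patches of wheel-based object hypotheses.
patches = rng.random((40, 64, 64))
labels = rng.integers(0, 2, 40)  # toy labels: 1 = side-viewed vehicle

# Histogram-of-oriented-gradients descriptor per patch ...
X = np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2)) for p in patches])
# ... feeding an adaptive boosting (AdaBoost) ensemble of weak learners.
clf = AdaBoostClassifier(n_estimators=50).fit(X, labels)
print(clf.predict(X[:5]))
```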
The success of the collaborative web-based MediaWiki platform, widely used in several projects to exchange knowledge, created a new idea: to use this system as a low-tech interoperability and repository layer for data providers, end users, developers, and project partners. Facilitating the acquisition of knowledge for multimedia digital resources is a task that usually requires special-purpose interfaces with which users are not familiar. The method effectively enables data providers to publish their metadata about multimedia content in the field of biodiversity in a push operation to a metadata repository through a familiar interface such as MediaWiki templates. The workflow then involves a procedure for automatic metadata harvesting into a Fedora Commons repository, combined with the automatic creation of repository reports written to wiki pages in order to provide feedback to the data providers and end users. The models, techniques, standards, and protocols used in the KeyToNature project make MediaWiki a suitable candidate for achieving interoperability at the syntactic and semantic levels with a low technological entry barrier.
Randomness test suites constitute an essential component within the process of assessing random number generators with a view to determining their suitability for a specific application. Evaluating the randomness quality of the random number sequences produced by a given generator is not an easy task, considering that no finite set of statistical tests can assure perfect randomness; instead, each test attempts to rule out sequences that show deviation from perfect randomness by means of certain statistical properties. This is the reason why several batteries of statistical tests are applied to increase the confidence in the selected generator. Therefore, in the present context of constantly increasing volumes of random data that need to be tested, special importance has to be given to the performance of the statistical test suites. Our work follows this direction: this paper presents the results of improving the well-known NIST Statistical Test Suite (STS) by introducing parallelism and a paradigm shift towards byte processing, delivering a design that is better suited to today's multicore architectures. Experimental results show a very significant speedup of up to 103 times compared to the original version.
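As a flavour of the byte-processing idea (our own minimal sketch of the NIST frequency/monobit test, not the suite's actual code): a 256-entry table lets the test consume eight bits per lookup instead of one, and independent chunks of the table-driven sum can then be spread across cores.

```python
import numpy as np
from math import erfc, sqrt

# 256-entry lookup table: number of one-bits in every possible byte value,
# so the bit stream is consumed 8 bits per table access instead of 1.
POPCOUNT = np.array([bin(b).count("1") for b in range(256)], dtype=np.uint32)

def monobit_pvalue(byte_seq):
    """NIST SP 800-22 frequency (monobit) test over a uint8 array."""
    n = byte_seq.size * 8
    ones = int(POPCOUNT[byte_seq].sum())
    s_obs = abs(2 * ones - n) / sqrt(n)   # |#ones - #zeros| / sqrt(n)
    return erfc(s_obs / sqrt(2))

rng = np.random.default_rng(1)
data = rng.integers(0, 256, 1_000_000, dtype=np.uint8)
print(monobit_pvalue(data))  # should exceed 0.01 for a good generator
```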