Sink scheduling, i.e., scheduling multiple sinks among sink sites to balance the traffic burden, is an effective mechanism for improving the energy efficiency of wireless sensor networks (WSNs). Due to the inherent diffic...
Accurately monitoring changing energy usage patterns in households is a first requirement for more efficient and eco-friendly energy management. Such data is essential to the establishment of the Smart Grid, but at th...
The User Model plays a significant role in many personalized applications. Separate user modeling leaves fragments of the user model distributed across the Web, but user modeling in a Multi-Application Envi...
Hardware components can contain hidden backdoors, which can be enabled with catastrophic effects or for ill-gotten profit. These backdoors can be inserted by a malicious insider on the design team or a third-party IP provider. In this paper, we propose techniques that allow us to build trustworthy hardware systems from components designed by untrusted designers or procured from untrusted third-party IP providers. We present the first solution for disabling digital, design-level hardware backdoors. The principle is that rather than try to discover the malicious logic in the design -- an extremely hard problem -- we make the backdoor design problem itself intractable to the attacker. The key idea is to scramble inputs that are supplied to the hardware units at runtime, making it infeasible for malicious components to acquire the information they need to perform malicious actions. We show that the proposed techniques cover the attack space of deterministic, digital HDL backdoors, provide probabilistic security guarantees, and can be applied to a wide variety of hardware components. Our evaluation with the SPEC 2006 benchmarks shows negligible performance loss (less than 1% on average) and that our techniques can be integrated into contemporary microprocessor designs.
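The input-scrambling idea can be illustrated with a rough software analogy (this is not the paper's actual hardware mechanism; the `MAGIC` trigger value, `untrusted_add`, and the random-share splitting scheme are illustrative assumptions). An untrusted unit that misbehaves on a magic operand never sees the real operands, only uniformly random shares of them:

```python
import random

MAGIC = 0xDEADBEEF  # hypothetical trigger value the hidden backdoor watches for

def untrusted_add(a, b):
    """Untrusted adder with a hidden backdoor: misbehaves on the magic input."""
    if a == MAGIC or b == MAGIC:
        return 0  # malicious behaviour fires only when the trigger is observed
    return a + b

def scrambled_add(a, b):
    """Feed the untrusted unit random shares, so the trigger is seen only
    with negligible probability (this is the probabilistic guarantee)."""
    r1 = random.getrandbits(32)
    r2 = random.getrandbits(32)
    # The unit sees a - r1 and b - r2, which look uniformly random to it.
    partial = untrusted_add(a - r1, b - r2)
    # Undo the scrambling outside the untrusted unit.
    return partial + r1 + r2
```

With scrambling, the magic operand reaches the adder only if a random share happens to hit it, an event of probability about 2^-32 per call in this toy setting.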
Field programmable gate arrays (FPGAs) are widely used in reliability-critical systems due to their reconfiguration ability. However, with shrinking device feature sizes and increasing die areas, today's FPGAs can ...
Uncertainty handling is a major issue in the control of real-world systems. Traditional singleton type-1 Fuzzy Logic Controllers (FLCs) with crisp inputs and precise fuzzy sets cannot fully cope with the high levels of uncertainty present in real-world environments (e.g. sensor noise, environmental impacts, etc.). While non-singleton type-1 fuzzy systems can provide an additional degree of freedom through non-singleton fuzzification of the inputs, it is unclear how this capability relates to singleton type-1 and specifically interval type-2 FLCs in terms of control performance (also because applications of non-singleton type-1 FLCs are quite rare in the literature). In recent years, interval type-2 FLCs employing type-2 fuzzy sets with a Footprint of Uncertainty (FOU) have become increasingly popular. This FOU provides an additional degree of freedom that can enable type-2 FLCs to handle the uncertainties associated with the inputs and outputs of the FLCs. One of the main criticisms of singleton type-2 FLCs is that they outperform the (usually singleton) type-1 FLCs only because their type-2 fuzzy sets employ extra parameters, making improved performance an obvious result. In order to address this criticism, we have implemented a non-singleton type-1 FLC, which allows a more direct comparison between the non-singleton type-1 FLC and the singleton interval type-2 FLC, as the number of parameters in both controllers is very similar. The paper details the implementation of the FLCs for the application of a nonlinear servo system and provides the experimental simulation results used to study the effect of increasing levels of uncertainty (in the form of input noise) and the capability of the individual FLCs to cope with them. We conclude by providing our interpretation of the results and highlighting the essential differences in uncertainty handling between the (non-)singleton type-1 and the singleton interval type-2 FLCs.
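The difference between singleton and non-singleton fuzzification can be sketched numerically (a minimal illustration assuming Gaussian input and antecedent sets and the product t-norm; the function names and parameter values are our choices, not the paper's):

```python
import math

def gauss(x, c, sigma):
    """Gaussian membership function centred at c with width sigma."""
    return math.exp(-0.5 * ((x - c) / sigma) ** 2)

def singleton_firing(x, c_ant, s_ant):
    # Singleton fuzzification: the crisp input x is matched directly,
    # so the firing strength is the antecedent membership at x.
    return gauss(x, c_ant, s_ant)

def nonsingleton_firing(x, s_in, c_ant, s_ant):
    # Non-singleton fuzzification: the input is itself a Gaussian fuzzy set
    # centred on the measurement x with width s_in (modelling input noise).
    # The firing strength is the supremum over x' of the product of the
    # input set and the antecedent set; for two Gaussians this supremum
    # is attained at the closed-form point x*.
    x_star = (s_ant**2 * x + s_in**2 * c_ant) / (s_ant**2 + s_in**2)
    return gauss(x_star, x, s_in) * gauss(x_star, c_ant, s_ant)
```

The extra parameter s_in is the additional degree of freedom: as the input noise level grows, a larger s_in widens the effective match between input and antecedent, which is the mechanism being compared against the FOU of an interval type-2 set.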
Texture is one of the most used low-level features for image analysis and, in addition, one of the most difficult to characterize due to its imprecision. Humans usually describe visual textures according to perceptual properties such as coarseness-fineness, orientation or regularity. In this paper, we propose to model the fineness property, the most popular one, by means of a fuzzy partition on the domain of representative fineness measures. In our study, a wide variety of measures is studied, and the partitions are obtained by relating each measure (our reference set) to the human perception of fineness. Assessments of the perception of this property are collected from polls. This information is used to analyze the capability of each measure to discriminate different fineness categories, which determines the number of fuzzy sets in the partition. Moreover, it is used to calculate the parameters of the membership function associated with each fuzzy set.
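The shape of such a fuzzy partition can be sketched as follows (a minimal illustration: the underlying measure, the three categories, and the trapezoidal breakpoints are hypothetical stand-ins; in the paper both the number of sets and the membership parameters are derived from the poll data):

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside (a, d), 1 on [b, c], linear ramps between."""
    if b <= x <= c:
        return 1.0
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical partition of a normalized fineness measure (e.g. an edge-density
# score in [0, 1]) into three perceptual categories. The breakpoints are chosen
# so that adjacent ramps overlap and memberships sum to 1 (a Ruspini partition).
FINENESS_SETS = {
    "coarse": (0.0, 0.0, 0.2, 0.4),
    "medium": (0.2, 0.4, 0.6, 0.8),
    "fine":   (0.6, 0.8, 1.0, 1.0),
}

def fineness_memberships(measure):
    """Degree to which a measure value belongs to each fineness category."""
    return {label: trapezoid(measure, *p) for label, p in FINENESS_SETS.items()}
```

A value falling between two breakpoints belongs partially to both neighbouring categories, which is how the partition models the imprecision of human fineness judgments.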
Distributed file systems (DFSs) play an important role in supporting large distributed data-intensive applications and meeting their storage needs. Typically, the design of a DFS, such as GFS at Google, DMS at Cisco and TFS at Alibaba, is driven by observations of specific application workloads, internal demands and the technological environment. In such systems, the metadata service is a critical factor that can affect file system performance and availability to a great degree. Five requirements have been summarized for the metadata service: location-transparent file service, smart director, efficient speed, strong scalability and friendly collaborator. In this paper, we present the metadata service module, called CH Masters, in our DFS. The consistent hashing protocol is used to relieve potential hot spots on name servers. Files' metadata and master nodes are mapped into the same hash space by a consistent hash function, and files' metadata are then scattered to master nodes by the clockwise "closest" principle. A chunk server acts as a client when reporting its chunk information. Only a small proportion of files' metadata needs to be rehashed when the state of the master nodes changes. A new scalable file mapping strategy is also proposed to map files ranging in size from a few MB to several GB efficiently. Intensive experiments show that CH Masters satisfies the above five requirements.
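The clockwise "closest" mapping can be sketched with a minimal consistent-hash ring (the class and function names are our own, and virtual nodes and replication, which a production metadata service would need, are omitted):

```python
import bisect
import hashlib

def h(key):
    """Map a string into the shared hash space (MD5 used here for illustration)."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class CHRing:
    """Consistent-hash ring: master nodes and file metadata share one hash space."""

    def __init__(self, masters):
        # Each master occupies a point on the ring, sorted by hash value.
        self.ring = sorted((h(m), m) for m in masters)
        self.keys = [k for k, _ in self.ring]

    def master_for(self, path):
        # Clockwise "closest" rule: the first master at or after the file's
        # hash owns its metadata, wrapping around at the end of the ring.
        i = bisect.bisect_right(self.keys, h(path)) % len(self.ring)
        return self.ring[i][1]
```

When a master joins, only the files in the arc between its predecessor and itself move to the new node; everything else keeps its old owner, which is why only a small proportion of metadata is rehashed.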
The Ray Tracing rendering algorithm can produce high-fidelity images of 3-D scenes, including shadow effects, as well as reflections and transparencies. This is currently done at a processing speed of at most 30 frame...