Background: In recent years, the demand for interactive photorealistic three-dimensional (3D) environments has increased in various fields, including architecture, engineering, and entertainment. However, achieving a balance between the quality and efficiency of high-performance 3D applications and virtual reality (VR) remains challenging. This study addresses this issue by revisiting and extending view interpolation for image-based rendering (IBR), which enables the exploration of spacious open environments in 3D and VR. Specifically, we introduce multimorphing, a novel rendering method based on a spatial data structure of 2D image patches, called the image graph. With this approach, novel views can be rendered with up to six degrees of freedom using only a sparse set of views. The rendering process does not require 3D reconstruction of the geometry or per-pixel depth information, and all relevant data for the output are extracted from the local morphing cells of the image graph. The detection of parallax image regions during preprocessing reduces rendering artifacts by extrapolating image patches from adjacent cells in advance. In addition, a GPU-based solution is presented to resolve exposure inconsistencies within a dataset, enabling seamless transitions of brightness when moving between areas with varying light conditions. Experiments on multiple real-world and synthetic scenes demonstrate that the presented method achieves high "VR-compatible" frame rates, even on mid-range and legacy hardware, while achieving adequate visual quality even for sparse datasets; it outperforms other IBR and current neural rendering approaches. Using the correspondence-based decomposition of input images into morphing cells of 2D image patches, multidimensional image morphing provides high-performance novel view generation, supporting open 3D and VR environments. However, the handling of morphing artifacts in the parallax image regions remains a topic for future research.
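The core operation of rendering from a morphing cell — blending corresponding image patches with weights derived from the virtual camera's position — can be sketched as a convex combination of the cell's corner views. This is a minimal illustration under assumed inputs (pre-aligned patches and given barycentric weights), not the paper's actual implementation:

```python
import numpy as np

def morph_patch(patches, weights):
    """Blend pre-aligned image patches from the corners of a morphing
    cell into a novel-view patch using convex (barycentric) weights."""
    weights = np.asarray(weights, dtype=np.float64)
    if not np.isclose(weights.sum(), 1.0) or (weights < 0).any():
        raise ValueError("weights must be non-negative and sum to 1")
    stacked = np.stack([np.asarray(p, dtype=np.float64) for p in patches])
    # Weighted sum over the patch axis: the novel view is a convex
    # combination of the cell's corner views.
    return np.tensordot(weights, stacked, axes=1)

# Two 2x2 grayscale "patches"; a viewpoint halfway between them
a = np.zeros((2, 2))
b = np.full((2, 2), 100.0)
mid = morph_patch([a, b], [0.5, 0.5])  # every pixel is 50.0
```

In the full method the weights would come from the query pose within the cell, and patches would first be warped along correspondences before blending.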
This paper provides a detailed comparison of traditional networking architectures and Software Defined Networking (SDN) approaches, with a focus on bandwidth optimization and traffic management. Simulations were condu...
As the trend to use the latest machine learning models to automate requirements engineering processes continues, security requirements classification is turning into the most researched field in the software engineering community. Literature studies have proposed numerous models for the classification of security requirements. However, adopting those models is constrained by the lack of essential datasets permitting the repetition and generalization of studies employing more advanced machine learning models. Moreover, most researchers focus only on the classification of requirements with security keywords. They did not consider other nonfunctional requirements (NFRs) directly or indirectly related to security. This has been identified as a significant research gap in security requirements engineering. The major objective of this study is to propose a security requirements classification model that categorizes security and other relevant security requirements. We use PROMISE_exp and DOSSPRE, the two most commonly used datasets in the software engineering community. The proposed methodology consists of two steps. In the first step, we analyze all the nonfunctional requirements and their relation with security requirements. We found 10 NFRs that have a strong relationship with security requirements. In the second step, we categorize those NFRs in the security requirements category. The proposed methodology is a hybrid model based on the Convolutional Neural Network (CNN) and Extreme Gradient Boosting (XGBoost) models. Finally, we evaluate the model by updating the requirement type column with a binary classification column in the dataset to classify the requirements into security and non-security classes. The model's performance is evaluated using four metrics: recall, precision, accuracy, and F1 score, with 20 and 28 epochs and a batch size of 32 for the PROMISE_exp and DOSSPRE datasets, achieving 87.3% and 85.3% accuracy, respectively. The proposed study shows an enhancement in metrics
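The four evaluation metrics named in the abstract (recall, precision, accuracy, and F1 score) are standard binary-classification quantities and can be computed directly from prediction counts. A minimal sketch with illustrative labels (not the paper's data or model):

```python
def classification_metrics(y_true, y_pred):
    """Compute recall, precision, accuracy, and F1 for binary labels
    (1 = security requirement, 0 = non-security requirement)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = (tp + tn) / len(y_true)
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"recall": recall, "precision": precision,
            "accuracy": accuracy, "f1": f1}

# Toy example (hypothetical labels, not from PROMISE_exp or DOSSPRE)
m = classification_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```

In the described pipeline, `y_pred` would come from the CNN+XGBoost hybrid's binary output over the updated requirement-type column.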
Video forgery is one of the most serious problems affecting the credibility and reliability of video content. Therefore, detecting video forgery presents a major challenge for researchers due to the diversity of forge...
We study the task of automated house design, which aims to automatically generate 3D houses from user requirements. However, in an automatic system, this is non-trivial due to the intrinsic complexity of house design: 1) the understanding of user requirements, where users can hardly provide high-quality requirements without any professional knowledge; 2) the design of the house plan, which mainly focuses on how to capture effective information from user requirements. To address the above issues, we propose an automatic house design framework, called auto-3D-house design (A3HD). Unlike previous works that consider user requirements in an unstructured way (e.g., natural language), we carefully design a structured list that divides the requirements into three parts (i.e., layout, outline, and style), which focus on the attributes of rooms, the outline of the building, and the style of decoration, respectively. Following the process of architects, we construct a bubble diagram (i.e., a graph) that covers the rooms' attributes and relations under the constraint of the outline. In addition, we take each outline as a combination of points and orders, ensuring that it can represent outlines with arbitrary shapes. Then, we propose a graph feature generation module (GFGM) to capture layout features from the bubble diagrams and an outline feature generation module (OFGM) for outline features. Finally, we render 3D houses according to the given style requirements in a rule-based manner. Experiments on two benchmark datasets (i.e., RPLAN and T3HM) demonstrate the effectiveness of our A3HD in terms of both quantitative and qualitative evaluation metrics.
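A structured requirement of this kind — rooms with attributes, pairwise adjacency as a bubble diagram, and an outline as an ordered point sequence — can be sketched as plain data structures. The names and attributes below are illustrative assumptions, not A3HD's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Room:
    name: str    # e.g. "living room"
    area: float  # desired area in m^2 (illustrative attribute)

@dataclass
class BubbleDiagram:
    """Rooms as nodes; adjacency ("these rooms connect") as
    undirected edges between room indices."""
    rooms: list = field(default_factory=list)
    edges: set = field(default_factory=set)

    def connect(self, a: int, b: int):
        self.edges.add(frozenset((a, b)))

# Outline as an ordered point sequence, so polygons of arbitrary
# shape (here an L-shaped footprint) are representable.
outline = [(0, 0), (10, 0), (10, 8), (4, 8), (4, 5), (0, 5)]

bd = BubbleDiagram()
bd.rooms += [Room("living room", 25.0),
             Room("kitchen", 10.0),
             Room("bedroom", 14.0)]
bd.connect(0, 1)  # living room adjoins kitchen
bd.connect(0, 2)  # living room adjoins bedroom
```

In the described framework, a graph like `bd` would feed the GFGM and the point list `outline` would feed the OFGM.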
Kidney disease (KD) is a gradually increasing global health concern. It is a chronic illness linked to higher rates of morbidity and mortality, a higher risk of cardiovascular disease and numerous other illnesses, and...
Over the past decades, integration of wireless sensor networks (WSNs) and computer vision (CV) technology has shown promising results in mitigating crop losses caused by wild animal attacks. Studies have demonstrated the effectiveness of these technologies in providing real-time monitoring and early detection of animal intrusions into agricultural fields. By deploying WSNs equipped with motion sensors and cameras, farmers can receive instant alerts when wild animals enter their fields, allowing for timely intervention to prevent crop damage. Furthermore, advancements in CV algorithms have made it possible to automatically detect and classify animal species, facilitating targeted response strategies. For example, sophisticated image processing techniques can differentiate between harmless birds and destructive mammals, allowing farmers to focus their efforts on deterring the most damaging species. Field trials and pilot projects implementing WSN-CV systems have reported significant reductions in crop losses attributed to wild animal raids. By leveraging data collected through sensor networks and analyzed using computer vision algorithms, farmers can make informed decisions regarding pest and insect management strategies. This data-driven approach has led to more efficient utilization of resources, such as targeted application of insecticides and pesticides, resulting in both economic and environmental benefits. Moreover, the integration of WSN-CV technology has enabled the development of innovative deterrent systems that leverage artificial intelligence and automation. These systems can deploy non-lethal methods, such as sound- or light-based repellents, to deter wild animals without causing harm to the environment or wildlife populations. Overall, the combination of wireless sensor networks and computer vision technology provides a promising solution to the long-standing issue of wild animal-related losses in agriculture. By harnessing the power of data and a
This study introduces the System for Calculating Open Data Re-identification Risk (SCORR), a framework for quantifying privacy risks in tabular datasets. SCORR extends conventional metrics such as k-anonymity, l-diversity, and t-closeness with novel extended metrics, including uniqueness-only risk, uniformity-only risk, correlation-only risk, and Markov Model risk, to identify a broader range of re-identification threats. It efficiently analyses event-level and person-level datasets with categorical and numerical attributes. Experimental evaluations were conducted on three publicly available datasets, OULAD, HID, and Adult, across multiple anonymisation levels. The results indicate that higher anonymisation levels do not always proportionally enhance privacy. While stronger generalisation improves k-anonymity, l-diversity and t-closeness vary significantly across datasets. Uniqueness-only and uniformity-only risk decreased with anonymisation, whereas correlation-only risk remained high. Meanwhile, Markov Model risk consistently remained high, indicating little to no improvement regardless of the anonymisation level. Scalability analysis revealed that conventional metrics and uniqueness-only risk incurred minimal computational overhead, remaining independent of dataset size. However, correlation-only and uniformity-only risk required significantly more processing time, while Markov Model risk incurred the highest computational cost. Despite this, all metrics remained unaffected by the number of quasi-identifiers, except t-closeness, which scaled linearly beyond a certain threshold. A usability evaluation comparing SCORR with the freely available ARX Tool showed that SCORR reduced the number of user interactions required for risk analysis by 59.38%, offering a more streamlined and efficient process. These results confirm SCORR's effectiveness in helping data custodians balance privacy protection and data utility, advancing privacy risk assessment beyond existing tools.
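As a concrete reference point for the conventional metrics SCORR extends, k-anonymity is simply the size of the smallest equivalence class over the chosen quasi-identifier columns. A minimal sketch on toy records (illustrative data, not SCORR's implementation or the OULAD/HID/Adult datasets):

```python
from collections import Counter

def k_anonymity(rows, quasi_identifiers):
    """k-anonymity of a tabular dataset: the size of the smallest
    group of rows sharing identical quasi-identifier values."""
    groups = Counter(
        tuple(row[q] for q in quasi_identifiers) for row in rows
    )
    return min(groups.values())

# Toy records with generalised quasi-identifiers
rows = [
    {"age": "30-39", "zip": "981**", "disease": "flu"},
    {"age": "30-39", "zip": "981**", "disease": "cold"},
    {"age": "40-49", "zip": "982**", "disease": "flu"},
    {"age": "40-49", "zip": "982**", "disease": "flu"},
    {"age": "40-49", "zip": "982**", "disease": "cold"},
]
k = k_anonymity(rows, ["age", "zip"])  # smallest group has 2 rows -> k = 2
```

Stronger generalisation of `age` and `zip` merges groups and raises k, which is the effect the anonymisation-level experiments above measure.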
Brain and central nervous system (CNS) cancers are the leading cause of cancer-related mortality, presenting significant diagnostic challenges due to their aggressive nature and diverse manifestations. While biopsies ...
In this paper, we introduce EMD-Based Hyperbolic Diffusion Distance (EMD-HDD), a new method for constructing a meaningful distance metric for hierarchical data with latent hierarchical structure. Our method relies on ...
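For intuition on the EMD component the method builds on: in one dimension, over equally spaced ordered bins, the Earth Mover's Distance reduces to the L1 distance between cumulative sums. A minimal sketch of that special case (not the paper's hierarchical hyperbolic construction):

```python
def emd_1d(p, q):
    """Earth Mover's Distance between two 1-D histograms of equal
    total mass on the same ordered, unit-spaced bins: the L1
    distance between their cumulative sums."""
    assert len(p) == len(q)
    total, cum = 0.0, 0.0
    for pi, qi in zip(p, q):
        cum += pi - qi      # running surplus of mass to move rightward
        total += abs(cum)   # that surplus travels one bin step
    return total

# Moving one unit of mass one bin to the right costs one unit of work
d = emd_1d([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
```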