Deep learning models have revolutionized numerous fields, yet their decision-making processes often remain opaque, earning them the characterization of "black-box" models due to their lack of transparency and comprehensibility. This opacity presents significant challenges to understanding the rationale behind their decisions, thereby impeding their interpretability, explainability, and reliability. This review examines 718 studies published between 2015 and 2024 in high-impact journals indexed in SCI, SCI-E, SSCI, and ESCI, providing a crucial reference for researchers investigating methodologies and techniques in related domains. In this exploration, we evaluate a wide array of interpretability and explainability (XAI) strategies, including visual and feature-based explanations, local approach-based techniques, and Bayesian methods. These strategies are assessed for their effectiveness and applicability using a comprehensive set of evaluation metrics. Moving beyond traditional analyses, we propose a novel taxonomy of XAI methods, addressing gaps in the literature and offering a structured classification that elucidates the roles and interactions of these methods. Moreover, we explore the intricate relationship between interpretability and explainability, examining potential conflicts and highlighting the necessity for interpretability in practical applications. Through detailed comparative analysis, we underscore the strengths and limitations of various XAI methods across different data types, ensuring a thorough understanding of their practical performance and real-world utility. The review also examines model robustness against adversarial attacks, emphasizing the critical importance of transparency, reliability, and ethical considerations in model development. A significant emphasis is placed on identifying and mitigating biases in deep learning systems, providing insights into future research directions that aim to enhance fairness and reduce bias. By thoroughl...
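Among the perturbation-based local explanation techniques that reviews like this survey, occlusion sensitivity is one of the simplest to state: mask each region of the input and record how much the model's output drops. A minimal sketch (not from the review itself; the `predict` callable, patch size, and toy model are illustrative assumptions):

```python
import numpy as np

def occlusion_sensitivity(image, predict, patch=4, baseline=0.0):
    """Local, perturbation-based explanation: score each patch by how much
    masking it changes the model's output for the original input."""
    h, w = image.shape
    base_score = predict(image)
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = baseline
            # A large drop in score means the region was important.
            heatmap[i // patch, j // patch] = base_score - predict(masked)
    return heatmap

# Toy "model": responds only to the mean intensity of the top-left quadrant.
model = lambda img: float(img[:4, :4].mean())
img = np.zeros((8, 8))
img[:4, :4] = 1.0
hm = occlusion_sensitivity(img, model)
```

On this toy model the heatmap is high only for the top-left patch, illustrating how the method localizes the evidence behind a prediction.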
Information-flow tracking is useful for preventing malicious code execution and sensitive information leakage. Unfortunately, the performance penalty of the currently available solutions is too high for real-world app...
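The information-flow tracking this snippet describes can be illustrated, at a much-simplified level, by dynamic taint propagation: values from untrusted sources carry a flag that follows them through operations, and sensitive sinks refuse flagged data. A toy sketch (the `Tainted` wrapper and `sink` check are illustrative assumptions, not any particular system's API):

```python
class Tainted:
    """Minimal dynamic taint tracking: wrap a value with a taint flag that
    propagates through operations derived from it."""
    def __init__(self, value, tainted=True):
        self.value, self.tainted = value, tainted

    def __add__(self, other):
        other_value = other.value if isinstance(other, Tainted) else other
        other_taint = other.tainted if isinstance(other, Tainted) else False
        # The result is tainted if either operand was.
        return Tainted(self.value + other_value, self.tainted or other_taint)

def sink(x):
    """Stand-in for a sensitive sink (SQL query, exec, network send):
    reject any data influenced by untrusted input."""
    if isinstance(x, Tainted) and x.tainted:
        raise PermissionError("tainted data reached a sensitive sink")
    return True

user_input = Tainted("1; DROP TABLE users")          # from the network: tainted
mixed = Tainted("SELECT ", tainted=False) + user_input  # taint propagates
```

Production trackers do this at the instruction or byte level, which is exactly where the performance penalty the snippet mentions comes from.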
In this work, we introduce a class of black-box (BB) reductions called committed-programming reductions (CPReds) in the random oracle model (ROM) and obtain the following interesting results: (1) we demonstrate that some well-known schemes, including the full-domain hash (FDH) signature (Eurocrypt 1996) and the Boneh-Franklin identity-based encryption (IBE) scheme (Crypto 2001), are provably secure under CPReds; (2) we prove that a CPRed associated with an instance-extraction algorithm implies a reduction in the quantum ROM (QROM). This unifies several recent results, including the security of the Gentry-Peikert-Vaikuntanathan IBE scheme by Zhandry (Crypto 2012) and the key encapsulation mechanism (KEM) variants using the Fujisaki-Okamoto transform by Jiang et al. (Crypto 2018) in the ***, we show that CPReds are incomparable to non-programming reductions (NPReds) and randomly-programming reductions (RPReds) formalized by Fischlin et al. (Asiacrypt 2010).
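For readers unfamiliar with FDH, one of the schemes the paper proves secure under CPReds, the scheme itself is simple: hash the message onto the full RSA domain, then apply the RSA trapdoor. A toy sketch with tiny hardcoded primes (illustration only, wildly insecure; the modulus and exponents are made-up toy values):

```python
import hashlib

# Toy RSA parameters: p = 61, q = 53, so N = 3233, lambda(N) = lcm(60, 52) = 780.
# e * d = 17 * 413 = 7021 = 9 * 780 + 1, so d inverts e mod lambda(N).
N, e, d = 3233, 17, 413

def H(msg: bytes) -> int:
    # "Full-domain" hash: map the message into Z_N (here via SHA-256 mod N,
    # modeled as a random oracle in the security proof).
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

def sign(msg: bytes) -> int:
    return pow(H(msg), d, N)        # RSA trapdoor applied to the hash

def verify(msg: bytes, sig: int) -> bool:
    return pow(sig, e, N) == H(msg)

sig = sign(b"hello")
```

The reductions discussed in the paper concern exactly how a security proof is allowed to program the oracle `H`.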
Solid-state batteries (SSBs) are at the forefront of next-generation energy storage technologies, promising enhanced safety, energy density, and longevity compared to traditional lithium-ion batteries. However, managi...
Bat Algorithm (BA) is a nature-inspired metaheuristic search algorithm designed to efficiently explore complex problem spaces and find near-optimal solutions. The algorithm is inspired by the echolocation behavior of ...
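The echolocation metaphor translates into frequency-tuned velocity updates relative to the current best solution, plus occasional local random walks around it. A minimal sketch of the standard update rules (the parameter values, bounds, and sphere objective are illustrative assumptions, not from this study):

```python
import numpy as np

def bat_algorithm(obj, dim=2, n_bats=20, iters=200, fmin=0.0, fmax=2.0,
                  loudness=0.5, pulse_rate=0.5, seed=0):
    """Minimal Bat Algorithm sketch: each bat updates its velocity using a
    random echolocation frequency, and some bats do a small random walk
    around the best solution; improvements are accepted with a probability
    tied to loudness."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_bats, dim))        # bat positions
    v = np.zeros((n_bats, dim))                  # bat velocities
    fitness = np.apply_along_axis(obj, 1, x)
    best = x[fitness.argmin()].copy()
    for _ in range(iters):
        freq = fmin + (fmax - fmin) * rng.random(n_bats)  # random frequencies
        v += (x - best) * freq[:, None]
        cand = x + v
        # Local search near the best solution with probability (1 - pulse_rate).
        walk = rng.random(n_bats) > pulse_rate
        cand[walk] = best + 0.1 * rng.standard_normal((walk.sum(), dim))
        cand_fit = np.apply_along_axis(obj, 1, cand)
        # Greedy acceptance, damped by loudness.
        accept = (cand_fit < fitness) & (rng.random(n_bats) < loudness)
        x[accept], fitness[accept] = cand[accept], cand_fit[accept]
        best = x[fitness.argmin()].copy()
    return best, fitness.min()

sphere = lambda p: float((p ** 2).sum())   # toy objective: minimum at origin
best, val = bat_algorithm(sphere)
```

On the sphere function this sketch converges close to the origin, which is all a metaheuristic of this kind promises: a near-optimal solution, not a guaranteed optimum.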
Despite recent significant advancements in Handwritten Document Recognition (HDR), the efficient and accurate recognition of text against complex backgrounds, diverse handwriting styles, and varying document layouts r...
In this paper, we introduce InternVL 1.5, an open-source multimodal large language model (MLLM), to bridge the capability gap between open-source and proprietary commercial models in multimodal understanding. We introduce three simple improvements. (1) Strong vision encoder: we explored a continuous learning strategy for the large-scale vision foundation model, InternViT-6B, boosting its visual understanding capabilities and enabling it to be transferred and reused across different LLMs. (2) Dynamic high-resolution: we divide images into tiles ranging from 1 to 40 of 448×448 pixels according to the aspect ratio and resolution of the input images, which supports up to 4K resolution input. (3) High-quality bilingual dataset: we carefully collected a high-quality bilingual dataset that covers common scenes and document images and annotated them with English and Chinese question-answer pairs, significantly enhancing performance in optical character recognition (OCR) and Chinese-related tasks. We evaluate InternVL 1.5 through a series of benchmarks and comparative studies. Compared to both open-source and proprietary commercial models, InternVL 1.5 shows competitive performance, achieving state-of-the-art results in 8 of 18 multimodal benchmarks. Code and models are available at https://***/OpenGVLab/InternVL.
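The abstract does not spell out how the tile grid is chosen, but the dynamic high-resolution idea can be sketched as picking the grid whose aspect ratio best matches the input, capped at 40 tiles of 448×448. The selection heuristic below is an assumption for illustration, not InternVL's actual code:

```python
def pick_tile_grid(width, height, tile=448, max_tiles=40):
    """Sketch of aspect-ratio-matched tiling: choose a (cols, rows) grid with
    cols * rows <= max_tiles whose aspect ratio is closest to the input's,
    each tile being tile x tile pixels."""
    aspect = width / height
    candidates = [(c, r)
                  for c in range(1, max_tiles + 1)
                  for r in range(1, max_tiles + 1)
                  if c * r <= max_tiles]
    cols, rows = min(candidates, key=lambda cr: abs(cr[0] / cr[1] - aspect))
    # The image would then be resized to the grid's pixel extent and sliced.
    return cols, rows, (cols * tile, rows * tile)

wide = pick_tile_grid(3840, 2160)    # a 16:9, 4K-class input
square = pick_tile_grid(448, 448)    # already one tile
```

A square input maps to a single tile, while a wide 4K input maps to a wide multi-tile grid, which is how a fixed-resolution vision encoder can cover high-resolution images.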
Nowadays, individuals and organizations are increasingly targeted by phishing attacks, so an accurate phishing detection system is required. Therefore, many phishing detection techniques have been proposed as well as ...
IoT devices rely on authentication mechanisms to render secure message *** data transmission, scalability, data integrity, and processing time have been considered challenging aspects for a system constituted by IoT *** application of physical unclonable functions (PUFs) ensures secure data transmission among the internet of things (IoT) devices in a simplified network with an efficient time-stamped *** paper proposes a secure, lightweight, cost-efficient reinforcement machine learning framework (SLCR-MLF) to achieve decentralization and security, thus enabling scalability, data integrity, and optimized processing time in IoT *** has been integrated into SLCR-MLF to improve the security of the cluster head node in the IoT platform during transmission by providing the authentication service for device-to-device *** IoT network gathers information of interest from multiple cluster members selected by the proposed *** addition, the software-defined secured (SDS) technique is integrated with SLCR-MLF to improve data integrity and optimize processing time in the IoT *** analysis shows that the proposed framework outperforms conventional methods regarding the network's lifetime, energy, secured data retrieval rate, and performance *** enabling the proposed framework, the number of residual nodes is reduced to 16%, energy consumption is reduced by up to 50%, the data retrieval rate improves by almost 30%, and network lifetime is improved by up to 1000 msec.
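The PUF-based device-to-device authentication idea can be sketched as a challenge-response protocol. Everything below is a toy stand-in: a real PUF derives responses from physical manufacturing variation rather than a stored secret, and the enrollment and authentication helpers are illustrative assumptions, not the paper's framework:

```python
import hashlib
import hmac
import os
import time

class SimulatedPUF:
    """Toy stand-in for a physical unclonable function: each device's
    'manufacturing variation' is modeled as a random secret, and the
    response to a challenge is an HMAC over it. A real PUF computes this
    from hardware, so no secret ever needs to be stored on the device."""
    def __init__(self):
        self._variation = os.urandom(32)   # stands in for physical randomness

    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._variation, challenge, hashlib.sha256).digest()

def enroll(puf, n=4):
    """Server-side enrollment: record challenge-response pairs (CRPs)."""
    return {os.urandom(16): puf.respond(os.urandom(0) or c)
            for c in [os.urandom(16) for _ in range(0)]} or \
           {c: puf.respond(c) for c in [os.urandom(16) for _ in range(n)]}

def authenticate(puf, crps):
    """One time-stamped round: the server issues a recorded challenge and
    checks that the device reproduces the enrolled response."""
    challenge, expected = next(iter(crps.items()))
    stamp = time.time()                    # timestamp the exchange
    ok = hmac.compare_digest(puf.respond(challenge), expected)
    return ok, stamp

device = SimulatedPUF()
table = enroll(device)
ok, _ = authenticate(device, table)
clone = SimulatedPUF()                     # a clone lacks the original's variation
cloned_ok, _ = authenticate(clone, table)
```

In a deployed system each challenge would be used only once, which is what makes the exchange resistant to replay.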
Software quality prediction is used at various stages of projects. There are several metrics that provide the quality measure with respect to different types of software. In this study, defect density is used as the f...
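Defect density, the metric this study builds on, is conventionally reported as confirmed defects per thousand lines of code (KLOC). A minimal sketch:

```python
def defect_density(defects: int, loc: int) -> float:
    """Defect density: confirmed defects per thousand lines of code (KLOC)."""
    if loc <= 0:
        raise ValueError("lines of code must be positive")
    return defects / (loc / 1000)

# Example: a module with 12 confirmed defects in 8,000 lines of code
dd = defect_density(12, 8000)   # 1.5 defects per KLOC
```

Comparing this figure across modules or releases is what makes it usable as a prediction target.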