Artificial intelligence (AI) is revolutionizing various sectors, including science, technology, industry, and daily life [1,2]. One key area where AI can make a significant impact is material design, which is crucial for advancing technologies such as energy storage and catalysis [3,4].
W-type barium-nickel ferrite (BaNi_(2)Fe_(16)O_(27)) is a highly promising material for electromagnetic wave (EMW) absorption because of its magnetic loss capability for EMW, low cost, large-scale production potential, high-temperature resistance, and excellent chemical stability. However, the poor dielectric loss of magnetic ferrites hampers their utilization, hindering enhancement of their EMW-absorption performance. Developing efficient strategies that improve the EMW-absorption performance of ferrites is highly desired but remains challenging. Herein, an efficient strategy of substituting Ba^(2+) with rare-earth La^(3+) in W-type ferrite was proposed for the preparation of novel La-substituted ferrites (Ba_(1-x)La_(x)Ni_(2)Fe_(15.4)O_(27)). The influences of La^(3+) substitution on the ferrites' EMW-absorption performance and the dissipative mechanism toward EMW were systematically explored and analyzed. La^(3+) efficiently induced lattice defects, enhanced defect-induced polarization, and slightly reduced the ferrites' bandgap, enhancing the dielectric properties of the ferrites. La^(3+) also enhanced the ferromagnetic resonance loss and strengthened magnetic loss. These effects considerably improved the EMW-absorption performance of Ba_(1-x)La_(x)Ni_(2)Fe_(15.4)O_(27) compared with pure W-type ferrite. At x = 0.2, the best EMW-absorption performance was achieved, with a minimum reflection loss of -55.6 dB and an effective absorption bandwidth (EAB) of 3.44 GHz.
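The reflection-loss and EAB figures quoted in such absorber studies are conventionally computed from the transmission-line model of a metal-backed single-layer absorber; as a reference (the relations are standard, not stated in the abstract itself):

```latex
% Input impedance of a metal-backed absorber layer of thickness d
Z_{\mathrm{in}} = Z_0 \sqrt{\frac{\mu_r}{\varepsilon_r}}
  \tanh\!\left( j \, \frac{2\pi f d}{c} \sqrt{\mu_r \varepsilon_r} \right)

% Reflection loss in decibels
\mathrm{RL}(\mathrm{dB}) = 20 \log_{10}
  \left| \frac{Z_{\mathrm{in}} - Z_0}{Z_{\mathrm{in}} + Z_0} \right|
```

Here μ_r and ε_r are the complex relative permeability and permittivity of the ferrite, f the frequency, d the absorber thickness, c the speed of light, and Z_0 the impedance of free space; the effective absorption bandwidth is conventionally the band where RL ≤ -10 dB (≥90% absorption).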
The pansharpening of high-resolution panchromatic (Pan) image and low-resolution multispectral (MS) images represents an important task in the remote sensing field. It allows the joint exploitation of the information ...
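As a concrete illustration of the pansharpening task itself (not the method of this paper), a minimal component-substitution scheme, the classical Brovey transform, scales each upsampled MS band by the ratio of the Pan image to a crude MS intensity:

```python
import numpy as np

def brovey_pansharpen(pan, ms_up, eps=1e-8):
    """Brovey-transform pansharpening sketch.

    pan   : (H, W) high-resolution panchromatic image
    ms_up : (H, W, B) multispectral bands already upsampled to the Pan grid
    """
    # crude intensity component: per-pixel mean over spectral bands
    intensity = ms_up.mean(axis=-1, keepdims=True)
    # inject Pan spatial detail by rescaling each band; band ratios
    # (the spectral information) are preserved
    return ms_up * (pan[..., None] / (intensity + eps))
```

By construction, the per-pixel band average of the result equals the Pan image, so the spatial detail comes from Pan while the relative spectral content comes from the MS bands.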
The detection and characterization of human veins using infrared (IR) image processing have gained significant attention due to their potential applications in biometric identification, medical diagnostics, and vein-based authentication systems. This paper presents a low-cost approach for the automatic detection and characterization of human veins from IR images. The proposed method uses image processing techniques including segmentation, feature extraction, and pattern recognition algorithms. Initially, the IR images are preprocessed to enhance vein structures and reduce noise. Subsequently, a CLAHE (contrast-limited adaptive histogram equalization) algorithm is employed to extract vein regions based on their unique IR absorption properties. Features such as vein thickness, orientation, and branching patterns are extracted using mathematical morphology and directional filters. Finally, a classification framework is implemented to categorize veins and distinguish them from surrounding tissues or artifacts. A setup based on a Raspberry Pi was used. Experimental results on IR images demonstrate the effectiveness and robustness of the proposed approach in accurately detecting and characterizing human veins. The developed system shows promise for integration into applications requiring reliable and secure identification based on vein patterns. Our work provides an effective and low-cost solution for nursing staff in low- and middle-income countries to perform safe and accurate venipuncture.
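The CLAHE step above is the workhorse for vein contrast enhancement. The paper's pipeline is not published as code, but the core idea can be sketched in plain NumPy as a simplified per-tile version; production implementations (e.g. OpenCV's `cv2.createCLAHE`) additionally interpolate between neighboring tile mappings to avoid block seams:

```python
import numpy as np

def clahe_tile(tile, clip_limit=0.02, nbins=256):
    """Contrast-limited histogram equalization of one image tile."""
    hist, _ = np.histogram(tile, bins=nbins, range=(0, 256))
    # clip each bin to a limit and redistribute the excess uniformly,
    # which bounds the contrast amplification (the "CL" in CLAHE)
    limit = max(1, int(clip_limit * tile.size))
    excess = np.clip(hist - limit, 0, None).sum()
    hist = np.minimum(hist, limit) + excess // nbins
    # cumulative distribution -> intensity mapping into [0, 255]
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255.0
    return cdf[tile.astype(np.int64)].astype(np.uint8)

def clahe(img, tiles=(8, 8), clip_limit=0.02):
    """Simplified CLAHE: equalize each tile independently (no blending)."""
    out = np.empty_like(img)
    h, w = img.shape
    ys = np.linspace(0, h, tiles[0] + 1, dtype=int)
    xs = np.linspace(0, w, tiles[1] + 1, dtype=int)
    for y0, y1 in zip(ys[:-1], ys[1:]):
        for x0, x1 in zip(xs[:-1], xs[1:]):
            out[y0:y1, x0:x1] = clahe_tile(img[y0:y1, x0:x1], clip_limit)
    return out
```

On a low-contrast IR patch this stretches local intensity range tile by tile, which is what makes faint vein structures separable in the subsequent segmentation step.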
Artificial intelligence (AI) systems surpass certain human intelligence abilities in a statistical sense as a whole, but are not yet the true realization of these human intelligence abilities and behaviors. There are differences, and even contradictions, between the cognition and behavior of AI systems and humans. With the goal of achieving general AI, this study reviews the role of cognitive science in inspiring the development of the three mainstream academic branches of AI, based on the three-layer framework proposed by David Marr, and explores and analyzes the limitations of the current development of AI. The differences and inconsistencies between the cognition mechanisms of the human brain and the computation mechanisms of AI systems are analyzed; they are found to be the cause of the differences and contradictions between the cognition and behavior of AI systems and humans. Additionally, eight important research directions for brain-inspired AI research, together with their key scientific issues, are proposed: highly imitated bionic information processing; a large-scale deep learning model that balances structure and function; multi-granularity joint problem solving bidirectionally driven by data and knowledge; AI models that simulate specific brain structures; a collaborative processing mechanism with the physical separation of perceptual processing and interpretive analysis; embodied intelligence that integrates brain cognition mechanisms and AI computation mechanisms; intelligence simulation from individual intelligence to group intelligence (social intelligence); and AI-assisted brain cognitive intelligence.
Recently, there has been an upsurge of activity in image-based non-photorealistic rendering (NPR), and in particular portrait image stylisation, due to the advent of neural style transfer (NST). However, the state of performance evaluation in this field is poor, especially compared to the norms in the computer vision and machine learning communities. Unfortunately, the task of evaluating image stylisation is thus far not well defined, since it involves subjective, perceptual, and aesthetic judgements. To make progress towards a solution, this paper proposes a new structured, three-level benchmark dataset for the evaluation of stylised portrait images. Rigorous criteria were used for its construction, and its consistency was validated by user studies. Furthermore, a new methodology has been developed for evaluating portrait stylisation algorithms, which makes use of the different benchmark levels as well as annotations provided by user studies regarding the characteristics of the portraits. We perform evaluation for a wide variety of image stylisation methods (both portrait-specific and general-purpose, and both traditional NPR approaches and NST) using the new benchmark dataset.
Background: Precise estimation of current and future comorbidities of patients with cardiovascular disease is an important factor in prioritizing continuous physiological monitoring and new therapies. Machine learning (ML) models have shown satisfactory performance in short-term mortality prediction in patients with heart disease, whereas their utility in long-term predictions is unclear. This study aimed to investigate the performance of tree-based ML models on long-term mortality prediction and the effect of two recently introduced biomarkers on long-term mortality. Methods: This study used publicly available data from the Collaboration Center of Health Information Application at the Ministry of Health and Welfare, Taiwan. The collected data were from patients admitted to the cardiac care unit for acute myocardial infarction (AMI) between November 2003 and September 2004. Mortality data were collected and analyzed up to December 2018. Medical records were used to gather demographic and clinical data, including age, gender, body mass index, percutaneous coronary intervention status, and comorbidities such as hypertension, dyslipidemia, ST-segment elevation myocardial infarction, and non-ST-segment elevation myocardial infarction. Using the data collected from 139 patients with AMI, from medical and demographic records as well as two recently introduced biomarkers, brachial pre-ejection period (bPEP) and brachial ejection time (bET), we investigated the performance of advanced ensemble tree-based ML algorithms (random forest, AdaBoost, and XGBoost) to predict all-cause mortality within 14 years. A nested cross-validation was performed to evaluate and compare the performance of our developed models precisely with that of conventional logistic regression (LR) as the baseline method. Results: The developed ML models achieved significantly better performance compared to the baseline LR (C-statistic: 0.80 for random forest, 0.79 for AdaBoost, and 0.78 for XGBoost, vs. 0.77 for LR) (P_RF < 0.001, P_AdaBoost < 0.001, a
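Nested cross-validation, as used in this study, keeps hyperparameter selection (inner loop) strictly separate from performance estimation (outer loop), which matters for an honest comparison against the LR baseline. A self-contained NumPy sketch of the structure, using a toy decision stump in place of the paper's ensemble models and feature choice standing in for the hyperparameter (all names here are illustrative):

```python
import numpy as np

def stump_predict(x, thr):
    """Toy one-feature 'tree': predict class 1 above the threshold."""
    return (x > thr).astype(int)

def fit_stump(x, y):
    """Pick the midpoint threshold with the best training accuracy."""
    vals = np.unique(x)
    thrs = (vals[:-1] + vals[1:]) / 2 if len(vals) > 1 else vals
    accs = [(stump_predict(x, t) == y).mean() for t in thrs]
    return thrs[int(np.argmax(accs))]

def kfold_indices(n, k, rng):
    return np.array_split(rng.permutation(n), k)

def nested_cv(X, y, outer_k=5, inner_k=3, seed=0):
    """Outer loop estimates performance; inner loop selects the feature."""
    rng = np.random.default_rng(seed)
    outer_scores = []
    for fold in kfold_indices(len(y), outer_k, rng):
        mask = np.zeros(len(y), dtype=bool)
        mask[fold] = True
        Xtr, ytr, Xte, yte = X[~mask], y[~mask], X[mask], y[mask]
        # inner CV: score each candidate feature on training data only
        feat_scores = []
        for f in range(X.shape[1]):
            scores = []
            for infold in kfold_indices(len(ytr), inner_k, rng):
                val = np.zeros(len(ytr), dtype=bool)
                val[infold] = True
                thr = fit_stump(Xtr[~val, f], ytr[~val])
                scores.append((stump_predict(Xtr[val, f], thr) == ytr[val]).mean())
            feat_scores.append(np.mean(scores))
        best = int(np.argmax(feat_scores))   # selected "hyperparameter"
        thr = fit_stump(Xtr[:, best], ytr)   # refit on the full training fold
        outer_scores.append((stump_predict(Xte[:, best], thr) == yte).mean())
    return float(np.mean(outer_scores))
```

Because the test fold never influences feature selection or threshold fitting, the outer-loop average is an approximately unbiased estimate of generalization performance.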
This paper introduces a one-stage deep uncalibrated photometric stereo (UPS) network, namely Fourier Uncalibrated Photometric Stereo Network (FUPS-Net), for non-Lambertian objects under unknown light directions. It departs from traditional two-stage methods that first explicitly learn lighting information and then estimate surface normals. Two-stage methods were deployed because the interplay of lighting with shading cues presents challenges for directly estimating surface normals without explicit lighting information. However, these two-stage networks are disjointed and separately trained, so the error in explicit light calibration propagates to the second stage and cannot be eliminated. In contrast, the proposed FUPS-Net utilizes an embedded Fourier transform network to implicitly learn lighting features by decomposing inputs, rather than employing a disjointed light estimation network. Our approach is motivated by observations in the Fourier domain of photometric stereo images: lighting information is mainly encoded in amplitudes, while geometry information is mainly associated with phases. Leveraging this property, our method "decomposes" geometry and lighting in the Fourier domain as guidance, via the proposed Fourier Embedding Extraction (FEE) block and Fourier Embedding Aggregation (FEA) block, which generate lighting and geometry features for the FUPS-Net to implicitly resolve the geometry-lighting ambiguity. Furthermore, we propose a Frequency-Spatial Weighted (FSW) block that assigns weights to combine features extracted from the frequency domain with those from the spatial domain to enhance surface reconstruction. FUPS-Net overcomes the limitations of two-stage UPS methods, offering better training stability and a concise end-to-end structure while avoiding the accumulated errors of disjointed networks. Experimental results on synthetic and real datasets demonstrate the superior performance of our approach, and its simpler training setup, potentially
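The amplitude/phase observation FUPS-Net builds on is easy to verify numerically: a 2D FFT splits an image into an amplitude spectrum (where, per the paper, lighting is mainly encoded) and a phase spectrum (mainly geometry), and the image is exactly recoverable from the pair. A minimal sketch of that decomposition (not the FEE/FEA blocks themselves):

```python
import numpy as np

def fourier_decompose(img):
    """Split a 2D image into amplitude and phase spectra."""
    F = np.fft.fft2(img)
    return np.abs(F), np.angle(F)

def fourier_recompose(amplitude, phase):
    """Rebuild an image from amplitude and phase (exact for real inputs)."""
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))
```

Recombining the amplitude of one photometric-stereo frame with the phase of another is the kind of experiment that motivates treating amplitude as a lighting cue and phase as a geometry cue.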
Recent advancements in 3D Gaussian Splatting (3D-GS) have demonstrated the potential of using 3D Gaussian primitives for high-speed, high-fidelity, and cost-efficient novel view synthesis from continuously calibrate...
ISBN (digital): 9798331513733
ISBN (print): 9798331513740
The challenges posed by nonlinearities in industrial systems necessitate innovative techniques that outperform the limitations of traditional methods such as principal component analysis (PCA). While Kernel Principal Component Analysis (KPCA) offers a robust solution for handling nonlinear data, its computational requirements are a significant issue, especially for large datasets. In this work, we propose a novel technique, namely reduced kernel principal component analysis-based spectral clustering (RKPCA_SpC), to monitor and detect faults in the benchmark Tennessee Eastman process. The suggested approach addresses the complexity associated with KPCA by reducing data size during the model training phase. This reduction involves retaining only the principal components, preserving informative features, and selecting pertinent samples without compromising the original data's content. The efficacy of the proposed method is evaluated through key performance metrics, including false alarm rate (FAR), missed detection rate (MDR), detection time delay (DTD), and computation time (CT). Additionally, gained execution time (GET), gained storage space (GSP), and loss function (LF) are considered, providing a comprehensive assessment of the developed paradigms' effectiveness. The results demonstrate the promising capabilities of our proposed scheme.
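A rough NumPy sketch of the reduced-KPCA idea follows. Random subsampling stands in for the paper's spectral-clustering sample selection, and only an SPE-style (squared prediction error) monitoring statistic is shown; all names are illustrative, not the authors' implementation:

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class ReducedKPCA:
    """KPCA fit on a reduced training set, with an SPE monitoring statistic."""

    def __init__(self, n_components=5, gamma=0.1, n_keep=100, seed=0):
        self.n_components, self.gamma = n_components, gamma
        self.n_keep, self.seed = n_keep, seed

    def fit(self, X):
        rng = np.random.default_rng(self.seed)
        # reduction step: keep a subset of samples (RKPCA_SpC would pick
        # representatives via spectral clustering; random choice is a stand-in)
        keep = rng.choice(len(X), min(self.n_keep, len(X)), replace=False)
        self.Xr = X[keep]
        self.K = rbf_kernel(self.Xr, self.Xr, self.gamma)
        n = len(self.Xr)
        J = np.ones((n, n)) / n
        Kc = self.K - J @ self.K - self.K @ J + J @ self.K @ J  # centering
        w, V = np.linalg.eigh(Kc)
        order = np.argsort(w)[::-1][: self.n_components]
        lam = np.maximum(w[order], 1e-12)
        self.alpha = V[:, order] / np.sqrt(lam)  # unit-norm feature-space PCs
        return self

    def spe(self, X):
        """Feature-space reconstruction error; large values signal a fault."""
        Kt = rbf_kernel(X, self.Xr, self.gamma)
        row = Kt.mean(axis=1, keepdims=True)
        col = self.K.mean(axis=0)
        tot = self.K.mean()
        Ktc = Kt - row - col[None, :] + tot   # center test kernel consistently
        scores = Ktc @ self.alpha
        # centered k(x, x) for an RBF kernel, where k(x, x) = 1
        kxx = 1.0 - 2.0 * row.ravel() + tot
        return kxx - (scores ** 2).sum(axis=1)
```

Training only on the reduced sample set shrinks the kernel matrix from N×N to n_keep×n_keep, which is where the gains in execution time and storage reported above come from.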