Many phenomena in nature and technology are associated with the filtration of suspensions and colloids in porous media. Two main types of particle deposition, namely cake filtration at the inlet and deep bed filtration throughout the entire porous medium, are studied by different models. A unified approach for the transport and deposition of particles based on the deep bed filtration model is proposed. A variable suspension flow rate, proportional to the number of free pores at the inlet of the porous medium, is considered. To model cake filtration, this flow rate is introduced into the mass balance equation of deep bed filtration. For cake filtration without deposit erosion, the suspension flow rate decreases to zero, and the suspension does not penetrate deep into the porous medium. In the case of cake filtration with erosion, the suspension flow rate is nonzero, and the deposit is distributed throughout the entire porous medium. An exact solution is obtained for a constant filtration function. The method of characteristics is used to construct the asymptotics of the concentration front of suspended and retained particles for a filtration function in general form. Explicit formulae are obtained for a linear filtration function. The properties of these solutions are studied in detail.
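The deep bed filtration system can be illustrated numerically. Below is a minimal sketch, assuming the standard dimensionless form of the model with an illustrative linear filtration function Λ(s) = λ0(1 − s); the coefficient λ0, the grid sizes, and the unit inlet concentration are assumptions for the example, not values from the abstract:

```python
import numpy as np

# Classical deep-bed filtration system (dimensionless, illustrative):
#   dc/dt + dc/dx = -Lambda(s) * c   (suspended concentration c)
#   ds/dt         =  Lambda(s) * c   (retained concentration s)
# with an assumed linear filtration function Lambda(s) = lam0 * (1 - s),
# solved with a first-order upwind scheme.

lam0 = 1.0                    # illustrative filtration coefficient
nx, nt = 200, 400
dx, dt = 1.0 / nx, 0.5 / nt   # CFL number dt/dx = 0.25 <= 1

c = np.zeros(nx + 1)          # suspended particle concentration
s = np.zeros(nx + 1)          # retained (deposited) concentration

for _ in range(nt):
    lam = lam0 * (1.0 - s)
    c_new = c.copy()
    # upwind transport plus capture sink
    c_new[1:] = c[1:] - dt / dx * (c[1:] - c[:-1]) - dt * lam[1:] * c[1:]
    c_new[0] = 1.0            # unit injected concentration at the inlet
    s = s + dt * lam * c      # deposition grows where suspension is present
    c = c_new
```

The retained-particle profile `s` decays monotonically with depth, and the suspended-concentration front travels at the (dimensionless) carrier velocity, matching the qualitative picture the method of characteristics gives for this system.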
Data centers are often equipped with multiple cooling units. Here, an aquifer thermal energy storage (ATES) system has been shown to be efficient. However, the usage of hot- and cold-water wells in the ATES must be balanced for legal and environmental reasons. Reinforcement Learning has been proven to be a useful tool for optimizing the cooling operation at data centers. Nonetheless, since cooling demand changes continuously, balancing the ATES usage on a yearly basis imposes an additional challenge in the form of a delayed reward. To overcome this, we formulate a return decomposition, Cool-RUDDER, which relies on simple domain knowledge and needs no training. We trained a proximal policy optimization agent to keep server temperatures steady while minimizing operational costs. Comparing the Cool-RUDDER reward signal to other ATES-associated rewards, all models kept the server temperatures steady at around 30 °C. An optimal ATES balance was defined to be 0%, and a yearly imbalance of −4.9% with a confidence interval of [−6.2, −3.8]% was achieved for the Cool 2.0 reward. This outperformed a baseline ATES-associated reward of 0 at −16.3% with a confidence interval of [−17.1, −15.4]%, as well as all other ATES-associated rewards. However, the improved ATES balance comes with a 12.5% higher energy consumption cost when comparing the relative cost of the Cool 2.0 reward to the zero reward, resulting in a trade-off. Moreover, the method comes with limited requirements and is applicable to any long-term problem satisfying a linear state-transition system.
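The delayed-reward problem can be made concrete with a toy return redistribution. The sketch below is a hypothetical helper, not the paper's exact Cool-RUDDER formulation: it spreads a terminal imbalance penalty (known only at year end) back over the individual steps, in proportion to each step's contribution, using only domain knowledge and no training:

```python
import numpy as np

def redistribute_return(flows):
    """Toy return decomposition for a delayed terminal reward.

    flows[t] is the signed net hot-well usage at step t; the terminal
    reward penalises the absolute yearly imbalance abs(sum(flows)).
    Each step receives a share of that reward proportional to the
    magnitude of its own contribution.
    """
    flows = np.asarray(flows, dtype=float)
    terminal_reward = -abs(flows.sum())       # delayed end-of-year reward
    total = np.abs(flows).sum()
    if total == 0.0:
        return np.zeros_like(flows)           # perfectly balanced year
    weights = np.abs(flows) / total           # per-step contribution shares
    return terminal_reward * weights

# Four steps with net imbalance 2.0 -> terminal reward -2.0,
# redistributed as immediate per-step rewards that sum back to -2.0.
r = redistribute_return([1.0, -0.5, 2.0, -0.5])
```

Because the redistributed rewards sum to the original delayed reward, the optimal policy is unchanged; the agent simply receives the learning signal at the steps that caused the imbalance.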
Foundation models (FMs) [1] have revolutionized software development and become the core components of large software systems. This paradigm shift, however, demands a fundamental re-imagining of software engineering theories and methodologies [2]. Instead of replacing existing software modules implemented by symbolic logic, incorporating FMs' capabilities to build software systems requires entirely new modules that leverage the unique capabilities of FMs. While FMs excel at handling uncertainty, recognizing patterns, and processing unstructured data, we need new engineering theories that support the paradigm shift from explicitly programming and maintaining user-defined symbolic logic to creating rich, expressive requirements that FMs can accurately perceive and implement.
Graph Neural Networks (GNNs) have emerged as a widely used and effective method across various domains for learning from graph data. Despite the abundance of GNN variants, many struggle with effectively propagating me...
Alzheimer's disease (AD) is a significant challenge in modern healthcare, with early detection and accurate staging remaining critical priorities for effective intervention. While Deep Learning (DL) approaches have shown promise in AD diagnosis, existing methods often struggle with the issues of precision, interpretability, and class imbalance. This study presents a novel framework that integrates DL with several eXplainable Artificial Intelligence (XAI) techniques, in particular attention mechanisms, Gradient-Weighted Class Activation Mapping (Grad-CAM), and Local Interpretable Model-Agnostic Explanations (LIME), to improve both model interpretability and feature attribution. The study evaluates four different DL architectures (ResMLP, VGG16, Xception, and a Convolutional Neural Network (CNN) with an attention mechanism) on a balanced dataset of 3714 MRI brain scans from patients aged 70 and above. The proposed CNN with attention model achieved superior performance, demonstrating 99.18% accuracy on the primary dataset and 96.64% accuracy on the ADNI dataset, significantly advancing the state-of-the-art in AD diagnosis. The ability of the framework to provide comprehensive, interpretable results through multiple visualization techniques while maintaining high classification accuracy represents a significant advancement in the computational diagnosis of AD, potentially enabling more accurate and earlier intervention in clinical settings.
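Of the XAI techniques mentioned, Grad-CAM is the most mechanical to illustrate. The sketch below is a NumPy-only version of the standard Grad-CAM computation, assuming the last-conv-layer feature maps and their class-score gradients have already been extracted from a network (the extraction step is framework-specific and omitted): channel weights come from global-average-pooled gradients, followed by a weighted sum of the maps and a ReLU.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Standard Grad-CAM heatmap from precomputed tensors.

    feature_maps: (K, H, W) activations of the last conv layer.
    gradients:    (K, H, W) d(class score)/d(activation).
    Returns an (H, W) heatmap normalised to [0, 1].
    """
    # alpha_k: per-channel importance via global average pooling of gradients
    alpha = gradients.mean(axis=(1, 2))
    # ReLU over the weighted combination keeps only class-positive evidence
    cam = np.maximum((alpha[:, None, None] * feature_maps).sum(axis=0), 0.0)
    if cam.max() > 0:
        cam /= cam.max()   # normalise for visual overlay on the MRI slice
    return cam

# Illustrative call with random tensors standing in for real activations.
rng = np.random.default_rng(0)
heat = grad_cam(rng.random((8, 7, 7)), rng.random((8, 7, 7)))
```

In practice the (H, W) heatmap is upsampled to the input resolution and overlaid on the scan to show which brain regions drove the AD classification.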
Acute Bilirubin Encephalopathy (ABE) is a significant threat to neonates, leading to disability and high mortality rates. Diagnosing and treating ABE promptly is important to prevent further complications and long-term damage. Previous studies have explored ABE detection; however, they often face limitations in classification due to reliance on a single modality of Magnetic Resonance Imaging (MRI). To tackle this problem, the authors propose a Tri-M2MT model for precise ABE detection using tri-modality MRI scans. The scans include T1-weighted imaging (T1WI), T2-weighted imaging (T2WI), and apparent diffusion coefficient maps to obtain in-depth information. Initially, the tri-modality MRI scans are collected and preprocessed using an Advanced Gaussian Filter for noise reduction and Z-score normalisation for data standardisation. An Advanced Capsule Network was utilised to extract relevant features, with the Snake Optimization Algorithm selecting optimal features based on feature correlation, with the aim of minimising complexity and enhancing detection accuracy. Furthermore, a multi-transformer approach was used for feature fusion and to identify feature correlations effectively. Finally, accurate ABE diagnosis is achieved through the utilisation of a SoftMax classifier. The performance of the proposed Tri-M2MT model is evaluated across various metrics, including accuracy, specificity, sensitivity, F1-score, and ROC curve analysis, and the proposed methodology provides better performance compared to existing methodologies.
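The preprocessing stage is simple to sketch. Below, a plain Gaussian filter stands in for the paper's "Advanced Gaussian Filter" (the exact variant is not specified in the abstract), followed by Z-score normalisation; the `preprocess` helper and its `sigma` parameter are assumptions for the example:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess(volume, sigma=1.0):
    """Denoise an MRI volume, then z-score normalise it.

    A standard Gaussian filter is used here as a stand-in for the
    paper's Advanced Gaussian Filter; sigma is an assumed parameter.
    """
    smoothed = gaussian_filter(np.asarray(volume, dtype=float), sigma=sigma)
    mu, sd = smoothed.mean(), smoothed.std()
    # z-score: zero mean, unit variance (guard against a constant volume)
    return (smoothed - mu) / sd if sd > 0 else smoothed - mu

# Illustrative call on a random stand-in for a T1WI/T2WI/ADC volume.
vol = np.random.default_rng(0).random((16, 16, 16))
out = preprocess(vol)
```

The same function would be applied to each of the three modalities before feature extraction, so that all inputs share a common intensity scale.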
Powder crystallography is the experimental science of determining the structure of molecules provided in crystalline-powder form by analyzing their x-ray diffraction (XRD) patterns. As many materials are readily available as crystalline powder, powder crystallography is of growing usefulness to many fields. However, powder crystallography does not have an analytically known solution, and therefore the structural inference typically involves a laborious process of iterative design, structural refinement, and the domain knowledge of skilled experts. A key obstacle to fully automating the inference process computationally has been formulating the problem in an end-to-end quantitative form that is suitable for machine learning, while capturing the ambiguities around molecule orientation, symmetries, and reconstruction resolution. Here we present an ML approach for structure determination from powder diffraction data. It works by estimating the electron density in a unit cell using a variational coordinate-based deep neural network. We demonstrate the approach on computed powder x-ray diffraction (PXRD) patterns, along with partial chemical composition information, as input. When evaluated on theoretically simulated data for the cubic and trigonal crystal systems, the system achieves up to 93.4% average similarity (as measured by the structural similarity index) with the ground truth on unseen materials, both with known and partially-known chemical composition information, showing great promise for successful structure solution even from degraded and incomplete input data. The approach does not presuppose a crystalline structure and is readily extended to other situations, such as nanomaterials and textured samples, paving the way to the reconstruction of yet unresolved nanostructures.
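The structural similarity index used for evaluation can be sketched in its simplified global (single-window) form. The full SSIM is computed over local windows and averaged; the constants below are the conventional defaults, and the comparison of two density maps here is purely illustrative:

```python
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    """Simplified global SSIM between two maps normalised to [0, 1].

    Single-window form: the production metric averages this quantity
    over sliding local windows. c1, c2 are the standard stabilisers.
    """
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

# Identical electron-density maps score 1; an inverted map scores lower.
rho = np.random.default_rng(0).random((8, 8))
s_same = ssim_global(rho, rho)
s_diff = ssim_global(rho, 1.0 - rho)
```

Unlike a plain pixelwise error, SSIM compares luminance, contrast, and structure jointly, which makes it more tolerant of the global intensity ambiguities inherent in density reconstruction.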
In this work, a novel methodological approach to multi-attribute decision-making problems is developed and the notion of Heptapartitioned Neutrosophic Set Distance Measures (HNSDM) is introduced. By averaging the Pent...
This study presents the design of a modified attributed control chart based on a double sampling (DS) np chart applied in combination with generalized multiple dependent state (GMDS) sampling to monitor the mean life of the product based on a time-truncated life test employing the Weibull distribution. The control chart developed supports the examination of the mean lifespan variation for a particular product in the process of manufacturing. Three control limit levels are used: the warning control limit, inner control limit, and outer control limit. Accordingly, they enhance the capability for variation detection. A genetic algorithm can be used for optimization during the in-control process, whereby the optimal parameters can be established for the proposed control chart. The control chart performance is assessed using the average run length, while the influence of the model parameters upon the control chart solution is assessed via sensitivity analysis based on an orthogonal experimental design with multiple linear regression. A comparative study was conducted based on the out-of-control average run length, in which the developed control chart offered greater sensitivity in the detection of process shifts while making use of smaller samples on average than is the case for existing control charts. Finally, to exhibit the utility of the developed control chart, this paper presents its application using simulated data with parameters drawn from the real set of data.
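The average-run-length criterion can be illustrated with a Monte Carlo sketch. The code below simulates a plain single-sampling np chart for a time-truncated Weibull life test, not the full DS/GMDS design of the study; the Weibull shape and scale, truncation time, sample size, and three-sigma limit are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
shape, scale, t0, n = 2.0, 1.0, 0.5, 50        # illustrative Weibull test
p = 1 - np.exp(-(t0 / scale) ** shape)         # P(item fails before time t0)
ucl = n * p + 3 * np.sqrt(n * p * (1 - p))     # Shewhart-style upper limit

run_lengths = []
for _ in range(500):                           # simulated in-control runs
    rl = 0
    while rl < 100_000:                        # safety cap on run length
        rl += 1
        d = rng.binomial(n, p)                 # failures in truncated test
        if d > ucl:                            # out-of-control signal
            break
    run_lengths.append(rl)

arl = np.mean(run_lengths)                     # estimated in-control ARL
```

A large in-control ARL means few false alarms; the study's comparison is made at the out-of-control ARL, where shifted parameters are substituted for `p` and a small ARL (fast detection) is desirable.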
The class of maximal-length cellular automata (CAs) has gained significant attention over the last few years due to the fact that it can generate cycles with the longest possible lengths. For every l of the form l = 2...