This study utilizes ML classifiers to estimate canopy density based on three decades of data (1990-2021). The Support Vector Machine (SVM) classifier outperformed other classifiers, such as Random Tree and Maximum Likelihood. Satellite data from Landsat and Sentinel-2 were classified using a purpose-built Python model, providing an economical and time-saving approach. The accuracy of the classification was evaluated through a confusion matrix and area computation. The findings indicate a negative trend in the overall decadal change, with significant tree loss attributed to jhum cultivation, mining, and quarrying activities. However, positive changes were observed in recent years due to the ban on illegal mining. The study highlights the dynamic nature of tree cover and emphasizes the need for biennial assessments using at least five time-series datasets. Micro-level analysis in Shallang, West Khasi Hills, revealed a concerning trend of shortening jhum cycles. Automation in canopy change analysis is crucial for effective forest monitoring, providing timely information for law-enforcement proposals and involving forest managers, stakeholders, and watchdog organizations.
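As an illustration of the classification and accuracy-assessment step described above, a minimal sketch using scikit-learn (not the authors' model; the input arrays, file names, and hyperparameters are assumptions) might look like this:

```python
# Illustrative sketch only: classify per-pixel spectral samples into canopy-density
# classes with an SVM and evaluate the result with a confusion matrix.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, accuracy_score

# X: pixel spectra (e.g. Landsat/Sentinel-2 band reflectances); y: reference
# canopy-density class labels from training polygons. File names are placeholders.
X = np.load("pixel_spectra.npy")   # shape (n_pixels, n_bands)
y = np.load("canopy_labels.npy")   # shape (n_pixels,)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

clf = SVC(kernel="rbf", C=10.0, gamma="scale")  # hyperparameters are illustrative
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)
print("Overall accuracy:", accuracy_score(y_test, y_pred))
print("Confusion matrix:\n", confusion_matrix(y_test, y_pred))
```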
The mechanical strength of composites must be predicted before use or fabrication, and computerized modeling and analysis helps predict the realistic performance of composite products. The current research work presents the modeling routes of yarns, including yarn interpolation for path, cross-section, and orientation, together with finite element analysis of woven fabric reinforcements. The geometrical modeling of textile woven reinforcements at the meso-scale is carried out using TexGen 3.10, a Python-scriptable software package developed by the Polymer Composites Group at the University of Nottingham, UK, which works as a preprocessor for the characterization of textile reinforcements. The finite element analysis of the textile woven reinforcements is performed with the commercially available software package ABAQUS 6.14-5. Because both packages are scriptable in Python, ABAQUS was selected as the analysis tool for textile reinforcements from among the many FE-based platforms. Woven fabric unit cells with plain and twill weave patterns, built from Kevlar (monolithic) and Carbon-Kevlar (hybrid) yarns, are analysed under finite element compression and discussed to understand the mechanical performance of polymer textile composites.
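A minimal sketch of the kind of TexGen Python scripting referred to above, assuming the TexGen.Core scripting API (class and method names follow TexGen's publicly documented examples; the yarn counts, spacing, and thickness values are illustrative, not the parameters used in the study):

```python
# Illustrative only: build a small plain-weave unit cell with TexGen's Python API.
from TexGen.Core import *

# 2 warp x 2 weft yarns; yarn spacing and fabric thickness in mm (illustrative values)
weave = CTextileWeave2D(2, 2, 1.0, 0.2)

# Swap crossover points so warp and weft alternate, producing a plain-weave pattern
weave.SwapPosition(0, 0)
weave.SwapPosition(1, 1)

# Attach a default box domain around the unit cell and register the textile
weave.AssignDefaultDomain()
AddTextile("plain_weave", weave)
```

The registered unit cell can then be meshed and exported from TexGen (e.g. as an ABAQUS input file) to feed the compression analysis described above.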
API Standard 609 provides a purchase specification for butterfly valves designed for installation between flanges, as defined in several ASME codes, which specify design details, required features, and nondestructive examination and testing requirements. Manufacturers that want to produce butterfly valves meeting the standards demanded by the plant engineering market should follow the design guidelines provided in the related API and ASME codes. In the present study, the design process of the butterfly valve according to API Standard 609 was developed and implemented to create a three-dimensional feature model of the butterfly valve. This model can be utilized in CAE analysis without any complicated FE-modeling pre-processing. The design process consists of seven steps, from the selection of class to the determination of the maximal disc diameter, and each step can be automated using a Python script. Consequently, CAD data for the designed butterfly valve can be created using XML templates based on Grasshopper in Rhinoceros.
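The abstract does not enumerate the seven design steps or their formulas, so the sketch below is purely hypothetical: placeholder functions showing how such a step-wise design procedure could be chained in a Python script before the resulting dimensions are passed to the XML templates.

```python
# Hypothetical skeleton only: function names, arguments, and values are placeholders;
# the real steps and formulas come from the API 609 / ASME tables.

def select_class(design_pressure_bar, design_temp_c):
    """Step 1: choose a pressure class (placeholder rule)."""
    return "Class 150" if design_pressure_bar <= 19.0 else "Class 300"

def face_to_face(valve_class, nominal_size_mm):
    """Step 2: look up the face-to-face dimension (placeholder table)."""
    table = {("Class 150", 200): 60.0}
    return table[(valve_class, nominal_size_mm)]

# ... steps 3-6 (seat bore, shaft diameter, body wall thickness, flange interface) ...

def max_disc_diameter(seat_bore_mm, clearance_mm=1.5):
    """Step 7: maximal disc diameter that still clears the seat bore (placeholder)."""
    return seat_bore_mm - 2.0 * clearance_mm

valve_class = select_class(design_pressure_bar=16.0, design_temp_c=120.0)
dims = {
    "face_to_face": face_to_face(valve_class, nominal_size_mm=200),
    "disc_diameter": max_disc_diameter(seat_bore_mm=190.0),
}
print(valve_class, dims)  # such values would populate the XML templates for CAD generation
```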
Background: Open Source cheminformatics toolkits such as OpenBabel, the CDK and the RDKit share the same core functionality but support different sets of file formats and forcefields, and calculate different fingerprints and descriptors. Despite their complementary features, using these toolkits in the same program is difficult as they are implemented in different languages (C++ versus Java), have different underlying chemical models and have different application programming interfaces (APIs). Results: We describe Cinfony, a Python module that presents a common interface to all three of these toolkits, allowing the user to easily combine methods and results from any of the toolkits. In general, the run time of the Cinfony modules is almost as fast as accessing the underlying toolkits directly from C++ or Java, but Cinfony makes it much easier to carry out common tasks in cheminformatics such as reading file formats and calculating descriptors. Conclusion: By providing a simplified interface and improving interoperability, Cinfony makes it easy to combine complementary features of OpenBabel, the CDK and the RDKit.
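A short usage sketch, assuming a working Cinfony installation with all three backends available (the SMILES string is an arbitrary example; method names follow Cinfony's pybel-style interface):

```python
# Assumes Cinfony plus the OpenBabel, RDKit, and CDK backends are installed.
from cinfony import pybel, rdk, cdk

# Parse a SMILES string with OpenBabel via the pybel wrapper
mol = pybel.readstring("smi", "CC(=O)Oc1ccccc1C(=O)O")   # aspirin (example molecule)

# Convert the same molecule to the other toolkits
rd_mol = rdk.Molecule(mol)
cdk_mol = cdk.Molecule(mol)

# Combine features: an OpenBabel descriptor, an RDKit fingerprint, CDK descriptors
print(mol.molwt)            # molecular weight from OpenBabel
print(rd_mol.calcfp())      # fingerprint calculated with the RDKit
print(cdk_mol.calcdesc())   # descriptor dictionary calculated with the CDK
```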
Linear regression is one of the oldest statistical modeling approaches. Still, it is a valuable tool, particularly when it is necessary to create forecast models with low sample sizes. When researchers use this method and have numerous potential regressors, choosing the group of regressors for a model that fulfills all the required assumptions can be challenging. In this sense, the authors developed an open-source Python script that automatically tests all the combinations of regressors under a brute-force approach. The output displays the best linear regression models according to the thresholds set by users for the required assumptions: statistical significance of the estimations, multicollinearity, error normality, and homoscedasticity. Further, the script allows the selection of linear regressions with regression coefficients according to the user's expectations. This script was tested with an environmental dataset to predict surface water quality parameters based on landscape metrics and contaminant loads. Among millions of possible combinations, less than 0.1 % of the regressor combinations fulfilled the requirements. The resulting combinations were also tested in geographically weighted regression, with similar results to linear regression. The model's performance was higher for pH and total nitrate and lower for total alkalinity and electrical conductivity.
• A Python script was developed to find the best linear regressions within a dataset.
• Output regressions are automatically selected based on regression coefficient expectations set by the user and the linear regression assumptions.
• The algorithm was successfully validated through an environmental dataset.
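A minimal sketch of the brute-force assumption-checking idea described above (not the published script; the thresholds and the specific statistical tests used here are assumptions), using statsmodels and SciPy:

```python
# Sketch: fit an OLS model for every combination of regressors and keep only those
# passing user-set thresholds for significance, multicollinearity, error normality,
# and homoscedasticity. Thresholds and test choices are illustrative.
from itertools import combinations
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.diagnostic import het_breuschpagan
from scipy.stats import shapiro

def acceptable_models(df, target, regressors, p_max=0.05, vif_max=5.0):
    """Yield (regressor set, adjusted R^2) for fits passing all assumption checks."""
    for k in range(1, len(regressors) + 1):
        for combo in combinations(regressors, k):
            X = sm.add_constant(df[list(combo)])
            fit = sm.OLS(df[target], X).fit()
            if (fit.pvalues[list(combo)] > p_max).any():
                continue                                   # insignificant coefficient
            if k > 1 and max(variance_inflation_factor(X.values, i + 1)
                             for i in range(k)) > vif_max:
                continue                                   # multicollinearity
            if shapiro(fit.resid).pvalue < p_max:
                continue                                   # non-normal residuals
            if het_breuschpagan(fit.resid, X)[1] < p_max:
                continue                                   # heteroscedastic errors
            yield combo, fit.rsquared_adj
```

Because the number of combinations grows exponentially with the number of candidate regressors, a practical run would typically cap the model size or parallelize the loop.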