Educational Data Mining (EDM) refers to the field that focuses on using statistical methods and data mining techniques to draw inferences from large educational data sets. Over the years it has been used for student performance prediction by applying various machine learning processes and, more recently, deep learning. As a result of the lockdown announced after the spread of Covid-19, many colleges were forced to adapt to online learning tools. In this paper, we identify and evaluate the impact of the Covid-19 pandemic and its subsequent fallout on predicting students' academic performance. For this, a data set of various undergraduate students was compiled from March 2021. A Likert-type questionnaire was administered and a large number of responses were gathered from various primary and secondary sources. This was subsequently used to validate the proposed methodology. Furthermore, different classification algorithms were used to predict student performance and were compared with one another based on their accuracy. The results show that the excessive use of e-learning tools, including smartphones, laptops and tablets, has a significant impact on students' academic performance as well as on their psychological health. The work will help us better understand the impact of the lockdown on students' scholastic performance and point out areas where online learning methods can be improved.
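The comparison of classification algorithms by accuracy described above can be sketched as follows. This is a minimal illustration only: the questionnaire features, labels, and the choice of scikit-learn classifiers are invented stand-ins, not the paper's actual data or model set.

```python
# Hypothetical sketch: comparing classifiers on Likert-scale survey
# responses by accuracy; the data and the labelling rule are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# 300 students, 10 Likert items (scores 1-5); binary label stands in
# for "significant performance impact" (synthetic rule, for demo only)
X = rng.integers(1, 6, size=(300, 10))
y = (X[:, 0] + X[:, 1] > 6).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)
for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                  ("logistic regression", LogisticRegression(max_iter=1000))]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```

On real survey data, the same loop would be extended with the full set of classifiers the study evaluated and, ideally, cross-validation rather than a single split.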
Network security has become a major concern in the modern age. In the last few years we have seen so many misuses of our data that there have been repeated calls for governments to regulate data-exploiting sites; detecting and dealing with such abuses is necessary if our data networks are to remain secure. In almost all areas, such as online banking, online shopping, communications, companies, and organizations, the internet offers tremendous convenience. As a result, the communication network necessitates the protection of sensitive data that is stored or transferred over the internet. The rapid growth of computerized devices and their widespread use of the internet have created data protection issues for users. Security and privacy risks have become increasingly complex in recent years, highlighting the need for a modernized, secure medium to safeguard valuable data on the internet. A Virtual Private Network (VPN), as described in this article, is a great way to secure devices and information from attackers. A VPN is a network service that runs over an ordinary network connection and sends encrypted data to prevent attackers from accessing it. The aim of a VPN is to provide various security properties such as authenticity, confidentiality, and data integrity, which is why VPNs are becoming increasingly popular, low-cost, and simple to use. Smartphones, laptops, and tablets can all use VPN services. The creation, protocols, tunneling, and protection of VPNs are also discussed in this paper. VPN is a rapidly developing technology that plays a critical role in WLANs by enabling safe data transmission over the Internet.
Centrifugal casting is one of the most widely used casting techniques in the modern production of automotive and industrial components. Replacing die casting with a shaped-casting method based on the centrifugal casting process can improve the properties of the final products. The present work is focused on the fabrication of a shaped casting of a three-wheeler differential housing in aluminium by the vertical centrifugal casting process, varying the process parameters. Conventionally this part is produced in cast iron or steel by gravity casting or the shell moulding process. The effects of the vertical centrifugal casting parameters, such as mould rotation speed and mould temperature, were analysed. The mould speed, pouring temperature, die coating (for controlling heat transfer and microstructure) and mould preheat are very important factors, because at an optimised mould speed the centrifugal force and the gravitational downward pull are balanced and a sound casting is extracted. After several trials, we optimised the parameters to achieve a good-quality differential housing part. Conventionally, the centrifugal casting method is used for axisymmetric parts like cylinder liners, tubes, etc. The differential case is a cup-shaped part with a flange for clamping the case in the gearbox. The die and mould were critically designed and manufactured to suit the vertical centrifugal casting machine, and an appropriate metal pouring spout was provided to give a uniform metal flow and better filling of the metal inside the annular cavity of the mould. The trial results revealed excellent filling throughout the mould shadow regions, and metallurgical parameters such as hardness, porosity and strength were comparable to the specifications. The major advantage of the aluminium differential housing is that the strength-to-weight ratio of the aluminium alloy is much better than that of steel or cast iron.
Nowadays, security-related issues play a key role in each and every organization. Every organization has internal security methods and tries to make them work as efficiently as possible. One such security method is online face recognition. Recently, online examinations have been held in many organizations, and because of this there are many ways in which the system can be misused. One problem we have identified arises between two identical-looking persons, i.e. twins. To solve this problem we introduce a new technique in which the system cannot be misused, based on a transfer learning approach using a CNN model (ResNet) for learning invariant features from facial images. Human face recognition (HFR) poses challenging issues because of the large intra-class variation among facial images and the limited availability of data available to train the system for accurate detection. This paper proposes a transfer learning approach for learning invariant features from facial images more accurately.
This review discusses the most recent developments and future perspectives in enzymatic and non-enzymatic glucose sensors, which have evolved notably over the past four years. It thoroughly explores the sensors' fabrication processes, the diverse range of materials employed, the underlying detection principles, and the sensors' efficacy in detecting glucose levels in essential bodily fluids such as human blood serum, urine, saliva, and interstitial fluid. It is worth noting that accurate quantification of glucose concentrations in human blood has been effectively achieved using classical enzymatic sensors integrated with optical and electrochemical transduction mechanisms. Monitoring glucose levels in various media has attracted exceptional attention from industrial and academic researchers for diabetes management, food quality control, clinical medicine, and bioprocess inspection. There has been enormous demand for novel glucose sensors over the past ten years. Research has primarily concentrated on achieving biocompatibility and sensing abilities enhanced over present technologies, offering innovative avenues for more effective glucose sensors. Recent developments in wearable optical and electrochemical sensors with low cost, high stability, point-of-care testing, and online tracking of glucose concentration levels in biological fluids can aid in managing and controlling diabetes globally. Advances in nanoscience and nanotechnology have enabled new nanomaterials and biomolecules that can be used in electrochemical sensor systems to determine glucose concentration levels. Both enzymatic and non-enzymatic electrochemical glucose sensors have garnered much interest recently and have made significant strides in detecting glucose levels. In this review, we summarise several categories of non-enzymatic glucose sensor materials.
Cloud storage has become a major part of the smartphone industry. Almost every smartphone user relies on one cloud or another to store their data and free up space on devices that come with limited storage, and almost all the major smartphone companies are introducing their own cloud services. Users tend to store data on the cloud storage available for their device, which makes it difficult to migrate that data from one cloud to another when they change phones without wasting time and resources. Currently, users download data from the first cloud to their device, transfer it to the new device, and then upload it to the second cloud, which is tedious and time-consuming. To solve this problem, we introduce an easy and time-saving method that transfers data directly between different cloud storages and builds authentication and session-key agreement for mutual trust, based on certificateless elliptic curve cryptography for peer-to-peer cloud servers. This method helps develop trust between the two cloud storages and lays a foundation for data migration between different cloud service providers.
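The elliptic-curve session-key agreement between two cloud servers can be illustrated with a standard ECDH exchange. This is a generic sketch using the `cryptography` library, not the paper's certificateless protocol: the curve choice, the HKDF step, and the `info` label are all assumptions standing in for the scheme's actual key-derivation details.

```python
# Hypothetical ECDH sketch standing in for the paper's certificateless
# ECC key agreement between two peer cloud servers (details assumed).
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

priv_a = ec.generate_private_key(ec.SECP256R1())  # cloud server A
priv_b = ec.generate_private_key(ec.SECP256R1())  # cloud server B

# Each side combines its private key with the peer's public key
shared_a = priv_a.exchange(ec.ECDH(), priv_b.public_key())
shared_b = priv_b.exchange(ec.ECDH(), priv_a.public_key())
assert shared_a == shared_b        # both derive the same shared secret

# Derive a symmetric session key for the migration channel
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"cloud-migration-session").derive(shared_a)
print(len(session_key))
```

Once both servers hold the same session key, the migrating data can be encrypted server-to-server without ever passing through the user's device.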
The lid-driven cavity problem is one of the simplest problems used by the CFD community for code validation, and it has been used for this study. The study tested an internal flow at a low Reynolds number (Re = 100). The case was solved using 'icoFoam', a transient incompressible flow solver. The flow field was taken as input to the ITHACA utility for performing Proper Orthogonal Decomposition (POD). The POD snapshots and the eigenvalues were extracted for the problem considered. The solutions are discussed in terms of their applicability to various real-life problems.
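The POD step described above can be sketched with the snapshot SVD, which is the standard computation behind utilities like ITHACA. This is a minimal NumPy illustration on a synthetic low-rank snapshot matrix, not the actual icoFoam field data; the sizes and the 99% energy threshold are assumptions.

```python
# Hypothetical POD sketch via the snapshot SVD; the snapshot matrix
# here is synthetic (low-rank field plus small noise), not CFD output.
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_snap = 400, 20
# Rank-3 "flow field" sampled at 20 time instants, plus noise
snapshots = rng.standard_normal((n_cells, 3)) @ rng.standard_normal((3, n_snap))
snapshots += 0.01 * rng.standard_normal((n_cells, n_snap))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)  # U: POD modes
energy = np.cumsum(s**2) / np.sum(s**2)   # cumulative modal energy
n_modes = int(np.searchsorted(energy, 0.99)) + 1
print(n_modes)   # a low-rank field needs only a few POD modes
```

The eigenvalues the abstract refers to are the squared singular values `s**2`; truncating at the energy threshold gives the reduced basis used for reduced-order modelling.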
If information such as classification rules is dispersed over sample data, these rules may need to be combined and fused. This is usually done either by combining the classifier outputs, as in a classification ensemble, or by merging the sets of classification rules with individual weights. In this paper, we introduce a new way of fusing classifiers at the level of their parameters. The technique is based on generative probabilistic classifiers with multinomial distributions in nominal input dimensions and normal distributions in continuous ones, together with distributions over the classifier parameters themselves, such as Dirichlet or normal-Wishart distributions. These are known as second-order distributions. We show that two or more classifiers can be fused by multiplying the hyperdistributions of their parameters, and we derive simple formulas for this fusion. Experiments illustrate the new approach. The primary advantage of this fusion approach is that the hyperdistributions are preserved in the fusion step; for instance, the fused components can be used in subsequent training steps.
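The multiplication of hyperdistributions can be made concrete for the Dirichlet case: the product of two Dirichlet densities Dir(α₁)·Dir(α₂) is proportional to Dir(α₁ + α₂ − 1). The sketch below is a toy illustration of this closure property with invented parameter values, not the paper's full fusion procedure.

```python
# Hypothetical sketch of parameter-level fusion for the Dirichlet case:
# multiplying two Dirichlet hyperdistributions over a discrete
# class-conditional parameter yields another Dirichlet whose parameters
# are alpha_1 + alpha_2 - 1 (elementwise). Values are invented.
import numpy as np

alpha_1 = np.array([5.0, 2.0, 1.0])   # pseudo-counts from classifier 1
alpha_2 = np.array([3.0, 4.0, 2.0])   # pseudo-counts from classifier 2
alpha_fused = alpha_1 + alpha_2 - 1.0  # product of the two densities

mean_fused = alpha_fused / alpha_fused.sum()  # fused posterior mean
print(mean_fused)
```

Because the fused result is again a Dirichlet, it keeps the second-order (uncertainty) information, which is exactly the property the abstract highlights as enabling further training after fusion.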
The work utilizes genome-wide association data sourced from subjects diagnosed with autism spectrum disorder (ASD), Stages I and II. This system facilitates early and definitive prediction of autism so that intervention techniques can start at a young age. A Decision Tree model is used to classify incoming gene-expression data. During the model selection stage, the performance of Decision Trees and Random Forests was evaluated. This method of diagnosis, a combination of machine learning and precision medicine, enables a direct and more conclusive diagnosis that can be adopted at feasible cost. The results are presented on a webpage built with the Flask web framework.
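The model-selection stage comparing Decision Trees and Random Forests can be sketched with cross-validation. The gene-expression matrix and labelling rule below are synthetic stand-ins; the real study's feature set, sample size, and hyperparameters are not known from the abstract.

```python
# Hypothetical model-selection sketch: Decision Tree vs Random Forest
# under 5-fold cross-validation; data are synthetic stand-ins.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.standard_normal((200, 50))          # 200 subjects x 50 gene features
y = (X[:, :5].sum(axis=1) > 0).astype(int)  # synthetic ASD / control label

for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                  ("random forest", RandomForestClassifier(n_estimators=100,
                                                           random_state=0))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```

Whichever model wins this comparison is the one that would be serialized and served behind the Flask endpoint the abstract mentions.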
In this article, we propose a machine-learning approach that uses principal component analysis of only a few tomographic optoacoustic projections for high-frame-rate volumetric cardiac imaging, demonstrated here by non-invasive imaging of a rodent heart. This approach is beneficial for discerning periodic movements. The training process allows the heart-motion information to be effectively compressed and used at a higher frame rate as prior information for image restoration. Image quality is seen to be preserved despite the reported decrease in data flow. We show that the parametric motion can be effectively obtained under such conditions by focusing on data from a single tomographic detector. A visualization of contrast-agent perfusion in vivo indicates the possibility of recording intermittent (quasi-periodic) events not present in the training process. The proposed method can be used to dramatically improve the temporal resolution of optoacoustic imaging and help create more accessible and data-efficient systems.
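The core compression idea can be sketched with PCA: learn principal components from training frames of a periodic motion, then represent each new frame with only a few coefficients that serve as the prior for restoration. The synthetic frames, dimensions, and component count below are illustrative assumptions, not the paper's actual imaging data.

```python
# Hypothetical PCA sketch: periodic "heart motion" frames compressed to
# a few coefficients per frame; data and sizes are synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
t = np.linspace(0, 4 * np.pi, 200)
basis = rng.standard_normal((2, 1024))     # two spatial motion patterns
frames = np.outer(np.sin(t), basis[0]) + np.outer(np.cos(t), basis[1])
frames += 0.01 * rng.standard_normal(frames.shape)   # sensor noise

pca = PCA(n_components=3).fit(frames)      # training-phase compression
coeffs = pca.transform(frames)             # a few numbers per frame
recon = pca.inverse_transform(coeffs)
err = np.linalg.norm(frames - recon) / np.linalg.norm(frames)
print(f"relative reconstruction error: {err:.4f}")
```

Because the periodic motion lives in a low-dimensional subspace, a handful of coefficients per frame suffices, which is what permits the dramatic reduction in projection data the abstract describes.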