SARS-CoV-2 is a fascinating topic to investigate, especially in Indonesia and Malaysia, which share similar racial demographics. However, statistical analyses of SARS-CoV-2 information from databases, especially GISAID, do not offer customized comparisons of the virus between selected countries. Therefore, the researchers conducted statistical analysis and data visualization using the Python programming language to describe and investigate SARS-CoV-2 in Indonesia and Malaysia from the GISAID database. SARS-CoV-2 metadata from Indonesia (N=117) and Malaysia (N=250), gathered during 2020, were compared. The comparison aimed to investigate discrepancies in COVID-19 cases between closely related populations. First, data visualization was conducted using the Python Matplotlib library to create bar charts comparing clades and mutations. Additionally, a series of boxplots was generated to show age discrepancies stratified by gender. The statistical tests showed that only the dominant Malaysian clades (G and O) differed significantly from the Indonesian cases (p-value=0.016). The proportions of two major mutations (D614G and NSP12 P323L) were also significantly different between the two countries, driven by the differences in dominant clades (p-value=0.007). Lastly, the difference in the age distribution of COVID-19 cases between the two countries was significant only in the male group (p-value=0.017).
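The abstract does not give the exact test functions or metadata schema, but a minimal sketch of the kind of comparison it describes, assuming chi-square tests for clade proportions, Mann-Whitney U tests for age by gender, and hypothetical column names ("country", "clade", "age", "gender"), could look like this:

```python
# Hypothetical sketch of the clade and age comparisons described above.
# The file name and column names are assumptions, not the authors' actual
# GISAID metadata schema.
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt

meta = pd.read_csv("gisaid_metadata_id_my.csv")  # hypothetical export

# Clade proportions: 2x2 contingency table (one clade vs. the rest) per clade.
def clade_pvalue(df, clade):
    table = pd.crosstab(df["country"], df["clade"] == clade)
    _, p, _, _ = stats.chi2_contingency(table)
    return p

for clade in meta["clade"].unique():
    print(clade, clade_pvalue(meta, clade))

# Age distribution by gender: Mann-Whitney U test between the two countries.
for gender, grp in meta.groupby("gender"):
    id_ages = grp.loc[grp["country"] == "Indonesia", "age"].dropna()
    my_ages = grp.loc[grp["country"] == "Malaysia", "age"].dropna()
    _, p = stats.mannwhitneyu(id_ages, my_ages)
    print(gender, p)

# Bar chart of clade counts per country, as in the Matplotlib figures described.
meta.groupby(["country", "clade"]).size().unstack().plot(kind="bar")
plt.ylabel("Number of sequences")
plt.tight_layout()
plt.show()
```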
Cashless transaction describes a state in which financial transactions are not bound to a physical form such as banknotes or coins, but rather carried out through the transfer of digital information (an electronic representation of money)...
Space has increasingly attracted the attention of governments, large industries, and universities. One of the most popular strategies in recent years has been the adoption of nanosatellites to fulfill different missions, which can work alone or in constellations. Universities stand out among the agents launching nanosatellites, with more than 600 launches by 2022. Given the growth of entities that control space missions, it is necessary to implement new methods for communication between control and satellite to accelerate data transmission and provide a high degree of security. Our work proposes a consortium architecture between Ground Stations (GSs) so that Ground Station as a Service (GSaaS) operates with low cost, reliability, and resource sharing. We simulated a nanosatellite mission in Low Earth Orbit (LEO) with MATLAB to obtain the parameters of average communication time, propagation loss, and the angles at which communication would be most affected by atmospheric phenomena. Then, we implemented business rules for communication between GSs and satellites using smart-contract concepts. We set up a blockchain to provide the decentralization infrastructure and created a web service to provide a communication API between the nanosatellite and the blockchain. We simulated the firmware update process, showing that the nanosatellite took around 20 minutes to request all 32-byte fragments of a 301 Kb firmware image. Considering the time interval during which the communication window between GS and nanosatellite remains active, the entire firmware transmission takes two to three communication slots. However, the transmission time is drastically reduced in a scenario with two or more GSs. Furthermore, the GSaaS decentralized infrastructure allows the consortium of GSs to communicate agnostically with the satellites, preserving firmware privacy thanks to the cryptography used in blockchain transactions.
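As a rough consistency check of the figures quoted above, the following sketch estimates the fragment count and the number of communication slots. The interpretation of "301 Kb" as kilobytes and the 8-minute visibility window are assumptions, not values from the paper:

```python
import math

# Back-of-the-envelope check of the firmware update figures quoted above.
# Assumptions (not from the paper): "301 Kb" is read as 301 kilobytes, each
# fragment carries 32 bytes of payload, and each GS-to-nanosatellite contact
# window lasts about 8 minutes.
FIRMWARE_BYTES = 301 * 1024
FRAGMENT_BYTES = 32
TOTAL_TIME_S = 20 * 60          # ~20 minutes reported for requesting all fragments
WINDOW_S = 8 * 60               # hypothetical contact window per pass

fragments = math.ceil(FIRMWARE_BYTES / FRAGMENT_BYTES)
per_fragment_ms = 1000 * TOTAL_TIME_S / fragments
print(f"fragments needed:  {fragments}")
print(f"time per fragment: {per_fragment_ms:.1f} ms")

# With one GS the transfer spans several passes; more GSs split the load.
for n_gs in (1, 2, 3):
    slots = math.ceil(TOTAL_TIME_S / n_gs / WINDOW_S)
    print(f"{n_gs} ground station(s): ~{slots} communication slot(s)")
```

Under these assumptions the transfer needs roughly 9,600 fragments and spans about three windows with a single GS, which is consistent with the two-to-three slots reported in the abstract.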
The demands of today's 5G mobile network, especially low latency and high bandwidth, are a big challenge for the 5G Core (5GC) provider. The most critical user data packet handler among the 5GC Network Functions (NFs) is the User Plane Function (UPF), which is responsible for moving data from the user equipment to the destination data network, and vice versa. Existing work mainly focuses on implementing the UPF using key high-speed data processing technologies. In this paper, using the free5GC mobile core for a stand-alone (SA) 5G network, we share our experience implementing the UPF on a programmable hardware appliance, which can offer multiple Tbps compared with software UPF implementations that offer only a few hundred Gbps. To do so, we demonstrate how to build a more flexible UPF architecture using the Software-Defined Networking (SDN) concept, given the opacity of the protocol specification. We split the UPF control-signal handling into a software application and the user data packet processing into a programmable hardware appliance. We also show how to integrate a number of existing free5GC UPF data-plane implementations, such as the Data Plane Development Kit (DPDK), the Linux kernel module, and SmartNIC. Furthermore, we analyze and make use of microservices to support the specific features of the UPF data plane that cannot be implemented in a programmable hardware appliance. We tested our free5GC mobile network with the new UPF architecture running on a real programmable hardware appliance, the Accton CSP-7551. The evaluation results show that our programmable user plane can reach the line rate.
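To make the control/data split concrete, the sketch below illustrates, in very simplified form, how session rules handled in software might be translated into match-action entries for a programmable forwarding pipeline. This is not free5GC's actual code; the rule fields and the install_entry() hook are hypothetical placeholders for the real session-management and hardware-programming interfaces:

```python
# Illustrative sketch of the control/data split described above: PFCP-style
# session rules processed in software are mapped to abstract match-action
# entries that a controller would push to the programmable hardware pipeline.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PDR:                      # Packet Detection Rule (simplified)
    teid: int                   # GTP-U tunnel ID matched on uplink traffic
    ue_ip: str                  # UE address matched on downlink traffic

@dataclass
class FAR:                      # Forwarding Action Rule (simplified)
    action: str                 # "forward", "buffer", or "drop"
    dest_teid: Optional[int] = None

def to_table_entries(pdr: PDR, far: FAR) -> list:
    """Map one PDR/FAR pair to abstract match-action table entries."""
    return [
        {"table": "uplink",   "match": {"teid": pdr.teid},
         "action": far.action, "params": {}},
        {"table": "downlink", "match": {"ue_ip": pdr.ue_ip},
         "action": far.action, "params": {"teid": far.dest_teid}},
    ]

def install_entry(entry: dict) -> None:
    # Placeholder for pushing the entry to the hardware pipeline,
    # e.g. via P4Runtime or a vendor SDK in a real deployment.
    print("install:", entry)

for e in to_table_entries(PDR(teid=0x1001, ue_ip="10.60.0.2"),
                          FAR(action="forward", dest_teid=0x2001)):
    install_entry(e)
```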
Adversarial attacks have become one of the most serious security issues in widely used deep neural networks. Even though real-world datasets usually have large intra-variations or multiple modes, most adversarial defe...
ISBN (print): 9781665499705
Understanding social structures through networks has been an active field of study among academics over the past five years alone, and the need to properly comprehend how Social Network Analysis (SNA) is being studied has grown in recent years. In this article, we present a Systematic Literature Review (SLR) of SNA to examine how algorithms, techniques, and methods are used, and to discuss their findings. We selected thirty-one research studies on SNA and found that a variety of algorithms and techniques are being applied. The selected research can be categorized into five main topics: academic, health, social media, communication, and technology. Across the papers discussed, many algorithms and techniques are used to enhance SNA, most of them machine learning algorithms such as Decision Tree, Random Forest, Support Vector Machine, Naïve Bayes, Logistic Regression, K Nearest Neighbor, and the Whale Optimization Algorithm. The common features of the datasets used in this research are arrays of user information from social media platforms, tweets and posts from multiple platforms, and photographic inputs such as self-images, portraits, and context-related pictures. This article will serve as a single reference for future researchers seeking the latest SNA findings.
Virtual assistants are improving and providing consumers with greater advantages. The comprehension and fulfilment of requests by virtual assistants will increase as voice recognition and natural language processing continue to grow, and virtual assistants are projected to be employed in more commercial activities as speech recognition technology advances. The main goal of developing personal assistant software (a virtual assistant) is to use web-based semantic data sources, user-generated content, and knowledge from knowledge libraries. The main objective of building this voice-based virtual assistant is to make life easier by giving everyone a personal assistant that can perform many tasks. As the end user interacts with a virtual assistant, the AI programming learns from the data provided and improves its ability to forecast the end user's needs. Virtual assistants are often used to do things like add tasks to a calendar, provide information that would normally be found on a website, and operate and monitor smart-home devices such as lighting, cameras, and thermostats. Massive volumes of data are required to fuel virtual assistant technologies, which feed Artificial Intelligence (AI) platforms such as machine learning, natural language processing, and speech recognition. Speech recognition has a long history and has seen several key advancements; on smartphones and wearable devices, speech recognition for dictation, search, and voice commands has become a standard feature. We describe the design of a compact, large-vocabulary speech recognition system that can run quickly, accurately, and with minimal latency on mobile devices.
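The abstract does not name the libraries used, but a minimal voice-command loop of the kind it describes could be sketched with the open-source SpeechRecognition and pyttsx3 packages; the intent handler and smart-home hook below are illustrative placeholders:

```python
# Minimal voice-assistant loop, assuming the SpeechRecognition and pyttsx3
# packages; this is an illustrative sketch, not the system from the paper.
import datetime
import speech_recognition as sr
import pyttsx3

recognizer = sr.Recognizer()
tts = pyttsx3.init()

def speak(text: str) -> None:
    tts.say(text)
    tts.runAndWait()

def handle(command: str) -> str:
    # Tiny rule-based intent handler standing in for the NLP component.
    if "time" in command:
        return datetime.datetime.now().strftime("It is %H:%M")
    if "light" in command:
        return "Turning on the lights"   # a real assistant would call a smart-home API
    return "Sorry, I did not understand that"

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    speak("How can I help?")
    audio = recognizer.listen(source)

try:
    command = recognizer.recognize_google(audio).lower()
    speak(handle(command))
except sr.UnknownValueError:
    speak("I could not hear you clearly")
```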
Smart-parking solutions use sensors, cameras, and data analysis to improve parking efficiency and reduce traffic congestion. Computer vision-based methods have been used extensively in recent years to tackle the problem of parking-lot management, but most works assume that the parking spots are manually labeled, which affects the cost and feasibility of deployment. To fill this gap, this work presents an automatic parking-space detection method, which receives a sequence of images of a parking lot and returns a list of coordinates identifying the detected parking spaces. The proposed method employs instance segmentation to identify cars and, using vehicle occurrence, generates a heat map of the parking spaces. Results on twelve different subsets of the PKLot and CNRPark-EXT parking-lot datasets show that the method achieved an AP25 score of up to 95.60% and an AP50 score of up to 79.90%.
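The heat-map idea can be sketched as follows: accumulate car masks over a sequence of frames, then threshold the accumulated occupancy to recover parking-space coordinates. The detect_car_masks() function is a hypothetical stand-in for the instance-segmentation model, and the thresholds are arbitrary illustrative values, not the paper's:

```python
# Sketch of occupancy-based parking-space detection, under the assumptions
# stated above; not the authors' implementation.
import numpy as np
import cv2

def detect_car_masks(frame: np.ndarray) -> list:
    """Return one boolean mask per detected car (placeholder for the real model)."""
    raise NotImplementedError

def parking_spaces(frames: list, min_occupancy: float = 0.3) -> list:
    h, w = frames[0].shape[:2]
    heat = np.zeros((h, w), dtype=np.float32)
    for frame in frames:
        for mask in detect_car_masks(frame):
            heat += mask.astype(np.float32)
    heat /= max(len(frames), 1)

    # Pixels occupied by cars in a sufficient fraction of frames become spots.
    occupied = (heat >= min_occupancy).astype(np.uint8)
    n, _, stats, _ = cv2.connectedComponentsWithStats(occupied)
    boxes = []
    for i in range(1, n):          # label 0 is the background
        x, y, bw, bh, area = stats[i]
        if area > 500:             # discard tiny blobs (arbitrary threshold)
            boxes.append((int(x), int(y), int(bw), int(bh)))
    return boxes                   # bounding boxes of detected parking spaces
```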
Authors: Mishne, Gal; Charles, Adam
Halıcıoğlu Data Science Institute, Department of Electrical and Computer Engineering, and the Neurosciences Graduate Program, UC San Diego, 9500 Gilman Drive, La Jolla, CA 92093, United States
Department of Biomedical Engineering, Kavli Neuroscience Discovery Institute, Center for Imaging Science, Department of Neuroscience, and Mathematical Institute for Data Science, Johns Hopkins University, Baltimore, MD 21287, United States
Optical imaging of the brain has expanded dramatically in the past two decades. New optics, indicators, and experimental paradigms are now enabling in-vivo imaging from the synaptic to the cortex-wide scales. To match...
The goal of this research is to analyze the effectiveness of the Next Generation Firewall implemented to secure IoT in smart-house and company networks. The method used in this research is a comparison wit...