INTRODUCTION FROM THE CHAIR
On behalf of the Organizing Committee, I am delighted to welcome all participants to the 3rd International Conference on Computing and Applied Informatics (ICCAI), held on 18-19 September 2018 in Medan, Sumatera Utara, Indonesia. The conference is organized by the Faculty of Computer Science and Information Technology, Universitas Sumatera Utara (USU), Medan, Indonesia, sponsored by IOP Publishing, and indexed by Scopus. Following the successful ICCAI 2016 and ICCAI 2017, ICCAI 2018 again brings together researchers, lecturers, educators, students, and practitioners to discuss, share, exchange, and extend their knowledge of new advances in the theory and practice of computing and applied informatics, towards a smarter society through research on data science.
A total of 278 papers were submitted to this year's conference. Each paper was reviewed against strict criteria by our invited and volunteer multinational reviewers according to their respective fields of interest. Based on the review results, 123 papers were accepted, yielding an acceptance rate of 44.2%. The accepted papers come from several countries, including Indonesia, Malaysia, Japan, the Philippines, Saudi Arabia, China, Nigeria, India, and Turkey.
The conference is a prestigious event that would not have been successful without the extensive effort of many volunteers. I would like to express my sincere appreciation and thanks to all participants and supporters, especially the members of the Faculty of Computer Science and Information Technology (Fasilkom-TI). I would also like to express our sincere thanks to our keynote speakers: Prof. Dr. Zhenjiang Shen (Kanazawa University, Japan), Assoc. Prof. Dr. Kardi Teknomo (Ateneo de Manila University, Philippines), and Drs. Mahyuddin K.M. Nasution, ***, Ph.D (Universitas Sumatera Utara, Indonesia). Moreover, our sincere gratitude goes to Prof. Dr. Runtung Sitepu, Rector of Universitas Sumatera Utara, and
ISBN (digital): 9789811063855
ISBN (print): 9789811063855; 9789811063848
Volunteer computing (VC) has been successfully applied to many compute-intensive scientific projects to solve embarrassingly parallel computing problems. There have been some efforts in the literature to apply VC to data-intensive (i.e. big data) applications, but none of them has confirmed the scalability of VC for such applications in opportunistic volunteer environments. This paper chooses MapReduce as a typical computing paradigm for big data processing in distributed environments and models it on a DHT (Distributed Hash Table) P2P overlay to bring this computing paradigm into VC environments. The modelling results in a distributed prototype implementation and a simulator. The experimental evaluation confirms the scalability of VC for MapReduce big data applications (up to 10 TB) in cases where the number of volunteers is fairly large (up to 10K), the volunteers commit high churn rates (up to 90%), and they have heterogeneous compute capacities (the fastest is 6 times the slowest) and bandwidths (the fastest is up to 75 times the slowest).
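As a minimal sketch of the MapReduce paradigm the paper builds on (not the paper's DHT-based implementation), the classic word-count example can illustrate the map, shuffle, and reduce phases; in a VC setting, the shuffle step is where a DHT overlay would route keys to volunteer nodes:

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit (word, 1) pairs from each input split."""
    for doc in documents:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    """Shuffle: group intermediate values by key (in a VC deployment,
    a DHT overlay would route each key to a volunteer node)."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate each key's list of values."""
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data on volunteer nodes", "big data at scale"]
counts = reduce_phase(shuffle(map_phase(docs)))
assert counts["big"] == 2 and counts["data"] == 2
```

Each phase is independently parallelizable, which is what makes the paradigm amenable to opportunistic volunteer environments.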
ISBN (print): 9783319685052; 9783319685045
As information security problems become more and more serious and large volumes of security data are produced rapidly, Security Information and Event Management (SIEM) systems face a diversity of high-volume big data sources, so big data analysis is necessary. This paper presents the architecture and principles of a SIEM system that uses popular big data technologies. The information security data is transferred from Flume to the Flink or Spark computing framework through Kafka and is retrieved through Elasticsearch. The K-means algorithm is used to analyze abnormal conditions with Spark MLlib. The experimental report and results show that the SIEM system efficiently processes big data to detect security anomalies. Finally, the paper is summarized; future work will apply stream computing in the SIEM to solve information security problems in large-scale networks with continuously produced information security data.
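The anomaly-detection idea in this abstract — cluster normal events with K-means, then flag points far from every center — can be sketched in plain Python (a toy stand-in for the paper's Spark MLlib pipeline; the event values are illustrative):

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's-algorithm K-means returning the final centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: dist2(p, centers[i]))
            clusters[idx].append(p)
        centers = [
            tuple(sum(vals) / len(c) for vals in zip(*c)) if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers

def anomaly_score(p, centers):
    """Distance to the nearest center; large values flag anomalies."""
    return min(dist2(p, c) for c in centers) ** 0.5

# Normal events form two tight clusters; a distant point scores high.
events = [(0.1, 0.2), (0.0, -0.1), (0.2, 0.1),
          (10.1, 9.9), (9.8, 10.2), (10.0, 10.1)]
centers = kmeans(events, k=2)
assert anomaly_score((50.0, 50.0), centers) > anomaly_score((0.1, 0.0), centers)
```

In the SIEM architecture described, this computation would run distributed over Spark, with event feature vectors arriving via Kafka rather than a hard-coded list.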
The paper describes two approaches to gathering measurement data about moving objects in wireless networks. The use of Fog computing technology makes it possible to relocate a part of calculations closer to measuring ...
ISBN (print): 9783319685427; 9783319685410
This paper proposes a hybrid method that simultaneously considers sparsity in the wavelet domain and image self-similarity by using wavelet L1-norm and nonlocal wavelet L0-norm regularization in image compressive sensing (CS) recovery. An auxiliary variable is introduced to decompose this composite constraint problem into two simpler regularization sub-problems. Based on the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA), the sub-problems corresponding to the wavelet L1 norm and the nonlocal wavelet L0 norm are solved by soft thresholding and adaptive hard thresholding, respectively. The threshold of the latter is decreased according to the energy of the measurement error, leading to an adaptive hybrid regularization method. Experimental results show that the method outperforms several leading CS techniques.
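The two thresholding operators named in the abstract have simple scalar forms, sketched below for an assumed threshold t (this is only the per-coefficient step, not the paper's full FISTA solver or its adaptive threshold schedule):

```python
def soft_threshold(x, t):
    """Soft thresholding: the proximal operator of the L1 norm.
    Shrinks every coefficient toward zero by t."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def hard_threshold(x, t):
    """Hard thresholding, used for the (nonlocal) L0-norm sub-problem:
    keeps coefficients whose magnitude exceeds t, zeroes the rest."""
    return x if abs(x) > t else 0.0

# Soft thresholding shrinks survivors; hard thresholding keeps them intact.
assert soft_threshold(3.0, 1.0) == 2.0
assert hard_threshold(3.0, 1.0) == 3.0
assert soft_threshold(-0.5, 1.0) == 0.0 == hard_threshold(-0.5, 1.0)
```

In the paper's adaptive scheme, t for the hard-thresholding step decreases across iterations as the measurement-error energy shrinks.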
ISBN (print): 9781509063185
Generally, software evolution is a process of frequent iteration that rapidly produces large-volume, heterogeneous, and unstructured data. During this process, a lot of noisy data and side effects are generated. In this way, software evolution data exhibits the so-called four Vs of big data. It is therefore necessary to extract valuable information from big software evolution data in order to carry out an effective change impact analysis and assure a safe evolution process. This paper proposes a change impact analysis technique based on evolution slicing to tackle such big software evolution data at the code level. The technique first distinguishes the modified elements and then constructs the evolution slice to assist software developers and maintainers in making evolution decisions. An experiment on four simple example programs shows that the technique achieves a better recall value.
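The core of most slicing-style change impact analyses is forward reachability over a dependency graph from the modified elements. A minimal sketch (the graph and element names below are illustrative, not taken from the paper):

```python
from collections import deque

# Hypothetical dependency graph: an edge maps a code element to the
# elements that depend on it.
DEPENDENTS = {
    "parse": ["validate", "load"],
    "validate": ["save"],
    "load": ["report"],
    "save": [],
    "report": [],
}

def impact_set(changed):
    """Forward reachability from the changed elements approximates
    the set of elements a change may impact (the evolution slice)."""
    seen, queue = set(changed), deque(changed)
    while queue:
        node = queue.popleft()
        for dep in DEPENDENTS.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

# Changing "parse" potentially impacts everything downstream of it.
assert impact_set({"parse"}) == {"parse", "validate", "load", "save", "report"}
```

The paper's contribution lies in how the modified elements are distinguished and the slice constructed over big, noisy evolution data; the traversal itself stays this simple.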
The paper describes an approach to performing distributed analysis on time series. The approach suggests integrating data mining and ETL technologies and performing primary analysis of time series based on a subset...
ISBN (print): 9781509058662
This study involved the design of a smart neighborhood application to sustain an innovative economy in this 4IR big data era. It comprised a five-part study. Part I investigated the appropriate drivers of the dimensions of the innovative Digital Malaysia concept before theories of digital signal processing, visualization, and appropriate ambient computing could be applied to the design framework of the Integrated Smart Neighborhood in a Smart City in Malaysia. Part II involved a semi-structured interview with two prominent experts. Part III involved a public survey conducted to verify the significant dimensions of Malaysia's KS and the important indicators of KS, and to validate the generalizable measurement model for Malaysia's KS in the innovative Digital Malaysia context. Based on the 5-round Delphi, KS in the Digital Malaysia context was defined. Part V involved the development of tools and devices as a proof of concept for the sustainability and wellbeing of a specific section of the population in the big data era. A smart neighborhood in the context of this study takes into consideration the Knowledge Society (KS) model across the innovative Digital Malaysia KS dimensions: Technology, Education, Governance, Social, and Environment. The framework was verified for goodness of fit using Structural Equation Modeling (SEM), based on the AMOS output. The significant elements related to three main areas: environmental cleanliness, security, and health. The data in the smart neighborhood can be presented via dashboard technology using big data analytics, to be shared for societal well-being.
ISBN (print): 9783319685052; 9783319685045
This work proposes an activity recognition model that focuses on power-consuming activities in the home environment, to help residents modify their behavior. We set up the IoT system with a low number of sensors. The key data for identifying activities come from widely used smart sockets. The system first takes residents' acceptability into consideration when setting up the IoT system, then uses a seamless indoor positioning system to obtain residents' positions to help recognize ongoing activities. Based on ontology, it makes use of domain knowledge about daily activities and builds an activity ontology. The system takes the real home situation into consideration and makes full use of data from both electric and electronic appliances for context awareness. This knowledge helps improve the performance of the data-driven method. Experiments show the system can recognize common activities with high accuracy and has good applicability to real home scenarios.
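The fusion of indoor position with smart-socket readings can be sketched as a small rule-based recognizer (the rooms, appliances, and activity labels below are hypothetical examples, not the paper's ontology):

```python
# Hypothetical rules mapping (room, active appliance) to an activity label.
RULES = {
    ("kitchen", "kettle"): "making tea",
    ("living_room", "tv"): "watching TV",
    ("bedroom", "none"): "sleeping",
}

def active_appliance(socket_watts, threshold=5.0):
    """Pick the appliance whose smart socket draws meaningful power;
    'none' when every socket is idle."""
    drawing = [name for name, w in socket_watts.items() if w > threshold]
    return drawing[0] if drawing else "none"

def recognize(room, socket_watts):
    """Combine indoor position (room) with socket readings to label
    the ongoing activity."""
    return RULES.get((room, active_appliance(socket_watts)), "unknown")

assert recognize("kitchen", {"kettle": 1800.0, "tv": 0.0}) == "making tea"
assert recognize("bedroom", {}) == "sleeping"
```

An ontology-based system generalizes this lookup table: domain knowledge supplies the rules, so the data-driven classifier only has to resolve the cases the rules leave ambiguous.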
ISBN (print): 9783319685052; 9783319685045
QoS-based service selection is very important in mobile Web service computing, because QoS is highly uncertain when providing services for mobile applications. Starting from the uncertain QoS data described, a parallel service selection method based on an adaptive cloud model (PSSM_ACM) is presented for the first time. First, PSSM_ACM employs the cloud model to portray QoS, addressing QoS uncertainty in mobile applications. Then, two kinds of backward QoS cloud generators are introduced to convert big QoS data into the QoS cloud model, together with an adaptive adjustment mechanism for the QoS cloud model. Next, with reference to the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), two service selection algorithms are designed to obtain the optimal service reflecting users' QoS needs. Finally, experiments demonstrate the superiority and efficiency of our approach.
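The TOPSIS step referenced in the abstract ranks alternatives by closeness to an ideal solution. A generic sketch of plain TOPSIS follows (not the paper's cloud-model variant; the two candidate services and weights are invented for illustration):

```python
def topsis(matrix, weights, benefit):
    """Minimal TOPSIS: matrix[i][j] is alternative i's score on criterion j;
    benefit[j] is True when larger is better (cost criteria use False).
    Returns each alternative's closeness to the ideal solution in [0, 1]."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each criterion column, then apply the weights.
    norms = [sum(matrix[i][j] ** 2 for i in range(m)) ** 0.5 for j in range(n)]
    r = [[matrix[i][j] / norms[j] * weights[j] for j in range(n)]
         for i in range(m)]
    # Ideal takes the best value per criterion; worst takes the opposite.
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*r))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*r))]

    def d(row, ref):
        return sum((x - y) ** 2 for x, y in zip(row, ref)) ** 0.5

    return [d(row, worst) / (d(row, ideal) + d(row, worst)) for row in r]

# Two candidate services scored on (throughput, latency); latency is a cost.
scores = topsis([[100.0, 20.0], [80.0, 50.0]],
                weights=[0.5, 0.5], benefit=[True, False])
assert scores[0] > scores[1]  # service 0 dominates on both criteria
```

PSSM_ACM replaces the crisp criterion values with cloud-model characterizations of uncertain QoS before this ranking step.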