Diabetic retinopathy (DR) is a retinal disease caused by diabetes and one of the leading causes of permanent blindness. Clinical screening and severity grading for patients diagnosed with diabetes are therefore warranted. Current deep learning-based DR screening tends to use deep networks such as Deep Convolutional Neural Networks (DCNNs). However, medical datasets usually contain relatively few images, so DCNNs are ill-suited to such small datasets. Light networks have fewer parameters, which small datasets can support, and are also more computationally efficient, making them a better fit for small datasets. This paper therefore proposes a novel light network for automatic DR detection, based on ResNet with attention modules. The attention modules are added to capture subtle features in images, and some convolutional layers are removed to prevent overfitting. Compared with existing DCNN-based DR detection methods, the proposed method has three advantages: it has fewer parameters, it streamlines the processing layers, and it is computationally efficient. Experimental results demonstrate that the proposed network outperforms benchmark methods in accuracy, precision, and recall, reaching 0.806, 0.814, and 0.823, respectively, as well as in time cost.
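The abstract does not detail the architecture, so the following is a minimal PyTorch sketch of the idea it describes: a shortened residual block with a channel-attention module attached. The squeeze-and-excitation-style attention and all layer sizes are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class SEAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed form)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight channels to emphasize subtle features

class LightResBlock(nn.Module):
    """Residual block with attention; fewer conv layers than standard ResNet."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.att = SEAttention(channels)

    def forward(self, x):
        return x + self.att(self.conv(x))

block = LightResBlock(32)
y = block(torch.randn(1, 32, 56, 56))  # same shape out as in
```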
Underwater visual environment perception is an important issue for the autonomous motion and operation of underwater robots. Because water absorbs light and suspended particles scatter it, underwater images have low contrast and blurry edges. In this paper, we propose a method that captures features of one image domain and converts them into another image domain without any paired training samples. A cycle-consistency loss removes the need for paired data during training, so the model can translate from one domain to another without a one-to-one mapping between the source and target domains. To provide a feasible way to deploy real-time GANs on resource-constrained devices, a GAN-compression-based method is proposed that generates images with little distortion at low computational cost. The method can also be applied to many other tasks, such as image enhancement, image colorization, and style transfer.
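As a concrete reference for the unpaired training described above, here is a minimal sketch of a cycle-consistency loss as commonly defined for unpaired image translation. The generator names G (X to Y) and F (Y to X) and the weight lam are illustrative assumptions, not values from the paper.

```python
import torch.nn.functional as nnf

def cycle_consistency_loss(G, F, real_x, real_y, lam=10.0):
    """L1 cycle loss: x -> G(x) -> F(G(x)) should return to x, and vice versa.

    G maps domain X to Y; F maps Y to X. lam weights the cycle term
    (its value here is an assumption).
    """
    rec_x = F(G(real_x))  # X -> Y -> X reconstruction
    rec_y = G(F(real_y))  # Y -> X -> Y reconstruction
    return lam * (nnf.l1_loss(rec_x, real_x) + nnf.l1_loss(rec_y, real_y))
```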
With the maturity of cloud-native technologies such as microservices, Docker, and Kubernetes, many enterprises use cloud-native methods to build applications and deploy them on the cloud. Existing DevOps platforms are mostly based on the traditional continuous-integration tool Jenkins. Due to its monolithic architecture and its disk-storage and memory footprint, Jenkins cannot fully exploit the elastic scaling and flexible expansion of the cloud. To better support continuous integration and continuous deployment of cloud-native applications, this paper studies a cloud-native CI/CD platform that practices the GitOps model, building a CI/CD platform with greater agility, elasticity, and portability to speed up application development and deployment.
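The abstract does not describe the platform internals; the sketch below only illustrates the GitOps model it practices, in which cluster state is periodically reconciled against a Git repository. The repository path, manifest directory, and polling interval are hypothetical.

```python
import subprocess
import time

REPO_DIR = "/opt/gitops/app-config"   # hypothetical local clone of the config repo
MANIFESTS = "manifests/"              # hypothetical directory of Kubernetes YAML

def reconcile_once():
    """Pull the desired state from Git and apply it to the cluster."""
    subprocess.run(["git", "-C", REPO_DIR, "pull", "--ff-only"], check=True)
    # 'kubectl apply' is idempotent: unchanged manifests are a no-op.
    subprocess.run(["kubectl", "apply", "-f", MANIFESTS], cwd=REPO_DIR, check=True)

if __name__ == "__main__":
    while True:          # real GitOps operators also watch for live-state drift
        reconcile_once()
        time.sleep(60)   # polling interval; production tools also use webhooks
```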
Particle Swarm Optimization (PSO) is a metaheuristic algorithm, but its performance degrades on large-scale optimization problems: it can hardly balance exploration and exploitation effectively. To solve this issue, this article proposes an improved structure that decouples these two behaviors, allowing different components to explore and exploit independently and simultaneously. On this basis, a method combining sparse particle distribution with local crowding estimation is proposed; it adjusts the diversity among samples and updates particles during optimization via adaptive subgroup-size adjustment. To verify the feasibility of this decoupled adaptive PSO algorithm, the article first establishes the convergence of the algorithm through theoretical analysis. Experimentally, comprehensive tests are carried out on the CEC 2010 and CEC 2013 large-scale optimization benchmarks, and the results confirm the effectiveness of the proposed strategy.
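The paper's exact update rules are not given in the abstract; the following is a minimal sketch of the decoupling idea under stated assumptions: the swarm is split into an exploration subgroup (high inertia) and an exploitation subgroup (low inertia), with a fixed split fraction standing in for the paper's adaptive subgroup-size adjustment.

```python
import numpy as np

def decoupled_pso(f, dim, n=40, iters=200, lo=-5.0, hi=5.0, explore_frac=0.5, seed=0):
    """Sketch of PSO with exploration/exploitation decoupled into two subgroups.

    The first `explore_frac` of the swarm uses high inertia (exploration);
    the rest uses low inertia (exploitation). The paper adapts the split
    online; here it is fixed for brevity.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    w = np.where(np.arange(n) < int(explore_frac * n), 0.9, 0.4)  # per-particle inertia
    c1, c2 = 1.5, 1.5
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w[:, None] * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# usage: minimize the sphere function in 10 dimensions
best_x, best_f = decoupled_pso(lambda z: float(np.sum(z * z)), dim=10)
```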
The total mileage of cable tunnels in China has been increasing in recent years, and maintenance and inspection costs have risen with it. Large-scale deployment of detectors in cable tunnels is an efficient way to reduce maintenance costs and improve inspection efficiency, but massive detector deployment poses new challenges for power communication networks. This study designs an online cable tunnel monitoring system based on 5G network slicing. Taking advantage of 5G characteristics such as high bandwidth, low latency, and wide connectivity, data collected by monitoring terminals and inspection robots are transmitted to the control side in real time. With this online monitoring system, accurate tunnel environment monitoring, cable fault identification, and elimination of potential hazards are realized. The system provides a strong guarantee for the operation of underground transmission lines and is significant for smart grid construction.
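The transport between terminals and the control side is not specified beyond running over a 5G slice; purely as an illustration, the sketch below shows a monitoring terminal pushing JSON sensor readings over UDP. The endpoint address, sensor identifier, and payload fields are hypothetical.

```python
import json
import socket
import time

CONTROL_HOST, CONTROL_PORT = "10.0.0.1", 9000  # hypothetical control-side endpoint

def send_reading(sock, sensor_id, temperature_c, gas_ppm):
    """Serialize one environment reading and push it to the control side."""
    payload = {
        "sensor": sensor_id,
        "ts": time.time(),
        "temperature_c": temperature_c,
        "gas_ppm": gas_ppm,
    }
    sock.sendto(json.dumps(payload).encode(), (CONTROL_HOST, CONTROL_PORT))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_reading(sock, "tunnel-07/node-3", temperature_c=31.2, gas_ppm=4.8)
```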
To enhance risk-control capability in the field of Internet finance, safeguard the sustainable development of the Internet finance industry, and reduce the losses that unexpected personal-credit events bring to Internet finance platforms, this article conducts early-warning research on credit risk in Internet finance based on a CNN-LSTM model. The model extracts deep features while predicting time-series patterns from users' behavioral features, in order to gauge each user's creditworthiness, evaluate and predict credit risk, and issue timely risk warnings that help prevent defaults such as anticipated personal-loan defaults and overdue repayments. The results show that the CNN-LSTM model provides a good financial-risk early-warning effect and can effectively exploit information resources to reduce Internet finance risk.
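The abstract names the CNN-LSTM combination but not its hyperparameters; the following is a minimal PyTorch sketch of one plausible arrangement, with a 1-D convolution extracting local patterns from the behavior sequence and an LSTM modeling its temporal dependencies. All feature counts and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class CNNLSTMRisk(nn.Module):
    """Sketch: 1-D CNN extracts local patterns from a user's behavior sequence,
    an LSTM models temporal dependencies, and a sigmoid head scores default
    risk. Sizes are illustrative, not taken from the paper."""
    def __init__(self, n_features=16, conv_channels=32, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, conv_channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(conv_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, n_features)
        z = self.conv(x.transpose(1, 2))  # Conv1d wants (batch, channels, time)
        out, _ = self.lstm(z.transpose(1, 2))
        return torch.sigmoid(self.head(out[:, -1]))  # risk score in (0, 1)

model = CNNLSTMRisk()
risk = model(torch.randn(8, 30, 16))  # 8 users, 30 time steps, 16 features
```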
Anomaly detection and fault localization are key functions in telecom network management systems. Network devices (i.e., entities) such as routers, switches, and transmitters are typically monitored via multivariate time series, and detecting anomalies in them is critical for an entity's service-quality management. Nevertheless, given the complexity of multivariate time series, anomaly detection remains challenging. Both functions reduce to the same problem: anomaly detection for multivariate time series. We propose Taylor features, which are filtered using a boosting algorithm and then transformed into a hierarchical equivalent representation. We further use a stochastic RNN to capture the temporal dependencies of sequences and FCM to model the relationships among variables. Combining these components yields the TaylorBoost model. Experiments are carried out on a new server-machine dataset from an Internet company; TaylorBoost outperforms the baseline methods with an overall precision of 0.90.
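The abstract does not define "Taylor features" precisely; one plausible minimal reading, sketched below, builds per-timestep features from a series' value and finite-difference estimates of its first derivatives, in the spirit of a local Taylor expansion, which a boosting-based selector could then filter. This interpretation is an assumption, not the paper's code.

```python
import numpy as np

def taylor_features(series, order=2):
    """Per-timestep Taylor-style features of a univariate series:
    the value plus finite-difference estimates of its first `order`
    derivatives (an illustrative reading of 'Taylor features')."""
    feats = [series]
    d = series
    for _ in range(order):
        d = np.gradient(d)   # successive finite-difference derivatives
        feats.append(d)
    return np.stack(feats, axis=1)  # shape: (T, order + 1)

X = taylor_features(np.sin(np.linspace(0.0, 6.0, 200)))
print(X.shape)  # (200, 3): value, 1st, and 2nd derivative estimates
```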
The estimation of F0, the fundamental frequency, is one of the most vital preprocessing steps in speech and signal processing research. Nevertheless, for small-sample-size problems, both traditional methods and machine learning-based methods have limitations. This paper proposes a method for F0 estimation that integrates FFT, band-pass filtering (BPF), and STFT. In this method, the spectrum peaks are used, on the one hand, to set the frequency band of the band-pass filter and, on the other hand, to calculate the proper frame length for the STFT analysis, which greatly improves F0 estimation performance. The results show that, compared with the auto-correlation and STFT methods, the RMSE of the proposed LSTFT method is reduced by 62% and 78%, respectively. This study indicates that F0 estimation with the LSTFT method is more accurate than traditional methods: LSTFT makes full use of the spectrum of the whole signal and makes short-time analyses such as STFT more dynamic.
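As a rough illustration of the pipeline described above, the sketch below uses an FFT peak to set both the band-pass range and the STFT frame length. The band width and frames-per-period choices are assumptions, since the paper's exact parameters are not given here.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, stft

def lstft_f0(x, fs):
    """Sketch of the described pipeline: an FFT peak sets the band-pass
    range and the STFT frame length; exact band widths are assumptions."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    peak = freqs[np.argmax(spec[1:]) + 1]          # dominant peak, skipping DC
    lo_f, hi_f = 0.5 * peak, 1.5 * peak            # assumed band around the peak
    sos = butter(4, [lo_f, hi_f], btype="bandpass", fs=fs, output="sos")
    y = sosfiltfilt(sos, x)
    nperseg = int(4 * fs / peak)                   # ~4 periods per frame (assumed)
    f, t, Z = stft(y, fs=fs, nperseg=nperseg)
    return f[np.argmax(np.abs(Z), axis=0)]         # per-frame F0 track

fs = 8000
t = np.arange(fs) / fs
print(lstft_f0(np.sin(2 * np.pi * 220 * t), fs).mean())  # close to 220 Hz
```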
ISBN (print): 9784431543930; 9784431543947
This book contains the joint proceedings of the Winter School of Hakodate (WSH) 2011, held in Hakodate, Japan, March 15-16, 2011, and the 6th International Workshop on Natural Computing (6th IWNC), held in Tokyo, Japan, March 28-30, 2012, both organized by the Special Interest Group on Natural Computing (SIG-NAC) of the Japanese Society for Artificial Intelligence (JSAI). This volume compiles refereed contributions on various aspects of natural computing, ranging from computing with slime mold, artificial chemistry, eco-physics, and synthetic biology to computational aesthetics.
ISBN (digital): 9784431541066
ISBN (print): 9784431541059
Computation should be a good blend of theory and practice, and researchers in the field should create algorithms to address real-world problems, putting equal weight on analysis and implementation. Experimentation and simulation can be viewed as yielding refined theories or improved applications. The Workshop on Computation: Theory and Practice (WCTP) 2011 was the first workshop devoted to theoretical and practical approaches to computation organized jointly by the Tokyo Institute of Technology, the Institute of Scientific and Industrial Research at Osaka University, the University of the Philippines Diliman, and De La Salle University, Manila. The aim of the workshop was to present the latest developments by theoreticians and practitioners in academia and industry working to address computational problems that can directly impact the way we live in society. This book comprises the refereed proceedings of WCTP-2011, held in Quezon City, the Philippines, in September 2011. The 16 carefully reviewed and revised full papers presented here deal with biologically inspired computational modeling, programming-language theory, advanced studies in networking, and empathic computing.