Prior literature has asserted that higher supply chain visibility, which concerns better information flow and more accurate determination of demand levels within the supply chain at a given time, improves decision-making efficiency. Decision-making efficiency, measured here by decision-making delay, in turn depends on the information processing structure, and different production control systems have different information processing structures. This paper considers the relation between the production control system (MRP or JIT) and the organizational structure of information processing that determines the decision-making delay. Comparing MRP and JIT across different information processing structures and decision efficiencies, we find that JIT is suitable for small-lot-size, large-variety production and MRP for large-lot-size, small-variety production. In addition, optimizing the organizational structure of information processing reduces the decision-making delay.
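As a rough, purely illustrative model of the claim that decision-making delay depends on the information processing structure, the sketch below charges each organizational level a processing and a transmission cost; the linear form and the parameter values are assumptions, not results from the paper.

```python
def decision_delay(levels, t_process, t_transmit):
    """Toy model of decision-making delay (assumed, for illustration).

    In a hierarchical (MRP-like) structure, a demand signal passes
    through several organizational levels before a decision is made;
    in a local (JIT-like) structure it is handled at the shop floor.
    Delay grows with the number of levels traversed.
    """
    return levels * t_process + (levels - 1) * t_transmit

# e.g. a 3-level MRP hierarchy vs. single-level JIT control:
# decision_delay(3, 1.0, 0.5) == 4.0  vs.  decision_delay(1, 1.0, 0.5) == 1.0
```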
Clustering has been widely used in data analysis. A majority of existing clustering approaches assume that the number of clusters is given in advance. Recently, a novel clustering framework was proposed that can automatically learn the number of clusters from the training data. Building on this work, we propose a nonsmooth penalized clustering model via l_p (0 < p < 1) regularized sparse regression. In particular, the model is formulated as a nonsmooth nonconvex optimization problem that is based on over-parameterization and uses an l_p-norm regularizer to control the trade-off between model fit and the number of clusters. We prove theoretically that the new model guarantees the sparseness of cluster centers. To make the model practical, we adopt an easy-to-compute criterion and a strategy that narrows the search interval of cross-validation. To address the nonsmoothness and nonconvexity of the cost function, we propose a simple smoothing trust region algorithm and present its convergence and computational complexity analysis. Numerical studies on both simulated and real data sets support our theoretical results and demonstrate the advantages of the new method.
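The following is a minimal sketch of the kind of over-parameterized objective the abstract describes: each sample gets its own center, and an l_p fusion penalty on pairwise center differences drives centers to merge, so the number of distinct centers is learned from the data. The exact penalty and smoothing in the paper may differ; the epsilon-smoothing here is an assumption.

```python
import numpy as np

def lp_cluster_objective(X, M, lam, p=0.5, eps=1e-6):
    """Over-parameterized clustering loss (hypothetical sketch).

    X: (n, d) samples; M: (n, d) per-sample cluster centers.
    lam trades off model fit against the number of distinct centers;
    eps smooths the nonsmooth l_p term near zero.
    """
    fit = 0.5 * np.sum((X - M) ** 2)               # model-fit term
    diffs = M[:, None, :] - M[None, :, :]          # pairwise center gaps
    gaps = np.sqrt(np.sum(diffs ** 2, axis=-1) + eps)
    penalty = np.sum(np.triu(gaps ** p, k=1))      # l_p fusion penalty
    return fit + lam * penalty
```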
In recent years, with rapid economic growth, electric power consumption has increased continuously. Moreover, to adapt to new supply and demand relationships, the industrial structure has changed dramatically, and electricity consumption in different industries has changed accordingly; this is especially true in Hebei Province, which has a large share of traditional industries. Hence, studying how electricity consumption changes across industries, and the correlations among them, has great practical significance for monitoring changes in the macroeconomic structure and supporting decision-making in power companies. This paper selects a typical county in Hebei Province and uses complex network methods and sensitivity coefficients to analyze the correlation of electricity load among industries. We find that the industrial structure in Bazhou County shows a strong relationship with the traditional industry classification: correlations among industries within the same traditional industry are stronger and more stable, while those across different traditional industries are weaker and more changeable.
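As a hedged illustration of the complex-network step, one can correlate per-industry load series and connect industries whose correlation exceeds a threshold. The use of Pearson correlation and the threshold value below are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
import networkx as nx

def load_correlation_network(loads, labels, threshold=0.7):
    """Build an industry electricity-load correlation network (sketch).

    loads: (n_industries, n_timesteps) consumption series;
    labels: industry names. Edges connect industries whose Pearson
    correlation exceeds `threshold` (assumed value).
    """
    corr = np.corrcoef(loads)
    g = nx.Graph()
    g.add_nodes_from(labels)
    n = len(labels)
    for i in range(n):
        for j in range(i + 1, n):
            if corr[i, j] > threshold:
                g.add_edge(labels[i], labels[j], weight=corr[i, j])
    return g
```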
Performance appraisal has always been an important research topic in human resource management, and a reasonable performance appraisal plan lays a solid foundation for the development of an enterprise. Especially as globalization and technology advance, enterprises face new challenges in performance appraisal as they pursue fast-changing strategic goals and an increasing number of cross-functional tasks. How to help employees absorb new knowledge efficiently and continuously has become an urgent problem for enterprises. In this paper, we propose an automatic method that generates multiple-choice questions by utilizing the relations between terminologies. A graphical model is used to extract core concepts from different corpora, while word embedding technology is used to capture the relevant relations. Experimental results demonstrate that the proposed question generation method outperforms the traditional manual method in both efficiency and the confusion degree of the generated questions.
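One concrete place where embedding-based relations enter question generation is distractor selection: terms whose embeddings lie closest to the correct answer make plausible but wrong options. The sketch below is illustrative only; the function name and data layout are assumptions, not the paper's implementation.

```python
import numpy as np

def pick_distractors(answer, vocab, emb, k=3):
    """Choose distractors for a multiple-choice question (sketch).

    emb: dict mapping each term to its embedding vector. Terms whose
    embeddings have the highest cosine similarity to the correct
    answer's embedding are returned as plausible wrong options.
    """
    a = emb[answer]
    def cos(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
    scored = [(t, cos(a, emb[t])) for t in vocab if t != answer]
    scored.sort(key=lambda x: -x[1])
    return [t for t, _ in scored[:k]]
```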
ISBN (print): 9781509060672
This paper proposes a deep architecture for saliency detection that fuses pixel-level and superpixel-level predictions. Different from previous methods, which either make dense pixel-level predictions with complex networks or region-level predictions for each region with fully connected layers, this paper investigates an elegant route to make two-level predictions from the same simple fully convolutional network via a seamless transformation. In the transformation module, we integrate low-level features to model the similarities between pixels and superpixels as well as among superpixels. The pixel-level saliency map detects and highlights the salient object well, and the superpixel-level saliency map preserves sharp boundaries in a complementary way. A shallow fusion net learns to fuse the two saliency maps, followed by a CRF post-refinement module. Experiments on four benchmark data sets demonstrate that our method performs favorably against state-of-the-art methods.
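As a rough sketch of what a shallow fusion net over the two saliency maps could look like (layer widths and kernel sizes are assumptions; the CRF post-refinement step is omitted):

```python
import torch
import torch.nn as nn

class SaliencyFusion(nn.Module):
    """Shallow fusion net (illustrative sketch; sizes are assumptions).

    Takes the pixel-level and superpixel-level saliency maps described
    above (each of shape Nx1xHxW, values in [0, 1]) and learns to fuse
    them into a single saliency map.
    """
    def __init__(self):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, pixel_map, superpixel_map):
        # Stack the two complementary maps channel-wise, then fuse.
        return self.fuse(torch.cat([pixel_map, superpixel_map], dim=1))
```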
It is noteworthy that the number of scientific researchers in China has grown rapidly. Meanwhile, the strict restriction on the total number and the position structure of researchers has exert...
Feature noise, namely noise on the inputs, is a long-standing plague for the support vector machine (SVM). The conventional SVM with the hinge loss (C-SVM) is sparse but sensitive to feature noise. In contrast, the pinball loss SVM (pin-SVM) enjoys noise robustness but loses sparsity completely. To bridge the gap between C-SVM and pin-SVM, we propose the truncated pinball loss SVM (pin̄-SVM) in this paper. It provides a flexible framework for trading off sparsity against feature-noise insensitivity. Theoretical properties, including the Bayes rule, a misclassification error bound, sparsity, and noise insensitivity, are discussed in depth. To train pin̄-SVM, the concave-convex procedure (CCCP) is used to handle the nonconvexity, and a decomposition method is used to solve the subproblem of each CCCP iteration. Accordingly, we modify the popular solver LIBSVM to conduct experiments, and numerical results validate the properties of pin̄-SVM on synthetic and real-world data sets.
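To make the trade-off concrete, one common way to write a truncated pinball loss is sketched below: hinge-like for margin violations, a pinball slope on a bounded window, and flat beyond it; the flat region is what restores sparsity. The exact parameterization in the paper may differ, so treat tau and s here as assumed knobs.

```python
def truncated_pinball_loss(u, tau=0.5, s=1.0):
    """Truncated pinball loss (sketch; parameterization is an assumption).

    u = 1 - y*f(x) is the margin variable. The loss behaves like the
    hinge loss for u >= 0, like the pinball loss -tau*u on [-s, 0),
    and is constant (truncated at tau*s) for u < -s, so points deep on
    the correct side contribute no gradient and sparsity is preserved.
    """
    if u >= 0:
        return u
    if u >= -s:
        return -tau * u
    return tau * s
```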
In the era of big data, multi-source heterogeneous aggregation data are used more and more widely. If the quality of the aggregation data is uneven, it causes considerable trouble for subsequent data mining and leads to inaccurate decision-making. This paper proposes a comprehensive quality evaluation method for aggregation data based on factor analysis and multivariate analysis of variance, from the perspective of multivariate statistical inference. A case study shows that the proposed method is feasible and well suited to the long-term evaluation of the quality of multi-source heterogeneous aggregation data.
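A hedged sketch of this two-stage pipeline, assuming a pandas DataFrame with per-batch quality indicators and a source column (column names and factor count are illustrative, not from the paper):

```python
from sklearn.decomposition import FactorAnalysis
from statsmodels.multivariate.manova import MANOVA

def evaluate_quality(df, quality_cols, n_factors=2):
    """Two-stage sketch: factor analysis, then MANOVA across sources.

    df: pandas DataFrame with one row per data batch, columns of raw
    quality indicators (e.g. completeness, consistency, timeliness)
    and a 'source' column naming the originating system.
    """
    # Stage 1: compress correlated quality indicators into a few factors.
    scores = FactorAnalysis(n_components=n_factors).fit_transform(df[quality_cols])
    for i in range(n_factors):
        df[f"factor{i + 1}"] = scores[:, i]
    # Stage 2: test whether factor scores differ significantly by source.
    formula = " + ".join(f"factor{i + 1}" for i in range(n_factors)) + " ~ source"
    return MANOVA.from_formula(formula, data=df).mv_test()
```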
Although artificial intelligence (AI) is currently one of the most interesting areas in scientific research, the potential threats posed by emerging AI systems remain a source of persistent controversy. To address the...
The distance metric plays a significant role in machine learning methods (classification, clustering, etc.), especially in k-nearest neighbor (kNN) classification, where Euclidean distances are computed to decide the labels of unknown points. But the Euclidean distance ignores statistical structure that may help to better measure the similarity of different inputs. In this paper, we construct a unified framework, comprising two eigenvalue-related methods, to learn a data-dependent metric. Both methods aim to maximize the difference between inter-class and intra-class distances, with the optimization considered from a global view and a local view, respectively. Different from previous work in metric learning, our methods directly seek an equilibrium between inter-class and intra-class distances, and the linear transformation decomposed from the metric is optimized directly instead of the metric itself. We can then effectively adjust the data distribution in the transformed space and construct regions favorable for kNN classification. The problems can be solved simply by eigenvalue decomposition, which is much faster than semidefinite programming. After selecting the top eigenvalues, the original data can be projected into a low-dimensional space, where insignificant information is mitigated or eliminated to make classification more efficient. Our methods therefore perform metric learning and dimensionality reduction simultaneously. Numerical experiments from different points of view verify that our methods improve the accuracy of kNN classification and perform dimensionality reduction with competitive performance.
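A global-view sketch of the eigenvalue route: build within-class and between-class scatter matrices, eigendecompose their difference, and keep the top eigenvectors as the linear transformation. The paper's exact objective may differ; this formulation is an assumption for illustration.

```python
import numpy as np

def eig_metric_transform(X, y, dim):
    """Eigenvalue-based metric learning plus dimension reduction (sketch).

    Takes the top eigenvectors of Sb - Sw (between-class minus
    within-class scatter) so that in the projected space inter-class
    distances dominate intra-class ones; solving this needs only one
    eigendecomposition, not semidefinite programming.
    """
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)                 # within-class scatter
        diff = (mc - mean).reshape(-1, 1)
        Sb += len(Xc) * diff @ diff.T                 # between-class scatter
    vals, vecs = np.linalg.eigh(Sb - Sw)
    W = vecs[:, np.argsort(vals)[::-1][:dim]]         # top eigenvectors
    return X @ W, W
```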