We consider the problem of real-life evacuation of people at sea. The primary disaster-response goal is to minimize the time to save all the people during the evacuation operation, taking into account different groups at risk (children, women, seniors, etc.) and the evacuation processing time (including the routing time), subject to a budget constraint. Different evacuation tools (e.g., lifeboats, salvage ships, sea robots, helicopters) are available for rescuing groups at risk to safe points (e.g., hospitals, other ships, police stations). The evacuation processing time of a group at risk depends on the group and the evacuation tool used. The secondary goal is to minimize the cost among all alternative optimal solutions for the primary goal. We present a new mathematical rescue-evacuation model and design a fast solution method for real-time emergency response for different population groups and different evacuation tools, based on iterative application of a modification of the scheduling algorithm introduced by Leung and Ng (Eur J Oper Res 260:507-513, 2017).
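The abstract casts the problem as scheduling groups (jobs) on tools (unrelated machines) with tool-dependent processing times. The sketch below is only an illustrative greedy for that view, not the paper's modified Leung-Ng method; all names (groups, tools, process_time) are hypothetical.

```python
# Greedy sketch of the unrelated-machines view of the rescue problem: each
# "machine" is an evacuation tool, each "job" a group at risk, and
# process_time[g][t] is the tool-dependent evacuation time.

def greedy_assign(groups, tools, process_time):
    """Assign each group to the tool that would finish it earliest."""
    finish = {t: 0.0 for t in tools}        # current completion time per tool
    plan = {}
    # Longest-processing-time-first ordering tends to reduce the makespan.
    for g in sorted(groups, key=lambda g: -min(process_time[g][t] for t in tools)):
        best = min(tools, key=lambda t: finish[t] + process_time[g][t])
        plan[g] = best
        finish[best] += process_time[g][best]
    return plan, max(finish.values())       # assignment and resulting makespan

if __name__ == "__main__":
    groups = ["children", "seniors", "adults"]
    tools = ["lifeboat", "helicopter"]
    process_time = {
        "children": {"lifeboat": 40, "helicopter": 15},
        "seniors":  {"lifeboat": 50, "helicopter": 20},
        "adults":   {"lifeboat": 30, "helicopter": 25},
    }
    print(greedy_assign(groups, tools, process_time))
```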
The development of new technologies such as the Internet and cloud computing has placed high demands on the storage and management of big data. At the same time, new applications in the cloud computing environment pose new requirements for cloud storage systems, such as strong scalability and high concurrency. Existing NoSQL database systems are built on virtualized cloud resources and support the dynamic addition and deletion of virtual nodes. Building on a study of phase space reconstruction, we analyze why traffic flow should be treated as a chaotic time series. We also study offline data migration methods based on load balancing. First, a data migration model is proposed, the factors that affect migration performance are analyzed, and optimization objectives for migration are derived from them. Then the system design for data migration is presented, and the migration objectives are pursued from two directions: optimizing at the data-source layer, and proposing the LBS method to convert data sources into distributed data sources, which ensures a balanced distribution of data and meets the scalability requirements of the system. This paper applies cloud computing technology and phase space reconstruction to load-balancing scheduling algorithms to promote their development.
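The abstract does not spell out the LBS method, so the following is only a generic sketch of load-balanced migration planning under one simple assumption: nodes above the mean load ship data to nodes below it, which achieves balance with minimal total moved volume. All names (node labels, the move-tuple format) are hypothetical.

```python
# Generic migration-planning sketch: pair overloaded nodes with underloaded
# ones until every node sits at the mean load.

def plan_migration(loads):
    """Return (src, dst, amount) moves that equalize node loads."""
    target = sum(loads.values()) / len(loads)
    overs  = [(n, l - target) for n, l in loads.items() if l > target]
    unders = [(n, target - l) for n, l in loads.items() if l < target]
    moves = []
    while overs and unders:
        (src, surplus), (dst, deficit) = overs[-1], unders[-1]
        amount = min(surplus, deficit)
        moves.append((src, dst, amount))
        overs[-1]  = (src, surplus - amount)
        unders[-1] = (dst, deficit - amount)
        if overs[-1][1] <= 1e-9:
            overs.pop()
        if unders[-1][1] <= 1e-9:
            unders.pop()
    return moves

# node1 holds 80 GB, node2 20 GB, node3 50 GB -> move 30 GB node1 -> node2.
print(plan_migration({"node1": 80, "node2": 20, "node3": 50}))
```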
Quality of service (QoS) provisioning in communication networks depends heavily on packet scheduling algorithms. In a well-known group of scheduling algorithms, rate-proportional servers (RPS), the service rate of each session is used to isolate service disciplines. Some types of services, e.g., video streaming and interactive gaming, not only generate bursty traffic but also need a minimum delay bound to obtain acceptable QoS. Unfortunately, such applications may suffer unacceptable delay when RPS scheduling algorithms are applied. In this paper, we propose a scheduling algorithm that involves burstiness, in addition to the service rate, in isolating service disciplines in the scheduler. The proposed algorithm belongs to the fluid-flow paradigm and assigns each session a time-varying weight that indicates its instantaneous service rate. The arrival constraint is assumed to be a leaky bucket, and our algorithm tries to provide a service discipline similar to the arrival constraint. We evaluate the algorithm by measuring packet delay in a simulation in which various kinds of traffic are scheduled by the proposed algorithm. Compared with a well-known RPS scheduling algorithm, the simulation results show that the average, maximum, and variance of delay are more controllable in our algorithm by adjusting the parameters of each session.
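The paper's weight-update rule is not given in the abstract, so this sketch only shows the (sigma, rho) leaky-bucket arrival constraint it builds on: a session may send at most sigma + rho * t bits over any interval of length t. A token bucket is the standard way to check conformance with that envelope; the class name and the sample values are hypothetical.

```python
# Conformance check against a (sigma, rho) leaky-bucket arrival envelope.

class LeakyBucket:
    def __init__(self, sigma, rho):
        self.sigma, self.rho = sigma, rho   # burst size (bits), sustained rate (bits/s)
        self.tokens, self.last = sigma, 0.0

    def conforms(self, arrival_time, packet_bits):
        """True iff the packet fits the (sigma, rho) envelope."""
        # Refill tokens at rate rho, capped at the bucket depth sigma.
        self.tokens = min(self.sigma,
                          self.tokens + self.rho * (arrival_time - self.last))
        self.last = arrival_time
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True
        return False

bucket = LeakyBucket(sigma=12_000, rho=1_000_000)   # 12 kbit burst, 1 Mbit/s
for t, size in [(0.0, 8_000), (0.001, 8_000), (0.002, 8_000)]:
    print(t, bucket.conforms(t, size))   # the back-to-back burst is clipped
```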
Sufficient public parking lots (PLs) are essential for the development of sustainable cities. Factors such as location, accessibility, safety, and environmental effects must be considered to ensure the viability of PLs. New technologies such as intelligent parking systems, electric vehicle (EV) charging stations (CSs), and green infrastructure make PLs more sustainable and efficient. In addition to providing parking spaces for ordinary cars (OCs), PLs provide charging services for EVs. After charging is complete, an EV can be moved to another spot in the PL so that its charger can serve additional EVs. This problem motivates the parking-scheduling optimization process presented in this paper. The proposed process minimizes the number of required chargers, subject to providing the requested charging service and parking space for all EVs and OCs. The required parking space is determined from the available databases and the simultaneous presence of vehicles in the PL. Statistical simulations produce different scenarios of vehicles in the PL. The findings demonstrate that the suggested approach enhances the utilization of EV charging infrastructure in PLs, addresses the issue of random parking in public places, and determines the parking routine.
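A minimal sketch of the core counting step, assuming each EV's charging session is known as a (start, end) interval: with free relocation after charging, the number of chargers needed is the peak number of sessions in progress at once, found by a sweep over start/end events. The paper's full model also covers parking-space constraints and simulated arrival scenarios, which this ignores; the intervals below are hypothetical.

```python
# Minimum chargers = peak concurrency of the charging intervals.

def min_chargers(sessions):
    """Peak concurrency of (start, end) charging intervals."""
    events = []
    for start, end in sessions:
        events.append((start, +1))   # charger needed
        events.append((end, -1))     # charger freed
    events.sort()                    # ends sort before starts at equal times
    active = peak = 0
    for _, delta in events:
        active += delta
        peak = max(peak, active)
    return peak

# Three EVs, at most two charging simultaneously -> 2 chargers suffice.
print(min_chargers([(8.0, 10.0), (9.0, 11.5), (10.5, 12.0)]))   # -> 2
```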
An efficient scheduling algorithm (a station burst plan) for demand-assigned time-division multiple-access (TDMA) satellite network systems is introduced. The total demand for transmitting data through a transponder may exceed the available bit-rate capacity, and the system's scheduler wishes to utilize the system with a minimum of changes to slot allocations while maximizing throughput. By implementing such a burst-plan algorithm, transmission of all demanded data traffic can be completed with minimum unused resources (idle slots). The underlying ideas of the algorithm are that jobs with shorter remaining processing times should have higher priority and that as many jobs as possible are processed at a time. The algorithm is particularly useful for deriving smooth burst plans for a satellite system with a large number of ground stations.
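A minimal sketch of the two stated ideas, assuming a frame of `capacity` slots and per-station remaining demands measured in slots: each frame, the stations with the shortest remaining demand get slots first, which also lets many small jobs finish together. The real burst-plan algorithm additionally minimizes slot-allocation changes between frames, which this sketch ignores; station names and numbers are hypothetical.

```python
# Frame-by-frame slot allocation with shortest-remaining-demand-first priority.

def burst_plan(demands, capacity):
    """Yield per-frame slot allocations until all demand is served."""
    remaining = dict(demands)
    while any(remaining.values()):
        alloc, free = {}, capacity
        # Stations with less left to send get served first.
        for st in sorted(remaining, key=lambda s: remaining[s]):
            if remaining[st] == 0 or free == 0:
                continue
            give = min(remaining[st], free)
            alloc[st], free = give, free - give
            remaining[st] -= give
        yield alloc

for frame, alloc in enumerate(burst_plan({"A": 3, "B": 1, "C": 5}, capacity=4)):
    print(frame, alloc)   # B and A clear out in frame 0; C drains afterwards
```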
Hadoop is a popular framework for processing growing volumes of data across clusters of computers, and it has achieved great success in both industry and academic research. Although Hadoop has powerful batch-processing capabilities, it cannot support real-time services such as online payment or monitoring of sensor data. These real-time services share strict deadlines: a service response after the deadline is considered useless. Current research on time-constrained scheduling algorithms generally aims at shortening the completion time rather than guaranteeing a specific latency for real-time services. In this paper, we study the deadline-constrained scheduling problem on Hadoop, where service requests arrive randomly and no prior information is available. A maximum urgency scheduling (MUS) algorithm is proposed and then implemented as a pluggable scheduler on Hadoop. This algorithm can be applied in heterogeneous environments with low computational complexity. Experiments indicate that the MUS algorithm maximizes the number of jobs meeting their deadlines while maintaining fairness among different types of jobs.
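The abstract does not define MUS's urgency metric, so this sketch assumes a common choice: urgency is highest for the job with the least slack (time to deadline minus remaining work), and jobs that can no longer meet their deadline are dropped as useless. The job records and numbers are hypothetical.

```python
# Least-slack-first selection as a stand-in for the paper's urgency metric.

def pick_most_urgent(jobs, now):
    """jobs: list of dicts with 'name', 'deadline', 'remaining' (seconds)."""
    def slack(job):
        return job["deadline"] - now - job["remaining"]
    # A job past its feasibility point cannot meet its deadline: skip it,
    # matching the abstract's "response after the deadline is useless".
    feasible = [j for j in jobs if slack(j) >= 0]
    if not feasible:
        return None
    return min(feasible, key=slack)   # max urgency == min slack

jobs = [
    {"name": "online-payment", "deadline": 10.0, "remaining": 4.0},
    {"name": "sensor-monitor", "deadline": 6.0,  "remaining": 3.0},
]
print(pick_most_urgent(jobs, now=0.0)["name"])   # -> "sensor-monitor"
```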
In this paper, we present a priority scheduling algorithm for ATM switches with multi-class output buffers in which the service rate of each class buffer is dynamically adjusted. The service rate is computed periodically by a control scheme. We derive the design formulas of the control scheme to ensure that each class buffer occupancy converges to its desired operating point, which is related to the QoS requirement. Moreover, through the dynamic service-rate control in the proposed scheduling algorithm, the available channel capacity can be estimated exactly; it may be used for rate control of ABR traffic and call admission control of the other real-time traffic (CBR, VBR, etc.).
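The control-design formulas themselves are not reproduced in the abstract, so the following is only a generic sketch of the idea: each class buffer's service rate is nudged periodically in proportion to the gap between its occupancy and its QoS-derived setpoint, and whatever capacity the classes do not claim is reported as available (e.g., for ABR). The gain K and all numbers are hypothetical.

```python
# One periodic update of per-class service rates, proportional-control style.

def control_step(rates, occupancy, setpoints, capacity, K=0.1):
    """Adjust per-class service rates (cells/s) toward occupancy setpoints."""
    for c in rates:
        # Occupancy above setpoint -> serve faster; below -> serve slower.
        rates[c] = max(0.0, rates[c] + K * (occupancy[c] - setpoints[c]))
    # Scale down if the classes together ask for more than the link offers.
    total = sum(rates.values())
    if total > capacity:
        for c in rates:
            rates[c] *= capacity / total
    spare = max(0.0, capacity - sum(rates.values()))   # estimate for ABR
    return rates, spare

rates = {"CBR": 40_000.0, "VBR": 30_000.0}
occupancy = {"CBR": 120, "VBR": 80}     # cells queued per class
setpoints = {"CBR": 100, "VBR": 100}    # desired operating points
print(control_step(rates, occupancy, setpoints, capacity=100_000.0))
```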
The growing demand for multimedia communication has resulted in tougher quality of service (QoS) requirements, and meeting them necessitates the deployment of powerful and efficient networks. Worldwide Interoperability for Microwave Access (WiMAX) is regarded as a promising technology in the field of wireless communication; indeed, a WiMAX network is considered well suited to supporting real-time as well as non-real-time applications under the varied conditions of a simulated environment. Wireless communication requires uplink and downlink scheduling for communication between the base station and its subscribers, and scheduling remains a challenging task for researchers. In this work, we propose an evolutionary computational scheme for downlink scheduling that brings substantial improvements in the QoS of a network system. The proposed approach simplifies the scheduling scheme for varied service classes such as UGS, rtPS, and nrtPS. We extend improved computational strategies to our approach in order to control data communication as well as route formation in signal information. We also use a computational approach, passage relocation admission control, to perform automatic selection of a base station with similar data operations. We further analyze the role of data communication and packet dropping in wireless network communication. Our experimental study shows improved performance of the proposed model in terms of slot/success ratio, throughput, and energy consumption: over the simulation time we record a 7% improvement in throughput, a 10.34% improvement in slot/success ratio, and a 28% reduction in energy consumption.
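The abstract gives no details of the evolutionary scheme, so this is only a toy genetic-algorithm sketch of downlink slot allocation: a chromosome maps each slot to a flow, and fitness rewards serving demand with class weights favoring UGS over rtPS over nrtPS. Every flow, weight, and GA parameter here is hypothetical.

```python
# Toy GA for downlink slot allocation across WiMAX service classes.
import random

FLOWS = [("ugs1", "UGS", 4), ("rtps1", "rtPS", 6), ("nrtps1", "nrtPS", 8)]
WEIGHT = {"UGS": 3.0, "rtPS": 2.0, "nrtPS": 1.0}   # class priority weights
SLOTS = 10

def fitness(chromosome):
    served = {}
    for flow_idx in chromosome:
        served[flow_idx] = served.get(flow_idx, 0) + 1
    score = 0.0
    for idx, count in served.items():
        _fid, cls, demand = FLOWS[idx]
        score += WEIGHT[cls] * min(count, demand)   # no credit beyond demand
    return score

def evolve(pop_size=30, generations=50, mutation=0.1):
    pop = [[random.randrange(len(FLOWS)) for _ in range(SLOTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]            # keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, SLOTS)        # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation:          # point mutation
                child[random.randrange(SLOTS)] = random.randrange(len(FLOWS))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

print(evolve())   # best slot -> flow assignment found
```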
Nowadays, many enterprises provide cloud services based on their own Hadoop clusters. Because the resources of a Hadoop cluster are limited, the cluster must select specific tasks to which to allocate its limited resources in order to obtain the maximal profit. In this paper, we study the maximal-profit problem for a given candidate task set. We describe the candidate task set with a valid sequence and propose a sequence-based scheduling strategy. To improve the efficiency of finding a valid sequence, we design several pruning strategies and give the corresponding scheduling algorithm. Finally, we propose a timeout-handling algorithm for tasks that run past their time limit. Experiments show that the total profit of the proposed algorithm is very close to the ideal maximum and clearly exceeds that of related scheduling algorithms under different experimental settings.
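The abstract does not define its valid sequences or pruning rules precisely, so this sketch only illustrates the pruning idea on a simplified model: choose tasks under a resource capacity to maximize profit, and cut off any branch whose optimistic remaining profit cannot beat the best found so far. The tasks and capacity are hypothetical.

```python
# Branch-and-bound task selection with an optimistic-profit pruning bound.

def max_profit(tasks, capacity):
    """tasks: list of (profit, resource) pairs; returns best total profit."""
    # Sorting by profit density makes the optimistic bound bite sooner.
    tasks = sorted(tasks, key=lambda t: t[0] / t[1], reverse=True)
    best = 0

    def search(i, used, profit):
        nonlocal best
        best = max(best, profit)
        if i == len(tasks):
            return
        # Pruning: even taking every remaining task cannot beat `best`.
        optimistic = profit + sum(p for p, _ in tasks[i:])
        if optimistic <= best:
            return
        p, r = tasks[i]
        if used + r <= capacity:
            search(i + 1, used + r, profit + p)   # take task i
        search(i + 1, used, profit)               # skip task i

    search(0, 0, 0)
    return best

print(max_profit([(60, 10), (100, 20), (120, 30)], capacity=50))   # -> 220
```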
Request distribution is a key technology for Web cluster servers. This paper presents a throughput-driven scheduling algorithm (TDSA). The algorithm uses the throughput of the cluster back-ends to evaluate their load and employs a neural network model to predict future load, so the scheduling system features a self-learning capability and good adaptability to load changes. Moreover, it separates static requests from dynamic requests to make full use of CPU resources, and it takes the locality of requests into account to improve the cache hit ratio. Experimental results from the WebBench(TM) testing tool show better performance for a Web cluster server with TDSA than with traditional scheduling algorithms.
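A minimal sketch of throughput-driven dispatch, with an exponential moving average standing in for the paper's neural-network load predictor (which the abstract does not detail): each back-end's measured throughput samples feed a smoothed estimate, and new requests go to the back-end predicted lightest. The static/dynamic request split is elided; back-end names and the smoothing factor are hypothetical.

```python
# Throughput-fed dispatcher; an EMA stands in for the learned load predictor.

class Dispatcher:
    def __init__(self, backends, alpha=0.3):
        self.alpha = alpha                       # EMA smoothing factor
        self.load = {b: 0.0 for b in backends}   # predicted load per back-end

    def report(self, backend, throughput):
        """Feed a measured throughput sample (requests/s) from a back-end."""
        prev = self.load[backend]
        self.load[backend] = self.alpha * throughput + (1 - self.alpha) * prev

    def pick(self):
        # Route the next request to the back-end with the lowest predicted load.
        return min(self.load, key=self.load.get)

d = Dispatcher(["web1", "web2"])
d.report("web1", 900.0)
d.report("web2", 400.0)
print(d.pick())   # -> "web2", the less-loaded back-end
```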