ISBN:
(print) 9781450354288
Programming, where problem solving and coding come together, is cognitively demanding. Whereas traditional instructional strategies tend to focus on language constructs, the problem-solving skills required for programming remain underexposed. In an explorative small-scale case study we explore a "thinking-first" framework combined with stepwise heuristics to provide students with structure throughout the entire programming process. Using unplugged activities and high-level flowcharts, students are guided to brainstorm about possible solutions and plan their algorithms before diving into (and getting lost in) coding details. Thereafter, a stepwise approach is followed towards implementation. Like a roadmap, flowcharts help novice programmers keep track of where they are and guide them to what they need to do next. High-level flowcharts play a key role in this approach to problem solving: they facilitate planning, understanding and decomposing the problem, communicating ideas at an early stage, stepwise implementation, and evaluating and reflecting on the solution (and approach) as a whole.
Mathematical optimization is a common tool for numerous real-world optimization problems. However, in some application domains there is scope for improvement of the currently used optimization techniques. For example, this is typically the case for applications that contain features which are difficult to model, and for applications of an interdisciplinary nature where no strong optimization knowledge is available. The goal of this thesis is to demonstrate how to overcome these challenges by considering five problems from two application domains. The first domain we address is scheduling in Cloud computing systems, in which we investigate three selected problems. First, we study scheduling problems where jobs are required to start immediately when they are submitted to the system. This requirement is ubiquitous in Cloud computing but has not yet been addressed in mathematical scheduling. Our main contributions are (a) the formal model, (b) the development of exact and efficient solution algorithms, and (c) proofs of correctness of the algorithms. Second, we investigate the problem of energy-aware scheduling in Cloud data centers. The objective is to assign computing tasks to machines such that the energy required to operate the data center, i.e., the energy required to operate the computing devices plus the energy required to cool them, is minimized. Our main contributions are (a) the mathematical model and (b) the development of efficient heuristics. Third, we address the problem of evaluating scheduling algorithms in a realistic environment. To this end we develop an approach that helps mathematicians evaluate scheduling algorithms through simulation with realistic instances. Our main contributions are the development of (a) a formal model and (b) efficient heuristics. The second application domain considered is powerline routing. We are given two points on a geographic area and the respective terrain characteristics. The objective is to fin
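The immediate-start requirement described above can be illustrated with a minimal sketch: a job is admitted only if some machine has enough free capacity at its submission time, otherwise it is rejected rather than queued. The greedy policy and data layout here are assumptions for illustration, not the thesis's exact algorithms.

```python
# Illustrative sketch (not the thesis's algorithms): greedy admission for
# jobs that must start the moment they are submitted. A job is accepted
# only if some machine has enough free capacity at its arrival time.

class Machine:
    def __init__(self, capacity):
        self.capacity = capacity
        self.running = []  # list of (end_time, demand)

    def free_at(self, t):
        # release jobs that have finished, then report remaining capacity
        self.running = [(end, d) for (end, d) in self.running if end > t]
        return self.capacity - sum(d for _, d in self.running)

def schedule(jobs, machines):
    """jobs: list of (arrival, duration, demand), sorted by arrival time."""
    accepted = []
    for arrival, duration, demand in jobs:
        for m in machines:
            if m.free_at(arrival) >= demand:
                m.running.append((arrival + duration, demand))
                accepted.append((arrival, duration, demand))
                break  # job starts immediately on this machine
    return accepted
```

A rejected job is simply dropped: under the immediate-start requirement there is no later slot to defer it to, which is precisely what distinguishes this setting from classical queue-based scheduling.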
Solutions to many of the world's problems depend upon materials research and development. However, advanced materials can take decades to discover and decades more to fully deploy. Humans and robots have begun to partner to advance science and technology orders of magnitude faster than humans do today through the development and exploitation of closed-loop, autonomous experimentation systems. This review discusses the specific challenges and opportunities related to materials discovery and development that will emerge from this new paradigm. Our perspective incorporates input from stakeholders in academia, industry, government laboratories, and funding agencies. We outline the current status, barriers, and needed investments, culminating with a vision for the path forward. We intend the article to spark interest in this emerging research area and to motivate potential practitioners by illustrating early successes. We also aspire to encourage a creative reimagining of the next generation of materials science infrastructure. To this end, we frame future investments in materials science and technology, hardware and software infrastructure, artificial intelligence and autonomy methods, and critical workforce development for autonomous research.
This paper presents a summary of recent developments in computational methods for multibody dynamics analyses. The developments are presented within the context of an automated numerical analysis. The intent of the paper is to provide a basis for the easy development of computational algorithms. The principal concepts discussed are: differentiation algorithms, partial velocities and partial angular velocities, generalized speeds, Euler parameters, Kane's equations, orthogonal complement arrays, lower body arrays and accuracy testing functions.
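Among the concepts listed, Euler parameters describe orientation as a unit quaternion and avoid the singularities of Euler-angle descriptions. A minimal sketch of the standard rotation matrix they generate (the function name and the 90-degree test rotation are illustrative, not taken from the paper):

```python
import math

def euler_params_to_rotation(e0, e1, e2, e3):
    """Rotation matrix from Euler parameters (a unit quaternion,
    e0 = scalar part). Singularity-free, unlike Euler angles."""
    return [
        [1 - 2*(e2*e2 + e3*e3), 2*(e1*e2 - e0*e3),     2*(e1*e3 + e0*e2)],
        [2*(e1*e2 + e0*e3),     1 - 2*(e1*e1 + e3*e3), 2*(e2*e3 - e0*e1)],
        [2*(e1*e3 - e0*e2),     2*(e2*e3 + e0*e1),     1 - 2*(e1*e1 + e2*e2)],
    ]

# 90-degree rotation about the z-axis: e0 = cos(45 deg), e3 = sin(45 deg)
half = math.radians(90) / 2
R = euler_params_to_rotation(math.cos(half), 0.0, 0.0, math.sin(half))
# rotating the x unit vector should give (approximately) the y unit vector
v = [sum(R[i][j] * [1.0, 0.0, 0.0][j] for j in range(3)) for i in range(3)]
```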
ISBN:
(print) 9783952426937
This paper presents an algorithmic development in the framework of computationally efficient robust Nonlinear Model Predictive Control (NMPC) that deals with a parametric plant-model mismatch, where the evolution of the uncertainty is described using a scenario tree, known as the multi-stage approach. To reduce the computational time and memory requirements of multi-stage NMPC, the optimal control inputs can be calculated scenario-wise in parallel. These parallelized calculations must enforce the satisfaction of the non-anticipativity constraints, which is negotiated iteratively among the scenarios using Lagrangian (price-driven) decomposition. The main challenge in using such a scheme is determining the values of the aggregated variables that coordinate the scenario-wise computations toward feasibility of the non-anticipativity constraints. The proposed approach uses parametric sensitivities of the optimal model states with respect to the control inputs to iteratively determine the values of the aggregated variables. The proposed method achieves good performance and faster convergence compared to traditional decomposition schemes. The potential of the approach is demonstrated in a simulation example of a hydrodesulphurisation unit.
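The price-driven mechanism the abstract describes can be shown on a toy problem: two scenarios each minimize their own cost, and a multiplier prices the violation of the non-anticipativity constraint u1 = u2 until the independently computed inputs agree. The quadratic costs and step size below are assumptions for this sketch, not the paper's hydrodesulphurisation model or its sensitivity-based update.

```python
# Toy Lagrangian (price-driven) decomposition for a non-anticipativity
# constraint u1 = u2 between two scenarios. Scenario i minimizes
# (u - a_i)^2; the multiplier lam "prices" the disagreement u1 - u2.

def dual_decomposition(a1, a2, step=0.5, iters=200):
    lam = 0.0
    for _ in range(iters):
        # scenario subproblems solved independently (parallelizable):
        u1 = a1 - lam / 2.0   # argmin of (u - a1)^2 + lam * u
        u2 = a2 + lam / 2.0   # argmin of (u - a2)^2 - lam * u
        # price update pushes the scenarios toward u1 = u2:
        lam += step * (u1 - u2)
    return u1, u2

u1, u2 = dual_decomposition(1.0, 3.0)
# both inputs converge to the consensus value (a1 + a2) / 2 = 2.0
```

The coordination difficulty the paper addresses is visible even here: convergence hinges on how the price update is driven, which is where the proposed parametric sensitivities replace the fixed step size of this sketch.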
Video compression is an extremely relevant theme in today's scenario, mainly due to the significant growth in digital video applications. Without compression it is almost impossible to send or store digital videos, due to the large amount of data they require, making applications such as high-definition digital television, video conferencing, mobile video calls, and others unviable. This demand is increasing, since there is strong growth in high-definition video applications, where the amount of information is considerably larger. Many video coding standards were developed in the last few years, and all of them can achieve excellent compression rates. A significant part of the compression gains in current video coding standards is obtained by exploiting temporal redundancies by means of the motion estimation process. However, current motion estimation algorithms do not consider the variations inherent in high- and ultra-high-definition videos. In this work an evaluation of motion estimation in high-definition videos is presented. This evaluation shows that some well-known fast algorithms, widely used by the scientific community, do not maintain the same quality results when applied to high-resolution videos. This demonstrates the relevance of new fast algorithms focused on high-definition videos. This thesis presents the development of new fast motion estimation algorithms focused on high-definition video encoding. The algorithms developed in this thesis have characteristics that make them more resilient to local minima when applied to high-definition videos. Moreover, this work also aims at the development of dedicated hardware architectures for these new algorithms, focused on high-definition videos. The architectural development is extremely relevant, mainly for real-time applications at 30 frames per second, and also for mobile applications, where performance and power are critical issues. All d
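The motion estimation process the abstract refers to can be sketched with the exhaustive full-search baseline that fast algorithms approximate: for each block of the current frame, test every candidate displacement within a search radius and keep the one with the lowest sum of absolute differences (SAD). The frame sizes and synthetic shift below are assumptions for illustration, not the thesis's algorithms or test sequences.

```python
# Minimal block-matching motion estimation sketch: full search with SAD.
# Fast algorithms prune this exhaustive candidate set, which is where the
# risk of local minima (discussed in the abstract) comes from.

def sad(block_a, block_b):
    return sum(abs(x - y) for ra, rb in zip(block_a, block_b)
               for x, y in zip(ra, rb))

def full_search(ref, cur, by, bx, size, radius):
    """Motion vector for the block at (by, bx) in `cur`, found by
    exhaustively testing every candidate within `radius` in `ref`."""
    block = [row[bx:bx + size] for row in cur[by:by + size]]
    best, best_mv = float("inf"), (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if 0 <= y and 0 <= x and y + size <= len(ref) and x + size <= len(ref[0]):
                cand = [row[x:x + size] for row in ref[y:y + size]]
                cost = sad(block, cand)
                if cost < best:
                    best, best_mv = cost, (dy, dx)
    return best_mv

# synthetic test: the current frame is the reference shifted down 2, right 1
ref = [[(i * 13 + j * 7) % 256 for j in range(16)] for i in range(16)]
cur = [[ref[i - 2][j - 1] if 0 <= i - 2 and 0 <= j - 1 else 0
        for j in range(16)] for i in range(16)]
mv = full_search(ref, cur, 4, 4, 4, 3)
```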
Processing text from the scientific literature has become a necessity due to the burgeoning amounts of information fast becoming available, stemming from advances in electronic information technology. We created a program, NeuroText (http://***/textmine/*** ), designed specifically to extract information relevant to the neuroscience-specific databases NeuronDB and CellPropDB (http://***/senselab/ ), housed at the Yale University School of Medicine. NeuroText extracts relevant information from the neuroscience literature in a two-step process, each step parsing text at a different level of granularity. NeuroText uses an expert-mediated knowledgebase and combines the techniques of indexing, contextual parsing, semantic and lexical parsing, and supervised and unsupervised learning to extract information. The constraints, metadata elements, and rules for information extraction are stored in the knowledgebase. NeuroText was created as a pilot project to process 3 years of publications in the Journal of Neuroscience and was subsequently tested on 40,000 PubMed abstracts. We also present a template for creating a domain-nonspecific knowledgebase that, when linked to a text-processing tool like NeuroText, can be used to extract knowledge in other fields of research.
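A schematic of the two-step idea (coarse filtering, then finer sentence-level extraction against a knowledgebase) can be sketched as follows. The tiny keyword "knowledgebase" and the helper names below are made-up stand-ins for illustration, not NeuroText's expert-curated rules or its actual pipeline.

```python
# Schematic two-pass extraction in the spirit of a two-step process:
# a coarse index pass, then a finer sentence-level co-occurrence pass.
import re

# hypothetical stand-in for an expert-mediated knowledgebase
KNOWLEDGEBASE = {
    "neuron": ["Purkinje cell", "pyramidal neuron"],
    "property": ["firing rate", "membrane potential"],
}

def pass_one(abstracts):
    """Coarse pass: keep only abstracts mentioning any known neuron."""
    names = KNOWLEDGEBASE["neuron"]
    return [a for a in abstracts
            if any(n.lower() in a.lower() for n in names)]

def pass_two(abstract):
    """Fine pass: (neuron, property) pairs co-occurring in one sentence."""
    pairs = []
    for sent in re.split(r"(?<=[.!?])\s+", abstract):
        low = sent.lower()
        for n in KNOWLEDGEBASE["neuron"]:
            for p in KNOWLEDGEBASE["property"]:
                if n.lower() in low and p.lower() in low:
                    pairs.append((n, p))
    return pairs

docs = [
    "We measured the firing rate of Purkinje cell populations. Results follow.",
    "This study concerns astrocyte metabolism.",
]
hits = [pass_two(a) for a in pass_one(docs)]
```

Swapping in a knowledgebase for another domain changes only the dictionary, which is the point of the domain-nonspecific template the abstract describes.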