This article presents a case study analyzing the relative efficiency of the curriculum of a university degree program. The question is relevant because the courses in the curriculum are publicly funded and therefore affect the efficiency of overall public spending. The method applied is Data Envelopment Analysis (DEA). The results identify inefficient elective courses that should increase their number of students while reducing the number of faculty members assigned to them in order to improve their efficiency. In the case of mandatory courses, improvement also requires reducing the number of adjunct faculty.
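The abstract does not give the DEA formulation used, but the core idea can be illustrated in the special case of one input and one output, where CCR DEA reduces to normalizing each unit's output/input ratio by the best ratio in the sample. The course names and numbers below are hypothetical, with faculty as the input and enrolled students as the output:

```python
# Single-input, single-output CCR DEA reduces to a ratio test: a course's
# efficiency is its students-per-faculty ratio divided by the best ratio
# observed. Course names and figures are hypothetical.
courses = {
    # course: (faculty_assigned, students_enrolled)
    "Optimization": (4, 120),
    "Game Theory":  (3, 45),
    "Econometrics": (5, 100),
}

ratios = {c: students / faculty for c, (faculty, students) in courses.items()}
best = max(ratios.values())
efficiency = {c: r / best for c, r in ratios.items()}

for course, eff in sorted(efficiency.items(), key=lambda kv: -kv[1]):
    print(f"{course:15s} efficiency = {eff:.2f}")
```

A multi-input, multi-output DEA model, as a real curriculum study would need, instead solves one linear program per course; this sketch only conveys what an efficiency score of 1.0 versus 0.5 means.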
We propose a set-theoretic model for parallelism. The model is based on separate distributions of data and work. The major theoretic result is that communication can then be derived by formal reasoning. While the model has an immediate interpretation in distributed memory parallelism, we show that it can also accommodate multicore shared memory programming, as well as clusters with accelerators. The model gives rise in a natural way to objects that resemble the VecScatter construct in the PETSc library, or active messages in such packages as Charm++. Thus we argue that the model offers the prospect of an abstract programming system that can be compiled down to proven high-performance constructs.
The Open Community Runtime (OCR) is a recent effort in the search for a runtime for extreme scale parallel systems. OCR relies on the concept of a dynamically generated task graph to express the parallelism of a program. Rather than being directly used for application development, the main purpose of OCR is to become a low-level runtime for higher-level programming models and tools. Since manycore architectures like the Intel Xeon Phi are likely to play a major role in future high performance systems, we have implemented the OCR API for shared-memory machines, including the Xeon Phi. We have also implemented two benchmark applications and performed experiments to investigate the viability of the OCR as a runtime for manycores. Our initial experiments and a comparison with OpenMP indicate that OCR can be an efficient runtime system for current and emerging manycore systems.
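The dynamically generated task-graph idea can be illustrated with a toy scheduler (this is not the actual OCR API, whose tasks are event-driven tasks connected by events and data blocks): a task becomes runnable once all of its declared dependencies have completed.

```python
# Toy illustration of a dynamically generated task graph (NOT the real
# OCR API): each task runs once all its dependencies have finished.
from collections import defaultdict

class TaskGraph:
    def __init__(self):
        self.deps = {}                    # task -> unfinished dependencies
        self.funcs = {}                   # task -> callable(results)
        self.dependents = defaultdict(set)
        self.results = {}

    def add_task(self, name, func, deps=()):
        self.deps[name] = set(deps)
        self.funcs[name] = func
        for d in deps:
            self.dependents[d].add(name)

    def run(self):
        ready = [t for t, d in self.deps.items() if not d]
        while ready:
            t = ready.pop()
            self.results[t] = self.funcs[t](self.results)
            # Completing t may unblock tasks that depended on it.
            for child in self.dependents[t]:
                self.deps[child].discard(t)
                if not self.deps[child]:
                    ready.append(child)
        return self.results

g = TaskGraph()
g.add_task("a", lambda r: 1)
g.add_task("b", lambda r: 2)
g.add_task("sum", lambda r: r["a"] + r["b"], deps=("a", "b"))
out = g.run()
```

A real runtime would execute the ready set in parallel across worker threads; the sequential loop here only shows the dependency-driven ordering.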
A five-year, 0-1, mixed integer programming model was developed to analyze the effects of 1990 Farm Bill legislation on the crop-mix decisions made on cotton farms. Results showed that, when compared to the 1985 Farm Bill, the 1990 Farm Bill can result in higher whole-farm income despite new "triple base" provisions limiting payment acres. The increase in income results from elimination of limited cross-compliance provisions and the change to a three-year base calculation. The model was also used to assess the likely impact of possible changes in the current legislation.
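The structure of such a 0-1 crop-mix decision can be sketched with hypothetical numbers: each crop plan is either adopted or not, subject to an acreage limit, and the farm maximizes whole-farm income. The crops, acreages, and returns below are invented for illustration, and exhaustive enumeration stands in for a MIP solver:

```python
from itertools import product

# Hypothetical per-plan acreage and net returns; the farm picks a 0-1
# crop mix subject to a total acreage limit. Numbers are invented.
crops = {           # crop: (acres_required, net_return)
    "cotton":  (400, 90_000),
    "sorghum": (300, 40_000),
    "wheat":   (200, 25_000),
}
ACRE_LIMIT = 700

best_mix, best_income = None, float("-inf")
names = list(crops)
# Enumerate all 0-1 selections; fine for a handful of crops, while a
# real five-year model would use a mixed integer programming solver.
for choice in product([0, 1], repeat=len(crops)):
    acres = sum(crops[n][0] for n, pick in zip(names, choice) if pick)
    income = sum(crops[n][1] for n, pick in zip(names, choice) if pick)
    if acres <= ACRE_LIMIT and income > best_income:
        best_mix = [n for n, pick in zip(names, choice) if pick]
        best_income = income

print(best_mix, best_income)
```

The policy effects the paper studies (payment acres, base calculations, cross-compliance) would enter as additional constraints and adjusted returns in the same framework.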
In this paper, we describe a programming model to enable reasoning about spatio-temporal data streams. A spatio-temporal data stream is one where each datum is related to a point in space and time. For example, sensors in a plane record airspeeds (v_a) during a given flight. Similarly, GPS units record an airplane's flight path over the ground, including ground speeds (v_g) at different locations. An aircraft's airspeed and ground speed are related by the wind-triangle relation v_g = v_a + v_w (taken as velocity vectors), where v_a and α_a are the aircraft airspeed and heading, and v_w and α_w are the wind speed and direction. Wind speeds and directions are typically forecast in 3,000-foot height intervals over discretely located fix points in 6-12 hour ranges. Modeling the relationship between these spatio-temporal data streams allows us to estimate with high probability the likelihood of sensor failures and consequent erroneous data. Tragic airplane accidents (such as Air France Flight 447 on June 1, 2009, which killed all 216 passengers and 12 aircrew aboard) could have been avoided by giving pilots better information, which can be derived by inferring stochastic knowledge about spatio-temporal data streams. This work is a first step in this direction.
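The wind-triangle relation between the two streams can be made concrete by resolving each velocity into north/east components and adding them. One sign convention is assumed here (α_w as the direction the wind blows toward; aviation reports usually give the direction it blows from):

```python
import math

def ground_vector(v_a, alpha_a, v_w, alpha_w):
    """Wind triangle: ground velocity = air velocity + wind velocity.

    Angles in degrees clockwise from north; alpha_w is assumed to be
    the direction the wind blows TOWARD.
    Returns (ground speed, ground track in degrees).
    """
    # Resolve airspeed and wind into east (x) and north (y) components.
    ax = v_a * math.sin(math.radians(alpha_a))
    ay = v_a * math.cos(math.radians(alpha_a))
    wx = v_w * math.sin(math.radians(alpha_w))
    wy = v_w * math.cos(math.radians(alpha_w))
    gx, gy = ax + wx, ay + wy
    v_g = math.hypot(gx, gy)
    track = math.degrees(math.atan2(gx, gy)) % 360
    return v_g, track

# Direct tailwind: flying due north at 450 kt with a 50 kt wind toward
# north gives a 500 kt ground speed on the same track.
vg, trk = ground_vector(450, 0, 50, 0)
```

A consistency check of this kind, comparing the airspeed stream against the GPS-derived ground-speed stream through forecast winds, is exactly the cross-validation the abstract argues could flag failing sensors.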
A policy maker knows two models. One implies an exploitable inflation-unemployment trade-off, the other does not. The policy maker's prior probability over the two models is part of his state vector. Bayes' law converts the prior probability into a posterior probability and gives the policy maker an incentive to experiment. For models calibrated to U.S. data through the early 1960s, we compare the outcomes from two Bellman equations. The first tells the policy maker to "experiment and learn." The second tells him to "learn but don't experiment." In this way, we isolate a component of government policy that is due to experimentation and estimate the benefits from intentional experimentation. We interpret the Bellman equation that learns but does not intentionally experiment as an "anticipated utility" model and study how well its outcomes approximate those from the "experiment and learn" Bellman equation. The approximation is good. For our calibrations, the benefits from purposeful experimentation are small because random shocks are big enough to provide ample unintentional experimentation.
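The learning step the abstract describes, Bayes' law converting the prior over the two models into a posterior after each observation, is a one-line update. The likelihood values below are hypothetical:

```python
# Bayes' law for a prior over two competing models: after an observation,
# the posterior on model 1 is its prior times its likelihood, normalized.
# Likelihood values here are hypothetical.
def bayes_update(prior_m1, lik_m1, lik_m2):
    """Posterior probability of model 1 after one observation."""
    joint_m1 = prior_m1 * lik_m1
    joint_m2 = (1 - prior_m1) * lik_m2
    return joint_m1 / (joint_m1 + joint_m2)

# Start agnostic (prior 0.5); the observation is twice as likely under
# model 1, so the posterior moves to 2/3.
p = bayes_update(0.5, 0.2, 0.1)
```

The experimentation question in the paper is whether the policy maker should choose actions partly to make future observations more informative for this update, rather than treating the current posterior as fixed.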
Coordinating the networks needed for modern scientific data collection and control presents a significant challenge: they are typically power- and resource-constrained, operating in noisy and hostile environments, and need to adapt their behaviour to match their operation to their sensed environment while maintaining the scientific integrity of their observations. In this paper, we explore several aspects of managing and coordinating sensor-driven systems. We draw some conclusions about how to design, structure, and implement programming and middleware abstractions for long-lived and adaptive sensor networks.
Finding a parallel architecture adapted to a given class of algorithms is a central problem for architects. This paper presents a methodology for doing so, and provides an illustration using image analysis. First, we identify a set of common basic operations, expressed as data movements, that can be used to solve most image analysis problems. These movements are then translated to fit the natural communication patterns of a given architecture. The considered data movements (global operations on connected pixel sets) can express a large class of algorithms. Their implementation on exemplary massively parallel architectures (arrays, hypercubes, pyramids) is discussed.
A mathematical programming model and linear programming solution procedure are developed to assign salesmen to accounts and simultaneously to determine appropriate time allocations to the accounts. The model is useful for sales situations in which geographic locations of accounts are close and the interaction of the salesman with the account is relatively important in the marketing mix. An implementation example is discussed.
We apply AI-assisted large language model (LLM) capabilities of GPT-3 targeting high-performance computing (HPC) kernels for (i) code generation, and (ii) auto-parallelization of serial code in C ++, Fortran, Python and Julia. Our scope includes the following fundamental numerical kernels: AXPY, GEMV, GEMM, SpMV, Jacobi Stencil, and CG, and language/programming models: (1) C++ (e.g., OpenMP [including offload], OpenACC, Kokkos, SyCL, CUDA, and HIP), (2) Fortran (e.g., OpenMP [including offload] and OpenACC), (3) Python (e.g., numpy, Numba, cuPy, and pyCUDA), and (4) Julia (e.g., Threads, ***, ***, and ***). Kernel implementations are generated using GitHub Copilot capabilities powered by the GPT-based OpenAI Codex available in Visual Studio Code given simple + <programming model> + prompt variants. To quantify and compare the generated results, we propose a proficiency metric around the initial 10 suggestions given for each prompt. For auto-parallelization, we use ChatGPT interactively giving simple prompts as in a dialogue with another human including simple "prompt engineering" follow ups. Results suggest that correct outputs for C++ correlate with the adoption and maturity of programming models. For example, OpenMP and CUDA score really high, whereas HIP is still lacking. We found that prompts from either a targeted language such as Fortran or the more general-purpose Python can benefit from adding language keywords, while Julia prompts perform acceptably well for its Threads and *** programming models. We expect to provide an initial quantifiable point of reference for code generation in each programming model using a state-of-the-art LLM. Overall, understanding the convergence of LLMs, AI, and HPC is crucial due to its rapidly evolving nature and how it is redefining human-computer interactions.
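Of the benchmark kernels listed, AXPY (y = a*x + y) is the simplest, and a plain-Python version shows the kind of output a simple kernel-plus-programming-model prompt would be graded on. This sketch is our illustration, not a Copilot-generated result from the study:

```python
# AXPY (y = a*x + y), the simplest of the benchmark kernels named above,
# written in plain Python. A proficiency study would compare generated
# variants of this kernel across the listed programming models.
def axpy(a, x, y):
    """Return a*x + y elementwise for equal-length sequences."""
    if len(x) != len(y):
        raise ValueError("x and y must have the same length")
    return [a * xi + yi for xi, yi in zip(x, y)]

result = axpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0])
```

The numpy, Numba, CuPy, and pyCUDA prompt variants in the paper target the same arithmetic; what the proficiency metric measures is how often the first ten suggestions get a kernel like this right in each model.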