ISBN (print): 9789869721486
In this paper, we describe the application of iFlow, a no-code programming platform, in visualizing foundational computational thinking (CT) as a precursor to learning disciplinary concepts. Research shows that it is possible to learn both science concepts and computational skills by building computer simulations and solving problems related to natural phenomena. CT can be operationally defined as the cognitive processes involved in formulating problems so that their solutions can be effectively carried out by an information-processing agent (Wing, 2010). We introduce two modules as examples that facilitate the visualization of computational processes frequently adopted in teaching the physical sciences. The first module was designed for students to analyze and evaluate data, and the second allowed students to generate simulated data and scaffolded questions to predict testable outcomes. In this workshop, we will demonstrate how to use iFlow to teach data analysis and problem evaluation, taking advantage of no-code programming via functional blocks that contain operational code to vividly visualize input, process, and output.
ISBN (print): 9781952148187
Most natural language processing research now recommends large Transformer-based models with fine-tuning for supervised classification tasks; older strategies like bag-of-words features and linear models have fallen out of favor. Here we investigate whether, in automated essay scoring (AES) research, deep neural models are an appropriate technological choice. We find that fine-tuning BERT produces performance similar to that of classical models at significant additional cost. We argue that while state-of-the-art strategies do match existing best results, they come with opportunity costs in computational resources. We conclude with a review of promising areas for research on student essays where the unique characteristics of Transformers may provide benefits over classical methods that justify the costs.
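The classical baseline this abstract weighs against BERT can be sketched in a few lines: bag-of-words features feeding a linear classifier. The mini-essays, labels, and perceptron below are invented for illustration, not drawn from the paper's data or models:

```python
from collections import Counter

def bow_features(text, vocab):
    """Bag-of-words: count each vocabulary word in the essay."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def train_perceptron(X, y, epochs=20):
    """Tiny linear classifier standing in for the classical AES baselines."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for x, label in zip(X, y):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if pred != label:
                delta = label - pred          # +1 or -1
                w = [wi + delta * xi for wi, xi in zip(w, x)]
                b += delta
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Invented toy "essays": 1 = on-topic response, 0 = off-topic.
essays = [
    "the experiment shows energy is conserved",
    "energy transfers between objects in the experiment",
    "my cat likes to sleep all day",
    "cats and dogs sleep a lot",
]
labels = [1, 1, 0, 0]
vocab = sorted({w for e in essays for w in e.lower().split()})
X = [bow_features(e, vocab) for e in essays]
w, b = train_perceptron(X, labels)
```

Such a model trains in milliseconds on a CPU, which is the cost asymmetry the abstract highlights against fine-tuning a large Transformer.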
Purpose: The COVID-19 crisis has shown that global supply chains are not as resilient as expected. First investigations indicate that the main contributing factor is a lack of visibility into the supply chain'...
In this work, we describe the system developed by a group of undergraduates from the Indian Institutes of Technology, for the Shared Task at Textgraphs-14 on Multi-Hop Inference Explanation Regeneration (Jansen and Us...
Electronic Health Records (EHRs) contain rich medical information about patients, possibly hundreds of notes, lab results, images and other information. Doctors can easily be overwhelmed by this wealth of information....
ISBN (print): 9781952148033
Recently, state-of-the-art NLP models have gained increasing syntactic and semantic understanding of language, and explanation methods are crucial for understanding their decisions. Occlusion is a well-established method that provides explanations on discrete language data, e.g. by removing a language unit from an input and measuring the impact on a model's decision. We argue that current occlusion-based methods often produce invalid or syntactically incorrect language data, neglecting the improved abilities of recent NLP models. Furthermore, gradient-based explanation methods disregard the discrete distribution of data in NLP. Thus, we propose OLM: a novel explanation method that combines occlusion and language models to sample valid and syntactically correct replacements with high likelihood, given the context of the original input. We lay out a theoretical foundation that alleviates these weaknesses of other explanation methods in NLP and provide results that underline the importance of considering data likelihood in occlusion-based explanation.
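The OLM idea, occluding a token and replacing it with context-appropriate samples rather than simply deleting it, can be sketched with toy stand-ins. The real method uses a trained classifier and a neural language model; the lookup tables below are invented purely for illustration:

```python
import random

# Invented stand-ins: a word-lookup "classifier" and a table of language-model
# replacement proposals. OLM itself samples replacements from a trained LM.
SENTIMENT = {"great": 1.0, "awful": -1.0, "fine": 0.2}
PROPOSALS = {"great": ["fine", "awful"], "awful": ["fine", "great"]}

def classify(tokens):
    """Toy model score: mean sentiment value of the tokens."""
    return sum(SENTIMENT.get(t, 0.0) for t in tokens) / len(tokens)

def olm_attribution(tokens, i, n_samples=10, seed=0):
    """OLM-style relevance of token i: original score minus the expected score
    when token i is resampled from in-context replacement candidates."""
    rng = random.Random(seed)
    original = classify(tokens)
    candidates = PROPOSALS.get(tokens[i], [tokens[i]])
    resampled = [classify(tokens[:i] + [rng.choice(candidates)] + tokens[i + 1:])
                 for _ in range(n_samples)]
    return original - sum(resampled) / len(resampled)

sentence = ["the", "movie", "was", "great"]
# "great" drives the prediction, so its attribution is large and positive;
# a neutral word with no alternatives scores zero.
```

Because every resampled input is a grammatical sentence, the attribution reflects the model's behavior on in-distribution data, which is the paper's core argument against plain token deletion.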
ISBN (print): 9781952148187
With the widespread adoption of the Next Generation Science Standards (NGSS), science teachers and online learning environments face the challenge of evaluating students' integration of different dimensions of science learning. Recent advances in representation learning in natural language processing have proven effective across many natural language processing tasks, but a rigorous evaluation of the relative merits of these methods for scoring complex constructed-response formative assessments has not previously been carried out. We present a detailed empirical investigation of feature-based, recurrent neural network, and pre-trained transformer models on scoring content in real-world formative assessment data. We demonstrate that recent neural methods can rival or exceed the performance of feature-based methods. We also provide evidence that different classes of neural models take advantage of different learning cues, and pre-trained transformer models may be more robust to spurious, dataset-specific learning cues, better reflecting scoring rubrics.
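The abstract does not name its evaluation metric, but quadratic weighted kappa is the conventional agreement measure when comparing automated scores of constructed responses against human raters, so a self-contained implementation may help ground the comparison (the score lists below are invented):

```python
from collections import Counter

def quadratic_weighted_kappa(rater_a, rater_b, min_rating, max_rating):
    """Agreement between two integer score lists, penalizing disagreements
    by the squared distance between the assigned scores."""
    n = max_rating - min_rating + 1
    observed = [[0.0] * n for _ in range(n)]
    for a, b in zip(rater_a, rater_b):
        observed[a - min_rating][b - min_rating] += 1
    hist_a = Counter(s - min_rating for s in rater_a)
    hist_b = Counter(s - min_rating for s in rater_b)
    total = len(rater_a)
    num = den = 0.0
    for i in range(n):
        for j in range(n):
            weight = (i - j) ** 2 / (n - 1) ** 2
            expected = hist_a[i] * hist_b[j] / total  # chance agreement
            num += weight * observed[i][j]
            den += weight * expected
    return 1.0 - num / den

# Perfect agreement yields 1.0; near-misses cost little, large gaps cost much.
human = [1, 2, 2, 3, 4]
model = [1, 2, 3, 3, 4]
score = quadratic_weighted_kappa(human, model, 1, 4)
```

The quadratic weighting is what makes the metric suitable for rubric scores: scoring a 3 as a 2 is penalized far less than scoring it as a 0.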
The article examines the problem of applying intellectual methods and models for the automatic extraction of professional skills and competencies from the descriptions of academic disciplines in Russian. The accuracy ...
This paper presents a PhD project that tests the effectiveness of NLP-based methods to extract and analyze large amounts of data from translation and interpreting corpora. More specifically, Named Entity Recognition ...
Attention-based pre-trained language models such as GPT-2 brought considerable progress to end-to-end dialogue modelling. However, they also present considerable risks for task-oriented dialogue, such as lack of knowl...