The transition from paper-based tests to corresponding computer-administered tests allows for the incorporation of improved interfaces that support response making. The main research question is whether innovative interfaces affect test response time and/or response accuracy. This study compared performance on banked cloze tests using a conventional interface (based on paper-based formats for responding by writing a number in a box) versus cloze tests with improved interfaces (based on computer-based affordances such as dragging and dropping responses to fill in a blank). In a banked cloze test, the left side of the page shows a text with words deleted and replaced with blanks that are numbered, and the right side shows the word list with a space to write in the corresponding number. In Experiment 1, 56 fourth graders in the conventional group responded more slowly but just as accurately, and spent more time looking at the word list on the right of the screen but spent equivalent time looking at the text as compared to a group that took the test with an improved interface. In Experiment 2, the same pattern of results was replicated with 148 sixth graders and for each of three versions of improved interfaces as compared to the conventional interface. Results support the idea that the improved interface affected the response execution phase but not the response development phase of performance on the cloze test.
One of the most common technology-enhanced items used in large-scale K-12 testing programs is the drag-and-drop response interaction. The main research questions in this study are: (a) Does adding a drag-and-drop interface to an online test affect the accuracy of student performance? (b) Does adding a drag-and-drop interface to an online test affect the speed of student performance? In three experiments involving fourth, sixth, and eighth graders, respectively, students answered reading comprehension questions presented in conventional (i.e., paper-based design) or drag-and-drop formats. The tests consisted of four sentence-ordering items in Experiment 1, four graphic organizer items in Experiment 2, and two cloze tests and two graphic organizer items in Experiment 3. The conventional and drag-and-drop groups were compared on test performance (i.e., accuracy) and efficiency (i.e., response time and number of mouse clicks). Across the three experiments, the conventional and drag-and-drop groups did not differ in mean performance, but the drag-and-drop group responded more efficiently than the conventional group (faster response time, d = 0.62, and fewer mouse clicks, d = 1.13).
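The abstract above reports group differences as Cohen's d effect sizes (d = 0.62 for response time, d = 1.13 for mouse clicks). The abstract does not show the computation, but the standard pooled-standard-deviation form of Cohen's d can be sketched as follows; the function name and the assumption of two independent groups are illustrative, not taken from the paper:

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups, using the pooled
    (Bessel-corrected) standard deviation as the denominator."""
    n_a, n_b = len(group_a), len(group_b)
    m_a = sum(group_a) / n_a
    m_b = sum(group_b) / n_b
    # Sample variances with n - 1 in the denominator.
    var_a = sum((x - m_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - m_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b)
                          / (n_a + n_b - 2))
    return (m_a - m_b) / pooled_sd
```

By conventional benchmarks, d = 0.62 is a medium-to-large effect and d = 1.13 a large one, which is why the efficiency advantage of the drag-and-drop group is notable even though accuracy did not differ.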
ISBN: (Print) 9781450377164
Deep learning is one of the fastest growing technologies in computer science with a plethora of applications. But this unprecedented growth has so far been limited to the consumption of deep learning experts. The primary challenge being a steep learning curve for learning the programming libraries and the lack of intuitive systems enabling non-experts to consume deep learning. Towards this goal, we study the effectiveness of a "no-code" paradigm for designing deep learning models. Particularly, a visual drag-and-drop interface is found more efficient when compared with the traditional programming and alternative visual programming paradigms. We conduct user studies of different expertise levels to measure the entry level barrier and the developer load across different programming paradigms. We obtain a System Usability Scale (SUS) of 90 and a NASA Task Load index (TLX) score of 21 for the proposed visual programming compared to 68 and 52, respectively, for the traditional programming methods.
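The abstract above compares interfaces using a System Usability Scale (SUS) score of 90 versus 68. The paper's own scoring procedure is not shown here, but the standard SUS scoring rule is well established: odd-numbered (positively worded) items contribute their rating minus 1, even-numbered (negatively worded) items contribute 5 minus their rating, and the sum is multiplied by 2.5 to yield a 0-100 score. A minimal sketch, with an illustrative function name:

```python
def sus_score(responses):
    """Standard SUS score from ten Likert ratings (1-5),
    in the usual alternating positive/negative item order."""
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs exactly ten ratings, each from 1 to 5")
    total = 0
    for i, r in enumerate(responses):
        # Index 0, 2, ... are odd-numbered items (positively worded).
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5  # scale the 0-40 sum to 0-100
```

On this scale, a score of 68 is commonly treated as roughly average usability, which puts the reported 90 for the visual drag-and-drop interface well above the traditional programming baseline.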