ISBN: (Print) 9780889867239
In 2006, Kanazawa University (KU) adopted a policy making it mandatory for all freshmen to have a laptop PC, and started a new class called Introduction to Information Technology as a compulsory first-year subject. We designed this class as a model for developing an e-Learning environment across all departments. Recognizing that educational systems and support environments are important for the success of e-Learning, we added portal functions with the cooperation of the company that developed the LMS for us. In addition, we set up a wireless LAN infrastructure and upgraded some lecture rooms for the Introduction to Information Processing courses. We started a support room to assist teachers in developing their teaching materials, and opened a PC support room for students in the university hall, near the hub of student activity. In this paper, we outline the educational system and report the largely positive student evaluations of the Introduction to Information Technology course and the educational system during its first two years. The number of classes that use the educational system tripled in 2007.
ISBN: (Print) 9783540850328
Collaborative learning, which includes activities in which learners interact, share knowledge with each other, and cooperate to finish tasks, has been a popular research topic over the past decades. The essence of collaborative learning is that active participation is significant in the learning process and that learners share valuable knowledge with other learners, as in the traditional classroom. Nowadays, computers and information technology (IT) have become a general component of many aspects of education. The combination of collaborative learning and information technology is commonly called "Computer Supported Collaborative Learning" (CSCL), and it is currently receiving much attention. We have therefore developed a friendly server/client tool with embedded voice and text chat communication to support collaborative learning via the Internet. Learners can study through a group's collaborative learning, finding and solving C programming language design problems by communicating and discussing. This helps users gain experience and knowledge of program design efficiently.
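The server-side relay at the heart of such a chat tool can be sketched as a simple broadcast hub. This is an illustrative model only, not the paper's actual tool: the names (`ChatRoom`, `post`) are our own, and sockets and the voice channel are omitted.

```python
class ChatRoom:
    """Minimal model of a chat server's relay logic: each posted
    message is broadcast to every other member's inbox."""

    def __init__(self):
        self.members = {}  # learner name -> list of received messages

    def join(self, name):
        self.members[name] = []

    def post(self, sender, text):
        message = f"{sender}: {text}"
        for name, inbox in self.members.items():
            if name != sender:  # sender already sees their own message
                inbox.append(message)
        return message


# Two learners discussing a C programming exercise:
room = ChatRoom()
room.join("alice")
room.join("bob")
room.post("alice", "why does my for loop never terminate?")
room.post("bob", "check the condition: i never reaches n because you reset it inside the loop")
print(room.members["alice"])  # alice's inbox holds only bob's reply
```

A real implementation would put this loop behind a socket listener and a thread per client, but the broadcast structure is the same.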
ISBN: (Print) 9780889867239
The development of instructional content for Web Based Learning is an expensive, time-consuming and complex process that leads to the development of new methodologies. The concept of Learning Objects (LOs) was proposed as an approach to promote content reuse. In this paper, we describe a strategy for modeling learning practices that are embedded in LOs, according to a previous proposal for learning content modeling. These atomic LOs of content and practice are called Component Objects (COs). The work presented in this paper is based on previous experiences and proposals on information processing, learning content structuring and the classification of learning activities and practices, which lead to a metamodel for representing learning content and practice. A language for defining sequences of COs is also presented, with a syntax adequate for specifying the possible ways of sequencing COs/LOs. A case study is briefly described to show the applicability of the proposals. Although it is not expected to represent a revolution in learning content and practice structuring, the approach presented in this paper provides alternatives for representing smaller LOs, and the experiment in the case study brought enthusiasm to students and professors.
ISBN: (Print) 9781920682682
These proceedings contain 27 papers. Data mining and analytics today have advanced rapidly from the early days of pattern finding in commercial databases. They are now a core part of business intelligence and inform decision-making in many areas of human endeavor, including science, business, health care and security. Mining of unstructured text, semi-structured web information and multimedia data has continued to receive attention, as have professional challenges to using data mining in industry. Accepted submissions have been grouped into seven sessions reflecting these application areas. Papers published in this conference are categorized under topics such as Algorithms, Approaches for Business and Organisations, Association Rules / Frequent Patterns, Biomedical Data Mining, Engineering Applications, and Text Mining. In addition, three keynote papers were published. The key terms of these proceedings include natural language learning, data mining, text mining, signal processing, speech processing, audiovisual speech recognition, cognitive linguistics, computational psycholinguistics, receiver operating characteristics, brain-computer interface, community structure, networks, modularity, evaluation, imbalanced datasets, ROC and cost-sensitive learning, lazy Bayesian rules, classification, decision trees, exploratory data mining, visualization, communications analysis, association rules, negative association rules, health data mining, fraud detection, open source data mining, classification of EEG data, brain-computer interfaces, and correlation-based feature selection.
Biomedical ontologies provide essential domain knowledge to drive data integration, information retrieval, data annotation, natural-language processing, and decision support. The National Center for Biomedical Ontology is developing BioPortal, a web-based system that serves as a repository for biomedical ontologies. BioPortal defines relationships among those ontologies and between the ontologies and online data resources such as PubMed, ***, and the Gene Expression Omnibus (GEO). BioPortal supports not only the technical requirements for access to biomedical ontologies, either via web browsers or via web services, but also community-based participation in the evaluation and evolution of ontology content. BioPortal enables ontology users to learn what biomedical ontologies exist, what a particular ontology might be good for, and how individual ontologies relate to one another. BioPortal is available online at http://***.
The Semantic Web relies on carefully structured, well-defined data to allow machines to communicate and understand one another. In many domains (e.g. geospatial) the data being described contains some uncertainty, often due to bias, observation error or incomplete knowledge. Meaningful processing of this data requires these uncertainties to be carefully analysed and integrated into the process chain. Currently, within the Semantic Web there is no standard mechanism for the interoperable description and exchange of uncertain information, which renders the automated processing of such information implausible, particularly where error must be considered and captured as it propagates through a processing sequence. In particular, we adopt a Bayesian perspective and focus on the case where the inputs/outputs are naturally treated as random variables. This paper discusses a solution to the problem in the form of the Uncertainty Markup Language (UncertML). UncertML is a conceptual model, realised as an XML schema, that allows uncertainty to be quantified in a variety of ways: i.e. realisations, statistics and probability distributions. The INTAMAP (INTeroperability and Automated MAPping) project provides a use case for UncertML. This paper demonstrates how observation errors can be quantified using UncertML and wrapped within an Observations & Measurements (O&M) Observation. An interpolation Web Processing Service (WPS) uses the uncertainty information within these observations to influence and improve its prediction outcome. The output uncertainties from this WPS may also be encoded in a variety of UncertML types, e.g. a series of marginal Gaussian distributions, a set of statistics, such as the first three marginal moments, or a set of realisations from a Monte Carlo treatment. Quantifying and propagating uncertainty in this way allows such interpolation results to be consumed by other services. This could form part of a risk management chain or a decision support system, and
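The "set of realisations from a Monte Carlo treatment" mentioned above can be sketched in a few lines. This is an illustrative model only, not the INTAMAP WPS or the UncertML API: we assume Gaussian observation errors and a simple inverse-distance-weighted interpolator, and the function names are our own.

```python
import random
import statistics

def idw_interpolate(points, x, power=2):
    """Inverse-distance-weighted prediction at location x
    from (location, value) pairs (1-D for simplicity)."""
    num = den = 0.0
    for loc, val in points:
        d = abs(loc - x)
        if d == 0:
            return val  # exact hit: return the observed value
        w = 1.0 / d ** power
        num += w * val
        den += w
    return num / den

def propagate_uncertainty(obs, x, n_realisations=2000, seed=0):
    """obs: list of (location, mean, std) Gaussian observations.
    Draw realisations of the noisy inputs, interpolate each one,
    and summarise the output distribution by its mean and std."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_realisations):
        sample = [(loc, rng.gauss(mu, sd)) for loc, mu, sd in obs]
        outputs.append(idw_interpolate(sample, x))
    return statistics.mean(outputs), statistics.stdev(outputs)

# Two uncertain observations; predict midway between them:
mean, std = propagate_uncertainty([(0.0, 10.0, 1.0), (2.0, 20.0, 1.0)], x=1.0)
print(mean, std)  # roughly 15 with a non-zero spread
```

The returned mean/std pair corresponds to the "set of statistics" encoding; keeping the raw `outputs` list instead would correspond to the "set of realisations" encoding.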
A person may have multiple name aliases on the web. Identifying aliases of a name is important for various tasks such as information retrieval, sentiment analysis and name disambiguation. We introduce the notion of a ...
With the advance of Semantic Web technology, increasing amounts of data will be annotated with computer-understandable structures (i.e. RDF and OWL), which allow us to use more expressive queries to improve our ability in information seeking. However, constructing a structured query is a laborious process, as a user has to master the query language as well as the underlying schema of the queried data. In this demo, we introduce SUITS4RDF, a novel interface for constructing structured queries for the Semantic Web. It allows users to start with arbitrary keyword queries and to enrich them incrementally with an arbitrary but valid structure, using computer-suggested queries or query components. This interface allows querying the Semantic Web conveniently and efficiently, while enabling users to express their intent precisely.
ISBN: (Print) 9781424420957
As an important linguistic resource, collocation represents a significant relation between words. Automatic collocation extraction is very important for many natural language processing applications, such as word sense disambiguation, machine translation and information retrieval. While traditional collocation extraction approaches use only a single statistical measure, they may not be optimal in that they cannot take advantage of multiple statistical measures. In this paper, we propose a logistic linear regression model (LLRM) that combines five classical lexical association measures: the χ²-test, the t-test, co-occurrence frequency, the log-likelihood ratio and mutual information. Experiments show that our approach leads to a significant performance improvement over the individual basic methods in both precision and recall.
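The combination scheme described above can be sketched as follows. This is a hedged illustration, not the paper's implementation: the association measures use simplified one-cell forms of the χ², t and log-likelihood statistics, and the weights passed to the logistic combination are placeholders, whereas the paper fits them by regression on labeled data.

```python
import math
from collections import Counter

def association_measures(tokens, w1, w2):
    """Compute five classical association scores for the bigram (w1, w2):
    co-occurrence frequency, PMI, t-score, χ² and a log-likelihood term
    (the latter three in simplified observed-vs-expected form)."""
    n = len(tokens) - 1                      # number of bigram slots
    bigrams = Counter(zip(tokens, tokens[1:]))
    unigrams = Counter(tokens)
    f12 = bigrams[(w1, w2)]
    f1, f2 = unigrams[w1], unigrams[w2]
    e12 = f1 * f2 / n if n else 0.0          # expected count under independence
    pmi = math.log2(f12 * n / (f1 * f2)) if f12 else 0.0
    t = (f12 - e12) / math.sqrt(f12) if f12 else 0.0
    chi2 = (f12 - e12) ** 2 / e12 if e12 else 0.0
    llr = 2 * f12 * math.log(f12 / e12) if f12 and e12 else 0.0
    return [f12, pmi, t, chi2, llr]

def collocation_score(features, weights, bias):
    """Logistic linear combination: sigmoid(w · x + b)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Toy corpus; weights/bias are placeholders, not fitted values:
tokens = "strong tea strong tea weak tea strong coffee".split()
feats = association_measures(tokens, "strong", "tea")
score = collocation_score(feats, weights=[0.5] * 5, bias=-1.0)
print(feats[0], round(score, 3))  # frequency 2, score in (0, 1)
```

A candidate pair is then accepted as a collocation when its score exceeds a threshold, which is what lets the combined model outperform any single measure used alone.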
ISBN: (Print) 9783540885634
To ease the exchange of news, the International Press Telecommunications Council (IPTC) has developed the NewsML Architecture (NAR), an XML-based model that is specialized into a number of languages such as NewsML G2 and EventsML G2. As part of this architecture, specific controlled vocabularies, such as the IPTC NewsCodes, are used to categorize news items together with other industry-standard thesauri. While news is still mainly in the form of text-based stories, these are often illustrated with graphics, images and videos. Media-specific metadata formats, such as EXIF, DIG35 and XMP, are used to describe the media. The use of different metadata formats in a single production process leads to interoperability problems within the news production chain itself. It also excludes linking to existing web knowledge resources and impedes the construction of uniform end-user interfaces for searching and browsing news content. In order to allow these different metadata standards to interoperate within a single information environment, we design an OWL ontology for the IPTC News Architecture, linked with other multimedia metadata standards. We convert the IPTC NewsCodes into a SKOS thesaurus and we demonstrate how the news metadata can then be enriched using natural language processing and multimedia analysis and integrated with existing knowledge already formalized on the Semantic Web. We discuss the method we used for developing the ontology and give the rationale for our design decisions. We provide guidelines for re-engineering schemas into ontologies and formalizing their implicit semantics. In order to demonstrate the appropriateness of our ontology infrastructure, we present an exploratory environment for searching and browsing news items,
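A thesaurus-to-SKOS conversion of the kind described above can be sketched by emitting one skos:Concept per code, with skos:broader links reproducing the code hierarchy. This is an illustrative sketch, not the paper's pipeline: the sample codes and scheme URI below are placeholders, and the triples are hand-serialized as N-Triples rather than produced by an RDF library.

```python
SKOS = "http://www.w3.org/2004/02/skos/core#"

def codes_to_skos(scheme_uri, codes):
    """codes: dict code -> (label, parent_code or None).
    Emit N-Triples declaring each code as a SKOS concept in the scheme,
    with a prefLabel and (where present) a broader link to its parent."""
    triples = []
    for code, (label, parent) in codes.items():
        uri = f"{scheme_uri}/{code}"
        triples.append(f'<{uri}> <{SKOS}prefLabel> "{label}"@en .')
        triples.append(f'<{uri}> <{SKOS}inScheme> <{scheme_uri}> .')
        if parent:
            triples.append(f'<{uri}> <{SKOS}broader> <{scheme_uri}/{parent}> .')
    return "\n".join(triples)

# Placeholder subject codes in the style of a news-category hierarchy:
nt = codes_to_skos("http://example.org/newscodes/subjectcode", {
    "04000000": ("economy, business and finance", None),
    "04001000": ("agriculture", "04000000"),
})
print(nt)
```

Once the vocabulary is in SKOS, generic thesaurus browsers and SPARQL queries over skos:broader/skos:narrower work unchanged, which is what enables the uniform search and browse interfaces the abstract mentions.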