In recent years, the waste management field has received substantial attention from policymakers, organizations, academics, and researchers due to an increased focus on sustainability and carbon-footprint reduction, as well as concerns about the rapid depletion of natural resources, public health, and environmental impact. Studies on food waste management have become especially important given the dramatic growth of the world's population and the global hunger and malnutrition crisis. Given that one-third of the world's food supply is wasted or lost annually while hundreds of millions of people live with food insecurity, it is easy to understand the importance of research into appropriate food waste management actions for sustainability. Since the subject of food waste involves complicated linkages, determining a suitable model is pivotal. Integrated assessment models (IAMs) have commonly been used to uncover hidden patterns and present insights to policymakers; furthermore, these models are well suited to integrating data, information, and multidisciplinary knowledge into a single framework. This project presents a Fuzzy Cognitive Map (FCM) extended with intuitionistic fuzzy sets, constructed using the documentary coding method. This intuitionistic FCM (iFCM) is used to analyze the primary factors, explore the interactions between food waste factors, and prioritize policies to reduce food waste by incorporating hesitancy weight factors that represent the lack of information. Several what-if scenario analyses are then generated to review the interrelationships between factors in the developed model and to support decision making by researchers. It is concluded that food waste reduction is achievable with the implementation of the right policies, and that this also improves related concepts such as the intention not to waste food, shopping routines, and planning routines.
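The inference loop of such an iFCM can be sketched as follows. The concept names, edge values, and the particular way hesitancy discounts an edge's influence are illustrative assumptions for a minimal sketch, not the paper's exact formulation: each edge carries a membership degree mu and non-membership degree nu, and the hesitancy pi = 1 - mu - nu weakens the effective weight.

```python
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

def ifcm_step(state, mu, nu, lam=1.0):
    """One inference step of an intuitionistic FCM.

    Each directed edge carries a membership (mu) and non-membership (nu)
    degree; the hesitancy pi = 1 - mu - nu models lack of information and
    discounts the edge's effective influence. The effective weight used
    here, w = (mu - nu) * (1 - pi), is one common choice (an assumption),
    not necessarily the formulation used in the paper.
    """
    pi = 1.0 - mu - nu
    w = (mu - nu) * (1.0 - pi)
    return sigmoid(state @ w + state, lam)

# Hypothetical 3-concept example:
#   0: planning routines, 1: shopping routines, 2: food waste
mu = np.array([[0.0, 0.6, 0.0],    # planning supports shopping routines
               [0.0, 0.0, 0.1],    # shopping routines vs. food waste ...
               [0.0, 0.0, 0.0]])
nu = np.array([[0.0, 0.2, 0.0],
               [0.0, 0.0, 0.7],    # ... mostly a reducing (negative) effect
               [0.0, 0.0, 0.0]])

# What-if scenario: strongly activate "planning routines" and iterate
state = np.array([1.0, 0.5, 0.8])
for _ in range(20):
    state = ifcm_step(state, mu, nu)
print(np.round(state, 3))
```

Iterating the step function drives the concept vector toward a fixed point, whose component values are then read off as the scenario's predicted activation levels.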
Objective As clinical text mining continues to mature, its potential as an enabling technology for innovations in patient care and clinical research is becoming a reality. A critical part of that process is rigorous benchmark testing of natural language processing methods on realistic clinical narrative. In this paper, the authors describe the design and performance of three state-of-the-art text-mining applications from the National Research Council of Canada on evaluations within the 2010 i2b2 challenge. Design The three systems perform three key steps in clinical information extraction: (1) extraction of medical problems, tests, and treatments from discharge summaries and progress notes; (2) classification of assertions made on the medical problems; (3) classification of relations between medical concepts. Machine learning systems performed these tasks using large-dimensional bags of features derived from both the text itself and from external sources: UMLS, cTAKES, and Medline. Measurements Performance was measured per subtask, using micro-averaged F-scores calculated by comparing system annotations with ground-truth annotations on a test set. Results The systems ranked high among all submitted systems in the competition, with the following F-scores: concept extraction 0.8523 (ranked first); assertion detection 0.9362 (ranked first); relationship detection 0.7313 (ranked second). Conclusion For all tasks, we found that the introduction of a wide range of features was crucial to success. Importantly, our choice of machine learning algorithms allowed us to be versatile in our feature design and to introduce a large number of features without overfitting and without encountering computing-resource bottlenecks.
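The micro-averaged F-score used in this evaluation pools true positives, false positives, and false negatives across all documents before computing precision and recall, so frequent annotation types dominate the score. A minimal illustrative helper (not the i2b2 evaluation script; the annotation representation is an assumption):

```python
def micro_f1(gold, predicted):
    """Micro-averaged F1: pool TP/FP/FN over all documents, then compute
    precision, recall, and their harmonic mean.

    gold, predicted: lists of sets of annotations, one set per document,
    where each annotation might be e.g. an (offset, concept-type) tuple.
    """
    tp = fp = fn = 0
    for g, p in zip(gold, predicted):
        tp += len(g & p)   # annotations found with the correct label
        fp += len(p - g)   # spurious or mislabeled system annotations
        fn += len(g - p)   # gold annotations the system missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Two toy "documents" with (offset, concept-type) annotations
gold = [{(0, "problem"), (5, "test")}, {(2, "treatment")}]
pred = [{(0, "problem"), (5, "treatment")}, {(2, "treatment")}]
print(round(micro_f1(gold, pred), 4))  # tp=2, fp=1, fn=1 -> P=R=F=0.6667
```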
In this paper, we explore H.264/AVC operating in intraframe mode to compress a mixed image, i.e., one composed of text, graphics, and pictures. Even though mixed-content (compound) documents usually require the use of multiple compressors, we apply a single compressor to both text and pictures. For that, distortion is weighed differently in text and picture regions. Our approach is to use a segmentation-driven adaptation strategy that changes the H.264/AVC quantization parameter on a macroblock-by-macroblock basis, i.e., we divert bits from pictorial regions to text in order to keep text edges sharp. We show results of this segmentation-driven quantizer adaptation method applied to compressing documents. Our reconstructed images have better text sharpness compared to straight unadapted coding, at negligible visual losses in pictorial regions. Our results also highlight the fact that H.264/AVC-INTRA outperforms coders such as JPEG-2000 as a single coder for compound images.
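The core of such a scheme is a per-macroblock QP map driven by the segmentation mask. A minimal sketch, in which the specific offsets and the two-class labeling are illustrative assumptions (only the QP range [0, 51] is fixed by H.264/AVC):

```python
def adapt_qp(macroblock_classes, base_qp, text_offset=-6, picture_offset=3):
    """Segmentation-driven QP map: finer quantization (lower QP) for text
    macroblocks, coarser for pictorial ones, shifting bits toward text edges.

    macroblock_classes: 2-D list of "text"/"picture" labels, one per 16x16
    macroblock. Offsets are illustrative; H.264/AVC constrains QP to [0, 51].
    """
    qp_map = []
    for row in macroblock_classes:
        qp_row = []
        for cls in row:
            offset = text_offset if cls == "text" else picture_offset
            qp_row.append(max(0, min(51, base_qp + offset)))  # clamp to legal range
        qp_map.append(qp_row)
    return qp_map

seg = [["text", "picture"],
       ["picture", "text"]]
print(adapt_qp(seg, base_qp=30))  # [[24, 33], [33, 24]]
```

The encoder then signals each macroblock's QP delta in the bitstream, so a standard decoder reconstructs the image without knowing the segmentation.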
ISBN:
(print) 9781424479948
This paper proposes a hybrid approximate pattern-matching/transform-based compression engine. The idea is to use regular video interframe prediction as a pattern matching algorithm that can be applied to document coding. We show that this interpretation may generate residual data that can be efficiently compressed by a transform-based encoder. The novelty of this approach is demonstrated by using H.264/AVC, the newest video compression standard, as a high-quality book compressor. The proposed method uses segments of the originally independent scanned pages of a book to create a video sequence, which is encoded through regular H.264/AVC. Results show that the proposed method outperforms AVC-I (H.264/AVC operating in pure intra mode) and JPEG2000 by up to 4 dB and 7 dB, respectively. Superior subjective quality is also achieved.
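The intuition behind interframe prediction as pattern matching can be sketched with a simple block-based search: a glyph on the current page segment that also appears on a previously coded segment (the "previous frame") matches almost exactly, leaving a near-zero residual for the transform coder. This is a simplified exhaustive-search sketch, not the H.264/AVC motion-estimation pipeline:

```python
import numpy as np

def best_match_residual(block, reference):
    """Approximate pattern matching via block-based motion search (SAD),
    in the spirit of video interframe prediction applied to document coding.

    block:     h x w array, a glyph-sized region of the current page segment
    reference: previously coded page segment acting as the "previous frame"
    Returns (residual, sad) for the lowest-SAD match; repeated characters
    across pages yield near-zero residuals that compress cheaply.
    """
    h, w = block.shape
    best_residual, best_sad = None, None
    for dy in range(reference.shape[0] - h + 1):
        for dx in range(reference.shape[1] - w + 1):
            cand = reference[dy:dy + h, dx:dx + w]
            sad = np.abs(block.astype(int) - cand.astype(int)).sum()
            if best_sad is None or sad < best_sad:
                best_sad = sad
                best_residual = block.astype(int) - cand.astype(int)
    return best_residual, best_sad

# A "glyph" that recurs on both pages matches exactly: residual is all zeros
page_prev = np.zeros((8, 8), dtype=np.uint8)
page_prev[2:5, 2:5] = 255                        # glyph on the reference page
block = np.full((3, 3), 255, dtype=np.uint8)     # same glyph, current page
residual, sad = best_match_residual(block, page_prev)
print(sad)  # 0
```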
ISBN:
(print) 9781595936080
We examine some research issues in pattern recognition and image processing that have been spurred by the needs of digital libraries. Because the conversion of documents to coded (searchable) form lags far behind conversion to image formats, broader -- and not only linguistic -- context must be introduced in character recognition on low-contrast, tightly set documents. At the same time, the prevalence of imaged documents over coded documents gives rise to interesting research problems in interactive annotation of document images. At the level of circulation, reformatting document images to accommodate diverse user needs remains a challenge.