Background. Until now, there has been no ideal embolization agent for hemorrhage in interventional treatment. In this study, thrombin was encapsulated in calcium alginate microspheres using an electrostatic droplet technique to produce a new embolization agent: thrombin-loaded alginate calcium microspheres (TACMs). Objectives. The present work aimed to evaluate the biocompatibility and hemostatic efficiency of TACMs. Methods. Cell cytotoxicity, hemolysis, and superselective embolization of dog liver arteries were performed to investigate the biocompatibility of TACMs. To clarify the embolic effect of TACMs-mixed thrombus in vivo, a hepatic artery injury model was established in 6 beagles and transcatheter artery embolization for bleeding was performed. Results. Coculture with vascular endothelial cells (VECs) revealed no cytotoxicity of TACMs, and hemolysis was negligible. Moreover, histological examination of TACMs in liver blood vessels showed only a slight inflammatory reaction. Transcatheter application of TACMs-mixed thrombus for bleeding shut down blood flow completely after delivery, and the postprocedural survival rate of the animals at 12 weeks was 100%. Conclusions. With their good biocompatibility and superior hemostatic efficiency, TACMs might be a promising new hemostatic agent with a wide range of potential applications.
Although the Transformer model has outperformed traditional sequence-to-sequence models in a variety of natural language processing (NLP) tasks, it still suffers from semantic irrelevance and repetition in abstractive text summarization. The main reason is that the long text to be summarized is usually composed of multiple sentences and contains much redundant information. To tackle this problem, we propose a selective and coverage multi-head attention framework based on the original Transformer. It contains a Convolutional Neural Network (CNN) selective gate, which combines n-gram features with the whole semantic representation to extract core information from the long input. In addition, we use a coverage mechanism in the multi-head attention to keep track of the words that have already been summarized. Evaluations on Chinese and English text summarization datasets demonstrate that the proposed selective and coverage multi-head attention model outperforms the baseline models by 4.6 and 0.3 ROUGE-2 points, respectively. Analysis further shows that the proposed model generates summaries of higher quality with less repetition.
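The coverage idea described above can be illustrated with a minimal sketch: a coverage vector accumulates the attention weights assigned to each source position across decoding steps, and positions that have already received much attention are penalized. This is a simplified single-head, additive-penalty version in NumPy; the exact way coverage is combined inside the paper's multi-head attention may differ, and the function name and penalty weight `w_cov` are assumptions for illustration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def coverage_attention(query, keys, coverage, w_cov=1.0):
    """One decoding step of coverage-augmented dot-product attention.

    query:    (d,)         current decoder state
    keys:     (src_len, d) encoder outputs
    coverage: (src_len,)   sum of attention weights from previous steps
    """
    # Standard scaled dot-product scores.
    scores = keys @ query / np.sqrt(keys.shape[-1])
    # Penalize source positions that were already attended to
    # (additive penalty form, an assumption of this sketch).
    scores = scores - w_cov * coverage
    attn = softmax(scores)
    # Accumulate coverage for the next step.
    new_coverage = coverage + attn
    return attn, new_coverage

# Usage: with zero coverage, attention over identical keys is uniform;
# after a position accumulates coverage, its weight drops on the next step.
q = np.ones(4)
K = np.eye(4)
attn0, cov = coverage_attention(q, K, np.zeros(4))
attn1, cov = coverage_attention(q, K, cov * 5.0)  # amplify coverage on step 2
```

Discouraging re-attention in this way is what reduces the repeated phrases mentioned in the abstract: once a source word has contributed to the summary, the coverage term makes it less likely to dominate later decoding steps.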