ISBN:
(Print) 9781424422609
Shape representations using polynomials are ubiquitous in computer-aided geometric design (CAGD) and computer graphics. This paper shows that any bivariate polynomial p(t, u) of total degree d ≤ n can be represented in the form of a blossom of another bivariate polynomial b(t, u) of total degree d, evaluated off the diagonal at the linear function pairs (X_j(t), Y_j(u)), j = 1, ..., n, chosen under conditions expressed in terms of symmetric functions. The bivariate polynomial b(t, u) is called a bud of p(t, u). An algorithm for finding a bud b(t, u) of a given bivariate polynomial p(t, u) is presented. A bud of b(t, u) can then be computed in turn, and so on, to form a sequence of representations. The information represented by the original bivariate polynomial is preserved in its buds. This scheme can be used for encoding/decoding geometric design information. The objects in the encoding/decoding sequence can be rendered graphically and manipulated geometrically like the usual parametric representations. Examples concerning triangular Bézier patches are provided as illustrations.
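For orientation, the fragment below records only the textbook definition of the blossom (polar form) of a bivariate polynomial and the representation the abstract describes; the conditions on the linear pairs (X_j, Y_j) in terms of symmetric functions are part of the paper and are not reproduced here.

```latex
% Blossom (polar form) of a bivariate polynomial of total degree <= n:
% the unique map that is symmetric in its n arguments, affine in each
% argument, and agrees with the polynomial on the diagonal.
\begin{align*}
  &B_p\bigl((t_1,u_1),\dots,(t_n,u_n)\bigr)
    \text{ is symmetric and affine in each argument } (t_j,u_j),\\
  &B_p\bigl((t,u),\dots,(t,u)\bigr) = p(t,u).\\
  \intertext{The representation stated in the abstract evaluates the blossom
  of the bud $b$ off the diagonal at linear function pairs:}
  &p(t,u) \;=\; B_b\bigl((X_1(t),Y_1(u)),\dots,(X_n(t),Y_n(u))\bigr).
\end{align*}
```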
Traditional hyperspectral image semantic segmentation algorithms cannot fully exploit spatial information or achieve efficient segmentation with few training samples. To address these problems, this study proposes DCCaps-UNet, a U-shaped hyperspectral semantic segmentation model based on a depthwise separable and conditional convolution capsule network. The whole network is an encoding-decoding structure: the encoding part fully extracts and fuses image features, and the decoding part reconstructs images by upsampling. In the encoder, a dilated convolutional capsule block is proposed to capture spatial information and deep features while reducing the computational cost of dynamic routing through a conditional sliding window, and a depthwise separable block is constructed to replace the common convolution layers of the traditional capsule network and efficiently reduce the number of network parameters. After principal component analysis (PCA) dimension reduction and patch preprocessing, the proposed model was tested on the Indian Pines and Pavia University public hyperspectral image datasets. The segmentation results for various ground objects were analyzed and compared with those of other semantic segmentation models. The proposed model outperformed the other methods and achieved higher segmentation accuracy with the same samples: Dice coefficients reached 0.9989 and 0.9999, and overall accuracy (OA) reached 99.92% and 100% on the two datasets, respectively, verifying the effectiveness of the proposed model.
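The depthwise separable block that replaces common convolution layers can be sketched as a per-channel 3x3 convolution followed by a 1x1 pointwise convolution. The PyTorch layout, channel sizes, and BatchNorm/ReLU placement below are assumptions for illustration, not the exact DCCaps-UNet block.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableBlock(nn.Module):
    """Depthwise separable convolution: a per-channel (depthwise) 3x3
    convolution followed by a 1x1 pointwise convolution. Channel sizes
    and the BatchNorm/ReLU layout are illustrative assumptions."""

    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   padding=1, groups=in_channels, bias=False)
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1,
                                   bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# A standard 3x3 convolution from 64 to 128 channels uses 64*128*9 weights;
# the separable version uses 64*9 + 64*128, roughly an 8x reduction.
x = torch.randn(1, 64, 32, 32)
print(DepthwiseSeparableBlock(64, 128)(x).shape)  # torch.Size([1, 128, 32, 32])
```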
Tampered images carrying false information can mislead viewers and pose security issues, and tampering traces in images are difficult to detect. To locate tampering traces effectively, this work proposes a dual-domain deep-learning-based image tampering localization method built on RGB and frequency stream branches. The RGB branch learns tampering features from the image together with content features of the tampered region, while the frequency branch extracts tampering features from the frequency domain to complement it. An attention mechanism then integrates the features from both branches at the fusion stage. In the experiments, the F1 score of the proposed method outperformed the baselines on the NIST16 dataset (a 15.3% improvement), and its AUC score outperformed the baselines on the NIST16 and COVERAGE datasets (improvements of 3.9% and 4.7%, respectively). This study provides a beneficial alternative for image tampering localization.
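A minimal sketch of the dual-branch idea is given below: an RGB branch and a frequency branch produce feature maps that are fused with channel attention into a per-pixel localization map. The use of an FFT magnitude as the frequency view, the layer sizes, and the simple squeeze-style attention are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DualDomainFusion(nn.Module):
    """Toy two-branch tampering localizer: RGB features plus features from
    a crude frequency view, fused by channel attention."""

    def __init__(self, channels=32):
        super().__init__()
        self.rgb_branch = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
        self.freq_branch = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
        # Channel attention over the concatenated features.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, 2 * channels, 1), nn.Sigmoid())
        self.head = nn.Conv2d(2 * channels, 1, 1)  # per-pixel tamper logit

    def forward(self, img):
        freq = torch.fft.fft2(img).abs().log1p()   # frequency-domain view
        feats = torch.cat([self.rgb_branch(img), self.freq_branch(freq)], dim=1)
        feats = feats * self.attn(feats)           # reweight channels
        return self.head(feats)                    # localization map logits

mask_logits = DualDomainFusion()(torch.randn(1, 3, 64, 64))
print(mask_logits.shape)  # torch.Size([1, 1, 64, 64])
```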
Background: Automatic segmentation of brain tumours using deep learning algorithms is currently one of the research hotspots in medical image segmentation. An improved U-Net network is proposed to improve the segmentation of brain tumours. Methods: Existing brain tumour segmentation models such as U-Net segment edge details insufficiently, reuse feature information poorly, extract location information weakly, and the commonly used binary cross-entropy and Dice losses are often ineffective as loss functions for brain tumour segmentation. To address these problems, we propose a serial encoding-decoding structure that achieves improved segmentation performance by adding hybrid dilated convolution (HDC) modules and concatenations between the modules of two serial networks. In addition, we propose a new loss function that focuses the model on samples that are difficult to segment and classify. We compared the results of the proposed model and commonly used segmentation models under the IoU, PA, Dice, precision, Hausdorff95, and ASD metrics. Results: The proposed method outperforms the other segmentation models in every metric. The schematic diagram of the segmentation results also shows that our results are closer to the ground truth and preserve more brain tumour detail, while the results of other algorithms are smoother. Conclusions: Our algorithm has better semantic segmentation performance than other commonly used segmentation algorithms, and the proposed technique can be used in brain tumour diagnosis to provide better protection for patients' later treatment.
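The HDC module mentioned in the Methods can be sketched as stacked 3x3 dilated convolutions whose rates follow the classic co-prime 1-2-5 schedule, which enlarges the receptive field without gridding artifacts. The channel counts, rate schedule, and residual connection below are assumptions, not the paper's exact module.

```python
import torch
import torch.nn as nn

class HDCBlock(nn.Module):
    """Hybrid dilated convolution block: stacked 3x3 convolutions with
    dilation rates 1, 2, 5 and a residual connection."""

    def __init__(self, channels, rates=(1, 2, 5)):
        super().__init__()
        layers = []
        for r in rates:
            layers += [nn.Conv2d(channels, channels, 3, padding=r, dilation=r),
                       nn.BatchNorm2d(channels),
                       nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x + self.body(x)  # residual path keeps fine edge details

x = torch.randn(1, 32, 48, 48)
print(HDCBlock(32)(x).shape)  # torch.Size([1, 32, 48, 48])
```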
Nasopharyngeal carcinoma is a malignant tumor that arises in the epithelium and mucosal glands of the nasopharynx, and its pathological type is mostly poorly differentiated squamous cell carcinoma. Because the nasopharynx lies deep in the head and neck, early diagnosis and timely treatment are critical to patient survival. However, nasopharyngeal carcinoma tumors are small and vary widely in shape, so delineating tumor contours is a challenge even for experienced doctors. In addition, due to the special location of nasopharyngeal carcinoma, complex treatments such as radiotherapy or surgical resection are often required, so accurate pathological diagnosis is also very important for selecting a treatment. Current deep learning segmentation models, however, suffer from inaccurate segmentation and an unstable segmentation process, mainly limited by dataset accuracy, fuzzy boundaries, and complex contours. To address these two challenges, this article proposes WET-UNet, a hybrid model based on the UNet network, as a powerful alternative for nasopharyngeal cancer image segmentation. On the one hand, the wavelet transform is integrated into UNet to enhance lesion boundary information: the low-frequency components adjust the encoder and optimize the subsequent computation of the Transformer, improving the accuracy and robustness of image segmentation. On the other hand, the attention mechanism retains the most valuable pixels in the image, captures long-range dependencies, and enables the network to learn more representative features, improving the recognition ability of the model. Comparative experiments show that our network structure outperforms other models for nasopharyngeal cancer image segmentation, and we demonstrate the effectiveness of adding the two modules to help tumor segmentation. The total data set of this article is 5000, and the ratio of training and verific...
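The wavelet step can be sketched as a single-level 2D discrete wavelet transform whose low-frequency approximation is kept as the encoder input. Using pywt with a Haar wavelet and discarding the detail sub-bands are assumptions for this sketch, not the WET-UNet recipe.

```python
import numpy as np
import pywt
import torch

def low_frequency_input(image: np.ndarray, wavelet: str = "haar") -> torch.Tensor:
    """Single-level 2D DWT; keep only the low-frequency approximation (cA)
    to emphasize coarse boundary context before feeding an encoder."""
    cA, (cH, cV, cD) = pywt.dwt2(image, wavelet)
    return torch.from_numpy(cA).float().unsqueeze(0).unsqueeze(0)  # NCHW

x = low_frequency_input(np.random.rand(128, 128))
print(x.shape)  # torch.Size([1, 1, 64, 64])
```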
Teachers need to provide numerous example sentences for students to translate when teaching English, but the number of sentences teachers can compose by hand is not enough. Therefore, this study constructs a text generation model using an improved convolutional neural network semantic segmentation method: keywords are extracted from corpus utterances, and new, shorter utterances are generated from the keywords for language learners to practice translation. The research first uses the TextRank algorithm to extract semantic keywords and build a dataset, and then uses a CNN to construct an encoder that semantically encodes the keyword dataset. Because traditional CNN models were found to be sensitive to the position of the input data, the encoder is improved with the idea of the factorization machine (FM). To control text generation, a weighted additive attention mechanism is introduced in the decoding process to associate the meaning of the generated text sequence with the meaning of the keyword set. On this basis, a text generation model for generating a translation-related corpus for English teaching is constructed. Model 1 achieves an average BLEU value of 0.924, an Inform value of 98.40, a Success value of 97.64, and a Combine value of 101.24, so it can generate high-quality text from keyword meanings and provide a technical guarantee for building corpora for English teaching.
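The TextRank keyword-extraction step that the pipeline starts from can be sketched as PageRank over a word co-occurrence graph. The window size, whitespace tokenization, and use of networkx below are assumptions for illustration, not the paper's implementation.

```python
import networkx as nx

def textrank_keywords(tokens, window=4, top_k=5):
    # Build a co-occurrence graph: words appearing within `window`
    # positions of each other are connected by an edge.
    graph = nx.Graph()
    graph.add_nodes_from(set(tokens))
    for i, word in enumerate(tokens):
        for j in range(i + 1, min(i + window, len(tokens))):
            if word != tokens[j]:
                graph.add_edge(word, tokens[j])
    # PageRank scores each word's centrality; the top-scoring words
    # serve as the keyword set handed to the encoder.
    scores = nx.pagerank(graph)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(textrank_keywords(
    "the model extracts keywords and generates new shorter sentences "
    "for learners to practice translation".split()))
```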