Current generative knowledge graph construction approaches usually fail to capture structural knowledge by simply flattening natural language into serialized texts or a specification language. However, large generative language models trained on structured data such as code have demonstrated impressive capability in understanding natural language for structural prediction and reasoning tasks. Intuitively, we address the task of generative knowledge graph construction with code language models: given a code-format natural language input, the target is to generate triples, which can be represented as a code completion task. Specifically, we develop schema-aware prompts that effectively utilize the semantic structure within the knowledge graph. As code inherently possesses structure, such as class and function definitions, it serves as a useful model for prior semantic structural knowledge. Furthermore, we employ a rationale-enhanced generation method to boost performance: rationales provide intermediate reasoning steps, thereby improving knowledge extraction ability. Experimental results indicate that the proposed approach obtains better performance on benchmark datasets than the baselines.
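To make the code-format prompt idea concrete, here is a minimal sketch assuming a Python-style prompt; the class names, the Triple structure, and the example sentence are illustrative assumptions, not the paper's actual schema or prompt.

```python
# Sketch of a schema-aware, code-format prompt: the knowledge graph schema
# is rendered as Python class definitions, the input sentence as a comment,
# and extraction becomes completing the list of triples.

class Entity:
    def __init__(self, name: str):
        self.name = name

class Person(Entity): ...
class Organization(Entity): ...

class Triple:
    def __init__(self, head: Entity, relation: str, tail: Entity):
        self.head, self.relation, self.tail = head, relation, tail

# Input text: "Steve Jobs co-founded Apple in 1976."
# The language model receives everything above and is asked to complete
# the assignment below, so triple extraction becomes code completion.
triples = [
    Triple(Person("Steve Jobs"), "founder_of", Organization("Apple")),
]
```

Because class and function definitions already encode typing and structure, the model's prior over well-formed code doubles as a prior over well-formed knowledge graph triples.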
This paper aims to summarize the findings of a comprehensive study on the correlation of displacement and deformation demands estimated using various code-compliant nonlinear modeling approaches commonly used in real-life performance-based seismic design and assessment applications, with experimental results obtained from the full-scale shake table tests on the 10-story reinforced concrete building performed at the E-Defense testing facility in 2015. Nonlinear response history analysis results obtained using ASCE-41 and Turkish Building Earthquake Code model configurations with the lumped plasticity modeling approach adopted for beams and columns, as well as an additional configuration adopting the distributed plasticity approach, were compared with test measurements. Analysis sets were also conducted to evaluate the sensitivity of the analytical predictions to specific modeling assumptions that incorporate varying degrees of uncertainty, including consideration of beam effective flange widths, modeling of the floor slabs, adoption of a tensile strength for concrete, selection of alternative effective shear moduli for structural walls, consideration of the vertical component of the ground motion, and selection of the damping ratio. Analysis results were compared with experimental measurements in terms of various global displacement and local deformation demands on the structure, including story displacement time-histories, peak interstory drift ratios, flexural and shear deformations of structural walls, and flexural deformations of beams and columns. The relative effectiveness of different methodologies and assumptions in replicating the experimentally measured deformation demands on the test building is discussed in detail. Limitations of the adopted modeling approaches in simulating salient experimentally observed response characteristics of the building are also identified.
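Among the reported demand measures, the peak interstory drift ratio follows directly from the story displacement time-histories. A minimal sketch, assuming displacements stacked as a NumPy array and uniform story heights (synthetic data, not the E-Defense measurements):

```python
import numpy as np

# Peak interstory drift ratios (IDR) from story displacement time-histories.
# disp has shape (n_steps, n_stories); story_heights holds each story's
# clear height. Both inputs here are hypothetical stand-ins.
def peak_interstory_drift_ratios(disp: np.ndarray,
                                 story_heights: np.ndarray) -> np.ndarray:
    # Relative displacement between consecutive floors at every time step,
    # with the ground (zero displacement) prepended as the reference level.
    ground = np.zeros((disp.shape[0], 1))
    rel = np.diff(np.hstack([ground, disp]), axis=1)
    # IDR time-history per story, then its peak absolute value.
    idr = rel / story_heights
    return np.abs(idr).max(axis=0)

# Example with synthetic data for a 10-story building with 3 m stories.
disp = np.cumsum(np.random.randn(1000, 10) * 1e-3, axis=1)
print(peak_interstory_drift_ratios(disp, np.full(10, 3.0)))
```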
Context. In Software Engineering, Model-Driven Engineering (MDE) is a methodology that considers metamodels as a cornerstone. As an abstract artifact, a metamodel plays a significant role in the specification of a software language, particularly in generating other artifacts of a lower abstraction level, such as code. Developers then enrich the generated code to build their language services and tooling, e.g., editors and checkers. Problem. When a metamodel evolves, the generated code is automatically updated. As a consequence, the developers' additional code is impacted and needs to be co-evolved accordingly. Contribution. This paper proposes a new, fully automatic code co-evolution approach driven by the evolution of the Ecore metamodel. The approach relies on pattern matching of the additional code's errors: it analyzes the abstraction gap between the evolved metamodel elements and the code errors in order to co-evolve the code. Evaluation and Results. We evaluated our approach on nine Eclipse projects from OCL, Modisco, and Papyrus over several evolved versions of three metamodels. Results show that we automatically co-evolved 771 errors due to metamodel evolution, with 631 matched and applied resolutions. Our approach reached an average precision of 82% and an average recall of 81%, with both varying from 48% to 100%. To check the effect of the co-evolution and its behavioral correctness, we rely on test cases generated before and after co-evolution. We observed that the percentage of passing, failing, and erroneous tests remained the same, with insignificant variations in some projects, suggesting the behavioral correctness of the co-evolution. Moreover, we conducted a comparison with quick fixes, the usual tool for correcting code errors in an IDE. We found that our automatic co-evolution approach outperforms quick fixes, which lack the context of the metamodel evolution. Finally, we also compared our approach
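The core mechanism, matching additional-code errors against patterns that reflect metamodel changes and mapping them to resolutions, can be sketched as follows. The error patterns and resolutions are hypothetical stand-ins, not the approach's actual catalog, and the sketch is in Python although the evaluated projects are Java/Eclipse artifacts.

```python
import re
from dataclasses import dataclass

# Hypothetical catalog: each entry pairs a compiler-error pattern caused by
# a metamodel change with the resolution that would repair the code.
@dataclass
class Resolution:
    description: str

ERROR_PATTERNS = [
    # A getter disappeared because a metamodel attribute was renamed.
    (re.compile(r"The method (get\w+)\(\) is undefined"),
     Resolution("rename call to the getter of the renamed attribute")),
    # A type vanished because a metaclass was deleted or merged.
    (re.compile(r"(\w+) cannot be resolved to a type"),
     Resolution("retarget reference to the replacing metaclass")),
]

def match_resolution(error_message: str) -> Resolution | None:
    """Return the first resolution whose pattern matches the error."""
    for pattern, resolution in ERROR_PATTERNS:
        if pattern.search(error_message):
            return resolution
    return None

print(match_resolution("The method getName() is undefined for the type Foo"))
```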
This study evaluates the performance of two Precise Point Positioning (PPP) software solutions, GAPS and GAMP, for estimating Vertical Total Electron Content (VTEC). Their outputs were compared against the International Reference Ionosphere (IRI) models, the Center for Orbit Determination in Europe (CODE) solutions, SWARM satellite data, and COSMIC-1 observations. The results demonstrate that GAPS achieves superior accuracy and reliability after initialization, with VTEC estimates closely aligning with CODE, IRI, COSMIC-1, and SWARM data. In contrast, GAMP, while exhibiting faster convergence, showed greater variability and a tendency to underestimate VTEC, especially under dynamic ionospheric conditions. Statistical analyses revealed that GAPS produced lower Root Mean Square (RMS) errors across stations, with values below 20 Total Electron Content Units (TECU) when compared to CODE and IRI, and between 5 and 25 TECU when validated against COSMIC-1 and SWARM data. Conversely, GAMP's RMS values reached up to 65 TECU, indicating lower precision. GAPS also showed smaller average and absolute differences, confirming its ability to capture localized ionospheric variations more effectively than GAMP and the IRI models. A t-test analysis indicated no statistically significant differences between GAPS and CODE, IRI, SWARM, or COSMIC-1 for most stations, demonstrating the robustness of GAPS in representing ionospheric behavior. GAMP, however, often exhibited significant differences in VTEC estimates relative to these references. These findings demonstrate GAPS's superior performance in ionospheric studies and VTEC estimation, underscoring the importance of choosing suitable PPP solutions and preprocessing for high-precision GNSS and atmospheric research.
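The two statistics used throughout the comparison, the RMS of VTEC differences against a reference and a t-test on those differences, can be computed as in the sketch below; the values are synthetic stand-ins, not GAPS or GAMP output.

```python
import numpy as np
from scipy import stats

# Synthetic VTEC series (TECU): an estimate (e.g., from a PPP solution)
# and a reference (e.g., a CODE global ionosphere map).
vtec_est = np.array([28.1, 30.4, 25.7, 33.2, 29.8])
vtec_ref = np.array([27.5, 31.0, 26.3, 32.4, 30.1])

# RMS of the differences: the headline accuracy metric in the study.
diff = vtec_est - vtec_ref
rms = np.sqrt(np.mean(diff ** 2))

# Paired t-test: is the mean difference significantly non-zero?
t_stat, p_value = stats.ttest_rel(vtec_est, vtec_ref)
print(f"RMS = {rms:.2f} TECU, t = {t_stat:.2f}, p = {p_value:.3f}")
```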
Contemporary general circulation models (GCMs) and Earth system models (ESMs) are developed by a large number of modeling groups globally. They use a wide range of representations of physical processes, allowing for structural (code) uncertainty to be partially quantified with multi-model ensembles (MMEs). Many models in the MMEs of the Coupled Model Intercomparison Project (CMIP) have a common development history due to sharing of code and schemes. This makes their projections statistically dependent and introduces biases in MME statistics. Previous research has focused on model output and code dependence, and model code genealogy of CMIP models has not been fully analyzed. We present a full reconstruction of CMIP3, CMIP5, and CMIP6 code genealogy of 167 atmospheric models, GCMs, and ESMs (of which 114 participated in CMIP) based on the available literature, with a focus on the atmospheric component and atmospheric physics. We identify 12 main model families. We propose family and ancestry weighting methods designed to reduce the effect of model structural dependence in MMEs. We analyze weighted effective climate sensitivity (ECS), climate feedbacks, forcing, and global mean near-surface air temperature, and how they differ by model family. Models in the same family often have similar climate properties. We show that weighting can partially reconcile differences in ECS and cloud feedbacks between CMIP5 and CMIP6. The results can help in understanding structural dependence between CMIP models, and the proposed ancestry and family weighting methods can be used in MME assessments to ameliorate model structural sampling biases.
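A minimal sketch of family weighting in the spirit described above: each model is down-weighted by the size of its family so that every family contributes equally to the ensemble statistic. The families and ECS values are invented for illustration, and the paper's actual weighting may differ in detail.

```python
import numpy as np

# Hypothetical ensemble: model -> (family, ECS in K).
models = {
    "ModelA1": ("FamilyA", 4.8), "ModelA2": ("FamilyA", 4.6),
    "ModelB1": ("FamilyB", 3.1),
    "ModelC1": ("FamilyC", 2.7), "ModelC2": ("FamilyC", 2.9),
}

# Count models per family.
family_sizes: dict[str, int] = {}
for family, _ in models.values():
    family_sizes[family] = family_sizes.get(family, 0) + 1

# Weight each model by 1/(family size), then normalize.
weights = np.array([1.0 / family_sizes[fam] for fam, _ in models.values()])
weights /= weights.sum()
ecs = np.array([e for _, e in models.values()])

print(f"unweighted mean ECS:      {ecs.mean():.2f} K")
print(f"family-weighted mean ECS: {np.average(ecs, weights=weights):.2f} K")
```

With this scheme, a family contributing five near-duplicate models no longer counts five times, which is how weighting can pull apparently divergent CMIP5 and CMIP6 statistics back toward agreement.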
The International GNSS Service aims to provide combined satellite clock and code/phase bias products in the recent third reprocessing campaign (Repro3) to enable more robust ambiguity resolution in precise point positioning (PPP-AR). However, the interoperability of the code/phase bias products from different analysis centers (ACs) was seriously corrupted by their inconsistent corrections of GPS Block III and Galileo satellite antenna phase center (APC) effects. To achieve a sufficient and rapid reconciliation among the Repro3 bias products, we efficiently correct only for the antenna phase center offsets in the radial direction (i.e., the z-PCOs), rather than asking each AC to reprocess its 25 years of products with strictly repeated APC corrections. A clock/bias combination for 2020 demonstrates that our approximate APC correction significantly reduces the inconsistency among the ACs' satellite clock/bias products: the mean residuals from the combination of differential code biases and wide-lane phase biases are within 0.1 ns and 0.03 cycles, respectively. A 30-day PPP-AR test using GPS and Galileo data from 90 stations shows that our combined clock/bias products still guarantee a wide-lane ambiguity fixing rate that differs by less than 0.5% from that achieved with combination products whose APC effects are strictly corrected.
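One way to see why a z-PCO-only correction can reconcile most of the inconsistency (a back-of-envelope argument under our own sign convention, not taken from the abstract): for GPS and Galileo the nadir angle at the satellite stays below roughly 14 degrees, so a radial offset change projects onto the modeled range almost uniformly over the whole tracking geometry.

```latex
% Sketch: range effect of a z-PCO change \Delta z seen at nadir angle \theta_n.
\Delta\rho \approx \Delta z \,\cos\theta_n, \qquad
\theta_n \lesssim 14^\circ \;\Rightarrow\; \cos\theta_n \approx 1 .
```

A per-satellite range bias that is nearly constant across the tracking geometry is absorbed almost entirely by the estimated satellite clock, so aligning the z-PCOs alone removes the bulk of the inter-AC clock/bias disagreement without any reprocessing.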
Conventional luminol chemiluminescence (CL) is widely used in imaging and analytical assays. To the best of our knowledge, there have been no studies on the use of CL for information encryption and protection. Here, a novel CL hydrogel is developed. Based on CL energy resonance transfer, rapid color visualization of sarcosine in urine is achieved using a mobile phone camera. Four different colors of CL hydrogels are prepared by adding different ratios of rhodamine B or fluorescein. In the multicolor CL hydrogels, activated CL can generate a conventional multicolor code shift as well as original white light. The luminescence duration and the cause of the white light are analyzed using CL spectroscopy. Sarcosine oxidase is replaced with sarcosine in the multicolor CL hydrogels to obtain the translation of editable codes, which facilitates secondary information encryption and code transmission. The effective luminescence time of these hydrogels is short (13-50 min), and the codes can be deleted naturally.
We discuss and implement a spectral method approach to computing stationary and axisymmetric black hole solutions and their properties in modified theories of gravity. The resulting code is written in the Julia language and is transparent and easily adapted to new settings. We test the code on both general relativity and Einstein-scalar-Gauss-Bonnet gravity. It is accurate and fast, converging on a spinning solution in these theories with tiny errors.
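The paper's solver is written in Julia and targets the Einstein equations; as a generic illustration of the spectral collocation approach it builds on (and in Python for consistency with the other sketches in this listing), the following solves a toy two-point boundary-value problem with a Chebyshev differentiation matrix and exhibits the characteristic spectral accuracy. The problem and tolerances are ours, not the paper's.

```python
import numpy as np

def cheb(N):
    # Chebyshev differentiation matrix on N+1 Gauss-Lobatto points
    # (after Trefethen, "Spectral Methods in MATLAB", program cheb.m).
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))  # negative row sums fix the diagonal
    return D, x

# Toy elliptic boundary-value problem: u'' = exp(x), u(-1) = u(1) = 0,
# a stand-in for the kind of ODE system a stationary solver reduces to.
N = 32
D, x = cheb(N)
D2 = D @ D
u = np.zeros(N + 1)
# Dirichlet conditions imposed by restricting to interior collocation points.
u[1:N] = np.linalg.solve(D2[1:N, 1:N], np.exp(x[1:N]))

# Closed-form solution satisfying the same boundary conditions.
exact = np.exp(x) - np.sinh(1.0) * x - np.cosh(1.0)
print(f"max error: {np.abs(u - exact).max():.2e}")  # spectral accuracy
```

Errors drop exponentially with N for smooth solutions, which is why spectral solvers can be both tiny in code size and extremely accurate.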
Artificial intelligence literature refers to literary works produced by the automatic writing of artificial intelligence. The emergence of AI literature leads to the disenchantment of the word and, at the same time, to the enchantment of code. AI literature is closely linked to AI natural language processing technology, and its essence lies in the automatic generation and combination of words. AI literature has no actual human author, which makes reader-centred reception theory its key interpretive framework. The acceptance of AI texts as literature also rests on a secondary consensus about the form of literature and its sense of ritual. The enchantment of code embodies boundless trust in, and excessive worship of, technology and certainty; disarming it necessarily requires the analysis and regulation of code. AI literature is more often noticed as an artistic event, and this artistic event is the disenchantment of art and the enchantment of technology.
What does it mean to produce trustworthy code for computer scientists? Based primarily on ethnographic fieldwork in an undergraduate computer science program in Singapore, this article explores what it means for computer science students to write 'good code.' In doing so, it explores the values that underlie ideas of trust in the computer science discipline. Drawing on the work of Rebecca Bryant, this article shows how, as students learn to become 'good at' writing code that is technically functional, aesthetically un-individuated, and decontextually efficient, they also learn to become 'good' computer scientists. These standards of good code are distributed across human and nonhuman actors and provide a framework for 'trustless trust' in code. That is, while computer science often assumes an omnipresence of mistrust, this article argues that the production of 'good' code and 'good' computer scientists works to build a system of distrust for computer scientists. At the same time, becoming a good computer scientist is intimately intertwined with students' selfhoods, undermining the foundation of trustless trust even as the ideal of objectively 'good' and trustworthy code cuts this contradiction from view.