Cumulative displacements on faults record how slip has accumulated over time, and thus how faults have grown. We approach this question from two angles, focusing on normal faults with topographic escarpments. First, we develop a code, Auto_Throw, to automate the measurement of throws on normal fault scarps. Using a piecewise linear fitting approach and basic slope and length constraints, Auto_Throw mathematically explores all possible configurations of scarps and offset surfaces on a topographic profile, so that a throw and its uncertainty are defined from the statistical analysis of tens to hundreds of measurements. Auto_Throw also measures scarp slopes and across-fault widths. We test Auto_Throw's performance on 35 normal faults of different lengths and contexts, performing a million measurements. Second, we address the tectonic implications of the measurements. We conduct a global analysis of all fault data in order to smooth out site-dependent complexities and identify common features that may result from the fault growth process. We find that throws, scarp slopes and fault widths vary significantly along faults, yet in a similar manner for each fault quantity. Throw distributions show a common envelope shape, asymmetric with a long slip taper. The steepest scarp slopes are roughly constant along faults but systematically drop towards fault tips. Fault width scales with throw, but a rough zone of constant width is identified off the faults. Tapering slip on a fault, shallowing of its scarp slope, and distributed deformation off its trace could all be indicative of lateral fault propagation.
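The core measurement idea lends itself to a compact illustration. The sketch below is ours, not the published Auto_Throw code: it fits straight lines to a candidate upper surface, scarp face and lower surface along one profile, takes the throw as the vertical separation of the projected far-field surfaces at the scarp midpoint, and sweeps all admissible crest/base picks to build the statistics. Function names and the slope/length thresholds are assumptions for illustration.

```python
import numpy as np

def estimate_throw(x, z, i1, i2):
    """Estimate throw for one candidate scarp configuration.

    x, z   : distance and elevation along a topographic profile (same units)
    i1, i2 : indices bounding the scarp face (crest and base picks)
    """
    up = np.polyfit(x[:i1], z[:i1], 1)          # upper offset surface
    lo = np.polyfit(x[i2:], z[i2:], 1)          # lower offset surface
    scarp = np.polyfit(x[i1:i2], z[i1:i2], 1)   # scarp face
    xm = 0.5 * (x[i1] + x[i2])                  # scarp midpoint
    throw = np.polyval(up, xm) - np.polyval(lo, xm)
    slope = np.degrees(np.arctan(abs(scarp[0])))
    return throw, slope

def throw_statistics(x, z, min_pts=5, max_scarp_slope=45.0):
    """Sweep all admissible (i1, i2) picks and return a throw and its
    1-sigma uncertainty from the ensemble of estimates. Assumes the
    profile descends from the upthrown (left) to downthrown (right) side."""
    throws = []
    n = len(x)
    for i1 in range(min_pts, n - 2 * min_pts):
        for i2 in range(i1 + min_pts, n - min_pts):
            throw, slope = estimate_throw(x, z, i1, i2)
            if throw > 0 and 0 < slope <= max_scarp_slope:
                throws.append(throw)
    t = np.array(throws)
    return t.mean(), t.std()
```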
Context. In Software Engineering, Model-Driven Engineering (MDE) is a methodology that treats metamodels as a cornerstone. As an abstract artifact, a metamodel plays a significant role in the specification of a software language, in particular in generating other artifacts of lower abstraction level, such as code. Developers then enrich the generated code to build their language services and tooling, e.g., editors and checkers. Problem. When a metamodel evolves, the generated code is automatically updated. As a consequence, the developers' additional code is impacted and needs to be co-evolved accordingly. Contribution. This paper proposes a new, fully automatic approach for co-evolving code with the evolution of an Ecore metamodel. The approach relies on pattern matching of the errors in the additional code, analyzing the abstraction gap between the evolved metamodel elements and the code errors in order to co-evolve them. Evaluation and Results. We evaluated our approach on nine Eclipse projects from OCL, Modisco, and Papyrus over several evolved versions of three metamodels. Results show that we automatically co-evolved 771 errors due to metamodel evolution, with 631 matched and applied resolutions. Our approach reached an average precision of 82% and an average recall of 81%, with both varying from 48% to 100%. To check the effect of the co-evolution and its behavioral correctness, we rely on test cases generated before and after co-evolution. We observed that the percentage of passing, failing, and erroneous tests remained the same, with insignificant variations in some projects, suggesting the behavioral correctness of the co-evolution. Moreover, we conducted a comparison with quick fixes, the usual IDE tool for correcting code errors. We found that our automatic co-evolution approach outperforms quick fixes, which lack the context of the metamodel evolution. Finally, we also compared our approach
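The pattern-matching step can be pictured as a small dispatch table from error shapes to resolutions. The sketch below is illustrative only: the regexes, the resolution actions, and the `co_evolve` helper are invented for the example and do not reproduce the paper's actual rules.

```python
import re

# Illustrative (error-pattern -> resolution) table. Patterns and
# resolutions here are invented examples, not the paper's rule set.
RESOLUTIONS = [
    (re.compile(r"The method (\w+)\(\) is undefined"),
     lambda m: f"replace call to {m.group(1)}() with its renamed counterpart"),
    (re.compile(r"(\w+) cannot be resolved to a type"),
     lambda m: f"update import: type {m.group(1)} was moved or deleted"),
]

def co_evolve(error_message):
    """Match a compiler error against known metamodel-evolution patterns
    and return the resolution to apply, or None if unmatched."""
    for pattern, resolve in RESOLUTIONS:
        m = pattern.search(error_message)
        if m:
            return resolve(m)
    return None

print(co_evolve("The method getName() is undefined"))
```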
This study evaluates the performance of two Precise Point Positioning (PPP) software solutions, GAPS and GAMP, for estimating Vertical Total Electron Content (VTEC). Their outputs were compared against the International Reference Ionosphere (IRI) models, the Center for Orbit Determination in Europe (CODE) solutions, SWARM satellite data, and COSMIC-1 observations. The results demonstrate that GAPS achieves superior accuracy and reliability after initialization, with VTEC estimates closely aligning with CODE, IRI, COSMIC-1, and SWARM data. In contrast, GAMP, while exhibiting faster convergence, showed greater variability and a tendency to underestimate VTEC, especially under dynamic ionospheric conditions. Statistical analyses revealed that GAPS produced lower Root Mean Square (RMS) errors across stations, with values below 20 Total Electron Content Units (TECU) when compared to CODE and IRI, and between 5 and 25 TECU when validated against COSMIC-1 and SWARM data. Conversely, GAMP's RMS values reached up to 65 TECU, indicating lower precision. GAPS also showed smaller average and absolute differences, confirming its ability to capture localized ionospheric variations more effectively than GAMP and the IRI models. A t-test indicated no statistically significant differences between GAPS and CODE, IRI, SWARM, or COSMIC-1 for most stations, demonstrating the robustness of GAPS in representing ionospheric behavior. GAMP, however, often exhibited significant differences in VTEC estimates relative to these references. These findings demonstrate GAPS's superior performance in ionospheric studies and VTEC estimation, underscoring the importance of choosing suitable PPP solutions and preprocessing for high-precision GNSS and atmospheric research.
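The statistics quoted above are standard series comparisons. As a minimal sketch, assuming two already-aligned VTEC time series in TECU (`compare_vtec` is our name, not from either package):

```python
import numpy as np
from scipy import stats

def compare_vtec(est, ref):
    """Compare a PPP-derived VTEC series against a reference series
    (e.g. CODE GIM values interpolated to the same epochs). Units: TECU."""
    est, ref = np.asarray(est), np.asarray(ref)
    diff = est - ref
    res = stats.ttest_rel(est, ref)            # paired t-test
    return {
        "rms": float(np.sqrt(np.mean(diff**2))),   # RMS error
        "mean_diff": float(diff.mean()),            # average difference
        "mean_abs_diff": float(np.abs(diff).mean()),
        "p_value": float(res.pvalue),               # significance of the bias
    }
```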
Large Language Models (LLMs) have made significant strides in Natural Language Processing and coding, yet they struggle with robustness and accuracy in complex function calls. To tackle these challenges, this paper in...
Appropriate comments on code snippets provide insight into code functionality and are helpful for program comprehension. However, because authoring comments is costly, many code projects do not contain adequate comments. Automatic comment generation techniques have been proposed to generate comments from pieces of code in order to reduce the human effort of annotating code. Most existing approaches attempt to exploit certain correlations (usually manually specified) between code and generated comments, which can easily be violated when coding patterns change, causing the performance of comment generation to decline. In addition, recent approaches ignore code constructs and treat code snippets like plain text. Furthermore, previous datasets are too small to validate the methods and show their advantage. In this paper, we propose a new attention mechanism called codeAttention to translate code to comments, which is able to exploit code constructs such as critical statements, symbols and keywords. By focusing on these specific points, codeAttention can understand the semantic meaning of code better than previous methods. To verify our approach on wider coding patterns, we build a large dataset from open projects on GitHub. Experimental results on this large dataset demonstrate that the proposed method outperforms existing approaches in both objective and subjective evaluation. We also perform ablation studies to determine the effects of the different parts of codeAttention.
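The published codeAttention architecture is not reproduced here, but the idea of weighting structural tokens can be sketched generically: add a learned per-token-type bias to ordinary attention scores so keywords and symbols can receive extra weight. Everything below (names, shapes, the bias mechanism) is an assumption for illustration.

```python
import torch
import torch.nn.functional as F

def biased_attention(query, keys, values, token_type_ids, type_bias):
    """Attention with an additive bias per token type, a rough stand-in
    for emphasizing keywords, symbols and critical statements.

    query          : (d,)       decoder state
    keys, values   : (n, d)     encoded code tokens
    token_type_ids : (n,)       0=identifier, 1=keyword, 2=symbol, ...
    type_bias      : (n_types,) learned bias per token type (assumed)
    """
    scores = keys @ query / keys.shape[-1] ** 0.5   # scaled dot product
    scores = scores + type_bias[token_type_ids]     # structural emphasis
    weights = F.softmax(scores, dim=0)
    return weights @ values                          # context vector

q = torch.randn(64)
K = V = torch.randn(10, 64)
ttypes = torch.randint(0, 3, (10,))
bias = torch.tensor([0.0, 0.5, 0.3])   # favor keywords and symbols
ctx = biased_attention(q, K, V, ttypes, bias)
```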
Current generative knowledge graph construction approaches usually fail to capture structural knowledge because they simply flatten natural language into serialized texts or a specification language. However, large generative language models trained on structured data such as code have demonstrated impressive capability in understanding natural language for structural prediction and reasoning tasks. Intuitively, we address the task of generative knowledge graph construction with a code language model: given a code-format natural language input, the target is to generate triples, which can be represented as code completion tasks. Specifically, we develop schema-aware prompts that effectively utilize the semantic structure within the knowledge graph. As code inherently possesses structure, such as class and function definitions, it serves as a useful model for prior semantic structural knowledge. Furthermore, we employ a rationale-enhanced generation method to boost the performance. Rationales provide intermediate steps, thereby improving knowledge extraction abilities. Experimental results indicate that the proposed approach obtains better performance on benchmark datasets compared with baselines.
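A code-format prompt of this kind is easy to picture. The sketch below shows one plausible rendering (class and variable names are our invention, not the paper's exact prompt): the schema becomes a class definition, the input text becomes a variable, and the model completes a list of triples.

```python
def build_prompt(text):
    """Render an extraction instance as code so that producing triples
    becomes code completion. Illustrative format only."""
    schema = (
        "class Triple:\n"
        "    \"\"\"A knowledge graph triple (head, relation, tail).\"\"\"\n"
        "    def __init__(self, head, relation, tail): ...\n"
    )
    return (
        schema
        + f'\ntext = "{text}"\n'
        + "# Extract all triples expressed in `text`:\n"
        + "triples = [\n"
    )

# The language model is expected to complete the list, e.g.:
#   Triple("Paris", "capital_of", "France"),
# ]
print(build_prompt("Paris is the capital of France."))
```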
In programming courses, the traditional assessment approach tends to evaluate student performance by scoring one or more project-level summative assignments. This approach no longer meets the requirements of quality programming language education. Based on an upgraded peer code review model, we propose a formative assessment approach for assessing the learning of computer programming languages, and develop an online assessment system (OOCourse) to implement it. Peer code review and inspection is an effective way to ensure the high quality of a program by systematically checking the source code. Although it is commonly applied in industrial and open-source software development, it is rarely taught and practiced in undergraduate-level programming courses. We conduct a case study using the formative assessment method in a sophomore-level Object-Oriented Design and Construction course with more than 240 students. We use Moodle (an online learning system) and several relevant plugins to conduct peer code review, and we mine the running data from the peer assessment activities. The case study shows that formative assessment based on peer code review gradually improved the programming ability of the students in the undergraduate class.
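As a minimal illustration of turning peer reviews into a formative score (the aggregation policy below is an assumption, not OOCourse's documented behavior): taking the median damps outlier reviewers, and an instructor spot check can override clear mis-grading.

```python
import statistics

def formative_score(peer_ratings, instructor_spot_check=None):
    """Aggregate peer code-review ratings (0-100) into one formative
    score. Illustrative policy only."""
    score = statistics.median(peer_ratings)     # robust to one harsh reviewer
    if instructor_spot_check is not None:
        score = instructor_spot_check            # instructor override
    return score

print(formative_score([80, 85, 40, 82]))  # -> 81.0
```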
Recently, Tavakoli et al. proposed a self-testing scheme in the prepare-and-measure scenario, showing that self-testing is not necessarily based on entanglement and the violation of a Bell inequality [Phys. Rev. A 98, 062307 (2018)]. They realized the self-testing of preparations and measurements in an N → 1 (N ≥ 2) random access code (RAC), and provided robustness bounds in a 2 → 1 RAC. Since all N → 1 RACs with shared randomness are combinations of 2 → 1 and 3 → 1 RACs, the 3 → 1 RAC is just as important as the 2 → 1 RAC. In this paper, we find a set of preparations and measurements in the 3 → 1 RAC and use them to complete the robustness self-testing analysis in the prepare-and-measure scenario. Our method is robust to the small but inevitable experimental errors.
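For context, the figure of merit in these self-tests is the average success probability of the RAC, whose known optimal quantum values serve as the benchmarks (standard results, not specific to this paper):

```latex
% Alice encodes x = x_1 ... x_N \in \{0,1\}^N into one qubit;
% Bob receives y \in \{1,\dots,N\} and outputs a guess b for x_y.
\bar{P}_{N \to 1} \;=\; \frac{1}{2^{N} N}
  \sum_{x \in \{0,1\}^{N}} \sum_{y=1}^{N} P\!\left(b = x_y \mid x, y\right)
% Optimal quantum values used as self-testing benchmarks:
\bar{P}^{\,\mathrm{Q}}_{2 \to 1} = \tfrac{1}{2}\!\left(1 + \tfrac{1}{\sqrt{2}}\right) \approx 0.854,
\qquad
\bar{P}^{\,\mathrm{Q}}_{3 \to 1} = \tfrac{1}{2}\!\left(1 + \tfrac{1}{\sqrt{3}}\right) \approx 0.789.
```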
Next-generation malware adopts self-mutation to circumvent current malware detection techniques. The authors propose a strategy based on code normalization that reduces different instances of the same malware to a common form, enabling accurate detection.
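Code normalization can be pictured as a canonicalizing pass over instructions. The toy sketch below (our illustration, not the authors' transformer) drops inserted no-ops and renames registers in first-use order, so trivially mutated variants collapse to the same form:

```python
import re

def normalize(instructions):
    """Reduce trivially mutated variants of the same code to one form:
    drop no-ops and rename registers in first-use order."""
    out, names = [], {}
    for ins in instructions:
        if ins == "nop":
            continue                      # dead filler inserted by mutation
        def rename(m):
            return names.setdefault(m.group(0), f"r{len(names)}")
        out.append(re.sub(r"\br\d+\b", rename, ins))
    return out

a = ["mov r7, 1", "nop", "add r7, r3"]
b = ["mov r2, 1", "add r2, r9"]
assert normalize(a) == normalize(b)      # same canonical form
```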