The documents distributed by this server have been provided by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.
2024
Compilers pose significant challenges in their development as software products. Language developers face the complexities of ensuring efficiency, adhering to good design practices, and maintaining the overall codebase. These factors make it difficult to anticipate the impact of updates on existing software built on the current compiler stack. Furthermore, software created for a specific compiler often lacks reusability in other compiler environments. In this study, we propose a comprehensive framework for the uniform development of compilers that addresses these issues. Our approach involves developing compilers as a collection of small transpilation units, referred to as deltas. The transpilation infrastructure takes source code written in a particular source language and searches for a path of deltas that generates equivalent source code in the target language. By adopting this methodology, language developers can easily update their languages by introducing new deltas into the system. Existing code remains unaffected, as old transpilation paths remain available. To support this framework, we have devised a metric space for efficient delta search. This metric space enables us to define a non-overestimating heuristic function, which proves valuable in solving the search problem. Leveraging the A* search algorithm, we can efficiently transpile programs from a source language to the target language. To evaluate the effectiveness of our approach, we conducted a benchmark comparison between the A* search algorithm and the simpler breadth-first search (BFS) algorithm. The benchmark consisted of over 100 transpilation searches, providing valuable insights into the performance and capabilities of the framework.
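For concreteness, the search over deltas can be sketched in a few lines of Java. The fragment below is only an illustration of A* on a graph whose nodes are languages and whose edges are deltas: the Delta and Node types, the cost model and the placeholder heuristic are assumptions made for this sketch, not the framework's actual API. The one property carried over from the abstract is that the heuristic must never overestimate the remaining cost.

    import java.util.*;

    // Sketch of A* over a graph whose nodes are languages and whose edges are deltas.
    // Delta, Node and the placeholder heuristic are illustrative assumptions,
    // not the actual API of the transpilation framework.
    final class DeltaSearch {

        record Delta(String from, String to, int cost) {}
        record Node(String lang, int f) {}

        // Must never overestimate the true remaining cost (admissibility), e.g.
        // a lower bound obtained from the metric space over languages.
        static int heuristic(String lang, String target) {
            return lang.equals(target) ? 0 : 1;   // trivially admissible placeholder
        }

        static List<Delta> findPath(String source, String target, List<Delta> deltas) {
            Map<String, List<Delta>> outgoing = new HashMap<>();
            for (Delta d : deltas) outgoing.computeIfAbsent(d.from(), k -> new ArrayList<>()).add(d);

            Map<String, Integer> g = new HashMap<>(Map.of(source, 0));   // best known cost per language
            Map<String, Delta> cameBy = new HashMap<>();                 // last delta on the best path
            PriorityQueue<Node> open = new PriorityQueue<>(Comparator.comparingInt(Node::f));
            open.add(new Node(source, heuristic(source, target)));

            while (!open.isEmpty()) {
                Node n = open.poll();
                String lang = n.lang();
                if (n.f() > g.get(lang) + heuristic(lang, target)) continue;   // stale queue entry
                if (lang.equals(target)) {                                     // rebuild the delta path
                    List<Delta> path = new ArrayList<>();
                    for (Delta d = cameBy.get(lang); d != null; d = cameBy.get(d.from()))
                        path.add(0, d);
                    return path;
                }
                for (Delta d : outgoing.getOrDefault(lang, List.of())) {
                    int tentative = g.get(lang) + d.cost();
                    if (tentative < g.getOrDefault(d.to(), Integer.MAX_VALUE)) {
                        g.put(d.to(), tentative);
                        cameBy.put(d.to(), d);
                        open.add(new Node(d.to(), tentative + heuristic(d.to(), target)));
                    }
                }
            }
            return List.of();   // no transpilation path exists in the delta graph
        }
    }

With unit costs and a heuristic that always returns zero, the same loop essentially degenerates into the breadth-first exploration used as the baseline in the benchmark.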
In this work, we analyze both theoretically and empirically the effect of tied input-output embeddings—a popular technique that reduces the model size while often improving training. Interestingly, we found that this technique is connected to Harris (1954)'s distributional hypothesis—often portrayed by Firth (1957)'s famous quote “a word is characterized by the company it keeps”. Specifically, our findings indicate that words (or, more broadly, symbols) with similar semantics tend to be encoded in similar input embeddings, while words that appear in similar contexts are encoded in similar output embeddings (thus explaining the semantic spaces arising in the input and output embeddings of foundational language models). As a consequence of these findings, tying the input and output embeddings is encouraged only when the distributional hypothesis holds for the underlying data. These results also provide insight into the embeddings of foundation language models (which are known to be semantically organized). Further, we complement the theoretical findings with several experiments supporting the claims.
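Operationally, "tying" simply means that one matrix serves both as the embedding table and as the output projection. The plain-Java fragment below is a minimal sketch under assumed dimensions; it does not reproduce the paper's models or experiments.

    import java.util.Random;

    // Sketch of tied input-output embeddings: the same matrix E is used both to embed
    // an input symbol (row lookup) and to score every output symbol (E * h).
    // The sizes and the Gaussian initialization are illustrative assumptions.
    final class TiedEmbeddings {
        final double[][] E;                         // |V| x d, the single shared parameter

        TiedEmbeddings(int vocab, int dim, Random rnd) {
            E = new double[vocab][dim];
            for (double[] row : E)
                for (int j = 0; j < dim; j++) row[j] = rnd.nextGaussian() * 0.02;
        }

        double[] embed(int symbol) {                // input side: embedding lookup
            return E[symbol];
        }

        double[] logits(double[] hidden) {          // output side: tied projection, scores = E * h
            double[] scores = new double[E.length];
            for (int v = 0; v < E.length; v++)
                for (int j = 0; j < hidden.length; j++)
                    scores[v] += E[v][j] * hidden[j];
            return scores;
        }
    }

An untied model would keep two separate matrices here; the analysis asks when collapsing them into one, as above, is justified by the data.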
In this work, we introduce Basilisk, a high-level architectural pattern designed to facilitate interoperability among various languages, platforms, and ecosystems. Language-independent software development is highly desirable, as it enables developers to utilize existing software products with most programming languages. Achieving platform independence is equally advantageous, allowing code to be deployed on different platforms effortlessly. While the development community has often aimed for either language or platform independence, Basilisk aims to combine both into a single product. To realize this dual objective, Basilisk employs two fundamental components. The first is a transpilation infrastructure used to render software products language-independent. The second is an abstraction layer over platforms, enabling the creation of platform-independent software products. To illustrate Basilisk’s potential, we introduce Hydra, a one-to-many, declarative transpilation infrastructure. Hydra has been utilized to develop transpilers from HydraKernel (the source language) to various target languages, including D, C++, C#, Scala, Ruby, Hy, and Python. Additionally, we instantiate the abstraction layer in Wyvern, a low-level embedded domain-specific language for GPU programming, supporting any Vulkan-compatible GPU. With the Hydra transpilation infrastructure, Wyvern becomes available for D, C++, C#, Scala, Ruby, Hy, and Python. We evaluate Basilisk through the instantiation of Hydra and Wyvern, writing five algorithms from the Rodinia suite in the seven available languages, totaling 35 benchmarks. These benchmarks are executed on four different hardware platforms.
Legacy software poses a critical challenge for organizations due to the costs of maintaining and modernizing outdated systems, as well as the scarcity of experts in aging programming languages. The issue extends beyond commercial applications, affecting public administration, as exemplified by the urgent need for COBOL programmers during the COVID-19 pandemic. In response, this work introduces a modernization approach based on dynamic language product lines, a subset of dynamic software product lines. This approach leverages open language implementations and dynamically generated micro-languages for the incremental migration of legacy systems to modern technologies. The language can be reconfigured at runtime to adapt to the execution of either legacy or modern code, and to generate a compatibility layer between the data types handled by the two languages. Through this process, the costs of modernizing legacy systems can be spread across several iterations, as developers can replace legacy code incrementally, with legacy and modern code coexisting until a complete refactoring is possible. By moving the overhead of making legacy and modern features work together in a hybrid system from the system implementation to the language implementation, the quality of the system itself does not degrade due to the introduction of glue code. To demonstrate the practical applicability of this approach, we present a case study on a COBOL system migration to Java. Using the Neverlang language workbench to create modular and reconfigurable language implementations, both the COBOL interpreter and the application evolve to spread the development effort across several iterations. Through this study, this work presents a viable solution for organizations dealing with the complexity of modernizing legacy software to contemporary technologies. The contributions of this work are (i) a language-oriented, incremental refactoring process for legacy systems, (ii) a concrete application of open language implementations, and (iii) a general template for the implementation of interoperability between languages in hybrid systems.
2023
Compilers translate programs from a high level of abstraction into a low-level representation that can be understood and executed by the computer; interpreters directly execute instructions from source code to convey their semantics. Undoubtedly, the correctness of both compilers and interpreters is fundamental to reliably executing the semantics of any software developed by means of high-level languages. Testing is one of the most important methods to detect errors in any software, including compilers and interpreters. Among testing methods, mutation testing is an empirically effective technique often used to evaluate and improve the quality of test suites. However, mutation testing imposes severe demands on computing resources due to the large number of mutants that need to be generated, compiled and executed. In this work, we introduce a mutation approach for programming languages that mitigates this problem by leveraging the properties of language product lines, language workbenches and separate compilation. In this approach, the base language is taken as a black box and mutated by means of mutation operators applied at the language feature level to create a family of mutants of the base language. Each variant of the mutant family is created at runtime, without any access to the source code and without performing any additional compilation. We report results from a preliminary case study in which mutants of an ECMAScript interpreter are tested against the Sputnik conformance test suite for the ECMA-262 specification. The experimental data indicates that this approach can be used to create generally non-trivial mutants.
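To make feature-level mutation concrete, the sketch below treats a language as a black-box list of pre-compiled feature modules and derives a mutant by swapping a single feature for a mutated variant at composition time. The FeatureModule interface and the mutation operator are names invented for this illustration and do not correspond to Neverlang's actual API.

    import java.util.*;
    import java.util.function.UnaryOperator;

    // Illustrative sketch only: a language is a black-box list of pre-compiled feature
    // modules, and a mutation operator swaps one feature for a mutated variant at
    // composition time. FeatureModule and the operator are invented names, not the
    // actual Neverlang API.
    interface FeatureModule {
        String name();
        Object interpret(Object astNode);
    }

    final class FeatureMutation {
        static List<FeatureModule> mutate(List<FeatureModule> baseLanguage,
                                          String targetFeature,
                                          UnaryOperator<FeatureModule> mutationOperator) {
            List<FeatureModule> mutant = new ArrayList<>();
            for (FeatureModule f : baseLanguage)
                mutant.add(f.name().equals(targetFeature) ? mutationOperator.apply(f) : f);
            return mutant;   // one member of the mutant family of the base language
        }
    }

Each call yields one member of the mutant family without recompiling any of the untouched modules.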
We introduce a novel approach to source code representation to be used in combination with neural networks. The representation is designed to produce a continuous vector for each code statement. In particular, we present how the representation is produced in the case of Java source code. We test our representation on three tasks: code summarization, statement separation, and code search, and compare with the state-of-the-art non-autoregressive and end-to-end models for these tasks. We conclude that all tasks benefit from the proposed representation, which boosts their performance in terms of F1-score, accuracy, and MRR, respectively. Moreover, we show how models trained on code summarization and models trained on statement separation can be combined to address methods with tangled responsibilities, meaning that these models can be used to detect code misconduct.
Language workbenches are tools that enable the definition, reuse and composition of programming languages and their ecosystems. This breed of frameworks aims to make the development of new languages easier and more affordable. Consequently, the understandability and learnability of the language used in a language workbench (i.e., the meta-language) should be an important aspect to consider and evaluate. To the best of our knowledge, although the quantitative aspects of language workbenches are often discussed in the literature, the evaluation of understandability and learnability is typically neglected. Neverlang is a language workbench that enables the definition of languages with a modular approach. This paper presents a preliminary study that intends to assess the comprehensibility of Neverlang programs, evaluated in terms of users’ effectiveness and efficiency in a code comprehension task. The study also investigates the relationship between Neverlang comprehensibility and the users’ working memory capacity. Furthermore, we intend to capture the relationship between Neverlang comprehensibility and users’ acceptance, in terms of perceived ease of use, perceived usefulness, and intention to use. Our preliminary results on 10 subjects suggest that the users’ working memory capacity may be related to the ability to comprehend Neverlang programs. On the other hand, effectiveness and efficiency do not appear to be associated with an increase in users’ acceptance variables.
Programming languages are complex software systems integrated across an ecosystem of different applications, such as language compilers or interpreters, but also an integrated development environment complete with syntax highlighting, code completion, error recovery, and a debugger. The complexity of language ecosystems can be faced using language workbenches—i.e., tools that tackle the development of programming languages, domain-specific languages and their ecosystems in a modular way. As with any other software system, one of the priorities that developers struggle to achieve when developing programming languages is reusability. After all, the capacity to easily reuse and adapt existing components to new scenarios can dramatically improve development times. Therefore, just as programming languages offer features to reuse existing code, language workbenches should offer tools to reuse existing language assets. However, reusability can be achieved in many different ways.
In this work, we identify six forms of linguistic reusability, ordered by level of granularity: (i) sub-languages composition, (ii) language features composition, (iii) syntax and semantics assets composition, (iv) semantic assets composition, (v) actions composition, and (vi) action extension. We use these mechanisms to extend the taxonomy of language composition proposed by Erdweg et al. To show a concrete application of this taxonomy, we evaluate the capabilities provided by the Neverlang language workbench with regard to our taxonomy and extend it by adding explicit support for any granularity level that was not originally supported. This is done by instantiating two levels of reusability as actual operators—desugaring and delegation. We evaluate these operators against the clone-and-own approach, which was the only form of reuse at that level of granularity prior to the introduction of explicit operators. We show that with the clone-and-own approach the design quality of the source code is negatively affected. We conclude that language workbenches can benefit from the introduction of mechanisms to explicitly support reuse at all granularity levels.
This study presents a novel category of Transformer architectures known as comb transformers, which effectively reduce the space complexity of the self-attention layer from a quadratic to a sub-quadratic level. This is achieved by processing sequence segments independently and incorporating X-word embeddings to merge cross-segment information. The reduction in attention memory requirements enables the deployment of deeper architectures, potentially leading to more competitive outcomes. Furthermore, we design an abstract syntax tree (AST)-based code representation to effectively exploit comb transformer properties. To explore the potential of our approach, we develop nine specific instances based on three popular architectural concepts: funnel, hourglass, and encoder-decoder. These architectures are subsequently trained on three code-related tasks: method name generation, code search, and code summarization. These tasks encompass a range of capabilities: short/long sequence generation and classification. In addition to the proposed comb transformers, we also evaluate several baseline architectures for comparative analysis. Our findings demonstrate that the comb transformers match the performance of the baselines and frequently perform better.
The introduction of better abstractions is at the forefront of research and practice. Among many approaches, domain-specific languages are subject to an increase in popularity due to the need for easier, faster and more reliable application development that involves programmers and domain experts alike. To smooth the adoption of such a language-driven development process, researchers must create new engineering techniques for the development of programming languages and their ecosystems. Traditionally, programming languages are implemented from scratch and in a monolithic way. Conversely, modular and reusable language development solutions would improve maintainability, reusability and extensibility. Many programming languages share similarities that can be leveraged to reuse the same language feature implementations across several programming languages; recent language workbenches strive to achieve this goal by solving the language composition and language extension problems. Yet, some features are inherently complex and affect the behavior of several other language features. Most notably, the exception handling mechanism involves varied aspects, such as the memory layout, variables and their scope, up to the execution of each statement that may cause an exceptional event—e.g., a division by zero. In this paper, we propose an approach to untangle the exception handling mechanism, dubbed the exception handling layer: its components are modular and fully independent from one another, as well as from other language features. The exception handling layer is language-independent, customizable with regard to the memory layout, and supports unconventional exception handling language features. To avoid any assumptions about the host language, the exception handling layer is a stand-alone framework, decoupled from the exception handling mechanism offered by the back-end. Then, we present a full-fledged, generic Java implementation of the exception handling layer. The applicability of this approach is presented through a language evolution scenario based on a Neverlang implementation of JavaScript and LogLang, which we extend with conventional and unconventional exception handling language features using the exception handling layer, with limited impact on their original implementation.
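As a rough illustration of what a back-end-independent exception handling layer can look like, the sketch below keeps its own stack of handlers and performs unwinding explicitly, instead of mapping guest-language exceptions onto the host JVM's try/catch. All names and the unwinding policy are assumptions made for the sketch; the actual layer additionally abstracts over the memory layout and supports unconventional handling features.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.function.Consumer;
    import java.util.function.Predicate;

    // Sketch of a guest-language exception handling layer that does not rely on the
    // host's try/catch: handlers are pushed and popped explicitly, and a raised
    // exception unwinds to the closest matching handler. All names are illustrative.
    final class ExceptionLayer {
        record Handler(Predicate<Object> matches, Consumer<Object> handle) {}

        private final Deque<Handler> handlers = new ArrayDeque<>();

        void pushHandler(Predicate<Object> matches, Consumer<Object> handle) {
            handlers.push(new Handler(matches, handle));   // entering a guarded region
        }

        void popHandler() {
            handlers.pop();                                // leaving a guarded region normally
        }

        // Called by the interpreter of any statement that detects an exceptional event
        // (e.g. a division by zero); unwinds until a matching handler is found.
        void raise(Object exception) {
            while (!handlers.isEmpty()) {
                Handler h = handlers.pop();                // unwind one handler frame
                if (h.matches().test(exception)) {
                    h.handle().accept(exception);
                    return;
                }
            }
            throw new IllegalStateException("unhandled guest exception: " + exception);
        }
    }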
Modular software development is taken for granted thanks to the abstractions of modern programming languages. However, the programming languages themselves are still often developed as a monolithic whole, despite them being the very foundation our software is based on. Mainstream programming languages are rarely developed incrementally and their evolution and maintenance are a mere afterthought. The growing research field of language workbenches tries to improve language design and development by employing modularization techniques that maximize code reuse up to the language construct granularity. In this paper, we investigate how such an approach fits an agile style of development: agile engineering concepts such as product backlog items and sprint goals can be directly mapped onto language workbench concepts such as language features and language variants. This direct mapping allows for an easier development of modular, extensible and maintainable programming languages. We conceptualized an agile language development style that acknowledges such a mapping and then turned concepts into theory by outlining an agile language development methodology. Then, thanks to our partnership with Tyl, we put theory into practice in an industrial production environment. Now, we share our experience of the agile creation process of a domain-specific language for enterprise resource planning (ERP).
Software product lines (SPL) describe highly-variable software systems as a family of similar products that differ in terms of the features they provide. The promise of SPL engineering is to enable massive software reuse by allowing software features to be reused across a variety of different products made for several customers. However, there are some disadvantages in the extraction of SPLs from standard applications. Most notably, approaches to the development of SPLs are not supported by the base language and use syntax and composition techniques that require a deep understanding of the tools being used. Therefore, the same features cannot be used in a different application and developers must face a steep learning curve when developing SPLs for the first time or when switching from one approach to another. Ultimately, this problem is due to a lack of standards in the area of SPL engineering and in the way SPLs are extracted from variability-unaware applications. In this work, we present a framework based on LSP, dubbed SPLLPS, that aims at standardizing such a process by decoupling the refactoring operations made by the user from the effect they have on the source code. This way, the server for a specific SPL development approach can be used across several development environments that provide clients with customized refactoring options. Conversely, developers can use the same client to refactor SPLs made according to different approaches without needing to learn the syntax of each approach. To showcase the applicability of the approach, we present an evaluation performed by refactoring four SPLs according to two different approaches: the results show that a minimal implementation of the SPLLPS client and server applications can be used to reduce the effort of extracting an SPL by up to 93% and that it can greatly reduce or even completely hide the implementation details from the developer, depending on the chosen approach.
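The decoupling described above can be pictured as a thin client/server contract in the spirit of LSP: the client only issues approach-agnostic refactoring commands, while each server maps them onto the source-code effects of one specific SPL development approach. The interfaces below are invented for illustration and are not the actual protocol of the framework.

    import java.nio.file.Path;
    import java.util.List;

    // Illustrative client/server split in the spirit of LSP: the client issues
    // approach-agnostic refactoring commands, and each server translates them into the
    // concrete effect required by one SPL development approach. Names are invented.
    record ExtractFeature(String featureName, Path file, int startLine, int endLine) {}

    interface SplRefactoringServer {
        List<Path> apply(ExtractFeature command);          // returns the files it rewrote
    }

    // One server per approach; the same client works unchanged with either of them.
    final class IfdefServer implements SplRefactoringServer {
        public List<Path> apply(ExtractFeature c) {
            // would wrap the selected lines in #ifdef FEATURE ... #endif (not shown)
            return List.of(c.file());
        }
    }

    final class FeatureModuleServer implements SplRefactoringServer {
        public List<Path> apply(ExtractFeature c) {
            // would move the selected lines into a composer-specific feature module (not shown)
            return List.of(c.file(), c.file().resolveSibling("features"));
        }
    }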
2022
Programming languages are complex systems that are usually implemented as monolithic interpreters and compilers. In recent years, researchers and practitioners have gained interest in product line engineering to improve the reusability of language assets and the management of variability-rich systems, introducing the notions of language workbenches and language product lines (LPLs). Nonetheless, language development remains a complex activity and design or implementation flaws can easily waste the effort of decomposing a language specification into language features. Poorly designed language decompositions result in highly interdependent components, reducing the variability space of the LPL system and its maintainability. One should detect and fix such design flaws promptly to prevent these risks while minimizing the development overhead. Therefore, various aspects of the quality of a language decomposition should be quantitatively measurable through adequate metrics. The evaluation, analysis and feedback of these measures should be a primary part of the engineering process of an LPL. In this paper, we present an exploratory study trying to capture these aspects by introducing a design methodology for LPLs; we define the properties of a good language decomposition and adapt a set of metrics from the literature to the framework of language workbenches. Moreover, we leverage the AiDE 2 LPL engineering environment to perform an empirical evaluation of 26 Neverlang-based LPLs based on this design methodology. Our contributions form the foundations of a design methodology for Neverlang-based LPLs. This methodology comprises four different elements: i) an engineering process that defines the order in which decisions are made, ii) an integrated development environment for LPL designers, and iii) a set of best practices in the design of well-structured language decompositions when using Neverlang, supported by iv) a variety of LPL metrics that can be used to detect errors in design decisions.
The Erlang programming language is used to build concurrent, distributed, scalable and resilient systems. Every component of these systems has to be thoroughly tested not only for correctness, but also for performance. Performance analysis tools in the Erlang ecosystem, however, do not provide the level of automation and insight needed to be integrated into modern tool chains. In this paper, we present PerformERL: an extendable performance testing framework that combines the repeatability of load testing tools with the detailed insight into internal resource usage that is typical of performance monitoring tools. These features allow PerformERL to be integrated in the early stages of testing pipelines, providing users with a systematic approach to identifying performance issues. This paper introduces the PerformERL framework, focusing on its features, design and imposed monitoring overhead, measured through both theoretical estimates and trial runs on systems in production. The uniqueness of the features offered by PerformERL, together with its usability and contained overhead, proves that the framework can be a valuable resource in the development and maintenance of Erlang applications.
Modern software systems must fulfill the needs of an ever-growing customer base. Due to the innate diversity of human needs, software should be highly customizable and reconfigurable. Researchers and practitioners have gained interest in software product lines (SPL), mimicking aspects of product lines in industrial production for the engineering of highly-variable systems. There are two main approaches to the engineering of SPLs. The first uses macros—such as the #ifdef macro in C. The second—called feature-oriented programming (FOP)—uses variability-aware preprocessors called composers to generate a program variant from a set of features and a configuration. Both approaches have disadvantages. Most notably, these approaches are usually not supported by the base language; for instance, Java is one of the most commonly used FOP languages among researchers, but it does not support macros; rather, it relies on the C preprocessor or a custom one to translate macros into actual Java code. As a result, developers must struggle to keep up with the evolution of the base language, hindering the general applicability of SPL engineering. Moreover, to effectively evolve a software configuration and its features, their location must be known. The problem of recording and maintaining traceability information is considered expensive and error-prone, and it is once again handled externally through dedicated modeling languages and tools. Instead, to properly convey the FOP paradigm, software features should be treated as first-class citizens using concepts that are proper to the host language, so that variability can be expressed and analyzed with the same tools used to develop any other software in the same language. In this paper, we present a simple and flexible design pattern for JVM-based languages—dubbed the devise pattern—that can be used to express feature dependencies and behaviors with a light-weight syntax at both the domain analysis and the domain implementation level. To showcase the qualities and feasibility of our approach, we present several variability-aware implementations of an MNIST encoder—including one using the devise pattern—and compare the strengths and weaknesses of each approach.
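The abstract does not spell out the pattern, so the following is only a guess at its flavor: features modeled as first-class Java objects that declare their dependencies and contribute their behavior, so that both domain analysis (dependency checking) and domain implementation (composition) stay inside the host language and its ordinary tooling. Names and structure are invented for this sketch and are not the actual devise pattern.

    import java.util.*;

    // Hypothetical sketch of features as first-class citizens in plain Java: each feature
    // declares its dependencies and contributes its behavior, and a configuration is
    // checked and composed with ordinary language tooling. "Feature" and "Configuration"
    // are illustrative names, not the devise pattern's actual API.
    interface Feature {
        String name();
        default Set<String> requires() { return Set.of(); }
        void apply(Map<String, Object> product);            // the feature's behavior
    }

    final class Configuration {
        private final Map<String, Feature> selected = new LinkedHashMap<>();

        Configuration with(Feature f) {
            selected.put(f.name(), f);
            return this;
        }

        Map<String, Object> build() {
            for (Feature f : selected.values())              // domain analysis: dependency check
                for (String dep : f.requires())
                    if (!selected.containsKey(dep))
                        throw new IllegalStateException(f.name() + " requires " + dep);
            Map<String, Object> product = new LinkedHashMap<>();
            for (Feature f : selected.values())              // domain implementation: composition
                f.apply(product);
            return product;
        }
    }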
2021
Regression test selection (RTS) approaches reduce the cost of regression testing of evolving software systems. Existing RTS approaches based on UML models use behavioral diagrams or a combination of structural and behavioral diagrams. However, in practice, behavioral diagrams are incomplete or not used. In previous work, we proposed a fuzzy logic based RTS approach called FLiRTS that uses UML sequence and activity diagrams. In this work, we introduce FLiRTS 2, which drops the need for behavioral diagrams and relies on system models that only use UML class diagrams, the most widely used UML diagrams in practice. FLiRTS 2 addresses the unavailability of behavioral diagrams by classifying test cases using fuzzy logic after analyzing the information commonly provided in class diagrams. We evaluated FLiRTS 2 on UML class diagrams extracted from 3331 revisions of 13 open source software systems, and compared the results with those of code-based dynamic (Ekstazi) and static (STARTS) RTS approaches. The average test suite reduction using FLiRTS 2 was 82.06%. FLiRTS 2 selected on average about 82% of the test cases that were selected by Ekstazi and STARTS. The average precision violations of FLiRTS 2 with respect to Ekstazi and STARTS were 13.27% and 9.01%, respectively. The average mutation score of the full test suites was 18.90%; the deviation of the reduced test suites from the average mutation score for each subject was 1.78% for FLiRTS 2, 1.11% for Ekstazi, and 1.43% for STARTS. Our experiment demonstrated that the performance of FLiRTS 2 is close to that of state-of-the-art tools for code-based RTS while requiring less information and performing the selection in less time.
2020
Language development is inherently complex. With the support of a suitable language development environment, most computer scientists could develop their own domain-specific language (DSL) with relative ease. Yet, when the DSL is the result of a configuration over a language product line (LPL)—a special software product line (SPL) of compilers/interpreters and corresponding IDE services—existing environments fail to provide adequate support. An environment for LPL engineering should facilitate the underlying process involving three distinct roles: a language engineer developing the LPL, a language deployer configuring a language product, and a language user using the language product. Neither IDEs nor SPLE environments can cater to all three roles and fully support the LPL engineering process with distributed, incremental development, configuration, and deployment of language variants. In this paper, we present an LPL engineering process for the distributed, incremental development of LPLs and an integrated language product line development environment supporting this process, catering to the three roles, and ensuring the consistency among all artifacts of the LPL: language components implementing a language feature, the feature model, language configurations and the resulting language products. To create such an environment, we married the Neverlang language workbench and its LPL engineering environment AiDE with the FeatureIDE SPL engineering environment. While Neverlang supports the development of LPLs and the deployment of language products, AiDE generates the feature model for the LPL under development, whereas FeatureIDE handles the feature configuration. We illustrate the applicability of the LPL engineering process and the suitability of our development environment for the three roles by showcasing its application for teaching programming with a growable language. In this case study, an LPL for JavaScript was developed and refactored, and 15 increasingly complex language products were configured, updated and finally deployed.
2019
Models can be used to ease and manage the development, evolution, and runtime adaptation of a software system. When models are adapted, the resulting models must be rigorously tested. Apart from adding new test cases, it is also important to perform regression testing to ensure that the evolution or adaptation did not break existing functionality. Since regression testing is performed with limited resources and under time constraints, regression test selection (RTS) techniques are needed to reduce the cost of regression testing. Applying model-level RTS for model-based evolution and adaptation is more convenient than using code-level RTS because the test selection process happens at the same level of abstraction as that of evolution and adaptation. In earlier work, we proposed a model-based RTS approach called MaRTS to be used with a fine-grained model-based adaptation framework that targets applications implemented in Java. MaRTS uses UML models consisting of class and activity diagrams. It classifies test cases as obsolete, reusable, or retestable based on changes made to UML class and activity diagrams of the system being adapted. However, MaRTS did not take into account the changes made to the inheritance hierarchy in the class diagram and the impact of these changes on the selection of test cases. This paper extends MaRTS to support such changes and demonstrates that the extended approach performs as well as or better than code-based RTS approaches in safely selecting regression test cases. While MaRTS can generally be used during any model-driven development or model-based evolution activity, we have developed it in the context of runtime adaptation. We evaluated the extended MaRTS on a set of applications and compared the results with code-based RTS approaches that also support changes to the inheritance hierarchy. The results showed that the extended MaRTS selected all the test cases relevant to the inheritance hierarchy changes and that the fault detection ability of the selected test cases was never lower than that of the baseline test cases. The extended MaRTS achieved comparable results to a graph-walk code-based RTS approach (DejaVu) and showed a higher reduction in the number of selected test cases when compared with a static analysis code-based RTS approach (ChEOPSJ).
The idea to treat domain-specific languages (DSL) as software product lines (SPL) of compilers/interpreters led to the introduction of language product lines (LPL). Although various methodologies and tools for designing LPLs exist, they fail to provide basic IDE services for language variants—such as syntax highlighting, auto completion, and debugging support—that programmers normally expect. While state-of-the-art language development tools permit the generation of basic IDE services for a specific language variant, most tools fail to consider and support the reuse of basic IDE services across families of DSLs. Consequently, to provide basic IDE services for an LPL, one either generates them for the many language variants or designs a separate SPL of IDEs, scattering language concerns. In contrast, we aim to piggyback basic IDE services on language features and provide an IDE for LPLs, which fosters their reuse when generating language variants. In detail, we extended the Neverlang language workbench to permit piggybacking syntax highlighting and debugging support on language components. Moreover, we developed an LPL-driven Eclipse-based plugin that includes a syntax highlighting editor and debugger for an LPL with piggybacked basic IDE services, i.e., where modular language features include the definition of syntax highlighting and debugging. Within this work, we introduce a general mechanism for fostering the reuse of basic IDE services and demonstrate its feasibility by realizing context-aware syntax highlighting for a Java-based family of role-oriented programming languages and providing debugging support for the family of JavaScript-based languages.
2018
Today software systems play a critical role in society’s infrastructures and many are required to provide uninterrupted services in constantly changing environments. As the problem domain and the operational context of such software change, the software itself must be updated accordingly. In this paper we propose to support dynamic software updating through language semantic adaptation; this is done through the use of micro-languages that confine the effect of the introduced change to specific application features. Micro-languages provide a logical layer over a programming language and associate an application feature with the portion of the programming language used to implement it. Thus, they permit updating an application feature by updating the underlying programming constructs without affecting the behaviour of the other application features. Such a linguistic approach provides the benefit of easy addition/removal of application features (with a special focus on non-functional features) to/from a running application by separating the implementation of the new feature from the original application, allowing the application to remain unaware of any extensions. The feasibility of this approach is demonstrated with two studies; its benefits and drawbacks are also analysed.
Domain-Specific Languages (DSLs) bridge the gap between the problem space, in which stakeholders work, and the solution space, i.e., the concrete artifacts defining the target system. They are usually small and intuitive languages whose concepts and expressiveness fit a particular domain. DSLs recently found their application in an increasingly broad range of domains, e.g., cyber-physical systems, computational sciences and high performance computing. Despite recent advances, the development of DSLs is error-prone and requires substantial engineering effort. Techniques to reuse from one DSL to another and to support customization to meet new requirements are thus particularly welcome. Over the last decade, the Software Language Engineering (SLE) community has proposed various reuse techniques. However, all these techniques remain disparate and complicate the development of real-world DSLs involving different reuse scenarios. In this paper, we introduce the Concern-Oriented Language Development (COLD) approach, a new language development model that promotes modularity and reusability of language concerns. A language concern is a reusable piece of language that consists of the usual language artifacts (e.g., abstract syntax, concrete syntax, semantics) and exhibits three specific interfaces that support (1) variability management, (2) customization to a specific context, and (3) proper usage of the reused artifact. The approach is supported by a conceptual model which introduces the concepts required to implement COLD. We also present concrete examples of some language concerns and the current state of their realization with metamodel-based and grammar-based language workbenches. We expect this work to provide insights into how to foster reuse in language specification and implementation, and how to support it in language workbenches.
2017
Context: This paper presents the concept of open programming language interpreters and the implementation of a framework-level metaobject protocol (MOP) to support them.
Inquiry: We address the problem of dynamic interpreter adaptation to tailor the interpreter's behavior to the task to be solved and to introduce new features to fulfill unforeseen requirements. Many languages provide a MOP that supports reflection to some degree. However, MOPs are typically language-specific, their reflective functionality is often restricted, and the adaptation and application logic are often mixed, which hampers the understanding and maintenance of the source code. Our system overcomes these limitations.
Approach: We designed and implemented a system to support open programming language interpreters. The prototype implementation is integrated into the Neverlang framework. The system exposes the structure, behavior and runtime state of any Neverlang-based interpreter, together with the ability to modify them.
Knowledge: Our system provides complete control over the interpreter's structure, behavior and runtime state. The approach is applicable to every Neverlang-based interpreter. Adaptation code can potentially be reused across different language implementations.
Grounding: Given the prototype implementation, we focused on evaluating feasibility. The paper shows that our approach effectively addresses problems commonly found in the research literature. We provide a demonstrative video and examples that illustrate our approach on dynamic software adaptation, aspect-oriented programming, debugging and context-aware interpreters.
Importance: To our knowledge, our paper presents the first reflective approach targeting a general framework for language development. Our system provides full reflective support for free to any Neverlang-based interpreter. We are not aware of any prior application of open implementations to programming language interpreters in the sense defined in this paper. Rather than substituting other approaches, we believe our system can be used as a complementary technique in situations where other approaches present serious limitations.
Regression testing is performed to verify that previously developed functionality of a software system is not broken when changes are made to the system. Since executing all the existing test cases can be expensive, regression test selection (RTS) approaches are used to select a subset of them, thereby improving the efficiency of regression testing. Model-based RTS approaches select test cases on the basis of changes made to the models of a software system. While these approaches are useful in projects that already use model-driven development methodologies, a key obstacle is that the models are generally created at a high level of abstraction. They lack the information needed to build traceability links between the models and the coverage-related execution traces from the code-level test cases. In this paper, we propose a fuzzy logic based approach named FLiRTS, for UML model-based RTS. FLiRTS automatically refines abstract UML models to generate multiple detailed UML models that permit the identification of the traceability links. The process introduces a degree of uncertainty, which is addressed by applying fuzzy logic based on the refinements to allow the classification of the test cases as retestable according to the probabilistic correctness associated with the used refinement. The potential of using FLiRTS is demonstrated on a simple case study. The results are promising and comparable to those obtained from a model-based approach (MaRTS) that requires detailed design models, and a code-based approach (DejaVu).
A proposed approach moves variability support from the programming language to the language implementation level. This enables contextual variability in any application independently of whether the underlying language supports context-oriented programming. A Neverlang-based prototype implementation illustrates this approach.
2016
An increasing number of modern software systems need to be adapted at runtime while they are still executing. It becomes crucial to validate each adaptation before it is deployed to the running system. Models are used to ease software maintenance and can, therefore, be used to manage dynamic software adaptations. For example, models are used to manage coarse-grained anticipated adaptations for self-adaptive systems. However, the need for both fine-grained and unanticipated adaptations is becoming increasingly common, and their validation is also becoming more crucial. This paper proposes an approach to validate unanticipated, fine-grained adaptations performed on models before the adaptations are deployed into the running system. The proposed approach exploits model execution where model representations of the test suites of a software system are executed. The proposed approach is demonstrated and evaluated within the Fine Grained Adaptation (FiGA) framework.
Introduction. To achieve fast and transparent production of empirical evidence in healthcare research, we see increased use of existing observational data. Multiple databases are often used to increase power, assess rare exposures or outcomes, or study diverse populations. For privacy and sociological reasons, original data on individual subjects cannot be shared, requiring a distributed network approach where data processing is performed prior to data sharing.
Case Descriptions and Variation Among Sites. We created a conceptual framework distinguishing three steps in local data processing: (1) data reorganization into a data structure common across the network; (2) derivation of study variables not present in the original data; (3) application of study design to transform longitudinal data into aggregated datasets for statistical analysis. We applied this framework to four case studies to identify similarities and differences in the United States and Europe: EU-ADR, OMOP, Mini-Sentinel and MATRICE.
Findings. National networks (OMOP, Mini-Sentinel, MATRICE) all adopted shared procedures for local data reorganization. The multinational EU-ADR network needed locally defined procedures to reorganize its heterogeneous data into a common structure. Derivation of new data elements was centrally defined in all networks, but the procedure was not shared in EU-ADR. Application of study design was automated and shared in all the case studies. Computer procedures were implemented in different programming languages, including SAS, R, SQL, Java and C++.
Conclusion. Using our conceptual framework we identified several areas that would benefit from research to identify optimal standards for production of empirical knowledge from existing databases.
As with traditional software, the complexity of a programming language implementation is faced with modularization, which favors the separation of concerns, independent development, maintainability and reuse. However, modularity interferes with language optimization, as the latter requires context information that crosses the boundaries of a single module and involves other modules. This makes it hard to package the optimization for a single language concept so that it can be reused together with the concept itself. Therefore, the optimization is in general postponed until all language concepts are available. We defined a model for modular language development with a dispatcher of multiple semantic actions based on condition guards that are evaluated at runtime. The optimization can be implemented as a context-dependent extension applied a posteriori to the composed language interpreter without modifying any single component implementation. This makes the defined optimization effective within the language concept boundaries according to the context provided by other language concepts when available, and eases its reuse with the language concept implementation independently of its usage context. The presented model is integrated into the Neverlang development framework and is demonstrated on the optimization of a JavaScript interpreter written in Neverlang. We also discuss the applicability of our model to other frameworks for modular language development.
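The dispatching mechanism can be sketched in a few lines: several semantic actions are attached to the same language construct, each protected by a condition guard evaluated at runtime, and later registrations (for example, optimizations added a posteriori) take precedence whenever their contextual guard holds. The generic types and method names below are invented for this illustration and are not Neverlang's actual interface.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.function.Function;
    import java.util.function.Predicate;

    // Sketch of a dispatcher of multiple semantic actions: several actions coexist for
    // the same language construct, each protected by a guard evaluated at runtime, and
    // later registrations take precedence. Types and names are invented for this sketch.
    final class GuardedDispatcher<C, R> {
        private record Entry<C, R>(Predicate<C> guard, Function<C, R> action) {}

        private final Deque<Entry<C, R>> actions = new ArrayDeque<>();

        // Registrations made a posteriori (e.g. context-dependent optimizations) are
        // consulted first, without modifying previously registered actions.
        void register(Predicate<C> guard, Function<C, R> action) {
            actions.addFirst(new Entry<>(guard, action));
        }

        R dispatch(C context) {
            for (Entry<C, R> e : actions)
                if (e.guard().test(context)) return e.action().apply(context);
            throw new IllegalStateException("no applicable semantic action");
        }
    }

A component would first register its generic action with an always-true guard; an optimization module can later register a faster action guarded by the contextual condition under which it is valid, without touching the original component.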
Significant research has been dedicated to dynamic software evolution and adaptation, leading to different approaches that can mainly be categorized as either architecture-based or language-based. However, little attention has been devoted to dynamic evolution achieved through language interpreter adaptation. In this paper we present a model for such adaptations and illustrate their applicability and usefulness with practical examples developed in Neverlang, a framework for modular language development with features for the dynamic adaptation of language interpreters.
Recent advances in tooling and modern programming languages have progressively brought back the practice of developing domain-specific languages as a means to improve software development. Consequently, the problem of making composition between languages easier by emphasizing code reuse and componentized programming is a topic of increasing interest in research. In fact, it is not uncommon for different languages to share common features, and, because in the same project different DSLs may coexist to model concepts from different problem areas, it is interesting to study ways to develop modular, extensible languages. Earlier work has shown that traits can be used to modularize the semantics of a language implementation; a lot of attention is often spent on embedded DSLs; even when external DSLs are discussed, the main focus is on modularizing the semantics. In this paper we will show a complete trait-based approach to modularize not only the semantics but also the syntax of external DSLs, thereby simplifying extension and therefore evolution of a language implementation. We show the benefits of implementing these techniques using the Scala programming language.
An increasing number of modern software systems need to be adapted at runtime without stopping their execution. Runtime adaptations can introduce faults in existing functionality, and thus, regression testing must be conducted after an adaptation is performed but before the adaptation is deployed to the running system. Regression testing must be completed subject to time and resource constraints. Thus, test selection techniques are needed to reduce the cost of regression testing. The FiGA framework provides a complete loop from code to models and back that allows fine-grained model-based adaptation and validation of running Java systems without stopping their execution. In this paper we present a model-based test selection approach for regression testing during the validation activity to be used with the FiGA framework. The evaluation results show that our approach was able to reduce the number of selected test cases, and that the model-level fault detection ability of the selected test cases was never lower than that of the original test cases.
Modularization and component reuse are concepts that can speed up the design and implementation of domain-specific languages. Several modular development frameworks have been developed that rely on attributes to share information among components. Unfortunately, modularization also fosters development in isolation, and attributes can be left undefined or used inconsistently due to a lack of coordination. This work presents 1) a type system that permits tracing attributes and statically validating the composition against missing or misused attributes and 2) a correct and complete type inference algorithm for this type system. The type system and inference are based on the Neverlang development framework, but we also discuss how they can be used with different frameworks.
Learning programming is a difficult task. The learning process is particularly disorienting when you are approaching programming for the first time. As a student you are exposed to several new concepts (control flow, variables, etc., but also coding, compiling, etc.) and new ways to think (algorithms). Teachers try to expose students gradually to the new concepts by presenting them one by one, but the tools at the students' disposal do not help: they provide support, suggestions and documentation for the full programming language of choice, hampering the teacher's efforts. On the other side, students need to learn real languages and not didactic languages. In this work we propose an approach to gradually teaching programming supported by a programming language that grows—together with its implementation—along with the number of concepts presented to the students. The proposed approach can be applied to the teaching of any programming language, and some experiments with JavaScript are reported.
Over the past decade language development tools have been significantly improved. This permitted both practitioners and researchers to design a wide variety of domain-specific languages (DSL) and extensions to programming languages. Moreover, multiple researchers have combined different language variants to form families of DSLs as well as programming languages. Unfortunately, current language development tools cannot directly support the development of these families. To overcome this limitation, researchers have recently applied ideas from software product lines (SPL) to create product lines of compilers/interpreters for language families, denoted language product lines (LPL). Similar to SPLs, however, these product lines can be created either using a top-down or a bottom-up approach. Yet, there exists no case study comparing the suitability of both approaches to the development of LPLs, making it unclear how language development tools should evolve. Accordingly, this paper compares both feature modeling approaches by applying them to the development of an LPL for the family of role-based programming languages and discussing their applicability, feasibility and overall suitability for the development of LPLs. Although one might argue that this compares apples and oranges, we believe that this case still provides crucial insights into the requirements, assumptions, and challenges of each approach.
Dynamic Software Updating (DSU) provides mechanisms to update a program without stopping its execution. An indiscriminate update, which does not consider the current state of the computation, potentially undermines the stability of the running application. Automatically determining a safe moment at which to update the running system is still an open problem, often neglected by existing DSU systems. This paper proposes a mechanism to support the choice of a safe update point by marking which points are considered unsafe and should therefore be dodged during the update. The method is based on decorating the code with specific meta-data that can be used to find the right moment to perform the update. The proposed approach has been implemented as an external component that can be plugged into any DSU system. The approach is demonstrated on the evolution of the HSQLDB system from two distinct versions to their next update.
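The abstract leaves the concrete form of the meta-data open, so the sketch below shows one plausible, standard-Java encoding: an annotation marking methods during which an update would be unsafe, to be read reflectively by the external component that chooses the update point. The annotation name and attribute are invented for this illustration.

    import java.lang.annotation.*;

    // Illustrative encoding of the meta-data as a standard Java annotation: methods
    // marked as unsafe update points are read reflectively by an external component,
    // which delays the dynamic update while any thread is inside such a method.
    // The annotation name and attribute are invented for this sketch.
    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.METHOD, ElementType.CONSTRUCTOR})
    @interface UnsafeUpdatePoint {
        String reason() default "";
    }

    class Accounts {
        @UnsafeUpdatePoint(reason = "balance invariant temporarily broken")
        void transfer(long fromAccount, long toAccount, long amount) {
            // ... withdraw from one account and deposit into the other ...
        }
    }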
2015
Evolving software programs requires that software developers reason quantitatively about the modularity impact of several concerns, which are often scattered over the system. In this respect, concern-oriented software analysis is rising to a dominant position in software development. Hence, measurement techniques play a fundamental role in assessing the concern modularity of a software system. Unfortunately, existing measurements are still fundamentally module-oriented rather than concern-oriented. Moreover, the few available concern-oriented metrics are defined in ways that are neither systematic nor shared, and mainly focus on static properties of a concern, even if many properties can only be accurately quantified at run-time. Hence, novel concern-oriented measurements and, in particular, shared and systematic ways to define them are still welcome. This paper lays the basis for a unified framework for concern-driven measurement. The framework provides a basic terminology and criteria for defining novel concern metrics. To evaluate the framework's feasibility and effectiveness, we show how it can be used to adapt some classic metrics to quantify concerns and, in particular, to instantiate new dynamic concern metrics from their static counterparts.
As our understanding of and care for sustainability concerns increase, so does the demand for incorporating these concerns into software. Yet, existing programming language constructs are not well-aligned with concepts of the sustainability domain. This undermines what we term technical sustainability of the software due to (i) increased complexity in programming such concerns and (ii) continuous code changes to keep up with changes in (environmental, social, legal and other) sustainability-related requirements. In this paper we present a proof-of-concept approach showing how technical sustainability support for new and existing concerns can be provided through flexible language-level programming. We propose to incorporate sustainability-related behaviour into programs through micro-languages, enabling such behaviour to be updated and/or redefined as and when required.
Although most programming languages naturally share several language features, they are typically implemented as monolithic products. Language features cannot be plugged into and unplugged from a language and reused in another language. Some modular approaches to language construction do exist, but composing language features requires a deep understanding of their implementation, hampering their use. The choose-and-pick approach from software product lines provides an easy way to compose a language out of a set of language features. However, current approaches to language product lines are not sufficient to cope with the complexity and evolution of real-world programming languages. In this work, we propose a general light-weight bottom-up approach to automatically extract a feature model from a set of tagged language components. We applied this approach to the Neverlang language development framework and developed the AiDE tool to guide language developers towards a valid language composition. The approach has been evaluated on a decomposed version of JavaScript to highlight the benefits of such a language product line.
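A toy version of the bottom-up extraction can be sketched as follows: every language component carries tags for the features it provides and requires, and the extractor derives the set of features plus simple cross-tree "requires" constraints. The Tagged record and the textual output are assumptions made for this sketch, not AiDE's actual model.

    import java.util.*;

    // Toy bottom-up extraction of a feature model from tagged language components:
    // the features are the union of the "provides" tags, and every "requires" tag
    // becomes a cross-tree constraint. Types and output format are illustrative.
    final class FeatureModelExtractor {
        record Tagged(String component, Set<String> provides, Set<String> requires) {}

        static List<String> extract(List<Tagged> components) {
            Set<String> features = new TreeSet<>();
            List<String> constraints = new ArrayList<>();
            for (Tagged c : components) {
                features.addAll(c.provides());
                for (String provided : c.provides())
                    for (String required : c.requires())
                        constraints.add(provided + " requires " + required);
            }
            List<String> model = new ArrayList<>();
            model.add("features: " + features);
            model.addAll(constraints);
            return model;   // a composition is valid only if it satisfies every constraint
        }
    }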
Reuse in programming language development is an open research problem. Many authors have proposed frameworks for modular language development. These frameworks focus on maximizing code reuse, providing primitives for componentizing language implementations. There is also an open debate on combining feature-orientation with modular language development. Feature-oriented programming is a vision of computer programming in which features can be implemented separately, and then combined to build a variety of software products. However, even though feature-orientation and modular programming are strongly connected, modular language development frameworks are not usually meant primarily for feature-oriented language definition. In this paper we present a model of language development that puts feature implementation at the center, and describe its implementation in the Neverlang framework. The model has been evaluated through several language implementations: in this paper, a state machine language is used as a means of comparison with other frameworks, and a JavaScript interpreter implementation is used to further illustrate the benefits that our model provides.
2014
Modern software systems that play critical roles in society's infrastructures are often required to change at runtime so that they can continuously provide essential services in the dynamic environments they operate in. Updating open, distributed software systems at runtime is very challenging. Using runtime models as an interface for updating software at runtime can help developers manage the complexity of updating software while it is executing. To support this idea, we developed the FiGA framework that permits developers to update running software through changes made to UML models of the running software. In this paper, we address the following question: can the UML models be used to express any type of code change a developer desires? Specifically, we report our experience on applying Fowler's code refactoring catalog through model refactoring in the FiGA framework. The goal of this work is to show that the set of FiGA change operators is complete by showing that the refactorings at the source code level can be expressed as model changes in the FiGA approach.
The ability to annotate code and, in general, the capability to attach arbitrary meta-data to portions of a program are features that have become more and more common in programming languages. Annotations in Java make it possible to attach custom, structured meta-data to declarations of classes, fields and methods. However, the mechanism has some limits: annotations can only decorate declarations and their instantiation can only be resolved statically.
With this work, we propose an extension to Java (named @Java) with a richer annotation model, supporting code block and expression annotations, as well as dynamically evaluated members. In other words, in our model, the granularity of annotations extends to the statement and expression level and annotations may hold the result of runtime-evaluated expressions.
Our extension to the Java annotation model is twofold: (i) we introduce block and expression annotations and (ii) we allow every annotation to hold dynamically evaluated values. Our implementation also provides an extended reflection API to support inspection and retrieval of our enhanced annotations.
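For readers unfamiliar with the baseline limitation, the sketch below uses only standard Java (the @Monitored annotation and its channel member are invented for illustration) to show what today's annotation model permits; the commented-out line hints at what an extension like @Java would additionally allow. It is a minimal sketch, not the @Java implementation itself.

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;

// Standard Java: annotations may only decorate declarations and their
// members must be compile-time constants.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Monitored {
    String channel();          // must be a constant at the use site
}

public class StandardAnnotationLimits {

    @Monitored(channel = "billing")   // legal: annotation on a method declaration
    void charge() {
        // In plain Java the following is NOT possible:
        //   @Monitored(channel = currentTenant())  { ...block of statements... }
        // An extension such as @Java lifts both restrictions: annotations on
        // blocks/expressions and members evaluated at run time.
    }

    public static void main(String[] args) throws Exception {
        Method m = StandardAnnotationLimits.class.getDeclaredMethod("charge");
        Monitored a = m.getAnnotation(Monitored.class);
        System.out.println("channel = " + a.channel());
    }
}
```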
Neverlang 2 is a JVM-based framework for language development that emphasizes code reuse through composition of language features. This paper is aimed at showing how to develop extensible, custom languages using Neverlang's component-based model of implementation. Using this model, each feature of the language can be implemented as a separate, conceptually isolated unit that can be compiled and distributed separately from the others. A live tutorial of the framework can be found at http://youtu.be/Szxvg7XLbXc
Modern software systems that play critical roles in society are often required to change at runtime so that they can continuously provide essential services in the dynamic environments they operate in. Updating open, distributed software systems at runtime is very challenging. Using runtime models as an interface for updating software at runtime can help developers manage the complexity of updating software while it is executing. In this chapter we describe an approach to updating Java software at runtime through the use of runtime models consisting of UML class and sequence diagrams. Changes to models are transformed to changes on Java source code, which is then propagated to the runtime system using the JavAdaptor technology. In particular, the presented approach permits in-the-small software changes, i.e., changes at the code statement level, as opposed to in-the-large changes, i.e., changes at the component level. We present a case study that demonstrates the major aspects of the approach and its use. We also give the results of a preliminary evaluation of the approach.
No system escapes the need to evolve, whether to fix bugs, to be reconfigured or to add new features. Evolution becomes particularly problematic when the system to evolve cannot be stopped. Traditionally, the evolution of a continuously running system is tackled by calculating all the possible evolutions in advance and coding them in the artifact itself. This approach gives rise to the code pollution phenomenon, where the code is polluted with code that might never be applied. The approach has the following defects: i) code bloating, ii) it is impossible to predict every possible change and iii) the code becomes hard to read and maintain. Computational reflection by definition allows an artifact to introspect and to intercede on its own structure and behavior, therefore endowing a reflective artifact with (potentially) the ability to self-evolve. Furthermore, dealing with evolution as a nonfunctional concern can limit the code pollution phenomenon.
Bringing the design information (model and/or architecture) to run-time provides the artifact with basic knowledge about itself, on which to reflect when a change is necessary and on how to deploy it. The availability of such knowledge at run-time enables the designer to postpone the planning and the coding of the evolution to when, and only when, it is really necessary. Reflection permits separating the evolution from the artifact, and the design information allows (semi-)automatic planning of how the artifact should evolve when necessary.
In this contribution, we overview the role that reflection and design information have in the development of self-evolving artifacts. Moreover, we summarize the lessons learned as a high-level reflective architecture to support dynamic self-evolution in various contexts, and we show how some of the existing frameworks adhere to such an architecture and how the kind of evolution affects their structure.
Modern software systems are often required to adapt their behavior at runtime in order to maintain or enhance their utility in dynamic environments. Models at runtime research aims to provide suitable abstractions, techniques, and tools to manage the complexity of adapting software systems at runtime. In this chapter, we discuss challenges associated with developing mechanisms that leverage models at runtime to support runtime software adaptation. Specifically, we discuss challenges associated with developing effective mechanisms for supervising running systems, reasoning about and planning adaptations, maintaining consistency among multiple runtime models, and maintaining fidelity of runtime models with respect to the running system and its environment. We discuss related problems and state-of-the-art mechanisms, and identify open research challenges.
Recently, domain-specific language development has again become a topic of interest, as a means to help design solutions to domain-specific problems. Componentized language frameworks, coupled with variability modeling, have the potential to bring language development to the masses by simplifying the configuration of a new language from an existing set of reusable components. However, designing variability models for this purpose requires not only a good understanding of these frameworks and the way components interact, but also an adequate familiarity with the problem domain. In this paper we propose an approach to automatically infer a relevant variability model from a collection of already implemented language components, given a structured but general representation of the domain. We describe techniques to assist users in achieving a better understanding of the relationships between language components, and in finding out which languages can be derived from them with respect to the given domain.
The LR(0) goto-graph is the basis for the construction of parsers for several interesting grammar classes such as LALR and GLR. Early work has shown that even when a grammar is an extension to another, the goto-graph of the first is not necessarily a subgraph of the second. Some authors presented algorithms to grow and shrink these graphs incrementally, but the formal proof of the existence of a particular relation between a given goto-graph and a grown or shrunk counterpart still seems to be missing from the literature. In this paper we use the recursive projection of paths of limited length to prove the existence of one such relation when the sets of productions are in a subset relation. We also use this relation to present two algorithms (Grow and Shrink) that transform the goto-graph of a given grammar into the goto-graph of an extension or a restriction to that grammar. We implemented these algorithms in a dynamically updatable LALR parser generator called DEXTER (the Dynamically EXTEnsible Recognizer) that we are now shipping with our current implementation of the Neverlang framework for programming language development.
2013
Software is changed frequently during its life cycle. New requirements come, and bugs must be fixed. To update an application, it usually must be stopped, patched, and restarted. This causes periods of unavailability, which is always a problem for highly available applications. Even during the development of complex applications, restarts to test new program parts can be time-consuming and annoying. Thus, we aim at dynamic software updates to update programs at runtime. There is a large body of research on dynamic software updates, but so far, existing approaches have shortcomings either in terms of flexibility or performance. In addition, some of them depend on specific runtime environments and dictate the program's architecture. We present JavAdaptor, the first runtime update approach based on Java that (a) offers flexible dynamic software updates, (b) is platform independent, (c) introduces only minimal performance overhead, and (d) does not dictate the program architecture. JavAdaptor combines schema-changing class replacements by class renaming and caller updates with Java HotSwap using containers and proxies. It runs on top of all major standard Java virtual machines. We evaluate our approach's applicability and performance in non-trivial case studies and compare it with existing dynamic software update approaches.
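As background, the minimal sketch below shows the standard HotSwap primitive (java.lang.instrument) that runtime-update approaches of this kind build upon; it is not JavAdaptor itself. Plain HotSwap only allows swapping method bodies, so adding or removing fields and methods fails, which is precisely the gap JavAdaptor closes with class renaming, caller updates, containers and proxies. The class and file names are illustrative.

```java
import java.lang.instrument.ClassDefinition;
import java.lang.instrument.Instrumentation;
import java.nio.file.Files;
import java.nio.file.Path;

// Minimal HotSwap sketch: an agent that redefines an already loaded class.
// The agent jar's manifest must declare Can-Redefine-Classes: true, and the
// JVM is started with: java -javaagent:hotswap-agent.jar -jar app.jar
public final class HotSwapAgent {

    private static volatile Instrumentation inst;

    public static void premain(String args, Instrumentation instrumentation) {
        inst = instrumentation;
    }

    /** Replaces the bytecode of an already loaded class with new bytes. */
    public static void swap(Class<?> target, Path newClassFile) throws Exception {
        byte[] bytecode = Files.readAllBytes(newClassFile);
        // Fails with UnsupportedOperationException if the new version changes
        // the class schema (fields, method signatures) -- the JavAdaptor case.
        inst.redefineClasses(new ClassDefinition(target, bytecode));
    }
}
```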
A number of authors have suggested that AspectJ-like pointcut languages are too limited, and that they cannot select every possible join point in a program. Many enhanced pointcut languages have been proposed; they require virtually no change to the original code, but their improved expressive power often comes at the cost of making the pointcut expression too tightly connected with the structure of the programs being advised. Other solutions consist of simple extensions to the base language; they require only small changes to the original code, but they frequently serve no other immediate purpose than exposing pieces of code to the weaver. Annotations are a form of metadata introduced in Java 5. Annotations have a number of uses: they may provide hints to the compiler, supply information to code processing tools, and they can be retained at runtime. At the time of writing, runtime-accessible annotations in the Java programming language can only be applied to classes, fields and methods. Support for annotating expressions and blocks is a natural extension to Java's annotation model that can also be exploited to expose join points at a finer-grained level. In this paper we present an extension to the @AspectJ language to select block and expression annotations in the @Java language extension.
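For contrast, today's @AspectJ can already select join points by method-level annotations, as in the sketch below (the @Monitored annotation and the aspect are invented for illustration); the extension described in the paper would allow an analogous pointcut to match annotations placed on individual blocks or expressions in @Java. The exact extended pointcut syntax is not reproduced here.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

// Illustrative runtime-retained annotation selected by the pointcut below.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Monitored {
    String channel();
}

// Standard @AspectJ: matches executions of methods carrying @Monitored.
// Block/expression-level selection is what the proposed extension adds.
@Aspect
public class MonitoringAspect {

    @Around("execution(* *(..)) && @annotation(monitored)")
    public Object time(ProceedingJoinPoint pjp, Monitored monitored) throws Throwable {
        long start = System.nanoTime();
        try {
            return pjp.proceed();
        } finally {
            System.out.println(monitored.channel() + ": " + pjp.getSignature()
                    + " took " + (System.nanoTime() - start) + " ns");
        }
    }
}
```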
The ability to annotate code and, in general, the capability to attach arbitrary metadata to portions of a program are features that have become more and more common in programming languages. In fact, various programming techniques and tools exploit their explicit availability for a number of purposes, such as extracting documentation, guiding code profiling, enhancing the description of a data type, marking code for instrumentation (for instance, in aspect-oriented frameworks), and the list could go on. While support for attaching metadata to code is not a new concept (programming platforms such as CLOS and Smalltalk pioneered this field), consistent, pervasive APIs to define and manage code annotations are something comparatively recent on modern platforms like .NET and Java.
Annotations in Java make it possible to attach custom, structured metadata to declarations of classes, fields and methods. With this work, we propose an extension to Java (named @Java) with a richer annotation model, supporting code block and expression annotations. In other words, the granularity of annotations extends to the statement and expression level and is not limited to class, method and field declarations.
Context-oriented programming emerged as a new paradigm to support fine-grained dynamic adaptation of software behaviour according to the context of execution. Though existing context-oriented approaches permit the adaptation of individual methods, in practice behavioural adaptations to specific contexts often require the modification of groups of interrelated methods. Furthermore, existing approaches impose a composition semantics that cannot be adjusted on a domain-specific basis. The mechanism of traits seems to provide a more appropriate level of granularity for defining adaptations, and brings along a flexible composition mechanism that can be exploited in a dynamic setting. This paper explores how to achieve context-oriented programming by using traits as units of adaptation, and trait composition as a mechanism to introduce behavioural adaptations at run time. First-class contexts reify relevant aspects of the environment in which the application executes, and they directly influence the trait composition of the objects that make up the application. To resolve conflicts arising from dynamic composition of behavioural adaptations, programmers can explicitly encode composition policies. With all this, the notion of context traits offers a promising approach to implementing dynamically adaptable systems. To validate the context traits model we implemented a JavaScript library and conducted case studies on context-driven adaptability.
2012
Service-based software systems may need to evolve during their execution. To support this, we need to consider system evolution from the design phase onward. Reflective Petri nets separate the system from its evolution by describing both the system and how it can evolve. However, reflective Petri nets have some expressivity limits and overcomplicate the consistency checking necessary during service evolution. In this paper, we extend the reflective Petri nets approach to overcome such limits and demonstrate this on a case study.
Often an ad hoc programming language integrating features from different programming languages and paradigms represents the best choice to express a concise and clean solution to a problem. However, developing a programming language is not an easy task, and this often discourages developers from building their own problem-oriented or domain-specific language. To foster DSL development and to favor clean and concise problem-oriented solutions, we developed Neverlang. The Neverlang framework provides a mechanism to build custom programming languages from features coming from different languages. The composability and flexibility provided by Neverlang permit developing a new programming language by simply composing features from previously developed languages and reusing the corresponding support code (parsers, code generators, ...).
In this work, we explore the Neverlang framework and try out its benefits in a case study that merges functional programming à la Python with coordination for distributed programming as in Linda.
2011
Dynamic software updates (DSU) are one of the features most requested by developers and users. As a result, DSU is already standard in many dynamic programming languages. However, it is not standard in statically typed languages such as Java. Even though it sits at number three on Oracle's current request for enhancement (RFE) list, DSU support in Java is very limited. Therefore, over the years many different DSU approaches for Java have been proposed. Nevertheless, DSU for Java is still an active field of research, because most of the existing approaches are too restrictive. Some of the approaches have shortcomings either in terms of flexibility or performance, whereas others are platform dependent or dictate the program's architecture. With JavAdaptor, we present the first DSU approach that comes without those restrictions. We will demonstrate JavAdaptor on the well-known arcade game Snake, which we will update stepwise at runtime.
2010
The use of domain-specific languages (DSLs) instead of general-purpose languages introduces a number of advantages in software development, even if it can be problematic to keep the DSL consistent with the evolution of the domain. Traditionally, developing a compiler/interpreter from scratch, as well as modifying an existing compiler to support a novel DSL, is a long and difficult task. We have developed Neverlang to simplify and speed up the development and maintenance of DSLs. The framework presented in this article not only allows developing the syntax and the semantics of a new language from scratch but is particularly focused on the reusability of the language definition. The interpreters/compilers produced with such a framework are modular, and it is easy to add, remove or modify their sections. This allows the DSL definition to be modified in order to follow the evolution of the underlying domain. In this work, we explore the Neverlang framework and try out the adaptability of its language definition.
2009
Generally, progressive procedural content in the context of 3D scene rendering is expressed as recursive functions where a finer level of detail gets computed on demand. Typical examples of procedurally generated content are fractal images and noise textures. Unfortunately, the content cannot always be expressed in this way: developers and content creators need the data to have certain peculiarities (like windows on a wall for a house 3D model) and a method to drive data simplification without losing relevant details. In this paper we discuss how aspect-oriented (AO) techniques can be used to drive the content creation process by mapping each data peculiarity to the code that generates it. Using aspects lets us partially evaluate the code of the procedure, improving the performance without losing the flow of the generation logic. We also discuss how the use of AO can provide techniques to build simplified versions of the data through code transformations.
Aspect-oriented software development has been proposed with the intent of better modularizing object-oriented programs by confining crosscutting concerns in aspects. Unfortunately, aspects do not completely keep their promises. Most of the current approaches have proved to be tightly coupled with the base program's code, compromising modularity. Moreover, the feasible modularization is coarse-grained, since aspects can only be woven at the public interface level and not at a generic statement. We have designed the Blueprint framework to overcome these limits. The join points are located through the description of the context where they could be found. This work is about the framework's realization and the role that graph grammars play in locating the join points in the base program from the context description.
Nowadays, many problems are solved by using a domain-specific language (DSL), i.e., a programming language tailored to work on a particular application domain. Normally, a new DSL is designed and implemented from scratch, requiring a long time-to-market due to implementation and testing issues; when the DSL simply extends another language, it is instead realized as a source-to-source transformation or as an external library with limited flexibility. The Hive framework is developed with the intent of overcoming these issues by providing a mechanism to compose different programming features together, forming a new DSL: what we call a sectional DSL. The support (both at compiler and interpreter level) for each feature is described separately and easily composed with the others. This approach is quite flexible and permits building a new DSL from scratch or simplifying an existing language without penalties. Moreover, it has the desirable side-effect that each DSL can be extended at any time, potentially also at run-time.
The design of dynamic discrete-event systems calls for adequate modeling formalisms and tools to manage possible changes occurring during the system's lifecycle. A common approach is to pollute the design with details that do not regard the current system behavior but rather its evolution. That hampers analysis, reuse and maintenance in general. A reflective Petri net model (based on classical Petri nets) was recently proposed to support the design of dynamic discrete-event systems, and was applied to dynamic workflow management. The underlying idea is that keeping functional aspects separated from evolutionary ones, and applying the latter to the (current) system only when necessary, results in a simple formal model on which the ability to verify properties typical of Petri nets is preserved. In this paper we provide reflective Petri nets with a (labeled) state-transition graph semantics.
Creating tailor-made programs based on the concept of software product lines (SPLs) is gaining more and more momentum. This is because SPLs significantly decrease development costs and time to market while increasing product quality. Especially highly available programs benefit from the quality improvements brought by an SPL. However, after a program variant is created from an SPL and then started, the program is completely decoupled from its SPL. Changes within the SPL, i.e., to the source code of its features, do not affect the running program. To apply the changes, the program has to be stopped, recreated, and restarted. This causes at least short periods of program unavailability, which is not acceptable for highly available programs. Therefore, we present a novel approach based on class replacements and Java HotSwap that allows features to be applied to running programs.
No system escapes the need to evolve, whether to fix bugs, to be reconfigured or to add new features. Evolution becomes particularly problematic when the system to evolve cannot be stopped. Traditionally, the evolution of a continuously running system is tackled by calculating all the possible evolutions in advance and hardwiring them in the application itself. This approach gives rise to the code pollution phenomenon, where the code of the application is polluted with code that might never be applied. The approach has the following defects: i) code bloating, ii) it is impossible to forecast every possible change and iii) the code becomes hard to read and maintain.
Computational reflection by definition allows an application to introspect and intercede on its own structure and behavior, therefore endowing a reflective application with (potentially) the ability to self-evolve. Furthermore, dealing with evolution as a nonfunctional concern, i.e., one that can be separated from the current implementation of the application, can limit the code pollution phenomenon.
Bringing the design information (model and/or architecture) to run-time provides the application with basic knowledge about itself, on which to reflect when a change is necessary and on how to deploy it. The availability of such knowledge at run-time frees the designer from forecasting and coding all the possible evolutions in favor of a sort of evolutionary engine that, to some extent, can evaluate which countermove to apply.
In this contribution, the author will explore the role of reflection and of the design information in the development of self-evolving applications. Moreover, the author will sketch a basic reflective architecture to support dynamic self-evolution and he will analyze the adherence of the existing frameworks to such an architecture.
Most discrete-event systems are subject to evolution during their lifecycle. Evolution often implies the development of new features and their integration in deployed systems. Taking evolution into account from the design phase is therefore mandatory. A common approach consists of hard-coding the foreseeable evolutions at the design level. Besides the obvious difficulties of this approach, the system's design also gets polluted by details not concerning functionality, which hamper analysis, reuse and maintenance. Petri nets, as a central formalism for discrete-event systems, are not exempt from pollution when facing evolution. Embedding evolution in Petri nets requires expertise, in addition to early knowledge of the evolution. The complexity of the resulting models is likely to affect the consolidated analysis algorithms for Petri nets. We introduce Reflective Petri nets, a formalism for dynamic discrete-event systems. Based on a reflective layout, in which functional aspects are separated from evolution, this model preserves the descriptive effectiveness and the analysis capabilities of Petri nets. Reflective Petri nets are provided with a timed state-transition semantics.
Industrial/business processes are an evident example of discrete-event systems that are subject to evolution during their life-cycle. The design and management of dynamic workflows need adequate formal models and support tools to handle in a sound way the possible changes occurring during workflow operation. The known, well-established workflow models, among which Petri nets play a central role, lack features for representing evolution. We propose a recent Petri net-based reflective layout, called Reflective Petri nets, as a formal model for dynamic workflows. A localized open problem is considered: how to determine which tasks should be redone and which should not when transferring a workflow instance from an old to a new template. The problem is efficiently but rather empirically addressed in a workflow management system. Our approach is formal, may be generalized, and is based on the preservation of classical Petri net structural properties, which permits an efficient characterization of workflow soundness.
2008
The design of dynamic workflows needs adequate modeling/specification formalisms and tools to soundly handle possible changes during workflow operation. A common approach is to pollute the workflow design with details that do not regard the current behavior, but rather evolution. That hampers analysis, reuse and maintenance in general. We propose and discuss the adoption of a recent Petri net-based reflective model as a support for dynamic workflow design. Keeping functional aspects separate from evolution results in a dynamic workflow model that merges flexibility with the ability to formally verify basic workflow properties. A structural on-the-fly characterization of sound dynamic workflows is adopted, based on the preservation of Petri net free-choiceness. An application is presented for a localized open problem: how to determine which tasks should be redone and which should not when transferring a workflow instance from an old to a new template.
Traditional approaches to dynamic system analysis and metrics measurement are based on instrumentation of system code (source, intermediate or executable) or need ad hoc support from the run-time environment. In these contexts, the measurement process is tricky and invasive, and the results could be affected by the process itself, making the data not germane. Moreover, the tools based on these approaches are difficult to customize, extend and often even to use, since their properties are rooted in specific system details (e.g., special tools such as bytecode analyzers or virtual machine goodies such as the debugger interface) and require significant effort, skill and knowledge to be adapted.
Notwithstanding its importance, software measurement is clearly a nonfunctional concern and should not impact software development and efficiency. Aspect-oriented programming provides the mechanisms to deal with this kind of concern and to overcome the limitations of software measurement.
In this paper, we present a different approach to dynamic software measurement based on aspect-oriented programming and the corresponding support framework, named AOP→HiddenMetrics. The proposed approach makes the measurement process highly customizable and easy to use, reducing its invasiveness and its dependency on knowledge of the code.
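The sketch below is a generic illustration of the idea (it is not the AOP→HiddenMetrics framework itself, and the com.example package is assumed): an @AspectJ aspect counts method executions without touching or manually instrumenting the measured classes, so the measurement concern stays outside the application code.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

// Counts executions of every method under the (assumed) com.example package.
// The measured code is never modified; the aspect is woven in separately.
@Aspect
public class ExecutionCounter {

    private static final Map<String, LongAdder> COUNTS = new ConcurrentHashMap<>();

    @Before("execution(* com.example..*.*(..))")
    public void count(JoinPoint jp) {
        COUNTS.computeIfAbsent(jp.getSignature().toShortString(), k -> new LongAdder())
              .increment();
    }

    /** Dumps the collected metric, e.g. at shutdown. */
    public static void report() {
        COUNTS.forEach((sig, n) -> System.out.println(sig + " executed " + n + " times"));
    }
}
```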
Aspect-oriented design needs to be systematically assessed with respect to modularity flaws caused by the realization of driving system concerns, such as tangling, scattering, and excessive concern dependencies. As a result, innovative concern metrics have been defined to support quantitative analyses of concern properties. However, the vast majority of these measures have not yet been theoretically validated nor managed to gain acceptance in academic or industrial settings. The core reason for this problem is that they have not been built using clearly defined terminology and criteria. This paper defines a concern-oriented framework that supports the instantiation and comparison of concern measures. The framework subsumes the definition of a core terminology and criteria in order to lay down a rigorous process that fosters the definition of meaningful and well-founded concern measures. In order to evaluate the framework's generality, we demonstrate its instantiation and extension for a number of concern measure suites previously used in empirical studies of aspect-oriented software maintenance.
2007
Evolvability and adaptability are intrinsic properties of today's software applications. Unfortunately, the urgency of evolving/adapting a system often drives the developer to directly modify the application code, neglecting to update its design models. Moreover, most development environments support code refactoring without supporting the refactoring of the design information. Refactoring, evolution and, in general, every change to the code should be reflected in the design models, so that these models consistently represent the application and can be used as documentation in subsequent maintenance steps. Code evolution should not evolve only the application code but also its design models. Unfortunately, co-evolving the application code and its design is a hard job to carry out automatically, since there is an evident and notorious gap between these two representations.
We propose a new approach to code evolution (in particular to code refactoring) that supports the automatic co-evolution of the design models. The approach relies on a set of predefined meta-data that the developer should use to annotate the application code and to highlight the refactoring performed on the code. Then, these meta-data are retrieved through reflection and used to automatically and coherently update the application design models.
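A minimal, purely hypothetical sketch of what such refactoring meta-data could look like is given below; the @MovedMethod annotation, its members and the class names are invented for illustration, and the paper's actual meta-data set may differ.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical refactoring marker: records that a method was moved so that a
// reflective tool can later update the class/sequence diagrams accordingly.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface MovedMethod {
    String fromClass();
    String refactoredOn();   // e.g. an ISO date, kept as a plain string
}

class BillingService {

    // The developer annotates the refactored code; a tool reads the annotation
    // via reflection and moves the operation in the design model as well.
    @MovedMethod(fromClass = "com.example.Invoice", refactoredOn = "2007-03-01")
    void computeTotal() {
        // ... moved business logic ...
    }
}
```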
All software systems are subject to evolution, independently of the development technique. Aspect-oriented software, in addition to separating the different concerns during software development, must not be fragile with respect to software evolution. Otherwise, the benefit of disentangling the code will be buried by the extra complication of maintaining it. To achieve this goal, aspect-oriented languages/tools must evolve: they have to be less coupled to the base program. In recent years, a few attempts have been proposed; Blueprint is our proposal, based on behavioral patterns.
In this paper we test the robustness of the Blueprint aspect-oriented language against software evolution.
Aspect-oriented techniques are widely used to better modularize object-oriented programs by introducing crosscutting concerns in a safe and non-invasive way, i.e., aspect-oriented mechanisms better address the modularization of functionality that orthogonally crosscuts the implementation of the application. Unfortunately, as noted by several researchers, most of the current aspect-oriented approaches are too coupled with the application code, and this fact hinders concern separability and consequently reusability, since each aspect is strictly tailored to the base application. Moreover, the join points (i.e., the locations affected by a crosscutting concern) are defined at the operation level. This implies that the possible set of join points includes every operation (e.g., method invocation) that the system performs, whereas in many contexts we wish to define aspects that work at the statement level, i.e., considering as a join point every point between two generic statements (lines of code).
In this paper, we present our approach, called Blueprint, to overcome the abovementioned limitations of current aspect-oriented approaches. Blueprint is a new aspect-oriented programming language based on modeling the join point selection mechanism at a high level of abstraction to decouple aspects from the application code. To this end, we adopt a high-level pattern-based join point model, where join points are described by join point blueprints, i.e., behavioral patterns describing where the join points should be found.
The design of dynamic workflows needs adequate modeling/specification formalisms and tools to soundly handle possible changes occurring during workflow operation. A common approach is to pollute the design with details that do not regard the current workflow behavior, but rather its evolution. That hampers analysis, reuse and maintenance in general. We propose and discuss the adoption of a recent Petri net-based reflective model (based on classical PNs) as a support for dynamic workflow design, by addressing a localized problem: how to determine which tasks should be redone and which should not when transferring a workflow instance from an old to a new template.
The underlying idea is that keeping functional aspects separated from evolutionary ones, and applying evolution to the (current) workflow template only when necessary, results in a simple reference model on which the ability to formally verify typical workflow properties is preserved, thus favoring dependable adaptability.
Nowadays, software evolution is a very hot topic. It is particularly complex when it regards critical and non-stopping systems. Usually, these situations are tackled by hard-coding all the foreseeable evolutions in the application design and code. Besides the obvious difficulties of pursuing this approach, the application code and design also get polluted with details that do not regard the current system functionality, and that hamper design analysis, code reuse and application maintenance in general. Petri Nets (PN), as a formalism for modeling and designing distributed/concurrent software systems, are not exempt from this issue.
The goal of this work is to propose a PN-based reflective framework that lets one model a system able to evolve, keeping functional aspects separate from evolutionary ones and applying evolution to the model only when necessary. Such an approach tries to keep the system's model as simple as possible, preserving (and exploiting) the ability to formally verify system properties typical of PN, while granting adaptability at the same time.
2006
Nowadays, software evolution is a very hot topic. Many applications need to be updated or extended with new characteristics during their lifecycle. Software evolution is characterized by its huge cost and slow speed of implementation. Often, software evolution implies a redesign of the whole system, the development of new features and their integration in the existing and/or running systems (this last step often implies a complete rebuilding of the system). A good evolution is carried out by evolving the system design information and then propagating the evolution to the implementation. Petri Nets (PN), as a formalism for modeling and designing distributed/concurrent software systems, are not exempt from this issue. Often a system modeled through Petri nets has to be updated, and consequently the model should be updated as well. Some kinds of evolution are foreseeable and could be hard-coded in the code or in the model, respectively.
Embedding evolutionary steps in the model or in the code, however, requires early and full knowledge of the evolution. The model itself would have to be augmented with details that do not regard the current system functionality, and that jeopardize, or make very hard, the analysis and verification of system properties.
In this work, we propose a PN-based reflective framework that lets one model a system able to evolve, keeping functional aspects separate from evolutionary ones and applying evolution to the model only when necessary. Such an approach tries to keep the model as simple as possible, preserving (and exploiting) the ability to formally verify system properties typical of PN, while granting model adaptability at the same time.
Aspect-oriented programming (AOP) has been designed to provide a better separation of concerns at development level by modularizing concerns that would otherwise be tangled and scattered across other concerns. Current mainstream AOP techniques separate crosscutting concerns on a syntactic basis, whereas a concern is more a semantic matter. Therefore, a different, more semantic-oriented approach to AOP is needed. In this position paper, we investigate the limitations of mainstream AOP techniques in this regard and highlight the issues that need to be addressed to design semantic-based join point models.
Aspect-Oriented Programming (AOP) is increasingly being adopted by developers to better modularize object-oriented design by introducing crosscutting concerns. However, due to the tight coupling of existing approaches with the implementing code and to the poor expressiveness of the pointcut languages, a number of problems have become evident. Model-Driven Architecture (MDA) is an emerging technology that aims at shifting the focus of software development from a programming-language-specific implementation to application design, using appropriate representations by means of models that can be transformed toward several development platforms. Therefore, this work presents a possible solution based on modeling aspects at a higher level of abstraction which are, in turn, transformed to specific targets.
Rectangular dualization is an effective, hierarchically oriented visualization method for network topologies and can be used in many other problems that have in common with networks the condition that objects and their interoccurring relations are represented by means of a planar graph. However, only 4-connected triangulated planar graphs admit a rectangular dual. In this paper we present a linear-time algorithm to optimally construct a rectangular layout for a general class of graphs, and we discuss a variety of application fields where this approach represents a helpful support for visualization tools.
Aspect-Oriented Programming (AOP) is a powerful technique to better modularize object-oriented programs by introducing crosscutting concerns in a safe and noninvasive way. Unfortunately, most of the current join point models are too coupled with the application code. This fact harms the evolvability of the program, hinders concern selection and reduces aspect reusability. Overcoming this problem is a hot topic. This work proposes a possible solution to the limits of the current aspect-oriented techniques, based on modeling the join point selection mechanism at a higher level of abstraction to decouple the base program and the aspects.
In this paper, we present, by means of examples, a novel join point model based on design models (e.g., expressed through UML diagrams). Design models provide a high-level view of the application structure and behavior, decoupled from the base program. A design-oriented join point model renders aspect definitions more robust against base program evolution, more reusable and independent of the base program.
The urgency that characterizes many requests for evolution forces system administrators/developers to directly adapt the system without first adapting its design. This creates a gap between the design information and the system it describes. The existing design models provide a static and often outdated snapshot of the system that disregards the system's changes. Software developers spend a lot of time evolving the system and then updating the design information according to the system's evolution. To this end, we present an approach to automatically keep the design information (diagrams, in our case) updated when the system evolves. The diagrams are bound to the application, and all changes to it are reflected in the diagrams as well.
Aspect-Oriented Programming is a powerful technique to better modularize object-oriented programs by introducing crosscutting concerns in a safe and noninvasive way. Unfortunately, most of the current join point models are too coupled with the application code. This fact hinders concern separability and reusability, since each aspect is strictly tailored to the base application. This work proposes a possible solution to this problem based on modeling the join point selection mechanism at a higher level of abstraction. In our view, the aspect designer does not need to know the inner details of the application, such as a specific implementation or the naming conventions used; rather, he or she exclusively needs to know the application behavior to apply aspects.
In the paper, we present a novel join point model whose join point selection mechanism is based on a high-level program representation. This high-level view of the application decouples the aspect definitions from the base program structure and syntax. The separation between aspects and base program renders the aspects more reusable and independent of the manipulated application.
2005
Reflective programming is becoming popular due to the increasing set of dynamic services provided by execution environments like the JVM and the CLR. With custom attributes, Microsoft introduced an extensible model of reflection for the CLR: they can be used as additional decorations on element declarations. The same notion has been introduced in Java 1.5. The extensible model proposed in both platforms limits annotations to class members. In this paper we describe [a]C#, an extension of the C# programming language that allows programmers to annotate statements or code blocks and retrieve these annotations at run-time. We show how this extension can be reduced to the existing model. A set of operations on annotated code blocks to retrieve annotations and manipulate bytecode is introduced. Finally, we discuss how to use [a]C# to annotate programs, giving hints on how to parallelize a sequential method and how it can be implemented by means of the abstractions provided by the run-time of the language.
In this paper, we have briefly explored the aspect-oriented approach as a tool for supporting software evolution. The aim of this analysis is to highlight the potential and the limits of aspect-oriented development for software evolution. From our analysis it follows that, in general (and in particular for AspectJ), the approach to defining join points, pointcuts and advice is not intuitive, abstract and expressive enough to support all the requirements for carrying out software evolution. We have also examined how a mechanism for specifying pointcuts and advice based on design information, in particular on the use of UML diagrams, can better support software evolution through aspect-oriented programming. Our analysis and proposal are presented through an example.
Software modeling has received a lot of attention in the last decade and is now an important support for the design process. Indeed, the design process is very important to the usability and understandability of the system; for example, functional requirements present a complete description of how the system will function from the user's perspective, while non-functional requirements dictate properties and impose constraints on the project or system.
The design models and the implementation code must be strictly connected, i.e., we must have correlation and consistency between the two views, and this correlation must exist throughout the whole software life cycle. Often, the early stages of development, the specifications and the design of the system, are ignored once the code has been developed. This practice causes a lot of problems, in particular when the system must evolve. Nowadays, maintaining software is a difficult task, since there is a high degree of coupling between the software itself and its environment. Often, changes in the environment cause changes in the software; in other words, the system must evolve to follow the evolution of its environment.
Typically, a design is created initially, but as the code gets written and modified, the design is not updated to reflect such changes.
This paper describes and discusses how the design information can be used to drive the software evolution and consequently to maintain consistency among design and code.
Library development has greatly benefited from the wide adoption of virtual machines like Java and Microsoft .NET. Reflection services and first-class dynamic loading have contributed to this trend. Microsoft introduced the notion of custom annotation, which is a way for the programmer to define custom meta-data stored along with reflection meta-data within the executable file. Recently, Java has also introduced an equivalent notion into the virtual machine. Custom annotations allow the programmer to give hints to libraries about his or her intentions without having to introduce semantic dependencies within the program; on the other hand, these annotations are read at run-time, introducing a certain amount of overhead. The aim of this paper is to investigate the impact of this new feature on library design, focusing both on expressivity and performance issues.
Reflective programming is becoming popular due to the increasing set of dynamic services provided by execution environments like JVM and CLR. With custom attributes Microsoft introduced an extensible model of reflection for CLR: they can be used as additional decorations on element declarations. The same notion has been introduced in Java 1.5. The annotation model, both in Java and in C#, limits annotations to classes and class members. In this paper we describe [a]C#, an extension of the C# programming language, that allows programmers to annotate statements and code blocks and retrieve these annotations at run-time. We show how this extension can be reduced to the existing model. A set of operations on annotated code blocks to retrieve annotations and manipulate bytecode is introduced. We also discuss how to use [a]C# to annotate programs giving hints on how to parallelize a sequential method and how it can be implemented by means of the abstractions provided by the run-time of the language. Finally, we show how our model for custom attributes has been realized.
2004
The growing diffusion of wireless technologies is leading to the deployment of small-scale and location-dependent information services (LDISs). These new services call for provisioning schemes that are able to operate in a distributed environment and do not require network infrastructure. This paper describes an approach to a service-oriented middleware which enables a mobile device to be aware of the surrounding environment and to transparently exploit every LDIS discovered in the coverage area of the hosting wireless network. The paper introduces the seamless nomadic system-aware (SNA) servant. SNA servants run on mobile devices, discover LDISs and are not associated with any specific service. The paper also describes the key features of the SNA servants' implementation and of rendering them interoperable and cross-platform on, at least, the .NET and JVM frameworks.
In the last few years the interest in reflection has grown, and many modern programming languages/architectures provide the programmer with reflective mechanisms. Like any other novelty, reflection has its detractors. They, rightly or wrongly, accuse reflection of being too inefficient to be used with real profit. In this work, we have investigated the performance of the Java reflection library (especially of the class Method and of its method invoke) and realized a mechanism that improves its performance. Our mechanism consists of a class named SmartMethod and of a parser that contributes to transforming reflective invocations into direct calls carried out by the standard invocation mechanism of Java. The SmartMethod class is compliant with the class Method of the standard Java core reflection library (that is, it provides exactly the same services) but offers a more efficient reflective method invocation.
Computational reflection provides developers with a programming mechanism devoted to favoring code extensibility, reuse and maintenance. Notwithstanding that, it has not yet achieved developers' unanimous acceptance nor its full potential. In our opinion, this depends on the intrinsic complexity of most reflective approaches, which hinders their efficient implementation. The aim of this paper is to define the essence of reflection, that is, to identify the minimal set of characteristics that a software system must have to be considered reflective. The consequence is the realization of a run-time environment supporting the essence of reflection without affecting the programming language and with a minimal impact on the programming system design. This achievement will improve reflective system performance, reducing the impact of one of the most widespread criticisms of reflection: low performance.
In a deregulated energy market, the adoption of multipurpose and flexible software tools for the optimal design and sizing of energy systems is becoming mandatory. For these reasons, we have developed WIDGET-TEMP (Web-based Interface and Distributed Graphical Environment for TEMP, ThermoEconomic Modular Program), a tool that is the result of interdisciplinary research applying recent IT innovations, such as XML and web-based approaches, to the analysis and optimization of energy plant layouts on a thermoeconomic basis. WIDGET provides an interface for remotely accessing the internal thermoeconomic analysis and the full life-cycle cost and investment assessment, which includes the economic impact of environmental costs due to pollutant emissions. This approach reduces the requirements on the local machine in terms of processor time and memory, and allows users to exploit the tool just when needed.
An initial functional productive diagram of the plant is now automatically drawn and is available to the user on a visual basis.
We present a general description of the tool organization and outline the approach for modeling the technical performance and cost of the component. Then, we describe the latest upgrades of TEMP, and report the new gas turbine cost equations. Finally, the tool is applied to a conventional simple and combined cycle, showing both the usability of the new tool and the reliability of the results.
In this paper, we have briefly analyzed the aspect-oriented approach with respect to software evolution. The aim of this analysis is to highlight the potential of aspect orientation for software evolution and its limits. From our analysis, we can state that current pointcut definition mechanisms are not expressive enough to pick out, from design information, where software evolution should be applied. We also give some suggestions on how to improve the pointcut definition mechanism.
Software systems today need to dynamically self-adapt to dynamic requirement changes. In this paper we describe a reflective middleware whose aim is to consistently evolve software systems against runtime changes. This middleware provides the ability to change both the structure and the behavior of the base-level system at run-time by using its design information. The meta-level is composed of cooperating objects and has been specified using a design pattern language. The base objects are controlled by meta-objects that drive their evolution. The essence of the middleware is the ability to extract the design data from the base application and to constrain the dynamic evolution to stable and consistent systems.
In this paper we present a formalism to describe hierarchically organized communication networks. After a short overview of existing graph description languages, we discuss the advantages and disadvantages of their application in network optimization. We conclude by extending one of these formalisms with features supporting the description of the relationship between the optimized logical layout of a network and its physical counterpart. Elements for describing traffic parameters are also given.
In this paper we present a proposal for safely evolving a software system against run-time changes. This proposal is based on a reflective architecture that provides objects with the ability to dynamically change their behavior by using their design information. The meta-level system of the proposed architecture supervises the evolution of the software system to be adapted, which runs as the base-level system of the reflective architecture. The meta-level system is composed of cooperating components; these components carry out the evolution against sudden and unexpected environmental changes on a reification of the design information (e.g., object models, scenarios and statecharts) of the system to be adapted. The evolution takes place in two steps: first, a meta-object called the evolutionary meta-object plans a possible evolution in response to the detected event; then another meta-object, called the consistency checker meta-object, validates the feasibility of the proposed plan before actually evolving the system. Meta-objects use the system design information to govern the evolution of the base-level system. Moreover, we show our architecture at work on a case study.
This paper describes how design information, in our case specifications, can be used to evolve a software system and validate the consistency of such an evolution. This work complements our previous work on reflective architectures for software evolution, describing the role played by meta-data in the evolution of software systems. The whole paper focuses on a case study; we show how the urban traffic control system (UTCS), or part of it, must evolve when unscheduled road maintenance, a car crash or a traffic jam blocks normal vehicular flow on a specific road. The UTCS case study perfectly shows how requirements can dynamically change and how the design of the system should adapt to such changes. Both system consistency and adaptation are governed by rules based on meta-data representing the system design information. As we show by an example, such rules represent the core of our evolutionary approach, driving the evolutionary and consistency checker meta-objects and interfacing the meta-level system (the evolutionary system) with the system that has to be adapted.
In the last few years the interest in reflection has grown, and many modern programming languages/environments (e.g., Java and .NET) provide the programmer with reflective mechanisms, i.e., with the ability to dynamically look into (introspect) the structure of the code from the code itself. In spite of its evident usefulness, reflection has many detractors, who claim that it is too inefficient to be used with real profit. In this work, we have investigated the performance issue in the context of the Java reflection library and present a different approach to introspection in Java that improves its performance. The basic idea of the proposed approach consists of moving most of the overhead due to dynamic introspection from run-time to compile-time. The efficiency improvement has been demonstrated by providing a new reflection library, based on the proposed approach, that is compliant with the standard Java reflection library (that is, it provides exactly the same services). This paper focuses on speeding up the reification and the invocation of methods, i.e., on the class SmartMethod that replaces the class Method of the standard reflection library.
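The following sketch only illustrates the baseline overhead that this line of work targets, using nothing but the standard reflection API: a reflective invocation through Method.invoke compared with a direct call. It is a measurement of the problem, not the SmartMethod solution, which moves most of the lookup and dispatch cost to compile time.

```java
import java.lang.reflect.Method;

// Compares one million reflective invocations with one million direct calls.
// Timings are indicative only (no warm-up or JIT control is attempted here).
public class ReflectionOverheadDemo {

    public int inc(int x) { return x + 1; }

    public static void main(String[] args) throws Exception {
        ReflectionOverheadDemo target = new ReflectionOverheadDemo();
        Method inc = ReflectionOverheadDemo.class.getMethod("inc", int.class);

        long t0 = System.nanoTime();
        int a = 0;
        for (int i = 0; i < 1_000_000; i++) {
            a = (Integer) inc.invoke(target, a);   // reflective call
        }
        long reflective = System.nanoTime() - t0;

        t0 = System.nanoTime();
        int b = 0;
        for (int i = 0; i < 1_000_000; i++) {
            b = target.inc(b);                     // direct call
        }
        long direct = System.nanoTime() - t0;

        System.out.println("reflective: " + reflective + " ns, direct: " + direct + " ns");
    }
}
```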
2003
Wireless computing, because of the limited memory capacity of palmtops, forces the separation of data (stored on a remote server) from the application (running on the palmtops) that uses it. In applications working on data that change frequently, several kilobytes of data are exchanged between the server and the client palmtops. It is fairly evident that such tight coupling may easily saturate the network bandwidth when many palmtops are used in parallel, thus degrading the performance of the applications running on them. This paper shows a way to reduce the waste of bandwidth by best exploiting the palmtop memory for data caching. The proposed smart data caching is based on context information. We have also applied our method and analyzed its features in a specific application: an electronic guide to archeological sites in the PAST EC IT project.
In the last few years there has been a considerable penetration of wireless technology in everyday life. This penetration has also increased the availability of Location-Dependent Information Services (LDIS), such as local information access (e.g. traffic reports, news, etc.), nearest-neighbor queries (such as finding the nearest restaurant, gas station, medical facility, ATM, etc.) and others.New wireless environments and paradigms are continuously evolving and novel LDISs are continuously being deployed. Such a growth means the need to deal with:
services without standard interfaces - the same or similar LDISs are offered by different vendors through different APIs even when they provide the same functionality; services deployed dynamically - LDISs made available on a need basis or when the scenario dynamically changes, which in addition requires dynamic roaming between services and dynamic service interchangeability; and non-classified services (i.e., novel services).
The classical remote method invocation (RMI) mechanism adopted by several object-based middleware platforms is `black box' in nature, and the RMI functionality, i.e., the RMI interaction policy and its configuration, is hard-coded into the application. This nature of RMI hinders software development and reuse, forcing the programmer to focus on communication details often marginal to the application being developed. Extending the RMI behavior with extra functionality is also very difficult, because the added code must be scattered among the entities involved in the communications. This situation could be improved by developing the system in several separate layers, confining communications and related matters to specific layers. As demonstrated by recent work on reflective middleware, reflection represents a powerful tool for realizing such a separation and therefore overcoming the problems referred to above. Such an approach improves the separation of concerns between the communication-related algorithms and the functional aspects of an application. However, communications and all related concerns are not managed as a single unit separate from the rest of the application, which makes their reuse, extension and management difficult. As a consequence, communication concerns continue to be scattered across the meta-program, communication mechanisms continue to be black-box in nature, and there is only limited opportunity to adjust communication policies through configuration interfaces.
In this paper we examine the issues raised above, and propose a reflective approach especially designed to open up the Java RMI mechanism. Our proposal consists of a new reflective model, called multi-channel reification, that reflects on and reifies communication channels, i.e., it renders communication channels first-class citizens. This model is designed both for developing new communication mechanisms and for extending the behavior of communication mechanisms provided by the underlying system. Our approach is embodied in a framework called mChaRM which is described in detail in this paper.
2002
The main objective of remote-method-invocation- and object-based middleware is to provide a convenient environment for the realization of distributed computations. In most cases, unfortunately, interaction policies in these middleware platforms are hardwired into the platform itself. Some platforms, e.g., CORBA's interceptors, offer means to redefine such details, but their flexibility is limited to the possibilities that the designer has foreseen. In this way, distributed algorithms must be exclusively embedded in the application code, breaking any separation of concerns between functional and nonfunctional code. Some programming languages like Java disguise remote interactions as local calls, thus rendering their presence transparent to the programmer. However, their management is neither transparent nor easily masked.
We can summarize these kinds of problems with current middleware platforms as follows:
1. interaction policies are hidden from the programmer who cannot customize them (lack of adaptability);
2. communication, synchronization, and tuning code is intertwined with application code (lack of separation of concerns);
3. algorithms are scattered among several objects, thus forcing the programmer to explicitly coordinate their work (lack of global view).
In this paper we show how to enhance the Java RMI framework to support object groups. The package we have developed allows programmers to dynamically deal with groups of servers all implementing the same interface. Our group mechanism can be used both to improve reliability by preventing system failures and to implement processor-farm parallelism. Each service request dispatched to an object group returns all the values computed by the group members, permitting the implementation of both kinds of applications. Moreover, these approaches differ both in how computation failures are handled and in the semantics of the implemented interface. Our extension is achieved by enriching the classic RMI framework and the existing RMI registry with new functionalities. From the user's point of view, the multicast RMI acts just like the traditional RMI system; indeed, the same architecture is used.
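The following plain-Java sketch (local objects only, no actual RMI) illustrates the group-invocation semantics described above, where one request dispatched to a group of servers implementing the same interface yields all the values computed by the members; the names Service and ServiceGroup are invented for this illustration and are not part of the package described in the paper.

import java.util.ArrayList;
import java.util.List;

public class GroupInvocation {
    interface Service { int compute(int x); }

    // Invented helper: the group exposes the same interface as its members.
    static class ServiceGroup implements Service {
        private final List<Service> members = new ArrayList<>();
        void join(Service s) { members.add(s); }

        // Dispatch the request to every member and collect all returned values,
        // mirroring the group-invocation semantics described in the abstract.
        List<Integer> computeAll(int x) {
            List<Integer> results = new ArrayList<>();
            for (Service s : members) results.add(s.compute(x));
            return results;
        }

        // A single-value view: a fault-tolerant use could pick any replica's
        // result, while a processor-farm use would partition the work instead.
        public int compute(int x) { return computeAll(x).get(0); }
    }

    public static void main(String[] args) {
        ServiceGroup group = new ServiceGroup();
        group.join(x -> x + 1);   // replica 1 (in the real package, a remote server)
        group.join(x -> x + 1);   // replica 2, same interface
        System.out.println(group.computeAll(41));   // prints [42, 42]
    }
}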
An efficient method for cross-correlating images in querying operations over an image database is presented. The method relies on a multiresolution compression method, based on a variant of the wavelet-packet best-basis algorithm of Coifman and Wickerhauser. It is shown that a searched image can be correlated with the compressed images of the database in a fraction of the time required by the traditional cross-correlation function computed on the original bitmap image.
Today, complex information systems need a simple way of changing object behavior according to changes that occur in their running environment. We present a reflective architecture which provides the ability to change object behavior at run-time by using design-time information. By integrating reflection with design patterns we get a flexible and easily adaptable architecture. A reflective approach that describes the object model, scenarios and statecharts helps to dynamically adapt the software system to environmental changes. The object model, system scenarios and much other design information are reified by special meta-objects, named evolutionary meta-objects. Evolutionary meta-objects deal with two types of run-time evolution. Structural evolution is carried out through the causal connection between evolutionary meta-objects and their referents, changing the structure of these referents by adding or removing objects or relations. Behavioral evolution allows the system to dynamically adapt its behavior to environment changes by itself. Evolutionary meta-objects react to environment changes by adapting the information they have reified and steering the system evolution. They provide a natural liaison between design information and the system based on such information. This paper describes how this liaison can be built and how it can be used for adapting a running system to environment changes.
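A minimal, purely illustrative sketch of the idea (not the paper's actual framework) follows: a meta-object keeps a reference to its referent and, when notified of an environment change, replaces the referent's behavior, i.e., it performs a small behavioral evolution. All names are invented for this example.

public class EvolutionaryMetaObjectDemo {
    interface CrossingPolicy { String schedule(); }

    static class TrafficLight {                       // the referent (base level)
        CrossingPolicy policy = () -> "normal cycle";
        String tick() { return policy.schedule(); }
    }

    static class EvolutionaryMetaObject {             // the meta level
        private final TrafficLight referent;          // causal connection to the referent
        EvolutionaryMetaObject(TrafficLight referent) { this.referent = referent; }

        // Behavioral evolution: react to an environment change by replacing
        // the referent's behavior according to the reified design information.
        void onEnvironmentChange(String event) {
            if (event.equals("road blocked"))
                referent.policy = () -> "reroute: longer green on detour";
        }
    }

    public static void main(String[] args) {
        TrafficLight light = new TrafficLight();
        EvolutionaryMetaObject meta = new EvolutionaryMetaObject(light);
        System.out.println(light.tick());             // normal cycle
        meta.onEnvironmentChange("road blocked");     // e.g., unscheduled maintenance
        System.out.println(light.tick());             // reroute: longer green on detour
    }
}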
The fragment of pattern language proposed in this paper shows how to adapt a non-stoppable software system to reflect changes in its running environment. These framework patterns rely on well-known techniques for programs to dynamically analyze and modify their own structure, commonly called computational reflection. Our patterns go together with common reflective software architectures.
2001
The Problem
From our experience, RMI-based frameworks, and in general all frameworks supporting distributed computation, seem to have some problems. We detected at least three problems related to their flexibility and applicability. Most of them lack flexibility. Their main duty consists in providing a friendly environment suitable for simply realizing distributed computations. Unfortunately, interaction policies are hardwired in the framework. If it is not otherwise foreseen, it is a hard job to change, for example, how messages are marshaled/unmarshaled, or the dispatching algorithm that the framework adopts. Some frameworks provide limited mechanisms to redefine such details, but their flexibility is limited to the possibilities that the designer has foreseen.
Distributed algorithms are embedded in the application code, breaking the well-known software engineering principle of separation of concerns. Some programming languages like Java mask remote interactions (i.e., remote method or procedure calls) as local calls, rendering their presence transparent to the programmer. However, their management (i.e., setting up the environment needed to correctly carry out remote computations, and synchronizing the objects involved) is neither transparent nor easily masked. Such behavior hinders the reuse of distributed algorithms.
Object-oriented distributed programming is not distributed object-oriented programming. It is a hard job to write object-oriented distributed applications based on information managed by several separate entities. Algorithms originally designed as a whole have to be scattered among several entities, and none of these entities directly knows the whole algorithm. This increases the complexity of the code that the programmer has to write, because (s)he has to extend the original algorithm with statements for synchronizing and for putting in touch all the remote objects involved in the computation. Moreover, the scattering of the algorithm among several objects contrasts with the object-oriented philosophy, which states that data and the algorithms managing them are encapsulated in the same entity; each object cannot have a global view of the data it manages, so we could say that this approach lacks a global view. The lack of a global view forces the programmer to tightly couple two or more distributed objects.
A reflective approach, as stated in [Briot98], can be considered as the glue sticking distributed and object-oriented programming together and filling the gaps in their integration. Reflection improves flexibility, allows developers to provide their own solutions to communication problems, and keeps communication code separated from the application code and completely encapsulated in the meta-level.
Hence, reflection could help solve most of the problems we detected. Reflection exposes the implementation details of a system, in our case the interaction policies, and makes them easy to manipulate. A reflective approach also makes it easy to separate the interaction management from the application code. Using reflection, together with some syntactic sugar for masking the remote calls, we can achieve a good separation of concerns in distributed environments as well. Based on these considerations, many distributed reflective middleware platforms have been developed. Their main goal is both to overcome the lack of flexibility and to decouple the interaction code from the application code.
However, reflective distributed middleware platforms exhibit the same problems detected in non-reflective distributed middleware. They still consider each remote invocation in terms of the entities involved in the communication (i.e., the client, the server, the message and so on) rather than as a single entity. Hence the global view requirement is not achieved. This is due to the fact that most of the meta-models presented so far, and used to design the existing reflective middleware, are object-based models. In these models, every object is associated with a meta-object, which traps the messages sent to the object and implements the behavior of that invocation. Such meta-models inherit the lack of global view from the object-oriented methodology, which encapsulates computation orthogonally to communication.
Hence, these approaches are not appropriate for handling all the aspects of distributed computing. In particular, when adopting an object-based model to monitor distributed communications, the meta-programmer often has to duplicate the base-level communication graph at the meta-level, increasing the complexity of the meta-program. Thus, object-based approaches to reflection on communications move the well-known problem [Videira-Lopez95] of nonfunctional code intertwined with functional code from the base level to the meta-level. Simulating a base-level communication at the meta-level allows meta-computations related either to the sending or to the receiving action, but not, without dirty tricks, meta-computations related to the whole communication or involving information owned by both the sender and the receiver. This problem is known as the lack of global view.
Besides, object-based reflective approaches, and the reflective middleware platforms based on them, only allow global changes to the mechanisms responsible for message dispatching, neglecting the management of each single message. Hence they fail to differentiate the meta-behavior applied to each exchanged message. In order to apply a different meta-behavior to each message, or to each group of exchanged messages, the meta-programmer has to write the meta-program planning a specific meta-behavior for each kind of incoming message. Unfortunately, in this way the size of the meta-program grows to the detriment of its readability and maintainability.
For these reasons, a crucial issue in opening up an RMI-based framework consists in choosing a good meta-model that overcomes the lack of global view and allows the meta-behavior to be differentiated for each exchanged message.
Our Solution
From the problem analysis we have briefly presented, we learned that in order to solve the drawbacks of RMI-based frameworks we have to provide an open RMI mechanism, i.e., a reflective RMI mechanism, which exposes its details to be manipulated by the meta-program and allows the meta-program to manage each communication separately and as a single entity. The main goal of this work consists in designing such a mechanism using a reflective approach. To render the impact of reflection on object-oriented distributed frameworks effective, and to obtain a complete separation of concerns, we need new models and frameworks especially designed for communication-oriented reflection, i.e., we need a reflective approach suitable for RMI-based communication which allows the meta-programmer to enrich, manipulate and replace each remote method invocation and its semantics with new ones. That is, we need to encapsulate message exchange in a single logical meta-object instead of scattering the relevant information among several meta-objects and mimicking the real communication with one defined by the meta-programmer among such meta-objects, as is done with traditional approaches.
To fulfill this commitment we designed a new model, called the multi-channel reification model. It is based on the idea of considering a method call as a message sent through a logical channel established among a set of objects requiring a service and a set of objects providing such a service. This logical channel is reified into a logical object called a multi-channel, which monitors message exchange and enriches the underlying communication semantics with new features. Each multi-channel can be viewed as an interface established among the senders and the receivers of the messages. Each multi-channel is characterized by its behavior, termed its kind, and by the receivers it is connected to:
multi-channel ≡ (kind, receiver_1, ..., receiver_n)
Thanks to this characterization, it is possible to connect several multi-channels to the same group of objects. In such a case, each multi-channel is characterized by a different kind and filters a different pattern of messages.
This model makes it possible to design an open RMI-based mechanism that potentially overcomes the problems exposed above.
In this way, each communication channel is reified into a meta-entity. Such a meta-entity has complete access to all the details of the communications it filters, i.e., the policies of both the sender and the receiver sides and, of course, the messages themselves. A channel realizes a closed meta-system with respect to the communications it handles: it encapsulates all the base-level aspects related to the communication, providing the global view feature.
Of course, this model keeps all the properties offered by the other reflective models, such as transparency and separation of concerns. Hence the approach also retains the benefits already obtained through reflection: protocols and other implementation details are exposed to the meta-programmer's manipulations, and remote method invocation management is completely separated from the application code.
Moreover, through the kind mechanism we can differentiate the behavior applied to a specified pattern of messages. A set of multi-channels (each with a different kind) can be associated with the same communication channel, and each will operate on a different set of messages. In this way, each channel's code implements a single behavior that it uniformly applies to all the messages it filters.
mChaRM is a framework developed by the authors which opens up the RMI mechanism supplied by Java. This framework supplies a development and run-time environment based on the multi-channel reification model. Multi-channels are developed in Java, and the underlying mChaRM framework dynamically realizes the context switching and the causal connection link. A beta version of mChaRM, documentation and examples are available from:
https://cazzola.di.unimi.it/mChaRM_webpage.html
The system provides an RMI-based programming environment. The supplied RMI mechanism is multi-cast (i.e., it allows a method to be remotely invoked on several servers), open (the RMI mechanism is fully customizable through reflection), and globally aware of its aspects. Some example applications are also provided.
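The following self-contained sketch illustrates the multi-channel idea using a standard Java dynamic proxy: the communication channel is reified as a first-class object that sees the whole communication (message and receivers) and enriches its semantics, here with multicast delivery and tracing. This is only a conceptual illustration under invented names; it is not the mChaRM API.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.List;

public class MultiChannelSketch {
    interface Greeter { String greet(String who); }

    // The reified channel: its "kind" is the behavior coded in invoke().
    static class TracingMulticastChannel implements InvocationHandler {
        private final List<Greeter> receivers;
        TracingMulticastChannel(List<Greeter> receivers) { this.receivers = receivers; }

        @Override
        public Object invoke(Object proxy, Method method, Object[] args) throws Exception {
            // The channel sees the whole communication as a single entity:
            // the message (method + arguments) and all the receivers.
            System.out.println("channel sees: " + method.getName() + " -> "
                               + receivers.size() + " receivers");
            Object last = null;
            for (Greeter r : receivers)                 // multicast semantics
                last = method.invoke(r, args);
            return last;                                // one value flows back to the sender
        }
    }

    public static void main(String[] args) {
        List<Greeter> group = List.of(who -> "hi " + who, who -> "hello " + who);
        Greeter channel = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                new TracingMulticastChannel(group));
        System.out.println(channel.greet("world"));     // prints the last receiver's reply
    }
}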
One of the main goals in optimizing communication networks is to enhance performance by minimizing the number of message hops, i.e., the number of graph nodes traversed by a message. Most optimization techniques are based on clustering, i.e., the network layout is reconfigured into sub-networks. Network clustering has been largely studied in the literature, but most of the available algorithms are application-dependent. In this paper we restrict our attention to algorithms based on the location of the median points, in order to build clusters with a balanced number of elements and to minimize communication time. We present two algorithms and the corresponding experimental results on the quality of the computed clusterings, in terms of the minimum number of computed hops. One algorithm is based on the well-known multi-median heuristic algorithm, while the other adopts a greedy approach, i.e., at each step the algorithm computes clusters farther and farther from each central node.
To the obtained clustering we apply a further step, which consists in finding a virtual path layout according to Gerstel's (VPPL) algorithm. The criterion adopted for our experimental comparisons is the optimality, in terms of the number of signal hops, of the achieved virtual path layout. The experiments are carried out on a set of networks representing real environments.
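The sketch below illustrates, under invented names and without reproducing the paper's actual algorithms, the greedy idea of growing clusters one hop at a time around the chosen central nodes by means of a multi-source breadth-first search.

import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;

public class GreedyClusteringSketch {
    // Assigns every node to the cluster of the nearest center, one hop layer at a time.
    public static int[] cluster(int[][] adj, int[] centers) {
        int[] owner = new int[adj.length];
        Arrays.fill(owner, -1);
        Queue<Integer> frontier = new ArrayDeque<>();
        for (int c : centers) { owner[c] = c; frontier.add(c); }
        // Each BFS layer extends every cluster by one further hop from its center.
        while (!frontier.isEmpty()) {
            int u = frontier.poll();
            for (int v : adj[u])
                if (owner[v] == -1) { owner[v] = owner[u]; frontier.add(v); }
        }
        return owner;   // owner[v] = center whose cluster contains v
    }

    public static void main(String[] args) {
        // Small line network 0-1-2-3-4-5 with centers 1 and 4.
        int[][] adj = { {1}, {0, 2}, {1, 3}, {2, 4}, {3, 5}, {4} };
        System.out.println(Arrays.toString(cluster(adj, new int[] {1, 4})));
        // prints [1, 1, 1, 4, 4, 4]
    }
}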
In this paper we describe how to realize a Java RMI framework supporting multi-point method invocation. The package we have realized allows programmers to build groups of servers that can provide services in two different modes: fault-tolerant and parallel. These modes differ in how computation failures are handled. Our extension is based upon the creation of entities which maintain a common state among the different servers; this has been done by extending the existing RMI registry. From the user's point of view, the multi-point RMI acts just like the traditional RMI system; indeed, the same architecture is used.
2000
Traditional methods for object-oriented analysis and modeling focus on the functional specification of software systems, i.e., application domain modeling. Non-functional requirements such as fault-tolerance, distribution, integration with legacy systems, and so on, have no clear place within the analysis process, since they are related to the architecture and workings of the system itself rather than the application domain. They are thus addressed in the system's design, based on the partitioning of the system's functionality into classes resulting from analysis. As a consequence, the smooth transition from analysis to design that is usually celebrated as one of the main advantages of the object-oriented paradigm does not actually hold for non-functional issues. A side effect is that functional and non-functional concerns tend to be mixed at the implementation level. We argue that the reflective approach, whereby non-functional properties are ascribed to a meta-level of the software system, may be extended “back to” analysis. Adopting a reflective approach in object-oriented analysis may support the precise specification of non-functional requirements in analysis and, if used in conjunction with a reflective approach to design, recover the smooth transition from analysis to design for non-functional system properties.
Architectural reflection is the computation performed by a software system about its own software architecture. Building on previous research and on practical experience in industrial projects, in this paper we expand the approach and show a practical (albeit very simple) example of application of architectural reflection. The example shows how one can express, thanks to reflection, both functional and non-functional requirements in terms of object-oriented concepts, and how a clean separation of concerns between application domain level and architectural level activities can be enforced.
1999
This paper proposes a novel reflective approach, orthogonal to the classic computational one, whereby a system performs computation on its own software architecture rather than on individual components. The approach allows a system's self-management activities, such as dynamic reconfiguration, to be realized in a systematic and conceptually clean way and to be added to existing systems without modifying the systems themselves. The parallelism between such architectural reflection and classic reflection is discussed, as well as the transposition of classic reflective concepts into the architectural domain.
We propose here a mechanism for history-dependent access control for a distributed object-oriented system, implemented using reflection. In a history-dependent access control system, access is decided based not only on the current request, but also on the previous history of accesses to some entity or service. We consider timing constraints expressed using temporal logic, and we describe a possible implementation for our mechanism. The expected benefits from the reflective approach are: more stability of the security layer (i.e., with a more limited number of hidden bugs), better software modularity, more reusability, and the possibility to adapt the security module with relatively few changes to other applications and other authorisation policies.
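A minimal sketch of a history-dependent decision follows; the concrete rule (no network send after a confidential read) is an invented example and does not correspond to the temporal-logic constraints or the reflective implementation described in the paper.

import java.util.ArrayList;
import java.util.List;

public class HistoryBasedMonitor {
    private final List<String> history = new ArrayList<>();

    // Access is decided on the basis of the previous accesses, not only the
    // current request: here, a subject that has read confidential data may
    // no longer send data to the network.
    public boolean authorize(String subject, String operation) {
        boolean allowed = !(operation.equals("send-to-network")
                            && history.contains(subject + ":read-confidential"));
        if (allowed) history.add(subject + ":" + operation);   // record granted accesses
        return allowed;
    }

    public static void main(String[] args) {
        HistoryBasedMonitor monitor = new HistoryBasedMonitor();
        System.out.println(monitor.authorize("app", "read-confidential")); // true
        System.out.println(monitor.authorize("app", "send-to-network"));   // false: forbidden by history
    }
}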
We analyze how to use the reflective approach to integrate an authorization system into a distributed object-oriented framework. The expected benefits from the reflective approach are: more stability of the security layer (i.e., with a more limited number of hidden bugs), better software and development modularity, more reusability, and the possibility to adapt the security module with at most a few changes to other applications. Our analysis is supported by simple and illustrative examples written in Java.
As software systems become larger and more complex, a relevant part of code shifts from the application domain to the management of the system's run-time architecture (e.g., substituting components and connectors for run-time automated tuning). We propose a novel design approach for component-based systems supporting architectural management in a systematic and conceptually clean way and allowing for the transparent addition of architectural management functionality to existing systems. The approach builds on the concept of reflection, extending it to the programming-in-the-large level, thus yielding architectural reflection (AR). This paper focuses on one aspect of AR, namely the monitoring and dynamic modification of the system's overall control structure (strategic reflection), which allows the behaviour of a system to be monitored and adjusted without modifying the system itself.
Traditional methods for object-oriented analysis and modeling focus on the functional specification of software systems. Non-functional requirements such as fault-tolerance, distribution, integration with legacy systems, and the like, do not have a clear place within the analysis process, as they are related to the architecture and workings of the system itself rather than the application domain. They are thus addressed in the system's design, based on the partitioning of the system's functionality into classes resulting from the analysis. As a consequence, the “smooth transition from analysis to design” that is usually celebrated as one of the main advantages of the object-oriented paradigm does not actually hold for non-functional issues. Moreover, functional and non-functional concerns tend to be mixed at the implementation level. We argue that the reflective design approach, whereby non-functional properties are ascribed to a meta-level of the software system, may be extended “back to” analysis. Reflective Object Oriented Analysis may support the precise specification of non-functional requirements in analysis and, if used in conjunction with a reflective approach to design, recover the smooth transition from analysis to design for non-functional system properties.
1998
The paper presents a new reflective model, called Channel Reification, which can be used in distributed computations to overcome the difficulties experienced by other models in the literature when monitoring communication among objects. The channel is an extension of the message reification model. A channel is a communication manager embodying the successive message exchanges between two objects: its application range lies between that of message reification and that of the meta-object model.
After a brief review of existing reflective models and of how reflection can be used in distributed systems, channel reification is presented and compared with the widely used meta-object model. Applications of channel reification to protocol implementation and to fault-tolerant object systems are shown. Future extensions to this model are also summarized.
As the size and complexity of software systems increase, a relevant part of the system overall functionality shifts from the applicative domain to run-time system management activities, i.e., management activities which cannot be performed off-line. These range from monitoring to dynamic reconfiguration and, for non-stopping systems, also include evolution, i.e., addition or replacement of components or entire subsystems. In current practice, run-time system management is impeded by the fact that the knowledge of the overall structure and functioning of the system (i.e., its software architecture) is confined in design specification documents, while it is only implicit in running systems. In this paper we introduce, provide rationale for, and briefly demonstrate an approach to system management where the system maintains, and operates on, an architectural description of itself. This description is causally connected to the system's concrete structure and state, i.e., any change of the system architecture affects the description, and vice versa. This model can be said to extend the principles of computational reflection from the realm of programming-in-the-small to that of programming-in-the-large.
Writing code to handle dynamic data structures might seem an easy task, but writing efficient, readable and maintainable code is not so simple. In this short note we investigate some problems in developing code for handling dynamic data structures, and we propose techniques to overcome them. We take into account the interesting method proposed by Qiu in SIGPLAN Notices.
Many authors have addressed the problem of handling dynamic data structures by suggesting several clever tricks to simplify their management and to avoid the non-uniform behavior of empty data structures and of access to the first and last elements. In particular, the use of data sentinels and dummy headers makes the implementation more orthogonal and easier to maintain. The cost is a slight increase in space, which is negligible when compared with the benefits produced.
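The following small Java sketch shows the dummy-header technique in isolation: thanks to the header node, insertion and deletion need no special case for the empty list or for the first element. It is only an illustration of the general trick, not code from the note.

public class DummyHeaderList {
    static class Node { int value; Node next; Node(int v) { value = v; } }

    private final Node header = new Node(0);   // dummy node, never holds user data

    // Insert in ascending order: the same loop works for the empty list,
    // for insertion at the front, in the middle, and at the end.
    public void insert(int v) {
        Node prev = header;
        while (prev.next != null && prev.next.value < v) prev = prev.next;
        Node n = new Node(v);
        n.next = prev.next;
        prev.next = n;
    }

    public void remove(int v) {
        Node prev = header;
        while (prev.next != null && prev.next.value != v) prev = prev.next;
        if (prev.next != null) prev.next = prev.next.next;   // no special case for the head
    }

    @Override public String toString() {
        StringBuilder sb = new StringBuilder("[");
        for (Node n = header.next; n != null; n = n.next)
            sb.append(n.value).append(n.next != null ? " " : "");
        return sb.append("]").toString();
    }

    public static void main(String[] args) {
        DummyHeaderList list = new DummyHeaderList();
        list.insert(3); list.insert(1); list.insert(2);
        list.remove(1);
        System.out.println(list);   // prints [2 3]
    }
}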
A reflective approach for modeling and implementing authorization systems is presented. The advantages of the combined use of computational reflection and authorization mechanisms are discussed, and three reflective architectures are examined for pointing out the corresponding merits and defects.
In this paper we explore the object-oriented reflective world, performing an overview of the existing models and presenting a set of features suitable to evaluate the quality of each reflective model. The purpose of the paper is to determine the context applicability of each reflective model examined.
Realizing a shift of software engineering towards a component-based approach to software development requires higher-level programming systems that support the construction of systems from components. The paper presents a novel approach to the design of large software systems where a program in the large, describing the system's architecture, is executed at run time to rule over the assembly and dynamic cooperation of components. This approach has several advantages following from a clean separation of concerns between programming-in-the-small and programming-in-the-large issues in the instantiated systems.
The problem of designing complex dependable systems is addressed in this paper. Due to some peculiarities of their application and behavior these are often referred to as reactive systems. Two main paradigms for their design have recently been proposed; we name these paradigms living processes and hidden concurrency, depending on their approach to concurrency handling. The analysis of application requirements and constraints is proposed as a methodology for selecting the most suitable implementation paradigm for a given application. Finally, it is shown that in some cases an intermediate paradigm may provide a suitable solution.
1997
The paper presents a new reflective model, called Channel Reification, which can be used to implement communication abstractions. After a brief review of existing reflective models and of how reflection can be used in distributed systems, channel reification is presented and compared with the widely used meta-object model. An application to protocol implementation, and hints on other channel applications, are also given.