The problem to be solved is two-fold: communication and consistency. A common language is defined to ease communication within and between teams, and a single source of information ensures consistency, which in turn prevents misunderstandings.


A very good introduction to domain modeling is (Tomassetti, 2017).


Model-driven software development (MDSD) introduces a supplementary layer of abstraction above the code. This additional layer can complement, and in some cases replace, traditional model-based design approaches, and thus represents a significant design element of a software architecture. Model-driven approaches often rely on domain-specific languages (DSLs). Such languages make the design easy to communicate (a defined language at the design level), while at the same time the design itself consists of actual software artifacts, since code is generated directly from it (a single source of information).

Meta-model, model, and all that

The meta-modeling toolsets used on this site typically allow one to define (1) a meta-model, which specifies what is modeled, e.g., the basic layout of a data structure in terms of structures that have attributes. A meta-model can be compared to a database schema, or to a glossary for a given domain. Based on the meta-model, (2) model data can be defined by a user; this model data represents the concrete data structures of a given domain. (3) Validation: importantly, the meta-model definition allows concrete models to be validated and checked. Finally, (4) code generators and model transformations are the primary end-user tools, enabling mass production from a single source of information (the model).
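Points (1)–(3) can be sketched in a few lines of Python. The names here (Struct, Attribute, the set of allowed types) are hypothetical illustrations, not taken from any particular toolset:

```python
from dataclasses import dataclass, field

# (1) A minimal meta-model: what may be modeled are structures with
# named, typed attributes (comparable to a database schema).
@dataclass
class Attribute:
    name: str
    type: str

@dataclass
class Struct:
    name: str
    attributes: list[Attribute] = field(default_factory=list)

# (3) Validation: the meta-model restricts what a concrete model may
# contain; here, only a fixed set of attribute types is allowed.
ALLOWED_TYPES = {"int", "string", "bool"}

def validate(struct: Struct) -> list[str]:
    return [
        f"{struct.name}.{a.name}: unknown type '{a.type}'"
        for a in struct.attributes
        if a.type not in ALLOWED_TYPES
    ]

# (2) Concrete model data, defined by a user for a given domain.
person = Struct("Person", [Attribute("name", "string"), Attribute("age", "int")])
broken = Struct("Broken", [Attribute("x", "float")])

print(validate(person))  # []
print(validate(broken))  # ["Broken.x: unknown type 'float'"]
```

A real toolset would of course express the meta-model declaratively rather than in host-language classes, but the division of roles is the same.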

Real value for a project

Notably, if a model represents a real abstraction of a software aspect - something people talk about and which hides important details - we get a large gain in productivity. From the one single source of specification (the model data) we can extract documentation, plan and archive changes (in the models), and at the same time guarantee the consistency of the model and multiple code artifacts.
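This "multiple artifacts from one source" idea can be illustrated with a hypothetical model and two generators; the model layout and the function names are invented for this sketch:

```python
# Hypothetical model data (the single source of specification):
model = {"Person": {"name": "string", "age": "int"}}

def to_markdown(model: dict[str, dict[str, str]]) -> str:
    """Extract documentation from the model."""
    out = []
    for struct, attrs in model.items():
        out.append(f"## {struct}")
        out += [f"- `{n}`: {t}" for n, t in attrs.items()]
    return "\n".join(out)

def to_cpp(model: dict[str, dict[str, str]]) -> str:
    """Generate a code artifact from the very same model."""
    cpp = {"string": "std::string", "int": "int", "bool": "bool"}
    out = []
    for struct, attrs in model.items():
        out.append(f"struct {struct} {{")
        out += [f"    {cpp[t]} {n};" for n, t in attrs.items()]
        out.append("};")
    return "\n".join(out)

# Both artifacts are derived from `model`, so they cannot drift apart.
print(to_markdown(model))
print(to_cpp(model))
```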

The real value in terms of working hours lies not in the meta-model, the code generator, or the model editor, but in the model data itself. The rest of the employed toolset and definitions (e.g., the meta-model) further increase this value by allowing the model data to be validated, transformed, or evaluated.

Keeping this in mind, one should still be able to parse and understand the model data in, say, ten years. Also consider what happens when multiple users create conflicts in model files: one should be able to repair such situations. This is easier if the model stored on disk resembles the model entered by the user than if an alternative representation is used, e.g., binary data or machine-generated IDs that are difficult to understand. Such a simple on-disk representation also allows a model to be analyzed with a traditional text search tool (grep) and quickly modified with a simple editor.
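As an illustration, a hypothetical line-based model file that stays close to what the user typed can be searched with grep, diffed, merged, and even re-parsed with a few regular expressions (the syntax shown is invented for this sketch):

```python
import re

# Hypothetical human-readable model file: no binary blobs and no
# machine-generated IDs, so grep, diff, and merge all work on it.
model_text = """\
struct Person
    name: string
    age: int
"""

# A representation this simple can be recovered with trivial parsing.
structs: dict[str, dict[str, str]] = {}
current = None
for line in model_text.splitlines():
    if m := re.match(r"struct\s+(\w+)", line):
        current = m.group(1)
        structs[current] = {}
    elif m := re.match(r"\s+(\w+):\s*(\w+)", line):
        structs[current][m.group(1)] = m.group(2)

print(structs)  # {'Person': {'name': 'string', 'age': 'int'}}
```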

Moreover, the interoperability of the employed toolsets is of crucial importance: the ability to define inter-model relations eases large-scale model-driven approaches, and IDE integration increases end-user acceptance.
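An inter-model relation in its simplest form is a model referring to a structure defined in another model. A hypothetical reference check (model layout and names invented for this sketch) might look like:

```python
# Two hypothetical model files; model_b refers to a structure
# defined in model_a (an inter-model relation).
model_a = {"Person": {"name": "string"}}
model_b = {"Team": {"leader": "Person", "size": "int"}}

BUILTINS = {"int", "string", "bool"}

def check_references(models: list[dict]) -> list[str]:
    """Verify that every non-builtin attribute type resolves to a
    structure defined in some model of the set."""
    defined = {name for m in models for name in m}
    return [
        f"{struct}.{attr}: unresolved reference '{typ}'"
        for m in models
        for struct, attrs in m.items()
        for attr, typ in attrs.items()
        if typ not in BUILTINS and typ not in defined
    ]

print(check_references([model_a, model_b]))  # []
print(check_references([model_b]))  # ["Team.leader: unresolved reference 'Person'"]
```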