Index: /reasoner/evaluation.tex
===================================================================
--- /reasoner/evaluation.tex	(revision 231)
+++ /reasoner/evaluation.tex	(revision 232)
@@ -33,19 +33,23 @@
 \subsection{Model Complexity}\label{sectModelComplexity}
 
-When evaluating the reasoning capabilities of traditional variabilit models, such as feature models, typically a measure combining the number of features and the number of constraints is applied, e.g., the average number of constraints per feature or the constraint ratio \TBD{refs}. However, just relating the number of variables to the number of constraints in an IVML model can lead to a misleading view on the complexity of the configuration / reasoning problem. As long as only top-level variables and Boolean constraints as in some generated models are used, a measure based on the number of variables and constraints seems to correctly classify such models according as illustrated in \TBD{Figure}. However, models with compounds, containers and quantor constraints appear less complex as the nested variables are not considered and iterating container expressions are just counted as a basic Boolean constraint.
+When evaluating the reasoning capabilities of variability models, such as feature models, typically a measure is employed to characterize the model. Common measures include the number of features or the number of constraints \cite{Benavides2006AFS}, combinations of both, e.g., the constraints per feature or the constraint ratio \cite{Mendoca08}, the graph width \cite{PohlStrickerPohl13}, or more complex approaches \cite{StuikysDamasevicius09}.
 
-For this evaluation, we combine the basic idea of counting variables / constraints with the approach used in the McCabe complexity metric, i.e., also counting the nested elements and weighting them according to their perceived complexity. Although we 'calibrate' the weights according to our test models to obtain an objective criterion to display complexity vs. reasoning time, we do not claim that the weights are universal and hold for all kinds of IVML models. 
+However, just relating the number of variables to the number of constraints in an IVML model can be misleading. As long as only top-level variables and Boolean constraints are used, as in some generated models, a measure based on the number of variables and constraints may yield a plausible complexity classification. However, models with compounds, containers and quantifier constraints are then underestimated, as nested variables are not considered and iterating container expressions are counted like a basic Boolean constraint (regardless of the number of terms).
+
+For this evaluation, we combine the basic idea of counting variables and constraints with the approach of the McCabe complexity metric, i.e., we also count nested elements and weight them according to their perceived complexity. We `calibrate' the weights based on our test models. However, we do not claim that these weights are universal and hold for all kinds of IVML models, or that the approach applies to all kinds of variability models. Our only aim is an objective (and hopefully fair) criterion to rank IVML models, in order to support the visualization and comparison of the reasoning times of several (different) models.
 
 The complexity metric applied here consists of four parts, the
 \begin{enumerate}
-\item measure of the variable structure $cpx_v(e)$ of a given model element $e$.
-\item measure of the constraints $cpx_c(e)$  for a certain model element $e$, which is based on the 
-\item measure of a (constraint, default value) expression $cpx_e(e)$ of a given expression $e$.
+\item measure of the structure (type) of a given model element $e$, denoted $cpx_v(e)$.
+\item measure of the constraints of a certain model element $e$, denoted by $cpx_c(e)$, which, in turn, is based on the 
+\item measure of a given (constraint, default value) expression $e$, denoted as $cpx_e(e)$.
 \item weighting $w_{cpx}(e)$ for a model element, constraint or expression $e$.
 \end{enumerate}
 
-Due to the nested structure of IVML models, most of the formulae for calculating the complexity are recursive. Within these formulae, the weighting function $w_cpx(e)$ is mostly applied additive. In two cases, we use $w_cpx(e)$ also in a multiplicative fashion, in particular to disable parts of the calculation, e.g., for constraints or nested variables, so that we also can express the traditional counting approach for features and constraints through $cpx_v(e)$ and $cpx_c(e)$ in an integrated way. We explain now the formulae for the four parts. Thereby, we implicitly introduce some new functions for model elements that have not been used so far and that will only be used within this section.
+Due to the nested structure of IVML models, most of the formulae are recursive. Within these formulae, the weighting function $w_{cpx}(e)$ is mostly applied in an additive manner. In two cases, we use $w_{cpx}(e)$ in a multiplicative manner, in particular to disable parts of the calculation, e.g., for constraints or nested variables, so that the traditional counting approach for features and constraints can also be expressed through $cpx_v(e)$ and $cpx_c(e)$ in an integrated way. We now explain the formulae for the four parts. Thereby, we introduce some additional properties of IVML model elements that have not been used so far and that will only be used within this section.
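+
+As a sketch of the multiplicative use of the weights (illustrative only; the concrete formulae and the calibrated weights follow below): for a compound variable $c$ with nested variables $v_1, \ldots, v_n$, the intended shape of the structural measure is

```latex
+% Illustrative sketch, not the calibrated metric: the type weight is added,
+% while the weight of \IVML{Var} multiplies the nested sum; thus setting
+% that weight to 0 disables the recursion and recovers plain variable counting.
+$$
+cpx_v(c) = w_{cpx}(\mathit{type}(c)) + w_{cpx}(\IVML{Var}) \cdot \sum_{i=1}^{n} cpx_v(v_i)
+$$
```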
 
-The measure of the variable structure $cpx_v(e)$ calculates a weighted sum over the number of nested variables for an IVML model element starting with a given configuration. We do not rely here on the meta-model, i.e., the project, as a configuration contains all actually available variables, i.e., also those created by assignment constraints in terms of compound or container instances. When applying $cpx_v(e)$ to a configuration, the sum of the measures for all variables is calculated. In turn, for a variable, we add a weight for the type of the variable (e.g., if we want to weight complex types like containers or compounds higher) with the (recursive) sum of $cpx_v(e)$ over all nested variables (weighted by the IVML type \IVML{Var} for decision variables to disable measuring nested variables).
+\TBD{here}
+
+The measure of the variable structure $cpx_v(e)$ calculates a weighted sum over the number of nested variables of an IVML model element, starting with a given configuration. We do not rely on the meta-model, i.e., the project, as a configuration contains all actually available variables, including those created by assignment constraints in terms of compound or container instances. When applying $cpx_v(e)$ to a configuration, the sum of the measures of all variables is calculated. In turn, for a variable, we add a weight for the type of the variable (e.g., to weight complex types such as containers or compounds higher) to the (recursive) sum of $cpx_v(e)$ over all nested variables, the latter weighted by the weight of the IVML type \IVML{Var} for decision variables, which allows disabling the measurement of nested variables.
 %
 $$
Index: /reasoner/reasoner.bib
===================================================================
--- /reasoner/reasoner.bib	(revision 231)
+++ /reasoner/reasoner.bib	(revision 232)
@@ -196,4 +196,44 @@
  keywords = {Optimal feature selection, many-objective optimization, satisfiability (SAT) solvers, vector angle--based evolutionary algorithm (VaEA)},
 } 
+
+@INPROCEEDINGS{PohlStrickerPohl13,
+author={R. Pohl and V. Stricker and K. Pohl},
+booktitle={Intl. Conference on Automated Software Engineering (ASE'13)},
+title={{Measuring the Structural Complexity of Feature Models}},
+year={2013},
+pages={454--464},
+doi={10.1109/ASE.2013.6693103},
+_month={Nov},
+}
+
+@phdthesis{Mendoca08,
+ author = "M. Mendonça",
+ title = {{Efficient Reasoning Techniques for Large Scale Feature Models}},
+ school = "University of Waterloo",
+ year = "2008",
+}
+
+@inproceedings{Benavides2006AFS,
+  title={{A first step towards a framework for the automated analysis of feature models}},
+  author={David Benavides and Sergio Segura and Pablo Trinidad and Antonio Ruiz-Cortes},
+  year={2006},
+  booktitle="Managing Variability for Software Product Lines: Working With Variability Mechanisms"
+}
+
+@article{StuikysDamasevicius09,
+author = {Štuikys, Vytautas and Damasevicius, Robertas},
+title = {{Measuring Complexity of Domain Models Represented by Feature Diagrams}},
+journal = {Information Technology and Control},
+volume = {38},
+number = {3},
+pages = {179--187},
+year = {2009},
+}
 
 @InProceedings{SchmidEichelberger08a,
