Before delving into the specific problems that exist in program slicing currently, let's explore the surface of this thesis' relevant fields: program slicing and exception handling. The last one will be focused specifically on the Java programming language, but could be generalized to other popular programming languages which feature a similar exception handling system (e.g., Python, JavaScript, C++).
This section provides a series of definitions and background information so that future definitions can be grounded in a common foundation. \carlos{ampliar intro?}
Given a program $P$, composed of statements and containing variables $x_1, x_2, \ldots, x_n \in \textnormal{vars}$, a \textit{slicing criterion} is a tuple $SC = \langle s, v \rangle$ where $s \in P$ is a single statement of the program, and $v$ is a set of variables from $P$. The variables in $v$ need not appear in $s$.
\end{definition}
\begin{definition}[Slice] \label{def:slice}
Given a program $P$ and a slicing criterion $SC = \langle s, v \rangle$, a \textit{slice} is a subset of the statements of $P$ ($S \subseteq P$) that behaves like the original program $P$ when considering the values of the variables of $v$ at statement $s$.
\end{definition}

\begin{definition}[Execution history] \label{def:execution-history}
Given a program $P$, composed of a set of statements $S = \{s_1, s_2, s_3, \ldots, s_n\}$, and a set of input values $I$, the \textit{execution history} of $P$ for $I$ is the list $H$ of the statements executed, in the order in which they were executed.
\end{definition}
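To ground these definitions, consider a minimal Java sketch (the program, class and variable names are ours, chosen only for illustration): for the slicing criterion $\langle s, v \rangle$ where $s$ is the \texttt{return} statement and $v = \{\texttt{sum}\}$, a static backward slice keeps only the statements that may affect \texttt{sum}.

```java
public class SliceDemo {
    // Hypothetical program illustrating the definitions above.
    // Slicing criterion: <s = the return statement, v = {sum}>.
    static int sum(int n) {
        int sum = 0;      // in the slice: defines sum
        int prod = 1;     // NOT in the slice: never affects sum
        for (int i = 1; i <= n; i++) {
            sum += i;     // in the slice
            prod *= i;    // NOT in the slice
        }
        return sum;       // slicing criterion
    }

    public static void main(String[] args) {
        // The slice (without prod) computes the same value of sum.
        System.out.println(sum(4)); // prints 10
    }
}
```

Removing the two statements that mention \texttt{prod} yields a smaller program that still computes the same value for \texttt{sum} at the criterion.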
Until now, the concept of slicing has been centred around finding the statements that affect a variable.
That is the original definition but, as time has progressed, variations have been proposed; the one described in Definitions~\ref{def:program-slicing}, \ref{def:slicing-criterion} and \ref{def:slice} is called \textit{static backward slicing}.
It is also the one that will be used throughout this thesis, though the errors detected and solutions proposed can be easily generalized to others.
The different variations are described later in this chapter, but there exist two fundamental dimensions along which the slicing problem can be proposed \cite{Sil12}:
\item\textit{Static} or \textit{dynamic}: slicing can be performed statically or dynamically.
\textit{Static slicing}\cite{Sil12} produces slices that consider all possible executions of the program: the slice will be correct regardless of the input supplied.
In contrast, \textit{dynamic slicing}\cite{KorL88,AgrH90b} considers a single execution of the program, thus, limiting the slice to the statements present in an execution log.
The slicing criterion is expanded to include a position in the execution history that corresponds to one instance of the selected statement, making it much more specific.
It may help find a bug related to non-deterministic behaviour ---such as one caused by a random or pseudo-random number generator--- but, despite selecting the same slicing criterion in the same program, the slice must be recomputed for each set of input values or execution considered.
\item\textit{Backward} or \textit{forward}: \textit{backward slicing}\cite{Sil12} looks for the statements that affect the slicing criterion.
It is among the most commonly used slicing techniques.
In contrast, \textit{forward slicing}\cite{BerC85} computes the statements that are affected by the slicing criterion.
There also exists a middle-ground approach called \textit{chopping}~\cite{JacR94}, which finds all the statements that affect some variables in the slicing criterion and, at the same time, are affected by some other variables in it.
Since the seminal definition of program slicing by Weiser \cite{Wei81}, the most studied variation of slicing has been \textit{static backward slicing}, which has been defined in previous sections of this thesis.
That definition can be split into two sub-types, \textit{strong} and \textit{weak} slices, with different levels of requirements and uses in different fields.
Given a program $P$ and a slicing criterion $SC =\langle s,v \rangle$, $S$ is the \textit{weak static backward slice} of $P$ with respect to $SC$ if $S$ fulfils the following properties:
\begin{enumerate}
	\item $S$ is a subset of the statements of $P$.
	\item $\forall i \in I, \forall x \in v:~seq(i,x,P)$ is a prefix of $seq(i,x,S)$, where $seq(i,x,Q)$ denotes the sequence of values computed for $x$ at statement $s$ when executing program $Q$ with input $i$, and $I$ is the set of all possible inputs of $P$.
\end{enumerate}
used throughout the literature (see, e.g., Weiser~\cite{Wei81} for \textit{strong} slicing and Binkley~\cite{BinG96} for \textit{weak} slicing).
Most authors do not differentiate between them, or even acknowledge the other variant, because most publications focus exclusively on one of them.
Therefore, although the definitions come from different authors, the \textit{weak} and \textit{strong} nomenclature employed here originates from a control dependency analysis by Danicic~\cite{DanBHHKL11}, where slices that produce the same output as the original are named \textit{strong}, and those where the original is a prefix of the slice, \textit{weak}.
Different applications of program slicing use the option that fits their needs, though \textit{weak} is used if possible, because the resulting slices are smaller statement-wise, and the algorithms used tend to be simpler.
Of course, if the application of program slices requires the slice to behave exactly like the original program, then \textit{strong} slices are the only option.
As an example, debugging uses weak slicing, as it does not matter what the program does after reaching the slicing criterion, which is typically the point where an error has been detected.
In contrast, program specialization requires strong slicing, as it extracts features or computations from a program to create a smaller, standalone unit which performs in the exact same way.
Throughout this thesis, we indicate which kind of slice is produced for each problem detected and technique proposed.
Consider table~\ref{tab:slice-weak}, which displays the sequence of values or execution history obtained with respect to different slices of a program and the same slicing criterion.
The first row stands for the original program, which computes $3!$.
Slice A's execution history is identical to the original and therefore it is a strong slice.
Slice B's execution history does not stop after producing the same first 3 values as the original: it is a weak slice. An instruction responsible for stopping the loop may have been excluded from the slice.
Slice C is incorrect, as the execution history differs from the original program in the second column. It seems that some dependency has not been accounted for and the value is not updating.
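The prefix relation that separates these cases can be sketched in Java (a sketch of ours, with the execution histories of the table encoded as lists of values): a weak slice's history extends the original's, while a strong slice's history is identical to it.

```java
import java.util.List;

public class WeakStrongDemo {
    // A slice is weak if the original history is a prefix of the
    // slice's history (the slice may keep computing past the end).
    static boolean isWeak(List<Integer> original, List<Integer> slice) {
        if (slice.size() < original.size()) return false;
        return slice.subList(0, original.size()).equals(original);
    }

    // A strong slice must reproduce the original history exactly.
    static boolean isStrong(List<Integer> original, List<Integer> slice) {
        return original.equals(slice);
    }

    public static void main(String[] args) {
        List<Integer> original = List.of(1, 2, 6);    // 3! computed stepwise
        List<Integer> sliceA   = List.of(1, 2, 6);    // strong: identical
        List<Integer> sliceB   = List.of(1, 2, 6, 24);// weak: loop not stopped
        List<Integer> sliceC   = List.of(1, 1, 1);    // incorrect: wrong values
        System.out.println(isStrong(original, sliceA)); // true
        System.out.println(isWeak(original, sliceB));   // true
        System.out.println(isWeak(original, sliceC));   // false
    }
}
```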
Even though the original proposal by Weiser~\cite{Wei81} focused on an imperative language, program slicing is a language-agnostic technique.
\subsection{Computing program slices with the system dependence graph}
There exist multiple program representations, data structures and algorithms that can be used to compute a slice, but the most efficient and broadly used data structure is the \textit{system dependence graph} (SDG), introduced by Horwitz et al. \cite{HorRB90}.
The SDG is computed from the program's source code. Once built, a slicing criterion is chosen and mapped onto the graph; the graph is then traversed using a specific algorithm, and the slice is obtained.
Its efficiency relies on the fact that, for multiple slices performed on the same program, the graph generation process is only performed once.
Performance-wise, building the graph has quadratic complexity ($\mathcal{O}(n^2)$), and its traversal to compute the slice has linear complexity ($\mathcal{O}(n)$); both with respect to the number of statements in the program being sliced.
The SDG is a directed graph, and as such it has a set of nodes, each representing a statement in the program ---barring some auxiliary nodes introduced by some approaches--- and a set of directed edges, which represent the dependencies among nodes.
Those edges represent several kinds of dependence: control, data, call, parameter passing and summary.
To create the SDG, first a \textit{control flow graph} (CFG) is built for each method in the program, and the control and data dependencies are computed from it.
With that data, a new graph representation is created, called the \textit{program dependence graph} (PDG) \cite{OttO84}.
Each method's PDG is then connected to form the SDG.
For a simple visual example, see Example~\ref{exa:create-sdg} below, which briefly illustrates the intermediate steps in the SDG creation. The whole process is explained in detail in section~\ref{sec:first-def-sdg}.
Once the SDG has been created, a slicing criterion can be mapped onto the graph, and the edges are traversed backwards starting from the node that represents the criterion.
The process is performed twice: the first pass ignores a specific kind of edge, and the second pass, which starts from the set of nodes already visited, ignores another kind.
Once the second pass has finished, all the nodes visited form the slice.
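As an illustration, the two-pass traversal can be sketched as follows (the class, the edge kinds and the tiny worklist algorithm are our own simplification, assuming the standard rule of Horwitz et al.: the first pass ignores parameter-output edges so it does not descend into called methods, and the second ignores call and parameter-input edges so it does not ascend into callers).

```java
import java.util.*;

public class TwoPassSlicer {
    enum EdgeKind { CONTROL, DATA, CALL, PARAM_IN, PARAM_OUT, SUMMARY }

    // A directed edge of the SDG; nodes are plain integers here.
    record Edge(int source, int target, EdgeKind kind) {}

    // Backward reachability over the SDG, skipping the given edge kinds.
    // Scanning the full edge list per node is naive but keeps the sketch short.
    static Set<Integer> pass(List<Edge> sdg, Set<Integer> start, Set<EdgeKind> ignored) {
        Set<Integer> visited = new HashSet<>(start);
        Deque<Integer> work = new ArrayDeque<>(start);
        while (!work.isEmpty()) {
            int node = work.pop();
            for (Edge e : sdg) {
                if (e.target() == node && !ignored.contains(e.kind())
                        && visited.add(e.source())) {
                    work.push(e.source());
                }
            }
        }
        return visited;
    }

    // Pass 1 ignores PARAM_OUT edges (stays out of callees); pass 2
    // restarts from the visited set ignoring CALL and PARAM_IN edges
    // (descends into callees). The visited set after pass 2 is the slice.
    static Set<Integer> slice(List<Edge> sdg, int criterion) {
        Set<Integer> first = pass(sdg, Set.of(criterion), EnumSet.of(EdgeKind.PARAM_OUT));
        return pass(sdg, first, EnumSet.of(EdgeKind.CALL, EdgeKind.PARAM_IN));
    }
}
```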
\begin{example}[The creation of a system dependence graph]
\label{exa:create-sdg}
Consider the code provided in Figure~\ref{fig:create-sdg-code}, where a simple Java program containing two methods (\texttt{main} and \texttt{multiply}) is displayed.
Figure~\ref{fig:create-sdg-cfg} shows the next step: a CFG has been created for each method. Each CFG has a unique source node (without incoming edges) and a unique sink node (without outgoing edges), named ``Entry'' and ``Exit'', respectively. In between, the statements are structured according to all the possible executions of the method.
\caption{The control flow graphs for the code in Figure~\ref{fig:create-sdg-code}.}
\label{fig:create-sdg-cfg}
\end{figure}
Next is Figure~\ref{fig:create-sdg-pdg}, which is a reordering of the CFG's nodes according to the dependencies between statements: the PDG. Finally, both PDGs are connected into the SDG.
\item[Completeness.] The solution includes all the statements that affect the slicing criterion. This is the most important feature, and almost all techniques and implemented tools aim to achieve at least the generation of complete slices. There exists a trivial way of achieving completeness: including the whole program in the slice.
\item[Correctness.] The solution excludes all the statements that do not affect the slicing criterion. Most solutions are complete, but their degree of correctness is what sets them apart: more correct solutions produce smaller slices, which execute fewer instructions to compute the same values, decreasing execution time and complexity.
\item[Features covered.] Which features (polymorphism, global variables, arrays, etc.), programming languages or paradigms a slicing tool is able to cover. There are slicing tools (publicly published or commercially available) for most popular programming languages, from C++ to Erlang. Some slicing techniques only cover a subset of the targeted language, and as such are less useful, but can be a stepping stone in the advancement of the field. There also exist tools that cover multiple languages or that are language-independent \cite{BinGHI14}. A drawback of language-independent tools is that they tend to be less efficient with respect to the other metrics.
\item[Resource consumption.] Speed and memory consumption of the graph generation and slice creation. As previously stated, slicing is a two-step process: building a graph and traversing it, with the first process being quadratic and the second linear (in time). Proposals that build upon the SDG try to keep the traversal linear, even if that means making the graph bigger or slowing down its construction.
Though this metric may not seem as important as the others, program slicing is not a simple analysis. On top of that, some applications of program slicing, such as debugging, repeatedly change the program and the slicing criterion, which makes faster slicing tools preferable.
Memory consumption is less relevant, mainly due to its availability, but could become a concern in big systems with millions of lines of code.
As stated before, there are many uses for program slicing: program specialization, software maintenance, code obfuscation, etc. However, there is no doubt that program slicing is first and foremost a debugging technique.
Program slicing can also be performed with small variations on the algorithm or on the meaning of ``slice'' and ``slicing criterion'', so that it answers a slightly or totally different question.
Each variation of program slicing answers a different question and serves a different purpose:
\item[Forward static \cite{GalL91}.] Used to obtain the statements affected by the slicing criterion.
It is typically employed in software maintenance: when changing a statement, the program is sliced w.r.t. that statement to discover the parts of the program that will be affected by the change.
executions instead of only one. It is another middle ground between static and dynamic slicing, similar to quasi-static slicing.
Likewise, it can offer a slightly bigger slice than pure dynamic slicing while keeping the scope focused on the slicing criterion and the set of executions.
There exist many more, which have been detailed in surveys of the field, such as \cite{Sil12}, which analyzes the different dimensions that can be used to classify slicing techniques.
Exception handling is common in most modern programming languages. It generally consists of a few new instructions used to modify the normal execution flow and later return to it. Exceptions are used to react to an abnormal program behaviour (controlled or not), and either solve the error and continue the execution, or stop the program gracefully. In our work we focus on the Java programming language, so in the following, we describe the elements that Java uses to represent and handle exceptions:
\item[Throwable.] The parent class of all the elements that may be thrown. Its two main implementations are \texttt{Error}, for internal errors in the Java Virtual Machine, and \texttt{Exception}, for normal errors. The former are generally not caught, as they indicate a critical internal error, such as running out of memory or overflowing the stack. The latter kind encompasses the rest of the exceptions that occur in Java.
All exceptions can be classified as either \textit{unchecked}
(those that extend \texttt{RuntimeException} or \texttt{Error}) or
\textit{checked} (all others; they may inherit directly from \texttt{Throwable}, but typically extend \texttt{Exception}). Unchecked exceptions may be thrown anywhere without warning, whereas
checked exceptions, if thrown, must be either caught in the same method or declared in the method header.
\item[try.] Wraps a block of statements whose exceptions are to be handled. All the exceptions thrown in the statements contained (or in any methods called from them) will be processed by the associated list of \texttt{catch} clauses. If no \texttt{catch} matches the type of the exception, the exception propagates to the \texttt{try} block that contains the current one or, in its absence, to the method that called the current one.
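This propagation rule can be observed in a small example of our own: an exception that does not match the inner \texttt{catch} escapes to the enclosing \texttt{try}.

```java
public class PropagationDemo {
    static String result = "";

    public static void main(String[] args) {
        try {
            try {
                throw new IllegalStateException("inner");
            } catch (NumberFormatException e) {
                result = "inner handler";   // type does not match: skipped
            }
        } catch (IllegalStateException e) {
            result = "outer handler";       // the exception propagates here
        }
        System.out.println(result);         // prints "outer handler"
    }
}
```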
\item[catch.] Contains two elements: a variable declaration, whose type must extend from \texttt{Throwable}, and a block of statements to be executed when an exception of a matching type is thrown.
The type of a thrown exception $T_1$ matches the type of a \texttt{catch} statement $T_2$ if one of the following is true: (1) $T_1= T_2$, (2) $T_1~\textnormal{extends}~T_2$, (3) $T_1~\textnormal{extends}~T \wedge T~\textnormal{matches}~T_2$.
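This recursive matching relation coincides with Java's subtype test. As an illustration (ours, not part of the thesis' formalization), it can be checked with reflection: $T_1$ matches $T_2$ exactly when \texttt{T2.isAssignableFrom(T1)} holds.

```java
public class MatchDemo {
    // matches(T1, T2) is true iff an exception of type T1 is handled by
    // a catch clause declaring type T2: T1 = T2, or T1 (transitively)
    // extends T2 -- which is precisely T2.isAssignableFrom(T1).
    static boolean matches(Class<? extends Throwable> thrown,
                           Class<? extends Throwable> declared) {
        return declared.isAssignableFrom(thrown);
    }

    public static void main(String[] args) {
        // ArithmeticException extends RuntimeException: caught.
        System.out.println(matches(ArithmeticException.class, RuntimeException.class)); // true
        // The supertype is NOT caught by a subtype clause.
        System.out.println(matches(Exception.class, RuntimeException.class)); // false
    }
}
```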
\texttt{catch} clauses are processed sequentially, although their order does not matter semantically, due to the restriction that each type must be placed after all of its subtypes.
When a matching clause is found, its block is executed and the rest are ignored.
Variable declarations may be of multiple types \texttt{(T1|T2 e)}, when two unrelated types of exception must be caught and the same code executed for both.
If there is an inheritance relationship, the parent suffices.\footnotemark
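A brief sketch of multi-catch (the method and its return values are ours, chosen for illustration): two unrelated exception types share a single handler.

```java
public class MultiCatchDemo {
    // Multi-catch (Java 7+): neither alternative may be a subtype of
    // the other, so NumberFormatException | IllegalStateException is legal.
    static String classify(int code) {
        try {
            if (code == 0) throw new NumberFormatException();
            if (code == 1) throw new IllegalStateException();
            return "no exception";
        } catch (NumberFormatException | IllegalStateException e) {
            return "handled " + e.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(0)); // handled NumberFormatException
        System.out.println(classify(1)); // handled IllegalStateException
        System.out.println(classify(2)); // no exception
    }
}
```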
\item[finally.] Contains a block of statements that is always executed if the \texttt{try} block is entered, regardless of how it exits.
It is typically used to tidy up, for example closing I/O streams. The \texttt{finally} block can be reached in two ways:
with an exception pending ---thrown in \texttt{try} and not captured by
any \texttt{catch}, or thrown inside a \texttt{catch}--- or without it
(when the \texttt{try} or \texttt{catch} end successfully). After
\footnotetext{Only available from Java 7 onward. For more details, see \url{https://docs.oracle.com/javase/7/docs/technotes/guides/language/catch-multiple.html} (retrieved November 2019).}
\footnotetext{From a survey on software developers by StackOverflow. Source: \url{https://insights.stackoverflow.com/survey/2019/\#technology-\_-programming-scripting-and-markup-languages} (retrieved November 2019).}
featuring a \texttt{throw} statement (called \texttt{raise} in Python), a \texttt{try-catch}-like
structure, and most include a \texttt{finally} block that may be appended to \texttt{try} blocks.
The difference resides in the value passed by the exception.
In programming languages with inheritance and polymorphism, the value thrown is restricted to the subtypes of a generic error type (e.g., \texttt{Throwable} in Java), and exceptions are filtered by type.
In languages without inheritance, an arbitrary value may be thrown (e.g., JavaScript, TypeScript), and exceptions are filtered with a boolean condition or a pattern to be matched.
In both cases there exists a way to indicate that all possible exceptions should be caught, regardless of their type or value.
Regarding the languages that do not offer an exception handling mechanism similar to Java's, error-handling is covered by a variety of systems, which are briefly detailed below.
\caption{A simple \texttt{main} method (left) with an emulated \texttt{try-catch}, and a method that computes a square root (right), emulating a \texttt{throw} statement if the number is negative.}
\label{fig:exceptions-c}
\end{figure}
Consider Figure~\ref{fig:exceptions-c}: in the \texttt{main} function, line 2 will be executed twice: first when
instances can be accumulated. When appropriate, they will run in LIFO
(Last In--First Out) order.
\item[Assembly.] Assembly is a representation of machine code, and each computer architecture has its own instruction set, which makes a general analysis infeasible. In general, though, no unified exception handling is provided: each processor architecture may or may not provide its own system. As with the previous entries on this list, an exception system can be emulated, in this case with the low-level instructions commonly available in most architectures.