% !TEX encoding = UTF-8
% !TEX spellcheck = en_GB
% !TEX root = ../paper.tex

\chapter{Main explanation?}
\label{cha:incremental}

\section{First definition of the SDG}
\label{sec:first-def-sdg}

The system dependence graph (SDG) is the main data structure for program representation used in the program slicing field. It was first proposed by Horwitz, Reps and Binkley~\cite{HorwitzRB88} and, since then, many approaches have based their models on it. It builds upon the existing control flow graph (CFG), defining dependencies between vertices of the CFG and representing them in a program dependence graph (PDG). The SDG is then built from the assembly of the different PDGs (each representing a method of the program), linking each method call to its corresponding definition. Because each graph is built from the previous one, new constructs can be added to the CFG without the need to alter the algorithm that converts each CFG to a PDG and then to the final SDG; the only modifications required are the redefinition of an already defined dependence or the addition of new kinds of dependence.

The language covered by the initial proposal is a simple one, featuring procedures with modifiable parameters and basic instructions: calls to procedures, variable assignments, arithmetic and logic operators, and conditional instructions (branches and loops); that is, the basic features of an imperative programming language.
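The layered construction just described (one CFG per method, a PDG derived from each CFG, and one SDG assembling every PDG) can be sketched with minimal data structures. The following Java sketch is purely illustrative: the class and field names are hypothetical, and real slicers attach considerably more information to each node.

```java
import java.util.*;

// Illustrative sketch (hypothetical names): adjacency-list graphs with
// string-labelled nodes, mirroring the layered CFG -> PDG -> SDG design.
public class Graphs {
    // A directed graph: each node maps to its list of successors.
    static class Digraph {
        final Map<String, List<String>> succ = new LinkedHashMap<>();
        void addNode(String n) { succ.computeIfAbsent(n, k -> new ArrayList<>()); }
        void addEdge(String a, String b) { addNode(a); addNode(b); succ.get(a).add(b); }
    }

    // The PDG reuses the CFG's nodes, but with new kinds of edges.
    static class Pdg {
        final Digraph control = new Digraph();  // control dependence edges
        final Digraph data = new Digraph();     // data dependence edges
    }

    // The SDG assembles every method's PDG and links calls to definitions.
    static class Sdg {
        final Map<String, Pdg> methods = new LinkedHashMap<>();
        final Digraph calls = new Digraph();    // function call edges
    }
}
```

Each of the following sections refines which edges populate these sets; the point here is only that every layer reuses the nodes of the previous one, so extensions concentrate on the CFG and on new edge kinds.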
The CFGs are as simple as the programs themselves, with each graph representing one procedure. The instructions of the program are represented as vertices of the graph and are split into two categories: statements, which have no effect on the control flow (e.g., assignments, procedure calls), and predicates, whose execution may lead to one of multiple ---though traditionally two--- different paths (e.g., conditional instructions). While statements are connected sequentially to the next instruction, predicates have two outgoing edges, each of them connected to the first statement that would be executed according to the result of evaluating the conditional expression in the guard of the predicate.

\begin{definition}[Control Flow Graph \carlos{add original citation}]
\label{def:cfg}
A \emph{control flow graph} $G$ of a method $M$ is a directed graph, represented as a tuple $\langle N, E \rangle$, where $N$ is a set of nodes, composed of $M$'s statements plus two special nodes, ``Start'' and ``End''; and $E$ is a set of edges of the form $e = (n_1, n_2)~|~n_1, n_2 \in N$. Most algorithms that generate the SDG mandate the ``Start'' node to be the only source and the ``End'' node to be the only sink in the graph (i.e., the only node without incoming edges and the only node without outgoing edges, respectively). Edges are created according to the possible execution paths that exist: each statement is connected to any statement that may immediately follow it. Formally, an edge $e = (n_1, n_2)$ exists if and only if there exists an execution of the program where $n_2$ is executed immediately after $n_1$.
In general, expressions are not evaluated when generating the CFG; thus a conditional instruction will have two outgoing edges even if its condition is trivially always true or always false, e.g., \texttt{1 == 0}.
\end{definition}

To build the PDG and then the SDG, two dependencies are computed directly from the CFG's structure: data and control dependence. Control dependence is defined in terms of postdominance:

\begin{definition}[Postdominance \carlos{add original citation}]
\label{def:postdominance}
Vertex $b$ \textit{postdominates} vertex $a$ if and only if $b$ is on every path from $a$ to the ``End'' vertex.
\end{definition}

\begin{definition}[Control dependence \carlos{add original citation}]
\label{def:ctrl-dep}
Vertex $b$ is \textit{control dependent} on vertex $a$ ($a \ctrldep b$) if and only if $b$ postdominates one but not all of $a$'s successors. It follows that a vertex with only one successor cannot be the source of control dependence.
\end{definition}

\begin{definition}[Data dependence \carlos{add original citation}]
\label{def:data-dep}
Vertex $b$ is \textit{data dependent} on vertex $a$ ($a \datadep b$) if and only if $a$ may define a variable $x$, $b$ may use $x$, and there exists an $x$-definition-free path from $a$ to $b$. Data dependence was originally defined as flow dependence and split into loop-carried and loop-independent dependencies, but that distinction is no longer needed to compute program slices.
It should be noted that variable definitions and uses can be computed for each statement independently, analysing the procedures called by it if necessary. The variables used and defined by a procedure call are those used and defined by its body.
\end{definition}

With the data and control dependencies, the PDG may be built by replacing the edges of the CFG with data and control dependence edges. The former are traditionally represented as thin solid lines and the latter as thick solid lines; in the examples of this thesis, data and control dependencies are represented by thin solid red and black lines, respectively.

\begin{definition}[Program dependence graph]
\label{def:pdg}
The \textsl{program dependence graph} (PDG) is a directed graph, represented as a tuple $\langle N, E_c, E_d \rangle$, where $N$ is a set of nodes, $E_c$ a set of control edges and $E_d$ a set of data edges. The set of nodes corresponds to the set of nodes of the CFG, excluding the ``End'' node. Both sets of edges are built as follows: there is a control edge between two nodes $n_1$ and $n_2$ if and only if $n_1 \ctrldep n_2$, and a data edge between $n_1$ and $n_2$ if and only if $n_1 \datadep n_2$. Additionally, if a node $n$ has no incoming control edges, a ``default'' control edge $e = (\textnormal{Start}, n)$ is added, so that ``Start'' is the only source node of the graph. Note: the most common graphical representation is a tree-like structure based on the control edges, with nodes sorted left to right according to their position in the original program; data edges do not affect the layout, so that the graph remains easily readable.
\end{definition}

Both the CFG and the PDG represent a single method: a program is represented by one such graph per method. Finally, the SDG is built from the combination of all the PDGs that compose the program.

\begin{definition}[System dependence graph]
\label{def:sdg}
The \textsl{system dependence graph} (SDG) is a directed graph, represented as a tuple $\langle N, E_c, E_d, E_{fc} \rangle$, that represents the control and data dependencies of a whole program. Its set of nodes $N$ is the union of the nodes of the PDGs of every method, and it has three kinds of edges: control ($E_c$), data ($E_d$) and function call ($E_{fc}$) edges. The graph is built by combining the PDGs, with each ``Start'' node labelled after the function it begins. There exists one function call edge between each node containing one or more calls and the ``Start'' node of each method called. In a programming language where the target of a call is ambiguous (e.g., with pointers or polymorphism), there exists one edge leading to every possible function called.
\end{definition}

\begin{example}[Creation of an SDG from a simple program]
Given the program shown below (left), the control flow graphs for both methods are shown on the right: \\
\begin{minipage}{0.2\linewidth}
\begin{lstlisting}
proc main() {
	a = 10;
	b = 20;
	f(a, b);
}

proc f(x, y) {
	while (x > y) {
		x = x - 1;
	}
	print(x);
}
\end{lstlisting}
\end{minipage}
\begin{minipage}{0.79\linewidth}
	\centering
	\includegraphics[width=0.6\linewidth]{img/cfgsimple}
\end{minipage}
Then, control and data dependencies are computed, arranging the nodes in the PDG. Finally, the two graphs are connected with function call edges to create the SDG:
\begin{center}
	\includegraphics[width=0.8\linewidth]{img/sdgsimple}
\end{center}
\end{example}

\subsubsection{Function calls and data dependencies}

In the original definition of the SDG, data dependencies required special handling when calling procedures, as it was considered that parameters were passed by value and global variables did not exist. \carlos{Name and cite paper that introduced it} solves this issue by splitting function calls and function definitions into multiple nodes. This proposal covers the main aspects of parameter passing: by value, by reference, complex variables such as structs or objects, and return values. To such end, the following modifications are made to the different graphs:

\begin{description}
	\item[CFG.] In each CFG, the parameters and the global variables read or modified by the method are added to the label of the ``Start'' node as assignments of the form $par = par_{in}$ for each parameter and $x = x_{in}$ for global variables. Similarly, the parameters and global variables modified by the method are added to the label of the ``End'' node as assignments of the form $x_{out} = x$. From now on, we will refer to these assignments as the input and output information, respectively. Output information is only generated for a parameter if the value set by the called method can be observed by the caller (e.g., parameters passed by reference).
	Finally, in method calls the same values must be packed and unpacked: each statement containing a function call is relabelled to contain its related input information (of the form $par_{in} = \textnormal{exp}$ for parameters or $x_{in} = x$ for global variables) and output information (always of the form $x = x_{out}$).
	\item[PDG.] Each node augmented with input or output information in the CFG is now split into multiple nodes: the original node (Start, End or function call) is the main node, and each assignment contained in the input and output information is represented as a new node, which is control-dependent on the main one. Visually, the new nodes coming from the input information are placed on the left and the ones coming from the output information on the right, with parameters sorted accordingly.
	\item[SDG.] Three kinds of edges are introduced: parameter input (param-in), parameter output (param-out) and summary edges. Parameter input edges are placed between each method call's input nodes and the corresponding method definition's input nodes. Parameter output edges are placed between each method definition's output nodes and the corresponding method call's output nodes. Summary edges are placed between the input and output nodes of a method call, according to the dependencies inside the method definition: if there is a path from an input node to an output node of a method, a summary edge is placed between the corresponding input and output nodes in every call to that method. Note: param-in and param-out edges are kept separate because the slicing traversal algorithm treats them differently (output edges are excluded in its first pass and input edges in the second).
\end{description}

\begin{example}[Variable packing and unpacking]
Let $f(x, y)$ be a function with two integer parameters which modifies the argument passed as its second parameter, and let $f(a + b, c)$ be a call to it, with parameters passed by reference where possible. The label of the method call node in the CFG would be ``\texttt{x\_in = a + b, y\_in = c, f(a + b, c), c = y\_out}''; method $f$ would have \texttt{x = x\_in, y = y\_in} in the ``Start'' node and \texttt{y\_out = y} in the ``End'' node. The relevant section of the SDG would be:
\begin{center}
	\includegraphics[width=0.5\linewidth]{img/parameter-passing}
\end{center}
\end{example}

\section{Unconditional control flow}

Even though the initial definition of the SDG was adequate to compute slices, the language covered was not enough for the typical programming language of the 1980s, which included (in one form or another) unconditional control flow. Therefore, one of the first upgrades proposed for the SDG construction algorithm was the inclusion of unconditional jumps, such as ``break'', ``continue'', ``goto'' and ``return'' statements (or any equivalent construct).
A naive representation would be to treat unconditional jumps the same as any other statement, but with the outgoing edge landing in the corresponding instruction (outside the loop, at the loop condition, at the method's end, etc.); an alternative approach would be to represent each jump as an edge, not a vertex, connecting the previous statement with the next one to be executed. Both of these approaches fail to generate any control dependence from the unconditional jump, as the definition of control dependence (see definition~\ref{def:ctrl-dep}) requires a vertex to have more than one successor for it to be a possible source of control dependence. From here, two families of solutions stem: the first redefines control dependence in order to reflect the real effect of these instructions ---as some authors~\cite{DanBHHKL11} have done--- and the second alters the creation of the SDG to ``create'' those dependencies, which is the most widely used solution~\cite{BalH93}.

The most popular approach was proposed by Ball and Horwitz~\cite{BalH93}, classifying instructions into three separate categories:

\begin{description}
	\item[Statements.] Any instruction that is not a conditional or unconditional jump. It has one outgoing edge in the CFG, leading to the instruction that follows it in the program.
	\item[Predicates.] Any conditional jump instruction, such as \texttt{while}, \texttt{until}, \texttt{do-while}, \texttt{if}, etc. It has two outgoing edges, labelled \textit{true} and \textit{false}, leading to the corresponding instructions.
	\item[Pseudo-predicates.] Unconditional jumps (e.g., \texttt{break}, \texttt{goto}, \texttt{continue}, \texttt{return}). They are treated like predicates, with the difference that the outgoing edge labelled \textit{false} is marked as non-executable: according to the definition of the CFG (definition~\ref{def:cfg}), there is no possible execution in which such an edge is traversed. Originally, each edge had a specific meaning: the \textit{true} edge leads to the jump's destination and the \textit{false} one to the instruction that would be executed if the unconditional jump were removed or converted into a \texttt{no-op} (a blank operation that performs no change to the program's state). This reasoning still applies to unconditional jumps, but later proposals have also used the pseudo-predicate category for other instructions as a means of artificially generating control dependencies.
\end{description}

As a consequence of this classification, every instruction after an unconditional jump $j$ is control-dependent (either directly or indirectly) on $j$ and on the structure containing it (a conditional statement or a loop), as can be seen in the following example.
\begin{figure}
	\centering
	\begin{minipage}{0.3\linewidth}
	\begin{lstlisting}
static void f() {
	int a = 1;
	while (a > 0) {
		if (a > 10) break;
		a++;
	}
	System.out.println(a);
}
	\end{lstlisting}
	\end{minipage}
	\begin{minipage}{0.6\linewidth}
		\includegraphics[width=0.4\linewidth]{img/breakcfg}
		\includegraphics[width=0.59\linewidth]{img/breakpdg}
	\end{minipage}
	\caption{A program with unconditional control flow (left), its CFG (centre) and its PDG (right).}
	\label{fig:break-graphs}
\end{figure}

\begin{example}[Control dependencies generated by unconditional instructions]
	\label{exa:unconditional}
	Figure~\ref{fig:break-graphs} showcases a small program with a \texttt{break} statement, its CFG and its PDG, with a slice in grey. The slicing criterion (line 5, variable $a$) is control dependent on both the unconditional jump and its surrounding conditional instruction (both on line 4), even though including them is not strictly necessary in the context of weak slicing. Note: the ``Start'' node $S$ is also categorized as a pseudo-predicate, with its \textit{false} edge connected to the ``End'' node, therefore generating a dependence from $S$ to all the nodes inside the method. This removes the need to handle $S$ as a special case when converting a CFG to a PDG, but weakens the interpretation of non-executable edges as leading to the ``instruction that would be executed if the node were absent or a no-op''.
\end{example}

The original paper~\cite{BalH93} proves the completeness of this approach, but disproves its correctness by providing a counter-example similar to example~\ref{exa:nested-unconditional}. This problem affects both weak and strong slicing, so there is still room for improvement on this proposal. The authors postulate that a more precise approach would be achievable if the restriction that a slice must be a subset of the program's instructions were lifted.
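Definitions~\ref{def:postdominance} and~\ref{def:ctrl-dep} can be computed mechanically, with postdominance obtained as a backward dataflow fixpoint. The following Java sketch is illustrative only (it is not an implementation from the literature, and node names are abbreviated labels); it reproduces the dependencies discussed in example~\ref{exa:unconditional}, treating non-executable edges as ordinary CFG edges:

```java
import java.util.*;

public class ControlDeps {

    /** pdom(n) = {n} U (intersection of pdom(s) over successors s); pdom(end) = {end}. */
    static Map<String, Set<String>> postdominators(Map<String, List<String>> succ, String end) {
        Set<String> nodes = succ.keySet();
        Map<String, Set<String>> pdom = new HashMap<>();
        for (String n : nodes)
            pdom.put(n, n.equals(end) ? new HashSet<>(List.of(end)) : new HashSet<>(nodes));
        boolean changed = true;
        while (changed) {
            changed = false;
            for (String n : nodes) {
                if (n.equals(end)) continue;
                Set<String> inter = new HashSet<>(nodes);
                for (String s : succ.get(n)) inter.retainAll(pdom.get(s));
                inter.add(n);
                if (!inter.equals(pdom.get(n))) { pdom.put(n, inter); changed = true; }
            }
        }
        return pdom;
    }

    /** b is control dependent on a iff b postdominates one but not all of a's successors. */
    static Set<String> controlDependentOn(String a, Map<String, List<String>> succ,
                                          Map<String, Set<String>> pdom) {
        Set<String> result = new HashSet<>();
        List<String> ss = succ.get(a);
        if (ss.size() < 2) return result;  // a single successor can never be a source
        for (String b : succ.keySet()) {
            long count = ss.stream().filter(s -> pdom.get(s).contains(b)).count();
            if (count > 0 && count < ss.size()) result.add(b);
        }
        return result;
    }

    /** CFG of the break example; non-executable edges included as ordinary edges. */
    static Map<String, List<String>> breakExampleCfg() {
        Map<String, List<String>> g = new LinkedHashMap<>();
        g.put("Start", List.of("a=1", "End")); // Start as pseudo-predicate
        g.put("a=1", List.of("while"));
        g.put("while", List.of("if", "print"));
        g.put("if", List.of("break", "a++"));
        g.put("break", List.of("print", "a++")); // false edge: non-executable
        g.put("a++", List.of("while"));
        g.put("print", List.of("End"));
        g.put("End", List.of());
        return g;
    }
}
```

On this CFG, \texttt{controlDependentOn("break", ...)} yields the loop condition and \texttt{a++}, i.e., the instructions of lines 3 and 5, matching the dependencies shown in the example.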
\begin{example}[Nested unconditional jumps]
	\label{exa:nested-unconditional}
	When two unconditional jumps are nested and both jump to the same destination, only one of them (the outermost one) is needed. Figure~\ref{fig:nested-unconditional} showcases the problem, with the minimal slice in grey and the algorithmically computed slice in light blue. Specifically, lines 3 and 5 are included unnecessarily.

	\begin{figure}
		\begin{minipage}{0.15\linewidth}
		\begin{lstlisting}
while (X) {
	if (Y) {
		if (Z) {
			A;
			break;
		}
		B;
		break;
	}
	C;
}
D;
		\end{lstlisting}
		\end{minipage}
		\begin{minipage}{0.84\linewidth}
			\includegraphics[width=0.4\linewidth]{img/nested-unconditional-cfg}
			\includegraphics[width=0.59\linewidth]{img/nested-unconditional-pdg}
		\end{minipage}
		\caption{A program with nested unconditional control flow (left), its CFG (centre) and PDG (right).}
		\label{fig:nested-unconditional}
	\end{figure}
\end{example}

\section{Exceptions}

Exception handling was first tackled in the context of Java program slicing by Sinha et al.~\cite{SinH98}, with later contributions by Allen and Horwitz~\cite{AllH03}. There also exist proposals for other programming languages, which will be explored later (chapter~\ref{cha:state-art}). The following section explains the treatment of the different elements of exception handling in Java program slicing.

As seen in section~\ref{sec:intro-exception}, exception handling in Java adds two constructs: \texttt{throw} and \texttt{try-catch}. Structurally, the first one resembles an unconditional control flow statement carrying a value ---like \texttt{return} statements--- but its destination is not fixed, as it depends on the dynamic type of the value thrown. If there is a compatible \texttt{catch} block, execution will continue inside it; otherwise, the method exits with the corresponding value as the error.
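The fact that the destination of a \texttt{throw} depends on the dynamic type of the thrown value can be observed directly in Java. In this small runnable demo (the method names are hypothetical), the same \texttt{throw} statement reaches different \texttt{catch} blocks depending on the dynamic type of its operand:

```java
// Minimal demo: the catch block is selected by the dynamic type of the
// thrown value, not by the static type of the throw expression.
public class DynamicDispatch {
    static String route(RuntimeException e) {
        try {
            throw e;  // static type: RuntimeException; dynamic type decides the catch
        } catch (ArithmeticException ae) {
            return "arithmetic";
        } catch (RuntimeException re) {
            return "runtime";
        }
    }
}
```

Since the dynamic type is, in general, unknown at analysis time, a slicer must conservatively consider every \texttt{catch} block compatible with the static type of the thrown expression.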
The same process is repeated in the method that called the current one, until either the call stack is emptied or the exception is successfully caught. If the exception is not caught at all, the program exits with an error ---except in multi-threaded programs, in which case only the corresponding thread is terminated. The \texttt{try-catch} statement can be compared to a \texttt{switch} that compares types (with \texttt{instanceof}) instead of constants (with \texttt{==} and \texttt{Object\#equals(Object)}). Both structures require special handling to place the proper dependencies, so that slices are complete and as correct as possible.

\subsection{\texttt{throw} statement}

The \texttt{throw} statement compounds two elements in one instruction: an unconditional jump with a value attached, and a switch to an ``exception mode'' in which the normal order of execution is disregarded. The first element has been extensively covered and solved, as it is equivalent to the \texttt{return} instruction; the second requires a small addition to the CFG: there must be an alternative control flow that represents the path of the exception. For now, without including \texttt{try-catch} structures, any exception thrown will exit its method with an error, so a new ``Error end'' node is needed. The pre-existing ``End'' node is renamed ``Normal end'', but now the CFG has two distinct sink nodes, which is forbidden in most slicing algorithms. To solve that problem, a general ``End'' node is created, with both the normal and error ends connected to it, making it the only sink in the graph. In order to properly accommodate a method's output variables (global variables or parameters passed by reference that have been modified), variable unpacking (i.e., the output information) is added to the ``Error end'' node, in the same way as it was added to the ``End'' node in previous examples.
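The exit-node wiring just described can be sketched as a small graph construction. The following Java sketch (the helper class is hypothetical; non-executable edges are omitted for brevity) builds the CFG of a method with an uncaught \texttt{throw} and checks that the general ``End'' node is the only sink:

```java
import java.util.*;

// Sketch: both "Normal end" and "Error end" are connected to a single
// general "End" node, so the CFG keeps exactly one sink.
public class ExceptionCfg {
    final Map<String, List<String>> succ = new LinkedHashMap<>();

    void edge(String from, String to) {
        succ.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
        succ.computeIfAbsent(to, k -> new ArrayList<>());
    }

    /** Nodes with no outgoing edges. */
    Set<String> sinks() {
        Set<String> s = new LinkedHashSet<>();
        for (var e : succ.entrySet())
            if (e.getValue().isEmpty()) s.add(e.getKey());
        return s;
    }

    /** CFG of a method that may exit normally or with an uncaught exception. */
    static ExceptionCfg throwExample() {
        ExceptionCfg g = new ExceptionCfg();
        g.edge("Start", "if (x < 0)");
        g.edge("if (x < 0)", "throw");                 // true branch
        g.edge("if (x < 0)", "return Math.sqrt(x)");   // false branch
        g.edge("throw", "Error end");                  // exception path
        g.edge("return Math.sqrt(x)", "Normal end");
        g.edge("Normal end", "End");                   // both ends merge into one sink
        g.edge("Error end", "End");
        return g;
    }
}
```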
This change constitutes an increase in precision, as the two kinds of output are now differentiated: for example, a slice that only requires the error exit may include fewer variable modifications than one that includes both. This treatment of \texttt{throw} statements only modifies the structure of the CFG, without altering the other graphs, the traversal algorithm, or the basic definitions of control and data dependence. That fact makes it easy to incorporate into any existing program slicer that follows the general model described. Example~\ref{exa:throw} showcases the new exit nodes and the treatment of the \texttt{throw} as an unconditional jump whose destination is the ``Error end''.

\begin{example}[CFG of an uncaught \texttt{throw} statement]
	\label{exa:throw}
	Consider the simple Java method on the left of figure~\ref{fig:throw}, which computes a square root if its argument is non-negative, throwing a \texttt{RuntimeException} otherwise. The CFG in the centre illustrates the treatment of \texttt{throw}, ``Normal end'' and ``Error end'' as pseudo-predicates, and the PDG on the right shows the control dependencies generated from the \texttt{throw} statement to the following instructions and exit nodes.
	\begin{figure}[h]
		\begin{minipage}{0.3\linewidth}
		\begin{lstlisting}
double f(int x) {
	if (x < 0)
		throw new RuntimeException();
	return Math.sqrt(x);
}
		\end{lstlisting}
		\end{minipage}
		\begin{minipage}{0.69\linewidth}
			\includegraphics[width=\linewidth]{img/throw-example-cfg}
		\end{minipage}
		\caption{A simple program with a \texttt{throw} statement, its CFG (centre) and its PDG (right).}
		\label{fig:throw}
	\end{figure}
\end{example}

\subsection{\texttt{try-catch-finally} statement}

The \texttt{try-catch} statement is the only way to stop an exception once it is thrown.
It filters exceptions by their type, letting those which do not match any of the \texttt{catch} blocks propagate to the enclosing \texttt{try-catch}, either surrounding it in the same method or in a previous method in the call stack. On top of that, the \texttt{finally} block helps programmers guarantee code execution. It can be used in place of or in conjunction with \texttt{catch} blocks. The code placed inside a \texttt{finally} block is guaranteed to run if the \texttt{try} block has been entered. This holds whether the \texttt{try} block exits correctly, an exception is caught, an exception is left uncaught, or an exception is caught and another one is thrown while handling it (within its \texttt{catch} block).

It is worth noting that the dependencies introduced by non-executable edges are not ``normal'' control dependencies. In traditional control dependence, $a \ctrldep b$ if and only if the number of times $b$ is executed depends on the \textit{execution} of $a$ (e.g., conditional blocks and loops); these new control dependencies exist if and only if the number of times $b$ is executed depends on the \textit{presence} or \textit{absence} of $a$. In the case of exceptions, it is easy to see that the absence of a \texttt{catch} block alters the results of an execution; similarly, the absence of a \texttt{break} modifies the flow of the program, even though its execution does not control anything. A differentiation seems appropriate, even if only as subcategories of control dependence: execution control dependence and presence control dependence.

The main problem when including \texttt{try-catch} blocks in program slicing is that \texttt{catch} blocks are not always strictly necessary for the slice (less so for weak slices), but they introduce new kinds of control dependence, which must be properly mapped to the SDG.
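The guarantee provided by \texttt{finally} can be demonstrated with a small runnable example (the names are hypothetical), where a log records which blocks execute on the normal and on the exceptional path:

```java
// Runnable sketch of the `finally` guarantee: the finally block runs on a
// normal exit and on an exceptional exit alike.
public class FinallyDemo {
    static final StringBuilder log = new StringBuilder();

    static void run(boolean fail) {
        try {
            try {
                log.append("try;");
                if (fail) throw new RuntimeException();
                log.append("ok;");
            } finally {
                log.append("finally;");   // runs on every path out of the try
            }
        } catch (RuntimeException e) {
            log.append("caught;");        // only on the exceptional path
        }
    }
}
```

A slicer must therefore treat the instructions of a \texttt{finally} block as reachable from every exit path of its \texttt{try} block.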
The absence of \texttt{catch} blocks may also be a problem for compilation, as Java requires at least one \texttt{catch} or \texttt{finally} block to accompany each \texttt{try} block; though that could be fixed after generating the slice, if the slice is required to be executable. A typical representation of the \texttt{try} block is as a pseudo-predicate, connected to the first statement inside it and to the instruction that follows the whole \texttt{try} block. This generates control dependencies from the \texttt{try} node to each of the instructions it contains.

Inside a \texttt{try} block there can be four distinct sources of exceptions:

\begin{description}
	\item[Method calls.] If an exception is thrown inside a method and it is not caught, it will surface inside the \texttt{try} block. As \textit{checked} exceptions must be declared explicitly, method declarations may be consulted to determine whether a method call may throw any exceptions. On this front, polymorphism and inheritance pose no problem, as overriding methods must match the signature of the overridden method ---including the exceptions that may be thrown. If \textit{unchecked} exceptions are also considered, method bodies can be analysed to determine which exceptions may be thrown, or the documentation can be checked automatically for \texttt{@throws} annotations.
	\item[\texttt{throw} statements.] The least common but simplest source, as it is treated like a \texttt{throw} inside a method. The type of the exception is often obvious, since exceptions are frequently built and thrown in the same instruction; but it may also be hidden, e.g., \texttt{throw (Exception) o}, where \texttt{o} is a variable of type \texttt{Object}.
	\item[Implicit unchecked exceptions.]
	If \textit{unchecked} exceptions are considered, many common expressions may throw an exception; the most common cases are calling a method or accessing a field of a \texttt{null} object (\texttt{NullPointerException}), accessing an invalid index of an array (\texttt{ArrayIndexOutOfBoundsException}), dividing an integer by zero (\texttt{ArithmeticException}) and casting to an incompatible type (\texttt{ClassCastException}). On top of that, the user may create new exception types that inherit from \texttt{RuntimeException}, but those can only be explicitly thrown. Considering implicit unchecked exceptions in the method's CFG generates extra dependencies that make the slices produced bigger.
	\item[Errors.] May be generated at any point in the execution of the program, but they normally signal a situation from which it may be impossible to recover, such as an internal JVM error. In general, most programs do not attempt to catch them, so they can be excluded in order to keep the treatment of implicit unchecked exceptions manageable (otherwise, any instruction at any moment may throw an \texttt{Error}).
\end{description}

All exception sources are treated very similarly: the statement that may throw an exception is treated as a predicate, with the \textit{true} edge connected to the instruction that would follow it if no exception were raised, and the \textit{false} edge connected to all the \texttt{catch} nodes which may be compatible with the exception thrown. The case of method calls that may throw exceptions is slightly different, as there may be variables to unpack, both in the case of a normal and of an erroneous exit. To that end, nodes containing method calls have an unbounded number of outgoing edges: one leads to a node labelled ``normal return'', after which the variables produced by any normal exit of the method are unpacked; the others lead to every \texttt{catch} that may catch an exception thrown by the method.
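The selection of compatible \texttt{catch} nodes can be sketched with Java's own reflection API: a \texttt{catch} clause is compatible if its declared type is a supertype of the thrown type (in which case it always catches, and filtering stops there) or a subtype of it (in which case it may catch, depending on the dynamic type). The helper below is hypothetical, not part of any slicer described here:

```java
import java.util.*;

// Sketch of the edge construction described above: a statement that may
// throw is connected to every compatible catch clause, and to the
// "Error end" if some exception can escape all of them.
public class CatchEdges {
    static List<String> targets(Class<? extends Throwable> thrown,
                                List<Class<? extends Throwable>> catches) {
        List<String> edges = new ArrayList<>();
        boolean fullyCaught = false;
        for (Class<? extends Throwable> c : catches) {
            if (c.isAssignableFrom(thrown)) {
                // catch type is a supertype of the (static) thrown type:
                // any instance reaching this clause is caught here
                edges.add("catch " + c.getSimpleName());
                fullyCaught = true;
                break;
            } else if (thrown.isAssignableFrom(c)) {
                // catch type is a subtype: only some dynamic types match
                edges.add("catch " + c.getSimpleName());
            }
        }
        if (!fullyCaught) edges.add("Error end");
        return edges;
    }
}
```

For instance, a statement that may throw a \texttt{RuntimeException} inside \texttt{try \{...\} catch (ArithmeticException a) \{...\} catch (Exception e) \{...\}} is connected to both \texttt{catch} nodes, and to neither exit node, since \texttt{Exception} acts as a catch-all for it.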
Each catch must then unpack the variables produced by the erroneous exits of the method. The ``normal return'' node is itself a pseudo-statement; with the \textit{true} edge leading to the following instruction and the \textit{false} one to the first common instruction between all the paths of length $\ge 1$ that start from the method call ---which translates to the instruction that follows the \texttt{try} block if all possible exceptions thrown by the method are caught or the ``Exit'' node if there are some left uncaught. \deleted{Carlos: CATCH Representation doesn't matter, it is similar to a switch but checking against types. The difference exists where there exists the chance of not catching the exception; which is semantically possible to define. When a \texttt{catch (Throwable e)} is declared, it is impossible for the exception to exit the method; therefore the control dependency must be redefined.} \deleted{The filter for exceptions in Java's \texttt{catch} blocks is a type (or multiple types since Java 8), with a class that encompasses all possible exceptions (\texttt{Throwable}), which acts as a catch-all. In the literature there exist two alternatives to represent \texttt{catch}: one mimics a static switch statement, placing all the \texttt{catch} block headers at the same height, all pending from the exception-throwing exception and the other mimics a dynamic switch or a chain of \texttt{if} statements. The option chosen affects how control dependencies should be computed, as the different structures generate different control dependencies by default.} \deleted{\begin{description} \item[Switch representation.] There exists no relation between different \texttt{catch} blocks, each exception-throwing statement is connected through an edge labelled false to each of the \texttt{catch} blocks that could be entered. 
Each \texttt{catch} block is a pseudo-statement, with its true edge connected to the end of the \texttt{try} and the As an example, a \texttt{1 / 0} expression may be connected to \texttt{ArithmeticException}, \texttt{RuntimeException}, \texttt{Exception} or \texttt{Throwable}. If any exception may not be caught, there exists a connection to the ``Error exit'' of the method. \item[If-else representation.] Each exception-throwing statement is connected to the first \texttt{catch} block. Each \texttt{catch} block is represented as a predicate, with the true edge connected to the first statement inside the \texttt{catch} block, and the false edge to the next \texttt{catch} block, until the last one. The last one will be a pseudo-predicate connected to the first statement after the \texttt{try} if it is a catch-all type or to the ``Error exit'' if it \added{is not}\deleted{isn't}. \end{description}} \begin{example}[Catches.] Consider the following segment of Java code in figure~\ref{fig:try-catch}, which includes some statements that do not use data (X, Y and Z), and a method call to \texttt{f} that uses \texttt{x} and \texttt{y}, two global variables. \texttt{f} may throw an exception, so it has been placed inside a \texttt{try-catch} structure, with a statement in the \texttt{catch} that logs the error when it occurs. Additionally, when \texttt{f} exits without an error, only \texttt{x} is modified; but when an error occurs, only \texttt{y} is modified.
\begin{figure}[h] \begin{minipage}{0.35\linewidth} \begin{lstlisting} try { X; f(); Y; } catch (Exception e) { System.out.println("error"); } Z; \end{lstlisting} \end{minipage} \begin{minipage}{0.64\linewidth} \includegraphics[width=\linewidth]{img/try-catch-example} \end{minipage} \caption{A simple example of the representation of \texttt{try-catch} structures and method calls that may throw exceptions.} \label{fig:try-catch} \end{figure} \end{example} % \delete{From here to the end, move to solution chapter (CARLOS)} Regardless of the approach, when there exists a catch-all block, no dependency is generated from the \texttt{catch} blocks, as all of them lead to the next instruction. However, this means that if no data is output from the \texttt{try} or \texttt{catch} blocks, the catches will not be picked up by the slicing algorithm, which may alter the results unexpectedly. If this problem arises, the simple and obvious solution is to add artificial edges to force the inclusion of all \texttt{catch} blocks. This adds instructions to the slice (lowering its score when evaluating against benchmarks), but they are completely innocuous, as they just stop the exception without running any extra instruction. An alternative exists, but it slows down the process of creating a slice from an SDG: a \texttt{catch} block is only strictly needed if an exception that it catches may be thrown and an instruction after the \texttt{try-catch} block should be executed; in any other case the \texttt{catch} block is irrelevant and should not be included. However, this change requires analysing the inclusion of \texttt{catch} blocks after the two-pass algorithm has completed, slowing it down. In any case, each approach trades time for accuracy or vice versa, but the trade-off is small enough to be negligible.
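To make the catch-all problem concrete, the following minimal Java sketch (the names \texttt{f}, \texttt{y} and \texttt{run} are ours, chosen for illustration) shows a \texttt{catch} block that moves no data, yet whose removal from a slice would change the behaviour observed at the instruction that follows the \texttt{try-catch}:

```java
public class CatchAllSlice {
    static int y = 0;

    static void f(boolean fail) {
        if (fail) throw new RuntimeException("error in f");
        y = 1;
    }

    // The catch moves no data, but it is what allows execution to reach the
    // statement after the try-catch when f throws; a slice that drops it
    // would behave differently on the failing run.
    static String run(boolean fail) {
        y = 0;                           // reset the global for each run
        try {
            f(fail);
        } catch (RuntimeException e) {
            // innocuous: stops the exception without running anything else
        }
        return "Z, y=" + y;              // stands for the instruction Z after the try-catch
    }

    public static void main(String[] args) {
        System.out.println(run(false));  // Z, y=1
        System.out.println(run(true));   // Z, y=0
    }
}
```

A slice on the final statement contains no data dependence on the \texttt{catch}, which is why either artificial edges or a post-pass inclusion analysis is needed to keep it in the slice.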
Regarding \textit{unchecked} exceptions, an extra layer of analysis should be performed to tag statements with the possible exceptions they may throw. On top of that, methods must be analysed and tagged accordingly. The worst case is that of inaccessible methods, which may throw any \texttt{RuntimeException}; with their source code unavailable, they must be marked as capable of throwing any of them. This results in a graph where each instruction depends on the proper execution of the previous statement, save for simple statements that cannot generate exceptions. The trade-off here is between completeness and correctness: the inclusion of \textit{unchecked} exceptions increases completeness, but also the slice size, reducing correctness. A possible solution would be to only consider user-generated exceptions, or to assume that library methods never throw an unchecked exception. A new slicing variation could annotate methods or limit the set of unchecked exceptions to be considered. Regarding the \texttt{finally} block, most approaches treat it properly, representing it twice: once for the case where there is no active exception and once for the case where it executes with an exception active. An exception could also be thrown inside the \texttt{finally} block, but that would be represented normally.
% vim: set noexpandtab:ts=2:sw=2:wrap
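The duplicated representation of \texttt{finally} can be checked against Java's actual semantics with a small sketch (again, the names are ours): the block runs once per surrounding path, normal or exceptional, which is exactly why the CFG contains one copy of it for each case.

```java
public class FinallyPaths {
    // Traces the path taken through a try-finally: the finally block runs on
    // both the normal and the exceptional path, matching its duplicated
    // representation in the CFG.
    static String run(boolean fail) {
        StringBuilder log = new StringBuilder();
        try {
            try {
                if (fail) throw new RuntimeException("err");
                log.append("body;");
            } finally {
                log.append("finally;"); // one CFG copy per surrounding path
            }
            log.append("after;");       // reached only on the normal path
        } catch (RuntimeException e) {
            log.append("caught;");      // reached only on the exceptional path
        }
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(run(false)); // body;finally;after;
        System.out.println(run(true));  // finally;caught;
    }
}
```

The two traces never coincide after the \texttt{finally} block, so the two CFG copies have different successors: the next instruction on the normal path and the compatible \texttt{catch} (or the method's exit) on the exceptional one.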