How to use the call graph

    Question

  • Hi Andy,

        As you know, I am counting the number of certain kinds of operations, such as integer adds, for a whole program. I have already solved the loop iteration problem. However, there is another problem with nested function calls, which I suspect requires using the call graph.
        For example:

        function A()
        {
            D();
            for  // iterates 50 times
            {
                B();
            }
        }

        function B()
        {
            for  // iterates 50 times
            {
                C();
            }
        }

       So function B is called 50 times, and function C is actually called 2500 times. However, right now I only count the 50 iterations of each for loop within a single function.

       Now I want to consider the whole program, which probably requires a call graph for the entire program. I think the number of times one function is called from inside another can be recorded as the weight of an edge in the call graph. In this example, the weight of the edge from A to B is 50, the edge from B to C is 50, and the edge from A to D is 1. To calculate the total number of add operations for the whole program, I would traverse the call graph in DFS order; at each node, e.g. A, for each of its children, e.g. B, I add the node's own cost to the edge weight multiplied by the child's operation count: A's total = A's own count + 50 * B's total. This number is then propagated up to the root of the call graph.
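       The propagation described above can be sketched in plain Python (not Phoenix code; the per-function counts are hypothetical, matching the A/B/C/D example):

```python
# Illustrative sketch: total operation counts propagated bottom-up
# over a weighted call graph. Edge weights = number of calls per
# invocation of the caller; local_ops = operations counted within
# each function body alone (hypothetical numbers).

local_ops = {"A": 10, "B": 5, "C": 2, "D": 3}
calls = {                         # caller -> [(callee, edge weight)]
    "A": [("D", 1), ("B", 50)],
    "B": [("C", 50)],
    "C": [],
    "D": [],
}

def total_ops(f, memo={}):
    """Total operations for one invocation of f, including all callees.

    The (intentionally shared) default memo caches results, so each
    function is summarized once -- the same effect as visiting the
    call graph in post-order, callees before callers.
    """
    if f not in memo:
        memo[f] = local_ops[f] + sum(w * total_ops(g, memo)
                                     for g, w in calls[f])
    return memo[f]

print(total_ops("A"))  # 10 + 1*3 + 50*(5 + 50*2) = 5263
```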

      There are some questions about the implementation.
    1) How do I use the call graph? Is it the same as the flow graph? But a flow graph covers only one function, so how can I use the call graph during the "execute" method?

    2) How do I record the weight of an edge? Do I need to add an extension object to each edge to record it?

    3) I think that, when compiling each function, I can add this weight information to each edge the function is involved in. But I can only do the DFS visit after all the edge weights have been computed. How should I implement this? Should I use a sparse bit vector for each node in the call graph?

    4) To record the number of operations within a function, I think I have to record it in each node of the call graph, so I also have to add an extension object to each node in the call graph. Is that right?

    Thank you very much!

    Regards,
    Ray
    Friday, September 28, 2007 4:29 AM

All replies

  • If you compile with optimization enabled (e.g. -O2) the compiler will compile functions in "bottom-up" order on the call graph, visiting (for the most part) callees before callers. This is what you need to solve this problem.

     

    Given this compilation order, you can add a phase to the compiler that analyzes a function, and then "publishes" the analysis results for later use by attaching an extension object to the function symbol. Then, when analyzing a caller, you can look for this information on the function symbols of any callees, and if it's there, you can do the accumulation, since you know the cost of a call to the callee, and you know from analyzing the caller how many calls are made. You can also attach the information to the call graph nodes, as you mentioned.
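    A plain-Python sketch of this publish-then-look-up pattern (the function names and counts are made up; a real plugin would attach Phoenix extension objects to function symbols rather than use a dictionary):

```python
# Sketch: each function "publishes" its total cost when it is
# compiled; callers, compiled later in bottom-up order, look it up.
# The `published` dict stands in for per-symbol extension objects.

published = {}  # function name -> published total cost

def compile_function(name, local_ops, call_sites):
    """call_sites: list of (callee_name, call_count) found by analysis."""
    total = local_ops
    for callee, count in call_sites:
        # Bottom-up order guarantees the callee was already compiled,
        # so its summary is available here.
        total += count * published[callee]
    published[name] = total  # publish for any future caller
    return total

# Bottom-up compilation order: callees before callers.
compile_function("C", 2, [])
compile_function("D", 3, [])
compile_function("B", 5, [("C", 50)])
compile_function("A", 10, [("D", 1), ("B", 50)])
print(published["A"])  # 5263
```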

     

    Compiling -O2 will only get you this data across the functions in one compilation unit (typically a single .c or .cpp file). To get a broader look you can use what we call Link Time Code Generation (aka LTCG). The exact same plugin will work, but the details are a little different -- for starters, try looking at the notes from the Summer Workshop, particularly sections 5 and 6, where we talk about LTCG.

     

    Friday, September 28, 2007 6:52 AM
  • Hi Andy,

        As you suggested, I added the extension object like this:

        public ref class FunSymExtensionObject : Phx::Symbols::SymbolExtensionObject

        When I handle a "call" instruction in a function, I use it like this:

        Phx::Symbols::Symbol ^ funsym = instruction->AsCallInstruction->CallTargetOperand->Symbol->AsFunctionSymbol;
        FunSymExtensionObject ^ FunSym_Obj = FunSymExtensionObject::Get(funsym);

        But when I attach the extension object, I do it like this:

        Phx::Symbols::FunctionSymbol ^ functionSym = functionUnit->FunctionSymbol;
        FunSymExtensionObject ^ MyFunSym = FunSymExtensionObject::Get(functionSym);
        if (MyFunSym == nullptr) {
            MyFunSym = gcnew FunSymExtensionObject();
            functionSym->AddExtensionObject(MyFunSym);
        }
     

         Actually, I am a little confused about this extension object. In the lines above, used for the call instruction, the symbol is an operand of the call instruction, so this symbol should be in the symbol table of that function. The same goes for the function's own name; I think it should be in its own symbol table. So, when I set up an extension object for a function before leaving it, this extension object is associated with the symbol of the function itself, which lives in that function's own symbol table, right?

        However, when I enter another function, say its parent (calling) function, the symbol of the call instruction's operand is in the parent's symbol table.

        I could understand associating an extension object with each node in the call graph, since that is a global structure, but I don't understand this.



        Another question: how can I know that a function is the last one, I mean, the root of the call graph?
      

       Thank you very much!

    Regards,
    Ray
        

    Saturday, September 29, 2007 12:56 AM
  • FunctionSymbols are also "global" in a sense -- let me make this more precise. Suppose function A calls function B. In the compiler there is just one function symbol for B [but see footnote]. This symbol is both the one pointed at by the call operand in A's IR, and also the one pointed at by the functionUnit for B. So if you put an extension on that symbol when compiling B, you can retrieve it when compiling A.

     

    As for where the root of the call graph lies -- because the compiler usually does not have visibility into the entire program, the call graph often will have many roots (nodes with no apparent caller). I believe the only way to find these roots is to search for them -- look for call graph nodes with no predecessor edges.
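    Searching for roots this way amounts to a one-liner over the edge set; a plain-Python sketch (hypothetical edges, not the Phoenix call graph API):

```python
# Sketch: find call-graph "roots" -- nodes with no predecessor edges.
edges = [("A", "B"), ("A", "D"), ("B", "C")]   # (caller, callee) pairs

nodes = {n for edge in edges for n in edge}     # every node mentioned
has_caller = {callee for _, callee in edges}    # nodes with a predecessor
roots = nodes - has_caller                      # no incoming edge => root

print(sorted(roots))  # ['A']
```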

     

    Now, if what you are really asking is "how can I do something once all functions have been compiled" then you might look at registering a handler for the termination event. The steps behind this are a bit more involved than we'd like, but we have a working example of this in the IR-Longevity sample.

     

    [Footnote]

     

    At some point in the future we may introduce "proxy" symbols for functions like we have for global variables. That is, suppose functions A and B reference global variable X. In the compiler there will be (over time) three separate symbols for X: the "true X" in the moduleUnit symbol table, and "proxy Xs" in the symbol tables for A and B. These proxies are of type NonLocalVariableSymbol and point at the true X.

     

    You might ask why we bother with proxies for variables but not functions. It's partly a historical accident -- the design calls for proxies for both, but the variable case was more pressing and got done long ago, while the function case was left to be done later, and we haven't quite gotten around to it yet, though we may soon.

     

    You might also ask why we bother with proxies at all. The main rationale is that they give us a simple way of enumerating the resources that a functionUnit references -- we can just walk the functionUnit's symbol table and see them all, rather than having to walk the IR and check each operand.

     

    Saturday, September 29, 2007 4:30 AM
  • Hi Andy,

        Thank you very much for your answers!

        I implemented it as you suggested, with -O2 first, and for now I assume the program has only a single root, the "main" function. Later I will look at the IR-Longevity example.

       Here I have another question.

       How can we deal with recursion? This scheme does not seem able to handle it; it will just count the recursive function as if it were called once.

        Thank you very much!

    Regards,
    Ray

      
    Sunday, September 30, 2007 12:42 AM
  • Recursion (either direct, as in A calls A, or indirect, as in A calls B calls A) is a tricky thing. Dealing with it in general usually introduces some kind of iterative structure into your summary equations, and these may become too expensive to solve in an iterated manner. Trying to solve them symbolically is not easy either (consider the closed-form expressions for the computation cost of GCD or the Fibonacci numbers).

     

    What we often do is just somewhat arbitrarily break cycles in the call graph and make worst-case assumptions about those broken edges. This leaves the call graph as a DAG, which means the equations can be solved in one pass without iteration. Yes, it's an approximation, but optimizing recursion into something that is not recursive is hard, and in practice the approximations don't hurt that much.
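    One simple way to break cycles, sketched in plain Python (a DFS that drops back edges; the graph is hypothetical and this is only one of several possible tie-breaking choices):

```python
# Sketch: break call-graph cycles by dropping DFS back edges, leaving
# a DAG that a single bottom-up pass can then solve. Dropped edges are
# where worst-case assumptions would be substituted.

calls = {"A": [("B", 50)], "B": [("A", 1), ("C", 50)], "C": []}  # A<->B cycle

def break_cycles(calls):
    acyclic = {f: [] for f in calls}
    state = {}  # 1 = on the DFS stack, 2 = finished
    def dfs(f):
        state[f] = 1
        for g, w in calls[f]:
            if state.get(g) == 1:
                continue            # back edge closes a cycle: drop it
            acyclic[f].append((g, w))
            if g not in state:
                dfs(g)
        state[f] = 2
    for f in calls:
        if f not in state:
            dfs(f)
    return acyclic

dag = break_cycles(calls)
print(dag)  # the cycle-closing edge B -> A is gone
```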

     

    Tuesday, October 02, 2007 6:31 AM