
Tools of the Trade: I Spy

Diomidis Spinellis

Knowledge is power.

—Sir Francis Bacon

The ultimate source of truth regarding a program is its execution. When a program runs, everything comes to light: correctness, CPU and memory utilization, even interactions with buggy libraries, operating systems, and hardware. Yet this source of truth is also fleeting, rushing into oblivion to the tune of billions of instructions per second. Worse, capturing that truth can be a tricky, tortuous, or downright treacherous affair.

Often, to be able to peek into a program's operation, we need to prepare a special version of it: compile it with specific flags or options, link it with appropriate libraries, or run it with suitable arguments. Often we can't easily reproduce a problem, so our carefully crafted program version must be shipped to a customer, where we wait for the problem to appear again. Irritatingly, some of the ways we instrument programs make them too slow for production use, or obfuscate and hide the original problem.

A family of tools ...

There's no shortage of techniques for spying on a program. If we care about CPU utilization, we can run our program under a statistical profiler that interrupts its operation many times a second and notes where the program spends most of its time. Alternatively, we can arrange for the compiler or runtime system to plant code that records a timestamp at the beginning and end of each function, and examine the difference between the two points. In extreme cases we can even have the compiler instrument each basic block of our code with a counter. Some tools that use these approaches are gprof and gcov under Unix, EJP and the Eclipse and NetBeans profiler plugins for Java programs, and NProf and the CLR Profiler for .NET code. Memory utilization monitors typically modify the runtime system's memory allocator to keep track of our allocations. Valgrind under Unix and the Java SDK's JConsole are two players in this category.
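As a concrete sketch of the first two approaches (the flags below are the GNU toolchain's, and prog stands in for any C program of ours), a typical gprof session compiles the program with profiling instrumentation, runs it to produce the gmon.out data file, and then formats the collected results:

gcc -pg -o prog prog.c   # instrument the program for profiling
./prog                   # a normal run; writes gmon.out on exit
gprof prog gmon.out      # print the flat profile and call graph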

When we try to locate a bug, the approach we use involves either inserting logging statements at key locations in our program, or running the code under a debugger, which allows us to dynamically insert breakpoint instructions. We discussed both approaches in the May/June 2006 column.

Nowadays, however, most performance problems and quite a number of bugs involve the use of third-party libraries or interactions with the operating system. One way to resolve these issues is to look at the calls from our code to that other component. By examining the timestamp of each call, or by looking for an abnormally large number of calls, we can pinpoint performance problems. The arguments to a function can also often reveal a bug. Tools in this category include ltrace, strace, ktrace, and truss under dialects of Unix, and APISpy32 or TracePlus under Windows. These tools typically work by using special APIs or code-patching techniques to hook themselves between our program and its external interfaces.
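As a small example (assuming a Linux system with strace installed, and with prog again a stand-in name), the following invocation runs our program and, instead of listing every call, prints a summary table with each system call's invocation count, error count, and cumulative time:

strace -c ./prog

A suspiciously popular or slow entry in that table is often the first hint of where the real problem lies.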

Finally, there are cases where our program works fine, but the operating system acts up. In these cases we must put a stethoscope on the operating system's chest to see what is going on. Fortunately, modern operating systems zealously monitor their own operation, and expose the resulting figures through tools like vmstat, netstat, and iostat under Unix, or the Event Tracing for Windows framework.
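For instance (the five-second interval below is an arbitrary choice), the following standard Unix command prints a line of virtual memory, scheduling, and CPU statistics every five seconds, letting us watch for excessive paging or a growing run queue while our program misbehaves:

vmstat 5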

Most of the tools we've examined so far have been around for ages, and they can be valuable for solving a problem once we've located its approximate cause. They also have a number of drawbacks: they often require us to take special actions to monitor our code, they can degrade the performance of our system, their interfaces are idiosyncratic and incompatible with one another, each one shows us only a small part of the overall picture, and sometimes important details are simply missing.

... and one tool to rule them all

The gold winner of The Wall Street Journal's 2006 Technology Innovation Awards contest was a tool that addresses all the shortcomings I outlined. DTrace, Sun's dynamic tracing framework, provides uniform mechanisms for spying comprehensively and unobtrusively on the operating system, application servers, runtime environments, libraries, and application programs. It is available in open-source form under Sun's fairly liberal Common Development and Distribution License. At the time of writing, DTrace is part of Sun's Solaris 10, and it is also being ported to Apple's Mac OS X 10.5 and to FreeBSD. If you don't have access to DTrace, an easy way to experiment with it is to install a freely downloadable version of Solaris Express on an unused x86 machine. I must warn you, however, that I've found DTrace to be seriously addictive.

Unsurprisingly, DTrace is not a summer holiday hack. The three Sun engineers behind it worked for a number of years to develop mechanisms for safely instrumenting all operating system kernel functions, any dynamically linked library, any application program function or specific CPU instruction, and the Java virtual machine. They also developed a safe interpreted language that we can use to write sophisticated tracing scripts without damaging the operating system's functioning, and aggregating functions that can summarize traced data in a scalable way without excessive memory overhead. DTrace integrates technologies and wizardry from most existing tools and some notable interpreted languages to provide an all-encompassing platform for program tracing.

We typically use the DTrace framework through the dtrace command-line tool. To this tool we feed scripts written in a domain-specific language called D; dtrace installs the traces we've specified, executes our program, and prints its results. D programs can be very simple: they consist of pattern/action pairs like those found in the awk and sed Unix tools and many declarative languages. A pattern specifies a probe, an event we want to monitor; it can be qualified with a predicate, a Boolean condition that must hold for the action to run. DTrace comes with thousands of pre-defined probes (49,979 on my system). In addition, system programs (like application servers and runtime environments) can define their own probes, and we can also set a probe anywhere we want in a program or in a dynamically linked library. For example, the command

dtrace -n syscall:::entry
will install a probe at the entry point of every operating system call; the (default) action is to print the name of each system call executed and the ID of the calling process. We can combine predicates and other variables using Boolean operators to specify more complex tracing conditions.
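As a sketch of such a condition (the process name firefox and the 4096-byte threshold are arbitrary illustrative values, not examples from the column), the following one-liner traces read system calls only when they're issued by that process and request more than 4KB:

dtrace -n 'syscall::read:entry /execname == "firefox" && arg2 > 4096/ { trace(arg2); }'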

The name syscall in the previous invocation specifies a provider, a module offering a set of probes. Predictably, the syscall provider offers probes for tracing operating system calls: 463 probes on my system. syscall::open:entry is one of these probes, marking the entry point to the open system call. DTrace contains dozens of providers, giving access to statistical profiling, all kernel functions, locks, system calls, device drivers, input and output events, process creation and termination, the network stack's management information bases (MIBs), the scheduler, virtual memory operations, user program functions and arbitrary code locations, synchronization primitives, kernel statistics, and Java virtual machine operations.
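We don't have to remember all these names: the dtrace command can list available probes rather than enable them. For example, the following invocation (the -l and -P flags are part of the standard tool) lists every probe the syscall provider offers:

dtrace -l -P syscall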

Together with each pattern we can define an action, which specifies what DTrace will do when the corresponding probe fires and its predicate, if any, is satisfied. For example, the following command

dtrace -n 'syscall::open:entry {trace(copyinstr(arg0));}'
will list the name of each opened file.

Actions can be arbitrarily complex: they can set global or thread-local variables, store data in associative arrays, and aggregate data with functions like count, min, max, avg, and quantize. For instance, the following program will summarize the number of times each process gets executed over the lifetime of the dtrace invocation.

proc:::exec-success { @proc[execname] = count(); }
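To give a flavor of what these features buy us, here is a slightly longer sketch of my own (not one of the column's examples) that pairs a thread-local variable with the quantize aggregating function to build a latency histogram of read system calls:

syscall::read:entry
{
	/* Remember when this thread entered read */
	self->ts = timestamp;
}

syscall::read:return
/self->ts/
{
	/* Aggregate the elapsed nanoseconds into a power-of-two histogram */
	@time["read latency (ns)"] = quantize(timestamp - self->ts);
	self->ts = 0;
}

Saved as read.d and run with dtrace -s read.d, the script prints its histogram when we stop it with Ctrl-C.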

In typical use, DTrace scripts span the space from one-liners, like the ones we saw, to scripts of tens of lines containing multiple pattern/action pairs. A particularly impressive example listed on a DTrace blog illustrates the call sequence from a Java method invocation, through the C libraries and an operating system call, down to the operating system's device drivers. As software engineers we've spent a lot of effort creating abstractions and building walls around our systems; more impressively, it seems we've also found ways to examine our isolated blocks holistically, as one system.