Section: Scientific Foundations
Visual Graph Mining
Visually mining data requires astutely combining data analysis with visual graphics and interaction. Mining itself draws not only on statistics but on a mixture of mathematical rigor and heuristic procedures. As David Hand puts it:
“To many, the essence of data mining is the possibility of serendipitous discovery of unsuspected but valuable information. This means the process is essentially exploratory.”
Hand's perspective shows that visualization has much to share with data mining, because visualization often serves as an aid to exploratory analysis. The analysis task we are concerned with, however, differs from that conducted by data miners: we seek to produce readable and interactive visualizations rather than to reach reasonable, arguable and final conclusions on the data. The perspective to adopt combines (semi-)automated data processing with human analytical and perceptual capabilities. Although it relies on technology, the analysis task remains under the full control of the human user. The NVAC research agenda clearly states:
“[The] analysis process requires human judgment to make the best possible evaluation of incomplete, inconsistent, and potentially deceptive information [...]”
later calling for the development of
“[...] visually based methods to support the entire analytic reasoning process, [...]”
That is, in ideal cases the visualization should be designed not only to assist the analysis but also to actively contribute to its progress. Visualization thus appears as a multi-disciplinary field embracing a large spectrum of competencies. This partly comes from the need to cover all the processes involved in the so-called “visualization pipeline” (Fig. 1).
A decade ago, Ben Shneiderman (professor in the Department of Computer Science, and Founding Director (1983-2000) of the Human-Computer Interaction Laboratory at the University of Maryland, USA), who did much to help Information Visualization gain scientific visibility, suggested that visualization scenarios should obey his now celebrated mantra: “Overview first, zoom and filter, then details on demand”. The pipeline is coherent with Shneiderman's mantra, which provides an excellent framework applying to almost any visualization environment. The back arrows in Fig. 1 correspond to the user interacting with the view, asking for details or zooming in on a particular subset of the data.
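Shneiderman's mantra can be read as an interaction protocol over a dataset. The following minimal sketch renders it as three operations on a node-link dataset; the names (`GraphView`, the node attributes) are purely illustrative assumptions, not taken from any real library or from the systems discussed here.

```python
class GraphView:
    """Hypothetical view object following Shneiderman's mantra."""

    def __init__(self, nodes):
        self.nodes = nodes            # the full dataset
        self.visible = list(nodes)    # what the current view shows

    def overview(self):
        """Overview first: show every node."""
        self.visible = list(self.nodes)
        return self.visible

    def zoom_and_filter(self, predicate):
        """Zoom and filter: restrict the view to a subset of interest."""
        self.visible = [n for n in self.visible if predicate(n)]
        return self.visible

    def details_on_demand(self, node_id):
        """Details on demand: fetch full attributes of one selected node."""
        return next(n for n in self.visible if n["id"] == node_id)


# Toy usage: ten nodes with a synthetic "degree" attribute.
nodes = [{"id": i, "degree": i % 5} for i in range(10)]
view = GraphView(nodes)
view.overview()                                        # all ten nodes
hubs = view.zoom_and_filter(lambda n: n["degree"] >= 3)  # keep high-degree nodes
detail = view.details_on_demand(hubs[0]["id"])           # inspect one of them
```

The back arrows of the pipeline correspond here to calling `overview` again, or chaining further `zoom_and_filter` predicates, after inspecting details.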
Daniel Keim has recently proposed a revised mantra, shifting the focus towards data analysis (see the Event Summary of the Workshop on Visual Analytics held at Konstanz University in June 2005: http://infovis.uni-konstanz.de/index.php?region=events&event=VisAnalyticsWs05):
Analyse First - Show the Important - Zoom, Filter and Analyse Further - Details on Demand
Keim's mantra is closer to our perspective: merging graph mining with visualization results in effective visual analytics for relational data. The visualization process, however, is not the linear one that a plain reading of the mantras and the pipeline might suggest. The analyst explores cyclically, iterating through Shneiderman's and Keim's analysis/overview/zoom/details process. This is what makes visualization so different from graphical statistics and presents a real challenge. The back arrows in Fig. 1 actually encapsulate a complex process through which the user gains insight into and understanding of the visualized data. A more user-centred depiction of the same visualization process is given in the NVAC document.
More recently, van Wijk suggested measuring the effectiveness and benefits of a visualization in terms of learning effort and acquired knowledge.
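Van Wijk frames this as an economic trade-off. A hedged sketch of his cost-benefit model, with the symbols recalled from his formulation and possibly simplified: for $n$ users each running $m$ sessions of length $k$,

```latex
F \;=\; n\, m\, W(\Delta K) \;-\; C_i \;-\; n\, C_u \;-\; n\, m\, C_s \;-\; n\, m\, k\, C_e
```

where $\Delta K$ is the knowledge a session yields, $W(\Delta K)$ its value, $C_i$ the initial development cost of the visualization, $C_u$ the per-user learning cost, $C_s$ the per-session cost, and $C_e$ the perceptual and exploratory cost per unit of session time. On this reading, a visualization is worthwhile when $F > 0$, i.e. when the value of the acquired knowledge outweighs the cumulative development, learning, and usage effort.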