
Background

Why Visualization?

Visualization aims to shift cognitive processing load to the perceptual system.

Visualization engages the primary human sensory apparatus, vision, and the processing power of the human mind.

Shifting work to the visual system takes advantage of its special features, especially its powerful cognitive performance.

Visualization of data, information and concepts is becoming increasingly important. The fast-growing computer manufacturer Silicon Graphics Inc. has made 'Visual Computing' its main mission.

Arnheim says in [4], p. 13: ''There is no basic difference in this respect between what happens when a person looks at the world directly and when he sits with his eyes closed and 'thinks'.'' When seeing is thinking, visualization does not need to end in a concrete image!

Benefiting from a shift from one domain to another is common and useful in general. Metaphors are an example of a shift into a domain that is often visual: they are everybody's visualizations, mappings from a problem domain into a pseudo-visual domain. The extensive use of pictures in language is an everyday example of using the visual system in its broadest sense: without becoming as concrete as a real picture, it is visual thinking. Concepts are often mapped to the visual domain, using its vocabulary of shapes, colors, motion and all visually perceived attributes.

The auditory perceptual system is included in this approach by using computers to map data to sound; this area is called Auditory Display or Sonification. Here the ability of the auditory system to perceive parallel structures in time is exploited.

Requirements

Because of the intra-media character of this framework, it shares at least the requirements of multimedia frameworks.

In [1], pp. 11-12, Ackermann gives a list of requirements and problems of a multimedia framework. I have added my annotations in normal typeface to show how these items relate to this project:

Software libraries that support the development of multimedia applications have to provide structures and functions that meet the following requirements:

Additionally, there are problems that complicate the development of multimedia software systems:

Related Work

This section gives a short overview of related systems.

Open Inventor Applications and Extensions

The following projects are applications or extensions of Open Inventor:

ChemKit - Molecular Visualization

ChemKit is an extension of the standard Open Inventor library on SGI workstations. It is similar to the tsKit but differs in its subject of visualization: ChemKit provides specialized nodes for storing and visualizing molecular data. ChemKit is the source of many of my ideas for extending Open Inventor.

Figure 3.1: A screenshot of a standard Inventor viewer showing a scene graph with ChemKit nodes for molecular data and a dialog for editing visualization parameters.

Soft Primitives - Modeling Organic Forms

 

Modeling Organic Forms Using Soft Primitives
Thesis of Scott Peterson, Cal Poly State University, San Luis Obispo, USA.
See http://macabre.lib.calpoly.edu/projects/csc/Peterson_Scott_Brandon/contents.html

From the abstract:

''A library of tools, dubbed Softies, has been implemented for the Open Inventor 2.1 architecture. When combined with the transformations and manipulators provided by the Open Inventor architecture, these elements become powerful organic modeling tools.''

VrTool - A Virtual Reality Developers Toolkit

Virtual Environment Technology Laboratory at the University of Houston, USA.
See http://www.vetl.uh.edu/~lincom/VrTool/vrtool.html

From the abstract:

''VrTool is the newest Open Inventor Virtual Reality toolkit to provide a rapid prototyping capability to enable VR users to quickly get their application running with the minimum amount of effort.''

MET++ - A Multimedia Application Framework

In his works [1], [2] and [3], Philipp Ackermann describes an object-oriented multimedia application framework called MET++. This framework handles the synchronization, viewing and editing of time-dependent data (audio, music, 2D and 3D graphics animation, video). It is built on top of the application framework ET++ [9]. Although it is platform and window system independent (Silicon Graphics, Hewlett-Packard, Sun, Linux), freely available and offers interesting multimedia building blocks, there are two reasons against using this framework for this thesis. First, being a research project, the available version was not stable and far from efficient in terms of performance. Second, being an application framework rather than a visualization framework, the complete ET++ and MET++ application infrastructure (including a complete GUI and window system layer) would have to be adopted. This would make my work hard to reuse and integrate in other systems. Nevertheless, the novel direct manipulation interaction on temporal structures in the so-called Time Composition View and Event Graph (see fig. 3.2) is an interesting concept that could be taken up in future work.

Figure 3.2: The MET++ event graph viewer.

MAX - A Graphical Music-Programming Environment

 

MAX is a graphical programming environment for event and signal processing. The building blocks of a MAX application are objects with typed input and output connectors. Each object implements the mapping from its input to its output connections. MAX offers an editor for patching these objects and their connections while the application is running. MAX schedules the processing so that the programmer can use it like a parallel machine.

MAX is heavily used in the area of musical signal processing and serves as a system controller for live performances and installations. See [16] or the Web site of the manufacturer Opcode Systems.

Open Inventor's concept of engines that can be created and patched at run time is very similar to MAX, except that there is no editor for patching engines (and nodes) in Open Inventor.
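
To illustrate this, here is a minimal sketch (my own example, not taken from any of the systems discussed here) that patches a standard SoElapsedTime engine into the angle field of an SoRotationXYZ node, so that the cube below it rotates continuously. In MAX terms, the engine output plays the role of an output connector and the node field that of an input connector:

#include <Inventor/SoDB.h>
#include <Inventor/nodes/SoSeparator.h>
#include <Inventor/nodes/SoRotationXYZ.h>
#include <Inventor/nodes/SoCube.h>
#include <Inventor/engines/SoElapsedTime.h>

int main()
{
    SoDB::init();                          // initialize the Inventor database

    // Build a small scene graph: a rotation node followed by a cube.
    SoSeparator   *root = new SoSeparator;
    SoRotationXYZ *rot  = new SoRotationXYZ;
    SoCube        *cube = new SoCube;
    root->ref();
    root->addChild(rot);
    root->addChild(cube);

    // Create an engine and patch its output into the rotation angle.
    // The connection stays active: whenever timeOut changes, angle follows.
    SoElapsedTime *timer = new SoElapsedTime;
    rot->axis = SoRotationXYZ::Y;
    rot->angle.connectFrom(&timer->timeOut);

    // ... hand 'root' to a viewer or render action as usual ...

    root->unref();
    return 0;
}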

VRML2.0 - Virtual Reality Modeling Language

VRML stands for Virtual Reality Modeling Language and is a description language for 3D worlds, much as HTML (Hypertext Markup Language) is a markup language for hypertext. VRML2.0 extends VRML1.0 with an event model that allows scripting with Java. VRML1.0 was essentially a file format for describing static, non-interactive 3D scenes; in fact, it was a subset of the Open Inventor file format.

VRML2.0's architecture is very similar to that of Open Inventor and shares the concept of a scene graph whose nodes store the relevant data in typed members called fields, which can be connected. It comes as an interpreter for VRML files, optionally containing Java scripts, and is platform independent.

VRML offers extensive support for 3D graphics, limited support for audio, and no support for video at all. Implementing parts of a VRML application as native-code extensions (C/C++ plug-ins) to circumvent limitations in performance or system integration (by using other libraries) defeats its advantage of being platform independent.

IRIS Explorer - A Scientific Visualization Toolkit

IRIS Explorer is an application creation system and user environment that provides visualization and analysis functionality for computational scientists, engineers, and other investigators. Internally, IRIS Explorer makes extensive use of Open Inventor. It is especially useful for those whose needs are not met by commercial software packages. In addition, IRIS Explorer's graphical user interface (GUI) allows users to build custom applications without having to write a single line of code.

Explorer is a system for creating visualization maps, each of which comprises a series of small software tools, called modules. A map is a collection of modules that carries out a series of related operations on a dataset and produces a visual representation of the result.

See [13] and [14] or the IRIS Explorer Center for detailed information.

'When Timbre Comes Apart'

This work by Jøran Rudi, Norwegian Network for Technology, Acoustics and Music (NoTAM), University of Oslo, used IRIS Explorer for visualizing sounds in an art work.

See http://www.notam.uio.no/~joranru/wtca.html for a paper and images.

Thesis: A Real-Time 3D Signal Analysis/Synthesis Tool Based on the Short Time Fourier Transform

Alan Wesley Peevers, Master's thesis, Department of Electrical Engineering, University of California, Berkeley.

See: http://www.CNMAT.Berkeley.EDU/~alan/MS-html/MSv2_ToC.html

From the Introduction:

This paper describes a system for audio analysis, modification, and synthesis, based on the Short Time Fourier Transform (STFT). The system is intended both as a tool for sound manipulation, and as a means to reinforce people's intuitions regarding the relationships between timbre and the harmonic structure of music and other audio signals, as conveyed via their spectrograms. This is done by creating a 3D spectrogram which shows a sound's harmonic structure in great detail as it is sampled. Similar systems in the past (for example, David Shipman's SPIRE system) have often sought to convey harmonic structure via two-dimensional spectrograms, sometimes in conjunction with wave form or other displays. By adding the third dimension (amplitude in dB mapped to surface height), it is hoped that a greater apprehension of the detailed structure can be achieved.

Figure 3.3: Image from [19], Copyright 1995, UC Regents, University of California, Berkeley. All Rights Reserved.

Peevers' work is related to this thesis in its use of 3D graphics (OpenGL) to visualize spectral audio data. His work stresses performance and is not designed as an open framework.
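
As a sketch of the mapping described in the quoted introduction (the notation below is mine, not taken from [19]): with an analysis window $w[n]$ of length $N$ and hop size $H$, the short-time spectrum of the sampled signal $x[n]$ is

\[ X(m,k) = \sum_{n=0}^{N-1} x[n + mH]\, w[n]\, e^{-j 2\pi k n / N}, \]

and the height of the spectrogram surface at time frame $m$ and frequency bin $k$ is the magnitude in decibels,

\[ h(m,k) = 20 \log_{10} |X(m,k)|. \]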

