next up previous contents
Next: Results Up: No Title Previous: System Design

Conclusions

At the end of this thesis, most goals of the project have been reached. The concepts were formulated and implemented in a running prototype, proving their viability. The work is a good foundation for further development cycles.

The following subjects are covered conceptually but have not yet been implemented:

To summarize the experience gained during the project, I observed a shift in the cost of development. This work was the first time I did intensive white-box reuse of the software and design of a framework, Open Inventor. The time saved by reusing software I spent instead on figuring out how to map the problem onto the given concepts and mechanisms most efficiently, and on debugging my code. The more complexity one imports into a project, the more difficult it becomes to handle its implications. This problem matters more when reusing the architecture and collaboration mechanisms of a framework than when using localized functionality by calling functions of a library. It takes a long time to build robust class libraries for basic data types that are generic enough for practice.

I am still fascinated by the idea of Open Inventor's object-oriented software technology, the way Open Inventor [1] "[...] separates the generic and the specific parts of a solution and structures the generic parts as collaborating objects". But I see a demand for improvements in the user interface of frameworks. A framework can have very complex contracts and protocols between objects, which constitute its semantics. A new object class can easily violate these semantics, because the framework user can hardly know all of their constraints. I think frameworks suffer from the limited means they have to verify whether the user satisfies the contracts of the framework's semantics.

Although there is a debug version of the Open Inventor library (a dynamic shared object, DSO) that has additional verifications built in, it turned out that this mechanism is no protection against runtime errors that could only be located or worked around with external background knowledge.

I follow Ackermann in [1], p. 44:

"In a framework, typical parts of a problem domain are modeled as abstract classes. The abstract classes operate with each other, which means the framework defines the control flow. Developers who use frameworks need to know where to add their application-specific extensions, which class must be derived, and which method should be overridden so that the control flow of the framework calls them. Otherwise, they will not benefit from reuse and will implement their own solution from scratch. Unfortunately, the discussion of frameworks is complicated by the fact that no really good documentation techniques have been found for them, something that makes it difficult to learn a new framework [...]. Design patterns are a first attempt to inform programmers about the abstractions used in a framework."

See [9] or the 'Patterns Home Page' for more about design patterns.

It seems that frameworks need a semantic check of their use, not only a syntactic check at the level of the programming language (C/C++).
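A limited semantic check can at least be approximated at runtime. As a minimal sketch (not part of this work; all class and method names are hypothetical), a framework base class can verify the common "call super" contract, i.e. that an overriding hook chained to the base implementation:

```cpp
#include <iostream>

// Hypothetical framework base class. The framework's contract says every
// override of doTraverse() must chain to Node::doTraverse().
class Node {
public:
    // Template method: runs the hook, then verifies the contract.
    bool traverse() {
        baseCalled_ = false;
        doTraverse();
        if (!baseCalled_)
            std::cerr << "contract violation: doTraverse() must call "
                         "Node::doTraverse()\n";
        return baseCalled_;
    }
protected:
    virtual void doTraverse() { baseCalled_ = true; }
private:
    bool baseCalled_ = false;
};

// A user-supplied subclass that forgets to chain to the base class.
class BrokenNode : public Node {
protected:
    void doTraverse() override { /* forgot Node::doTraverse() */ }
};

// A subclass that honors the contract.
class GoodNode : public Node {
protected:
    void doTraverse() override { Node::doTraverse(); }
};
```

A real framework would check richer invariants, but even this simple flag catches a whole class of silent contract violations at the first traversal instead of much later.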

There are two ways in which I wish the extension of Open Inventor, i.e. deriving new classes, were better supported:

  1. An explicit description of the abstractions, e.g. in terms of design patterns [9]. I needed a long time to factor out the generic abstractions in order to extend the problem domain to non-graphic data and behavior.
  2. Example code for all Open Inventor classes. The source code of the standard classes would be the best demo code, because their behavior is well known and they make realistic use of the complete framework infrastructure. The examples given for Open Inventor extensions in [24] mostly deal with very simple cases only. I could find useful example code in the work mentioned in section 3.3.1, an Open Inventor extension done as a master's thesis.

     The source code of a framework's classes is still the most complete documentation of its usage. However, source code is unhandy for documentation purposes because it requires a detailed understanding of all mechanisms, whether they are important for the application or not.

An indispensable help is the newsgroup comp.graphics.api.inventor. I think it is not possible to extend Inventor without this source of tips and discussion.

Limitations, Known Bugs and Future Work

 

This section summarizes all known bugs, limitations and ideas for future work. Because all bugs are limitations, and all limitations are possible subjects for future work, they are all listed in one section.

Limitations and Known Bugs

Normal-Field:
The tsNormal node is implemented with a field of type SoSFUInt32 to store a vector of four signed bytes (although only three bytes are used). Future work should implement a new field class for this data, called SoSFVec4b. Because the user normally never has to deal with normals directly, this fact stays hidden.
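The proposed SoSFVec4b does not exist yet; what storing four signed bytes in an SoSFUInt32 amounts to is simple bit packing. A minimal sketch (the function names are mine, not tsKit's):

```cpp
#include <cstdint>

// Pack three signed-byte normal components into one 32-bit word,
// as storing them in an SoSFUInt32 field implies. The fourth byte
// stays unused (zero), matching the 4-byte layout described above.
uint32_t packNormal(int8_t x, int8_t y, int8_t z) {
    return  (uint32_t)(uint8_t)x
         | ((uint32_t)(uint8_t)y << 8)
         | ((uint32_t)(uint8_t)z << 16);
}

// Recover the signed components from the packed word.
void unpackNormal(uint32_t packed, int8_t& x, int8_t& y, int8_t& z) {
    x = (int8_t)( packed        & 0xff);
    y = (int8_t)((packed >>  8) & 0xff);
    z = (int8_t)((packed >> 16) & 0xff);
}
```

A dedicated SoSFVec4b field class would hide exactly this packing behind a typed interface.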

tsLine:
When audio data is subsampled, the rendered period has the wrong length.

Antialiasing:
tsRenderer does not render with antialiasing enabled! An Inventor bug?

Render-Caching of tsSurface:
The tsSurface is either never or always cached. The problem lies in the tsTimeElement.

Camera's near-distance:
There must be a camera in the .iv file with its 'nearDistance' field set to 0.0; otherwise parts of the model are clipped (the default setting is 1.0)!
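Concretely, this workaround means putting an explicit camera at the top of the .iv file and setting its nearDistance field, e.g. (a minimal Inventor-file fragment; the model nodes follow as usual):

```
#Inventor V2.1 ascii
Separator {
  PerspectiveCamera {
    nearDistance 0.0   # default is 1.0, which clips parts of the model
  }
  # ... model nodes ...
}
```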

Conversion to standard file format:
There are several converters from the standard Open Inventor file format to many CAD and raytracing systems. To use them, the tsKit-specific nodes must be converted to a representation using only standard Open Inventor nodes.

Future Work

Algorithm Editing at Runtime:
The algorithms of nodes or engines cannot be changed at runtime. Because they are written in C++, they must be compiled and linked between editing and usage. With the dynamic linker (rld), objects can be reloaded and linked at runtime. Using dynamic linking, either a) all ts nodes and engines could be reloaded, or b) only special classes.
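The IRIX loader rld is named above; in portable terms the same idea can be sketched with the POSIX dlopen() interface. A minimal sketch (the library path, symbol name and algorithm signature are hypothetical):

```cpp
#include <dlfcn.h>
#include <iostream>

// Hypothetical signature of a reloadable ts algorithm.
typedef void (*TsAlgorithm)(float* samples, int n);

// Try to (re)load an algorithm from a shared object at runtime.
// Returns the function pointer, or nullptr if loading fails, in which
// case the caller falls back to the built-in algorithm.
TsAlgorithm loadAlgorithm(const char* soPath, const char* symbol) {
    void* handle = dlopen(soPath, RTLD_NOW);
    if (!handle) {
        std::cerr << "dlopen failed: " << dlerror() << "\n";
        return nullptr;
    }
    return (TsAlgorithm)dlsym(handle, symbol);
}
```

Scheme a) would reload one shared object containing all ts classes; scheme b) would call this per class.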

Automatic Layout:
The temporal and spatial layout of feedback nodes could be controlled automatically for a group of nodes. New nodes derived from SoGroupNode could automatically control the feedback dimensions space and time by controlling the properties (SoTranslation and tsTime) of their children. There could be nodes that sequence their children or render them in parallel in space or time.

In combination with editors for these layout nodes, tsKit would become a general sequencer and authoring tool for time signals.
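The temporal half of such a layout node boils down to computing start offsets for its children. A small Inventor-independent sketch (names illustrative) of sequential layout:

```cpp
#include <vector>

// Given the duration of each child, compute the start time of each child
// so that they play strictly one after another (sequential temporal
// layout). A layout group node could write these offsets into its
// children's time properties; parallel layout would simply give every
// child the start time 0.
std::vector<double> sequenceStartTimes(const std::vector<double>& durations) {
    std::vector<double> starts;
    starts.reserve(durations.size());
    double t = 0.0;
    for (double d : durations) {
        starts.push_back(t);   // child i starts when its predecessors finish
        t += d;
    }
    return starts;
}
```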

Using SoComplexity:
Feedback nodes should take the property SoComplexity into account.

Two-dimensional Rendering:
It might be useful to have graphical feedback nodes that render only a two-dimensional shape of a signal. Signals represented in a tsData2 node could be rendered as a simple line. Combined with the next extension, they could be viewed in a SoXtPlaneViewer, which lets the user view a scene only orthogonally to the X-Y plane.

Viewer-Nodes:
It might be useful to view a part of the scene graph in a separate viewer, in parallel to viewing it within the complete scene graph. To allow this feature through scene graph layout, a new (group) node tsViewer could be created that builds a viewer to view the subsequent subtree. A field could contain the kind of viewer to be used.

Using Overlay Planes:
Viewers can render a second scene graph overlaid on the normal scene graph. Its background is transparent. This scene graph is limited in its use of colors and should be very simple. The overlay scene graph can be used to show data like the current time and speed, or simple two-dimensional displays of signals (see the item 'Two-dimensional Rendering' above). A part of the scene graph can be marked with a label node (class SoLabel) as the overlay scene graph.

Extending gview with engines:
It should be possible to create and delete engines in gview the same way it is now possible for nodes. They should also be visualized as icons, like nodes, in a schematic scene graph. Those input fields of engines that are not connected to other fields should be editable like node fields. This way engines could be configured easily.

Time Cursor:
Feedback nodes normally visualize an interval of time. Therefore they have to mark where the current point in time is visualized. The current point in time is determined by the field 'time' of the tsTime node and is audible when audio feedback is generated with a node of class tsAudio. There could be a new node/element combination storing the way the current point in time is visualized: a node tsTimeCursor with a field 'mode' of type SFEnum, and an element named tsTimeCursorElement.
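Independent of the chosen cursor mode, an implementation must first map the current time into the visualized interval. A minimal sketch (the function and parameter names are mine):

```cpp
#include <algorithm>

// Map the current time onto a normalized cursor position in [0, 1]
// within the interval [intervalStart, intervalStart + intervalLength]
// that a feedback node visualizes. The result is clamped so the cursor
// stays on the rendered shape even when the time lies outside.
double cursorPosition(double time, double intervalStart,
                      double intervalLength) {
    if (intervalLength <= 0.0)
        return 0.0;                       // degenerate interval
    double u = (time - intervalStart) / intervalLength;
    return std::min(1.0, std::max(0.0, u));
}
```

A tsTimeCursor node would translate this normalized position into geometry according to its 'mode' field (e.g. a marker line or a highlighted region).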

Video and Image Processing:
A new node type tsVideo rendering a sequence of frames could be introduced: tsDataImage and its file version tsDataImageFile would store a sequence of frames in memory. tsDataImageVideoIn could contain the current frame from a video source. Because movie file images are very memory intensive, only a couple of frames around the current point in time (tsTime.time) are held in memory. A feedback node tsVideo shows the image stream in a window and/or sends it to video out.

New engines (tsImageProcessingEngine) could take one or more nodes of type tsDataImage and apply various kinds of image processing to them. The algorithm is described as a chain of ImageVision or OpenGL operators; OpenGL operators can be processed in real time. The chain is described in an input field of type string. For convenience, this field could be edited in an SoXtComponent SoXtImageProcessingChainEditor or read from a file. The editor could use ImageLab as a comfortable environment for editing image processing chains.
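Reading such a chain out of the string field is straightforward. A minimal sketch, assuming a simple separator syntax like "blur|sharpen|invert" (the concrete syntax is my assumption, not specified here):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Split an operator-chain description such as "blur|sharpen|invert"
// into individual operator names. Whatever executes the chain
// (ImageVision or OpenGL operators) would then look each name up in
// an operator table.
std::vector<std::string> parseChain(const std::string& chain,
                                    char sep = '|') {
    std::vector<std::string> ops;
    std::stringstream in(chain);
    std::string op;
    while (std::getline(in, op, sep))
        if (!op.empty())
            ops.push_back(op);
    return ops;
}
```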

Once the image engine has been evaluated off real-time, the result in the output field can be played back in real time. There could be a separate action SoImageRenderAction for rendering all nodes related to images.

Generalized Audio Rendering:
For audio there could be an extra action AudioRenderAction for rendering different kinds of audio representation nodes as sounds (this would replace the tsAudio node!). Here are possible new nodes for storing audio that would be rendered by the audio render action:

tsDataTracks:
This node contains tracks describing the frequencies and amplitudes of sinusoidal partials of a sound over time. The real-time additive synthesizer softcast, developed at CNMAT, is used to render this handy representation of audio. A new feedback node tsTracks renders this data to a graphical representation, e.g. as multiple line strips.

tsDataResonance:
This node contains resonance models. This data is also synthesized using the softcast synthesizer mentioned above.

tsDataMIDI:
This node contains MIDI data. This data is rendered by the SGI's built-in MIDI synthesizer or sent to connected synthesizers.

In combination with new manipulators for these nodes, tsKit would become a 3D editor for audio.

Appendix: Results



Andreas Luecke
Mon Sep 15 10:08:08 PDT 1997