Hi Daniel,
This is a significant issue for any visualization program that needs to display
large datasets. Depending on the structure of your data, there are a variety of
ways you could address it.
For example, my VisBio application deals with large collections of images. To
keep browsing smooth and to scale to extremely large data, it creates a
collection of thumbnails and displays the currently visible image(s) at low
resolution while the user is browsing quickly. When the user goes idle, it
reads the full-resolution data from disk and displays that instead. When the
user starts browsing again, any full-resolution data is dumped from memory.
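The pattern is simple enough to sketch in a few lines of Java. Everything here
(the class, method, and field names) is made up for illustration rather than
taken from VisBio; the only real pieces are java.util.Timer and TimerTask:

  import java.util.Timer;
  import java.util.TimerTask;

  public class IdleLoader {
    private final Timer timer = new Timer(true); // daemon thread for idle loads
    private TimerTask pending;                   // full-res load waiting to fire

    // Call this whenever the user browses to a new image.
    public synchronized void browseTo(final int index) {
      displayThumbnail(index);                // cheap: thumbnails stay in memory
      if (pending != null) pending.cancel();  // still browsing; postpone full res
      pending = new TimerTask() {
        public void run() { displayFullRes(loadFullRes(index)); }
      };
      timer.schedule(pending, 500);           // treat 500 ms of no browsing as idle
    }

    // Placeholder hooks; a real program would build and show VisAD data here.
    private void displayThumbnail(int index) { }
    private float[][] loadFullRes(int index) { return null; } // read from disk
    private void displayFullRes(float[][] samples) { }
  }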
If your data can be expressed as FlatFields, you could check out
visad.data.FileFlatField, which conserves memory by performing disk caching for
large FlatFields.
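If you go that route, the usage looks roughly like this. I'm writing the
constructor from memory, so please double-check it against the
visad.data.FileFlatField and FileAccessor javadoc before relying on it:

  import visad.FlatField;
  import visad.data.CacheStrategy;
  import visad.data.FileAccessor;
  import visad.data.FileFlatField;

  public class CachedFieldSketch {
    // Wrap an accessor for your file format in a FileFlatField; the sample
    // values then live on disk and are read in (and later dumped) by the cache.
    public static FlatField makeCached(FileAccessor accessor) {
      return new FileFlatField(accessor, new CacheStrategy());
    }
  }

The interesting work is in writing a FileAccessor subclass that knows how to
read your format's samples on demand.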
To read only part of a data object from a large data file, some file adapters
implement visad.data.FormBlockReader, which breaks a data file into logical
blocks. The most common examples are adapters such as visad.data.tiff.TiffForm,
where each TIFF file can contain multiple pages (i.e., images), with each block
corresponding to one image.
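For instance, pulling in one TIFF page at a time looks something like the
following. The file name is hypothetical, and the method names (getBlockCount,
open, close) are from the FormBlockReader interface as I remember it, so verify
them against the javadoc:

  import visad.DataImpl;
  import visad.data.tiff.TiffForm;

  public class PageByPage {
    public static void main(String[] args) throws Exception {
      TiffForm tiff = new TiffForm();
      String id = "multipage.tif";          // hypothetical multi-page TIFF
      int pages = tiff.getBlockCount(id);   // number of logical blocks (pages)
      for (int i = 0; i < pages; i++) {
        DataImpl image = tiff.open(id, i);  // load just this one page
        // ... display or process the page, then drop the reference so it
        // can be garbage collected before the next page is read ...
      }
      tiff.close();
    }
  }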
Also, Bill has written a package called visad.cluster that allows for
distributed data objects across a cluster of machines. Maybe Bill could comment
on that in more detail if you're interested.
Of course, the dataset you described is ~100MB in size. Since VisAD stores
sample values in either 32-bit (float) or 64-bit (double) precision, that
becomes ~200MB or more in practice. Factoring in display transform logic, user
interface, etc., your program might need several hundred megabytes of memory to
run. Machines today can easily have 1GB or 2GB of memory, so your ~100MB
dataset shouldn't be a problem. However, since Java has a practical limit of
~1.5GB of usable memory on most architectures, a 500-1000MB dataset is probably
too large to visualize completely, and you'd have to start using one of the
techniques mentioned above.
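For the record, the arithmetic behind those estimates:

  64 x 64 x 24 x 576 = 56,623,104 samples
  56,623,104 x 2 bytes (16-bit, on disk)  = ~108MB
  56,623,104 x 4 bytes (float, in memory) = ~216MB

and roughly double the float figure again (~432MB) if the values end up stored
as doubles.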
In any case, visualizing that much data at once may not be useful, since it's
unlikely the screen resolution is good enough to actually display it all. Even
if it were, the human eye and brain probably can't interpret such a multitude
of data at such fine detail all at once. It would likely work just as well to
"play tricks," displaying data at low detail until the user focuses on a
specific area.
Cheers,
Curtis
At 10:33 AM 4/16/2004, Cunningham, Daniel wrote:
>Hello everyone,
>
>I am considering using VisAD to develop some neuroimaging applications. I was
>curious as to how well VisAD deals with large datasets. For example, a 16 bit
>dataset with dimensions 64 x 64 x 24 x 576 is what I consider to be large.
>
>If anybody has experience using data of this size or larger, please comment.
>
>Thanks,
>Daniel Cunningham