2B0ST0N6 day four

The first thing that I saw on day four was the paper "Photo Tourism: Exploring Photo Collections in 3D". It demonstrated a system that takes a collection of photos shot in the same general area (such as a town square), extracts features common across the photos, and uses those matches to estimate where the features sit in 3D space.

An application for efficiently browsing this reconstructed space was also presented. The technology can be seen in Photosynth, from Live Labs. It is an awesome piece of work, combining several different aspects of image processing and 3D rendering.

At this point I was interested in seeing a sketch session (basically a mechanism at SIGGRAPH for showing work in progress) about unstructured progressive graphics. I caught the end of an interesting-looking presentation about real-time multi-perspective rendering on graphics hardware, which seemed to combine several viewpoints of the scene dynamically to produce effects such as reflection that are difficult to recreate in real time. I have wondered about using this approach before - it seems to have the bonus of doing rendering the way that GPUs want to do it, which is a good way to make things fast.

There was also a presentation on compressing dynamically generated textures on the GPU. This was a very novel technique for essentially doing S3TC compression on the fly on the GPU to get 6:1 compression. I think that techniques like this will become very handy when DX10 is commonplace, because DX10 has more support for changing the interpretation of data on the fly, as well as integer operation support.
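
To make the 6:1 figure concrete, here is a rough CPU sketch (in Python with NumPy) of the standard S3TC/DXT1 block layout: a 4x4 block of 24-bit RGB pixels (48 bytes) gets packed into two 16-bit endpoint colours plus sixteen 2-bit palette indices (8 bytes). The endpoint choice here (brightest and darkest pixel by luminance) is a deliberate simplification, and none of the GPU-side tricks from the sketch session are reproduced.

```python
import numpy as np

def encode_dxt1_block(block):
    """Pack a 4x4 uint8 RGB block (48 bytes) into 8 bytes, DXT1-style.

    Simplified sketch: endpoints are the brightest and darkest pixels by
    luminance; real encoders search harder, and real DXT1 also uses the
    ordering of the two endpoints to select a 3- or 4-colour mode, which
    is ignored here.
    """
    pixels = block.reshape(16, 3).astype(np.float32)
    luma = pixels @ np.array([0.299, 0.587, 0.114])
    c0, c1 = pixels[luma.argmax()], pixels[luma.argmin()]
    # 4-colour palette: the two endpoints plus two interpolated colours.
    palette = np.stack([c0, c1, (2 * c0 + c1) / 3, (c0 + 2 * c1) / 3])
    # Each pixel stores the 2-bit index of its nearest palette entry.
    dists = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)

    def to_565(c):
        r, g, b = int(c[0]) >> 3, int(c[1]) >> 2, int(c[2]) >> 3
        return (r << 11) | (g << 5) | b

    bits = 0
    for i, idx in enumerate(indices):
        bits |= int(idx) << (2 * i)
    return (to_565(c0).to_bytes(2, "little") +
            to_565(c1).to_bytes(2, "little") +
            bits.to_bytes(4, "little"))

block = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
print(len(encode_dxt1_block(block)))  # 8 bytes out for 48 bytes in: 6:1
```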

I also tried to see more of the exhibition on this day. There are a lot of exhibits to cover, but I did get to spend more time looking over various things. Some interesting highlights, in a totally random order:

  • Seeing a *huge* multitude of displays all showing volumetric medical data.
  • A system for combining multiple low resolution projectors into a larger high resolution display, without having to precisely align things.
  • Lots of cool 3D printers.
  • Lots of high quality car renderings (ray tracing + modern CPU power == photorealism).
  • Many different 3D modelling packages.

After lunch there was a sketch session called "In the Shadows". I was keen to attend this because I have done a lot of reading about real-time shadows, and it is an especially hard problem. The first talk in the session presented a method of measuring which shadows are perceived more than others (using real people looking at large quantities of images) and using these metrics to decide whether to render high or low quality shadows across a scene. It was refreshing to see human perception measurement applied to the problem.

Another talk in this session covered real-time shadows using cone culling, by Louis Bavoil and Claudio T. Silva. The basic idea was to do a shadow map pass and interpret the pixels in the shadow map as spheres. The volume between each shadow-map sphere and the light can be thought of as a cone in space, and by adding up the contributions of all of the cones you get an estimate of how much of the light a point can see. The technique is fast enough to run on a GPU, and can give plausible soft shadows.
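
Since I have not implemented the paper, the Python sketch below only illustrates the "add up the contributions of the cones" step in a brute-force way: each blocker sphere subtends a cone as seen from the receiver, and its coverage of a spherical area light is accumulated and clamped. The paper's actual shadow-map-driven cone tests, overlap handling, and GPU implementation are not reproduced here.

```python
import numpy as np

def soft_occlusion(receiver, light_pos, light_radius, centers, radii):
    """Approximate occlusion of a spherical area light at a receiver point.

    Each blocker sphere subtends a cone as seen from the receiver; when
    that cone overlaps the cone subtended by the light, the blocker
    contributes the fraction of the light's solid angle it covers.
    Overlap between blockers is ignored, so intersecting spheres
    over-darken (the real technique is far more careful).
    """
    to_light = light_pos - receiver
    d_light = np.linalg.norm(to_light)
    light_dir = to_light / d_light
    a_light = np.arcsin(min(1.0, light_radius / d_light))  # light cone half-angle
    occlusion = 0.0
    for c, r in zip(centers, radii):
        to_c = c - receiver
        d = np.linalg.norm(to_c)
        if d <= r or d >= d_light:
            continue  # receiver inside the blocker, or blocker past the light
        a_blocker = np.arcsin(min(1.0, r / d))              # blocker cone half-angle
        sep = np.arccos(np.clip(np.dot(to_c / d, light_dir), -1.0, 1.0))
        if sep < a_light + a_blocker:                       # the two cones overlap
            # Solid angle of a cone with half-angle a is 2*pi*(1 - cos a).
            covered = (1.0 - np.cos(a_blocker)) / max(1e-9, 1.0 - np.cos(a_light))
            occlusion += min(1.0, covered)
    return min(1.0, occlusion)

# One blocker sphere halfway between a receiver at the origin and the light.
print(soft_occlusion(np.zeros(3), np.array([0.0, 0.0, 10.0]), 1.0,
                     [np.array([0.0, 0.0, 5.0])], [0.5]))
```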

In the afternoon there was the precomputed radiance transfer (PRT) set of papers. The variety of techniques that use PRT in some fashion seems to multiply every year. I am personally going to research this area more because of the power of the techniques. For those who are not familiar with PRT, it leverages 2D -> 1D signal transforms to approximate 2D signals (such as environment maps and radiance transfer maps) with compact 1D coefficient vectors. For those familiar with Fourier transforms, imagine taking an input signal, computing its Fourier transform, and discarding the higher-frequency contributions. In this way you can represent a lower quality version of the signal with fewer numbers than the original number of samples. The common thread in most precomputed transfer techniques is to precompute some part of this transformation, and to reverse it after doing the lighting calculation in the transformed space.
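
As a concrete illustration of the transform-and-truncate idea, the snippet below uses an ordinary Fourier transform on a 1D signal. Actual PRT work uses bases over the sphere such as spherical harmonics or wavelets, but the compression principle is the same: keep only a few coefficients and accept a lower-quality reconstruction.

```python
import numpy as np

# A 1-D "signal" standing in for, say, one row of an environment map.
n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
signal = np.sin(x) + 0.3 * np.sin(5 * x) + 0.05 * np.random.randn(n)

# Transform, keep only the k lowest-frequency coefficients, and invert.
k = 16
coeffs = np.fft.rfft(signal)           # 256 samples -> 129 complex coefficients
padded = np.zeros_like(coeffs)
padded[:k] = coeffs[:k]                # the compact representation: k numbers
approx = np.fft.irfft(padded, n)       # lower-quality reconstruction

rms = np.sqrt(np.mean((signal - approx) ** 2))
print(f"kept {k}/{coeffs.size} coefficients, RMS error {rms:.3f}")
```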

The first paper presented was "Real-time BRDF Editing in Complex Lighting". The idea of this technique was to fix the lighting and the camera, but to vary the BRDF. This changes the typical assumptions that are made with the rendering equation, and a cascade of optimizations follows from that initial choice. Even more interesting was that the technique is incremental, so changing only a small number of BRDF parameters at once increases the rendering framerate. Since in editing scenarios people typically only change one parameter at a time, you can interactively edit the material of an object under realistic all-frequency lighting.
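
My (possibly oversimplified) reading of why fixing the lighting and camera helps: with those fixed, the rendered image becomes linear in the BRDF coefficients, so a per-pixel transport vector can be precomputed once and an edit to a single coefficient becomes a cheap rank-1 update. The sizes and names below are made up for illustration; this is not the paper's actual all-frequency formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
num_pixels, num_basis = 100_000, 64    # hypothetical sizes

# Precomputed once for the fixed lighting and camera: per-pixel transport
# vectors T, so that the rendered image is simply T @ brdf_coeffs.
T = rng.random((num_pixels, num_basis))
brdf = rng.random(num_basis)
image = T @ brdf

# Incremental edit: changing one BRDF coefficient is a rank-1 update of
# the image rather than a full re-evaluation.
j, new_value = 7, 0.9
image += T[:, j] * (new_value - brdf[j])
brdf[j] = new_value

print(np.allclose(image, T @ brdf))    # matches recomputing from scratch
```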

The next paper was "Generalized Wavelet Product Integral for Rendering Dynamic Glossy Objects". The mathematical constructs in this paper appeared very complex, but the results were very interesting. The technique computes illumination per vertex, with the precomputed transforms done using wavelets and Haar integral coefficients. I need to re-read this paper in more detail to understand the transform mathematics better.

After that a paper was presented titled "All-Frequency Precomputed Radiance Transfer Using Spherical Radial Basis Functions and Clustered Tensor Approximation". This paper combined spherical radial basis functions (SRBFs) for transforming and representing the lighting data with a compression technique called "Clustered Tensor Approximation" to compact the data down to a reasonable size. The efficiency of the SRBFs in representing the lighting environment was particularly impressive, so this paper is due for a second read as well.

The last paper that I saw on the day was "Real-Time Soft Shadows in Dynamic Scenes Using Spherical Harmonic Exponentiation". The demonstration of what this technique could achieve was very impressive. The idea was to use a collection of spheres as an approximation for the blockers, and to accumulate the effect of the blockers in log space to make the calculation more efficient. For applications already using precomputed transfer, this provides a solid technique for adding soft shadows to the scene.
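
The log-space trick is easiest to see with plain scalars: the combined visibility from several independent blockers is a product, and a product becomes a sum of logarithms that can be accumulated cheaply and exponentiated once at the end. In the paper the same idea is applied to spherical-harmonic visibility vectors via an efficient SH exponentiation, which this little sketch does not attempt to reproduce.

```python
import numpy as np

# Per-blocker visibility in a few sample directions (1.0 = unoccluded).
# The paper stores a spherical-harmonic vector per sphere blocker; plain
# per-direction scalars keep the accumulation idea visible here.
blocker_visibility = np.array([
    [1.0, 0.8, 0.6],
    [0.9, 1.0, 0.7],
    [1.0, 0.95, 0.5],
])

# Accumulate blockers in log space: the product of visibilities becomes a
# cheap running sum, exponentiated once per shading point at the end.
log_vis = np.log(blocker_visibility).sum(axis=0)
total_visibility = np.exp(log_vis)

print(total_visibility)                      # combined visibility
print(np.prod(blocker_visibility, axis=0))   # same result, computed directly
```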
