2B0ST0N6 day two

The second day was very different from the first: paper presentations, a trip through the art gallery, and the animation theatre at night.

Due to a traffic snarl, I missed the first couple of papers in the morning (a situation I rectified the next day by walking past Boston Common to get to the convention center instead of taking a bus). I did manage to catch two paper presentations from the morning session, though.

The first one was "A Spatial Structure for Fast Poisson-Disk Sample Generation", which showed a very good technique for generating Poisson-disk sample sets. This was great to see because I have needed to generate a Poisson-disk set before (it comes up all the time in computer graphics, and even more so in real-time graphics these days as GPUs become more capable), so seeing a better method of generating the sets was very interesting.
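As an aside for anyone unfamiliar with the problem: the classic baseline these methods improve on is rejection sampling ("dart throwing"), where you keep proposing random points and only accept one if it is at least a minimum distance from every point accepted so far. Here is a minimal sketch of that baseline (not the paper's spatial-structure method; the uniform grid is just a common acceleration I'm assuming):

```python
import math
import random

def dart_throwing(width, height, r, tries=20000):
    """Baseline Poisson-disk sampling by rejection: accept a random point only
    if it is at least r away from every previously accepted point. A uniform
    grid with cell size r/sqrt(2) holds at most one sample per cell, so the
    distance test only needs to look at nearby cells."""
    cell = r / math.sqrt(2)
    cols, rows = int(width / cell) + 1, int(height / cell) + 1
    grid = [[None] * cols for _ in range(rows)]
    samples = []

    def far_enough(p):
        gx, gy = int(p[0] / cell), int(p[1] / cell)
        for y in range(max(gy - 2, 0), min(gy + 3, rows)):
            for x in range(max(gx - 2, 0), min(gx + 3, cols)):
                q = grid[y][x]
                if q and (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 < r * r:
                    return False
        return True

    for _ in range(tries):
        p = (random.uniform(0, width), random.uniform(0, height))
        if far_enough(p):
            grid[int(p[1] / cell)][int(p[0] / cell)] = p
            samples.append(p)
    return samples

print(len(dart_throwing(1.0, 1.0, 0.05)))
```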

The next paper was "Recursive Wang Tiles for Real-Time Blue Noise" which showed an interesting technique for generating blue noise of varying densities that could be arbitarily zoomed. The combination of techniques in the paper to achieve the result was interesting, and the method showing for combining blue noise sample sets was novel. I want to read more about this later just to understand how the components of the paper work.

At this point I met up with a friend who is here for the conference as well and we checked out the art gallery for a bit. There was a mix of static prints, videos, and pieces built from scratch that was all very interesting. One of the coolest things there was a piece that used some kind of oil saturated with iron particles (ferrofluid) and manipulated it with magnets. Some of the renderings shown on the screens there were very impressive too.

After I checked out the gallery I went to see the presentation of "Perfect Spatial Hashing", which was a great paper that showed a technique for compressing sparse texture data with a hash function. The construction process looks too slow to allow changing which texels are stored, but it does allow changing the texel values themselves. What was especially cool was that decompression of the texture can be done at runtime in a pixel shader, so this technique can be readily applied to real-time rendering and enables techniques that would otherwise waste a lot of texture memory. Considering how valuable texture memory is these days, this is an important property of the work.
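The lookup half of the idea is simple enough to sketch. As I understand it, the hash adds a value from a small offset table to the texel coordinate, and the offline construction chooses the offsets so that all of the defined (sparse) texels land in distinct slots. Here is a rough illustration of just the addressing; the table sizes and contents are placeholders I'm assuming, not a real construction:

```python
# Toy illustration of a perfect-spatial-hash style lookup in 2D:
#   slot(p) = (p mod M + offsets[p mod R]) mod M
# A real implementation searches for an `offsets` table that makes every
# defined texel map to its own slot; the zeroed table below is a placeholder.

M = 4   # main data table is M x M
R = 3   # offset table is R x R

offsets = [[(0, 0)] * R for _ in range(R)]   # assumed precomputed offsets
data    = [[None] * M for _ in range(M)]     # compacted sparse texels

def slot(p):
    ox, oy = offsets[p[1] % R][p[0] % R]
    return ((p[0] + ox) % M, (p[1] + oy) % M)

def store(p, value):
    x, y = slot(p)
    data[y][x] = value

def lookup(p):
    x, y = slot(p)
    return data[y][x]

store((10, 7), "texel A")
print(lookup((10, 7)))   # -> texel A
```

On the GPU side the two tables would just be small textures, with the same arithmetic running in the pixel shader.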

In the afternoon I attended the image manipulation paper session. The first of the papers was "Color Harmonization", which showed a technique for taking an image and shifting its colors to make them more harmonious. It was interesting to see the work on formalizing color harmony, and how that formalization was used to build the algorithm.
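My rough mental model of the technique, sketched below with made-up numbers: pick a "harmonic template" (a set of sectors on the hue wheel) and pull each pixel's hue toward the nearest sector. The template and the hard snap-to-edge rule here are simplifications I'm assuming; the paper fits the template to the image and shifts hues more gently.

```python
import colorsys

# Toy sketch of hue harmonization: a "template" is a set of sectors on the
# hue wheel, and each pixel's hue is pulled onto the nearest sector.
TEMPLATE = [(0.0, 0.05), (0.5, 0.15)]  # (center, half-width) in hue units [0, 1)

def circular_dist(a, b):
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

def harmonize_hue(h):
    center, half = min(TEMPLATE, key=lambda s: circular_dist(h, s[0]))
    if circular_dist(h, center) <= half:
        return h                         # already inside a harmonic sector
    d = (h - center) % 1.0               # snap to the nearest sector edge
    return (center + (half if d <= 0.5 else -half)) % 1.0

def harmonize_pixel(r, g, b):
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return colorsys.hsv_to_rgb(harmonize_hue(h), s, v)

print(harmonize_pixel(0.2, 0.8, 0.3))   # a green pulled toward the template
```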

The next paper was "Drag-and-Drop Pasting", which showed an application that was able to take a very roughly outlined object from one image and drop it into another image. The application could find the detailed outline of the image (with alpha, from what I could tell) and merge it with the target image in a way that showed no seams. Some of the examples were very good - this kind of thing could put some photo-chopping people out of work :)

After that they showed "Two-Scale Tone Management for Photographic Look", which demonstrated a technique for taking the 'look' of one photo and applying it to another. Again, the components of the technique were as interesting to me as the end result itself. There were some very cool image transforms in this that could probably be applied to other problems in the same area.
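The core transform, as I understood it, is a two-scale split: separate the luminance into a smooth base layer (the large-scale tones) and a detail layer, push the base toward the model photo's tonal statistics, and recombine. A simplified sketch is below; the plain box blur stands in for the edge-preserving filter a real implementation would use, and the mean/contrast matching is a crude stand-in for the paper's transfer.

```python
import numpy as np

def blur(img, k=5):
    """Simple box blur used here as a stand-in for an edge-preserving filter."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def transfer_look(input_lum, model_lum):
    # Split into base (large-scale tones) and detail, restyle only the base.
    base = blur(input_lum)
    detail = input_lum - base
    model_base = blur(model_lum)
    # Crude "look" transfer: match the base layer's mean and contrast.
    base = (base - base.mean()) / (base.std() + 1e-6)
    base = base * model_base.std() + model_base.mean()
    return base + detail

a = np.random.rand(32, 32)   # input luminance (made up)
b = np.random.rand(32, 32)   # model photo's luminance (made up)
print(transfer_look(a, b).shape)
```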

Next up was "Interactive Local Adjustment of Tonal Values", which showed a method for identifying regions of an image and performing local adjustments of the image tone in those regions. This allows for things like changing just the tone of the sky in a photo, for example, without changing the ground. Identifying the regions you want to modify is much easier than masking things yourself with a lasso or something similar, and a comparison against a conventional photo editing program showed somebody with no lasso skills doing better work in far less time.
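A toy version of what that workflow might look like, with everything simplified: the user scribbles on a few pixels of the region they care about, a soft selection is grown from pixels of similar brightness, and the tonal adjustment is applied through that mask. The similarity-only mask below is just an assumption for illustration; the actual paper propagates the selection with an edge-aware optimization.

```python
import numpy as np

def soft_mask(lum, scribble_pixels, sigma=0.1):
    """Grow a soft selection from scribbled pixels by brightness similarity."""
    ref = np.mean([lum[y, x] for (y, x) in scribble_pixels])
    return np.exp(-((lum - ref) ** 2) / (2 * sigma ** 2))  # ~1 near the reference tone

def adjust_exposure(lum, mask, stops=1.0):
    """Brighten by `stops` only where the mask is strong."""
    return lum * (1.0 - mask) + (lum * 2.0 ** stops) * mask

lum = np.clip(np.random.rand(64, 64), 0, 1)   # made-up luminance image
mask = soft_mask(lum, [(5, 5), (6, 40)])      # "scribble" on two sky pixels
print(adjust_exposure(lum, mask, stops=0.5).max())
```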

The last paper of the day was "Image-Based Material Editing", which showed a way to take a photo, choose an object in the photo, and then change the material of that object. The algorithms shown could identify the object's shape, estimate the background behind it and the lighting on it, and then use those estimates to replace the object's material - changing something from porcelain into metal or glass, for example. Despite the amount of information being estimated (or created from nearly nothing), the effect was quite convincing.

Later on at night there was the animation theatre. This was basically a collection of shorts from a large group of people. There were short films made by independents as well as short documentary style clips from studios showing how some of their work was done. One of the coolest ones was "In a New York Minute" by Weta (mentioned here). 458nm and One Rat Short were very very cool, and looked especially good on the 4k SXRD projector that they were using.

On to the next day...
