CS791 - Assignment 1 - Stippling
Anton Lopyrev - January 2010
Most of the images used for testing of this assignment that are displayed on this page were pulled from the public domain. However, if you reserve any rights for any of the images and you would like me to take them down, please let me know via email: anton at lopyrev dot com. I will try to accommodate your request as quickly as possible. I apologize in advance for unauthorized use of any copyrighted material.
- 1.0 Contents
- 2.0 Introduction
- 3.0 Tools Used
- 4.0 Implementation Details
--- 4.1 User Interface
--- 4.2 Initial Sampling Technique
--- 4.3 Resolution of Voronoi Calculation
--- 4.4 Efficient Centroid Computation
- 5.0 Results
- 6.0 Extensions
--- 6.1 Importance Sampling
--- 6.2 Stipple Sizes
--- 6.3 Color Stippling
- 7.0 Source Code
- 8.0 Conclusion
This is a write-up for Assignment 1 of the CS 791 Non-photorealistic Rendering course that I took in the winter of 2010. See the full assignment specifications here.
In summary, the assignment involved implementing a basic stippling algorithm from Adrian Secord's paper Weighted Voronoi Stippling. Stippling is a technique that simulates varying degrees of shading and tone using many small dots. The dots can potentially vary in size, color and shape. Weighted Voronoi stippling uses centroidal Voronoi diagrams and a weighted variation of the iterative Lloyd's method to distribute the dots according to the tone of the image.
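For readers unfamiliar with Lloyd's method, one weighted iteration can be sketched as follows: assign each pixel to its nearest stipple, then move each stipple to the darkness-weighted centroid of its Voronoi region. This is a brute-force illustration with hypothetical names, not Secord's (or my) actual implementation, which computes the Voronoi regions on the GPU:

```java
public class LloydSketch {
    // One weighted Lloyd iteration over a grayscale image.
    // density[y][x] in [0,1], where 1 = black (important) and 0 = white.
    // points holds {x, y} stipple positions and is moved in place.
    public static void iterate(double[][] density, double[][] points) {
        int n = points.length;
        int h = density.length, w = density[0].length;
        double[] sx = new double[n], sy = new double[n], sw = new double[n];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                // Brute-force nearest-stipple lookup (a GPU Voronoi pass in practice).
                int best = 0;
                double bestD = Double.MAX_VALUE;
                for (int i = 0; i < n; i++) {
                    double dx = points[i][0] - (x + 0.5);
                    double dy = points[i][1] - (y + 0.5);
                    double dist = dx * dx + dy * dy;
                    if (dist < bestD) { bestD = dist; best = i; }
                }
                double rho = density[y][x];
                sx[best] += (x + 0.5) * rho;   // use pixel centers (see Conclusion)
                sy[best] += (y + 0.5) * rho;
                sw[best] += rho;
            }
        }
        for (int i = 0; i < n; i++) {
            if (sw[i] > 0) {   // an all-white region leaves its point in place
                points[i][0] = sx[i] / sw[i];
                points[i][1] = sy[i] / sw[i];
            }
        }
    }
}
```

Iterating this until the points stop moving drives them toward a weighted centroidal Voronoi configuration.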
My implementation of the algorithm uses an interesting importance sampling technique to place the initial set of stipples. It also tiles the source image to compute the Voronoi diagram at a higher virtual resolution, and computes centroids efficiently, in order to achieve better results in a shorter time. Both of these techniques are described in the original paper.
In order to make my life simple in terms of reading/writing image files, I decided to implement the assignment in Java and use JOGL to provide the Java binding for OpenGL. I used Eclipse for development, which also made things easier in terms of debugging.
To generate the output SVG images, I use Apache's Batik toolkit, which has a built-in SVG generator class SVGGraphics2D.
To make my implementation more usable, I opted to design a simple user interface for my program instead of making it a pure command-line tool. This also helped me visualize the iterations of Lloyd's algorithm as they ran, which made debugging quite painless. The basic user interface is shown in the image below:
As you can see, the interface allows the user to open a new file (File -> Open), choose an appropriate number of stipples and run the stippler. Once the algorithm concludes, the user can save the result into an SVG file (File -> Save).
There are also four extra options that the user can specify:
- Show voronoi regions: this option allows the user to see the Voronoi regions around the stipples as the algorithm runs.
- Use constant stipple sizes: this option allows the user to switch the stipple size between a constant and one dependent on color intensity. Read more about this in section 6.2.
- Use importance sampling: this option can be turned on before the stippler is applied to the source image. It enables importance sampling, which is discussed in detail in section 6.1.
- Use color stipples: this option allows the user to change the color of the stipples from black to one dependent on the source color. Read more about this in section 6.3.
Instead of using random sampling to generate the initial set of stipples, I decided to use an approximation of importance sampling. The basic idea of importance sampling is to generate a set of points that is already close to the final result, so that the number of iterations of Lloyd's method required to reach a steady state is significantly lower.
Naturally, in stippling we should try to place more points in the darker regions of the image instead of randomly scattering points all over the image. Moreover, regions of the image that are particularly white should not have any initial points at all.
To generate points in the darker regions of the image with a higher probability than in the brighter ones, I used my own simple method, based mainly on a histogram of the intensities of the source image. The details are in the Extensions section 6.1.
To reduce the effects of the relative error in the calculated centroid locations, I used the tiling technique mentioned in the paper. I split the source image into a 4x4 grid of 16 tiles and rendered each tile at the full resolution of the image, effectively quadrupling the virtual resolution of the Voronoi diagram in each dimension (16 times the pixel count).
This helped to represent the darker tones and gradients much better, especially for the images with 5000+ stipples.
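The bookkeeping behind this is just a coordinate transform between the virtual resolution and image space. A minimal sketch, with hypothetical helper names and assuming the 4x4 grid above:

```java
public class TileMapping {
    // With a 4x4 tile grid, each tile is rendered at the full framebuffer
    // resolution, so the Voronoi diagram is effectively computed at 4x the
    // image resolution in each dimension (16x the pixels).
    static final int GRID = 4;

    // A "virtual" pixel (vx, vy) in [0, 4w) x [0, 4h) maps back to image coordinates.
    public static double[] virtualToImage(int vx, int vy) {
        return new double[] { vx / (double) GRID, vy / (double) GRID };
    }

    // Image-space bounds {x0, y0, x1, y1} of tile (tx, ty), each index in 0..3.
    public static double[] tileBounds(int tx, int ty, int w, int h) {
        double tw = w / (double) GRID, th = h / (double) GRID;
        return new double[] { tx * tw, ty * th, (tx + 1) * tw, (ty + 1) * th };
    }
}
```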
I also used the efficient centroid computation described in the paper. I found that this technique alone made the overall stippling algorithm twice as fast.
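The idea, as I read the paper, is to precompute per-row prefix sums of the density ρ and of x·ρ, so that the mass and x-moment of any horizontal span of a Voronoi region cost O(1) instead of one multiply per pixel. A minimal sketch with hypothetical names, not a verbatim port of my code:

```java
public class CentroidSums {
    final double[][] P;  // P[y][x] = sum of rho over row y, columns [0, x)
    final double[][] Q;  // Q[y][x] = sum of (col + 0.5) * rho over row y, columns [0, x)

    public CentroidSums(double[][] rho) {
        int h = rho.length, w = rho[0].length;
        P = new double[h][w + 1];
        Q = new double[h][w + 1];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                P[y][x + 1] = P[y][x] + rho[y][x];
                Q[y][x + 1] = Q[y][x] + (x + 0.5) * rho[y][x];
            }
        }
    }

    // Mass and x-moment of the span [x0, x1) on row y, in O(1).
    public double mass(int y, int x0, int x1)    { return P[y][x1] - P[y][x0]; }
    public double momentX(int y, int x0, int x1) { return Q[y][x1] - Q[y][x0]; }
}
```

A region's centroid x-coordinate is then the sum of its spans' x-moments divided by the sum of their masses.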
All of the results were rendered on my laptop (2.53 GHz Intel Core 2 Duo), so the timings are not very impressive.
This image of a futuristic car that you have already seen above was rendered with 7000 stipples in about 20 minutes with importance sampling on.
The original image was grabbed from: http://blog.silive.com/sinotebook/2008/12/have_the_big_3_automakers_come.html
This image of an old man was rendered with 15000 stipples in about 30 minutes with importance sampling on.
The original coloured image was grabbed from: http://www.flickr.com/photos/84859785@N00/394609508/
This image of Big Ben and the Westminster Bridge was rendered with 7000 stipples in about 15 minutes with importance sampling on.
The original image was grabbed from: http://en.easyart.com/art-prints/Anonymous/Bridge-with-Big-Ben-105708.html
I've implemented three extensions to the stippling algorithm described in Adrian's paper. The first uses an approximation of importance sampling to create a better initial set of stipples. The second aims for a better relationship between the input image density and the output ink coverage by varying the stipple size. The final extension attempts to approximate color images by varying the color of the stipples.
My importance sampling approach, introduced in section 4.2, can be described in a few simple steps:
Generate a histogram of the intensities of the source image. Consider intensity an integer from 0 to 255 representing the pixel color in the image (0 being black and 255 being white), so the histogram has 256 bins. However, given that we wish to ignore regions with very high intensity, we reduce the number of bins to 250 and ignore all pixels with intensity 250 or higher. Here is a sample histogram of the "prototype car" image:
Now that the pixels of the image are distributed between the bins, we could generate a random point by first picking a random bin and then picking a random pixel within that bin. However, what we really want is to pick the dark bins more often, so we assign a weight to each bin: if the intensity of a bin is I, its weight is 255 - I.
We then map the weighted histogram to a cumulative function of a random variable X. To do so, for each bin we compute a lower and an upper bound of X values:
For bin 0:
lower bound = 0
upper bound = number of pixels in the bin * 255
For bin i:
lower bound = upper bound of the previous (i - 1) bin + 1
upper bound = lower bound + number of pixels in the bin * bin weight
Now to generate a random point on the image:
- generate a random number X in the range [0, upper bound of the last bin],
- use binary search to find which bin X belongs to,
- pick a random pixel [x, y] in that bin,
- pick a random floating-point position within that pixel's unit cell, [x, y] to (x + 1, y + 1).
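Putting the steps above together, here is a minimal self-contained sketch (hypothetical class name; my actual code differs in details, and it assumes the image is not entirely white):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class ImportanceSampler {
    private final List<int[]>[] bins;   // pixels [x, y] grouped by intensity bin
    private final long[] upper;         // cumulative weighted upper bound per bin
    private final Random rng = new Random();

    @SuppressWarnings("unchecked")
    public ImportanceSampler(int[][] intensity) {   // intensity[y][x] in 0..255
        bins = new List[250];
        for (int i = 0; i < 250; i++) bins[i] = new ArrayList<>();
        for (int y = 0; y < intensity.length; y++)
            for (int x = 0; x < intensity[y].length; x++)
                if (intensity[y][x] < 250)          // ignore near-white pixels
                    bins[intensity[y][x]].add(new int[] { x, y });
        upper = new long[250];
        long acc = 0;
        for (int i = 0; i < 250; i++) {
            acc += (long) bins[i].size() * (255 - i);  // bin weight = 255 - intensity
            upper[i] = acc;
        }
    }

    public double[] sample() {
        long r = (long) (rng.nextDouble() * upper[249]);
        int lo = 0, hi = 249;             // binary search for the first bin with upper > r
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            if (upper[mid] <= r) lo = mid + 1; else hi = mid;
        }
        int[] p = bins[lo].get(rng.nextInt(bins[lo].size()));
        // jitter within the pixel's unit cell
        return new double[] { p[0] + rng.nextDouble(), p[1] + rng.nextDouble() };
    }
}
```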
The images below demonstrate the difference between plain random sampling (2nd image) and the approximation of importance sampling (3rd image). Note that both images have the same number of stipples (7000) and were generated over the same period of time - 20 minutes. Clearly, the last image reproduces the tone of the source image much better.
According to Adrian's Master's thesis, the main reason for "non-linearity between input image density and output ink coverage is that any particular fixed primitive size cannot faithfully reproduce the input image tones".
My extension attempts to improve the relationship by applying a simple observation made by Deussen et al. in the Floating Points paper: in real stipple drawings, the largest dots are at most twice as large as the smallest ones.
I use this fact and the image density function to linearly scale the stipples. In regions of the input image where the density function equals 1, the stipple radius is R; where the density function is zero, the radius is R/2.
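In code, the interpolation above is a one-liner (the maximum radius R and the density d are assumed inputs; the helper name is hypothetical):

```java
public class StippleRadius {
    // Linear scale between R/2 (density 0, white) and R (density 1, black).
    public static double radius(double density, double maxRadius) {
        return maxRadius * (1.0 + density) / 2.0;
    }
}
```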
This simple technique doesn't compensate for overlap or any other issues mentioned in Adrian's thesis. However, it produces interesting results and it definitely improves the algorithm's ability to highlight intricate details. Notice how you can now distinguish more detail on the wheels, the headlights and the interior of the car:
As my final extension, I implemented color stippling. Due to a lack of time, I did not make the color selection algorithm very complicated. My approach to color stippling can be considered a simple proof of concept, which could be enhanced to achieve better results. Despite the simplicity of my algorithm, however, I believe the results that you see below are still quite pleasing.
The idea behind my color stippling algorithm is to simply select the color of each stipple based on the color of the source image. This works quite well, especially with a large number of stipples. There are also a number of interesting ideas that I wanted to implement to enhance my images, but didn't have time:
- Make the stipples partially transparent and account for overlap to better represent the colors in regions of the image where there are lots of stipples.
- Pick a limited number of base colors (3 or more) and generate multiple layers of partially transparent stipples based on different density functions obtained by looking at the color components of the source image separately.
- Make the color and size of the stipples work together to better represent tone changes. For instance, a small dark stipple should have a similar effect to a large light stipple.
Below are some of my final results:
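The basic color selection amounts to sampling the source image at each stipple's final position. A minimal sketch (hypothetical helper, not my actual code; nearest-pixel lookup, where a bilinear or region-average sample would be smoother):

```java
import java.awt.image.BufferedImage;

public class StippleColor {
    // Pick a stipple's color from the source pixel under its position.
    // Assumes (x, y) lies within the image bounds.
    public static int pick(BufferedImage src, double x, double y) {
        int px = Math.min((int) x, src.getWidth() - 1);
        int py = Math.min((int) y, src.getHeight() - 1);
        return src.getRGB(px, py);   // packed ARGB
    }
}
```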
This is the color version of the old man image, rendered using 15000 constant-size stipples with importance sampling on.
Download Black&White SVG Download Color SVG
This picture of the coliseum was rendered using 10000 constant-size stipples with importance sampling on.
The original coloured image was grabbed from: http://www.tour-europe.org/italy/rome/the-coliseum.php
Download Black&White SVG Download Color SVG
This picture of Martine Theeuwen (lead singer of the Belgian electronic music group AnnaGrace formerly known as Ian Van Dahl) was rendered using 13500 constant-size stipples with importance sampling on.
The original coloured image was grabbed from: http://www.caratulas.info/musica/Ian-Van-Dahl-Lost-And-Found-Del-2004-Delantera-jpg-9792.htm
Download Black&White SVG Download Color SVG
I don't mind sharing my code. If you are interested to have a look at it, send me an email: anton at lopyrev dot com.
This has been one of the most interesting assignments I have ever worked on in my university career. It's amazing how a very technical concept like Voronoi diagrams can be applied to create works of art.
Thank you for reading!
- This assignment takes a lot of time. The nature of the algorithm gives it many painful pitfalls that are hard to debug.
- If you write your own weighted Voronoi stippling algorithm and your points slowly drift toward one of the corners of the image, make sure that when you calculate the centroids you use the centers of the pixels.
- If you wish to have many stipples in your image, you almost HAVE to implement the tiling method described in the paper to increase the virtual resolution of the Voronoi diagram.