We are firstborn.
It’s nice to meet you.
Morgan Villedieu
Hector Arellano
We are a strategic design and
technology agency.
We create interactive experiences, digital products and content that
build brands, grow businesses and transform categories.
So Why Are
We Here?
Short-Term Experimentation:
Drawing using the fragment shader
• All you need is a simple quad
• Drawing using the built-in math functions
• Deformation of the space
• Post-processing
• Going further: Using Raymarching and distance fields
Extend the Usage of Those Methods to a Production Project
• Mountain Dew x Titanfall
• How the WebGL world is set up
• DOM to WebGL texture
Long-Term Exploration:
• O(n²) complexity particle animations
• Geometry generation with Marching Cubes
• Raytracing dielectric materials
• Photon mapping (caustics)
WebGL Usages
Drawing using
a fragment
shader
• Using two triangles forming a quad, we’ll use the fragment
shader to operate on a per-pixel basis and evaluate each pixel’s
color.
• The fragment shader operates this way:
For each pixel p do something to modify the pixel’s color.
Then update the current pixel color.
• Multiple passes can be used (with precaution) for gather and
scatter operations.
• This is really simple to set up, and the code is compact and
fast: the engine and the shaders amount to only a few
kilobytes.
All you need is
a simple quad.
[Pipeline diagram: vertices → vertex shader → vertex primitives generation → rasterization → fragment shader → blending → frame buffer]
1. Drawing using the fragment shader
• Using a formula to generate a pattern: WebGL calls this
function once for every pixel on the screen. The only things
that change are gl_FragCoord, the location of the pixel being
drawn, and the uniforms you’re passing (mouse position, time,
resolution, etc.), as in the sketch below.
• The data you are drawing can be used as a visual output
or as input values to create or modify another effect.
Drawing using the
built-in math functions.
1. Drawing using the fragment shader
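A minimal sketch of that per-pixel model; the uniform names (resolution, mouse, time) and the pattern formula are illustrative assumptions, not the talk’s exact code:

```javascript
// Fragment shader source (GLSL): runs once per pixel; only gl_FragCoord
// and the uniforms change between invocations.
const patternFrag = /* glsl */ `
precision highp float;
uniform vec2  resolution; // canvas size in pixels (assumed name)
uniform vec2  mouse;      // normalized mouse position (assumed name)
uniform float time;       // elapsed seconds (assumed name)

void main() {
  vec2 uv = gl_FragCoord.xy / resolution.xy;
  // A formula turns the pixel position plus the uniforms into a pattern
  float wave = sin(uv.x * 20.0 + time) * cos(uv.y * 20.0 - time);
  // The same data can also drive another effect, e.g. a glow around the mouse
  float glow = 0.05 / distance(uv, mouse);
  gl_FragColor = vec4(vec3(0.5 + 0.5 * wave) * vec3(0.3, 0.6, 1.0) + glow, 1.0);
}`;
```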
• By calculating your “UVs” (gl_FragCoord.xy / resolution.xy) you can
then use them to draw into or deform your visual space.
• Example: deforming a simple plane can imply depth or simulate a
3D world when combined with a gradient (see the grid sketch below).
Deformations of
the space.
[Figures: a grid using mod(); the grid bent using abs(); a central gradient (fake fog) using abs() to avoid aliasing]
1. Drawing using the fragment shader
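A hedged sketch of those three figures in one shader: a mod() grid, a bend of the space, and an abs()-based central gradient used as fake fog. The constants and uniform name are illustrative:

```javascript
const gridFrag = /* glsl */ `
precision highp float;
uniform vec2 resolution; // assumed uniform name

void main() {
  // "UVs": normalized pixel coordinates
  vec2 uv = gl_FragCoord.xy / resolution.xy;

  // Deform the space: bend rows as a function of the distance to the center
  uv.y += 0.2 * abs(uv.x - 0.5);

  // Grid: bright lines where the repeated coordinate is near a cell border
  vec2 cell = mod(uv * 20.0, 1.0);
  float line = step(0.95, max(cell.x, cell.y));

  // Central gradient (fake fog); abs() gives a smooth, alias-free falloff
  float fog = 1.0 - abs(uv.y - 0.5) * 2.0;

  gl_FragColor = vec4(vec3(line * fog), 1.0);
}`;
```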
• Post-processing is a technique used in graphics that allows
you to take a current input texture and manipulate its pixels
to produce a transformed image.
• This can be used to apply many effects in real time, like
volumetric lighting, or any other filter-type effect you’ve seen
in applications like Photoshop, Instagram, or After Effects.
A minimal pass is sketched below.
Post
Processing
[Diagram: render the scene into an FBO (pass 1) → the empty 2D texture now contains the frame buffer → use it as the input texture and add effects using the fragment shader → show the updated current frame buffer]
1. Drawing using the fragment shader
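A minimal sketch of the second pass: the fragment shader reads the frame rendered into the FBO and transforms its pixels, here with a small chromatic offset and a vignette. The texture and uniform names are assumptions:

```javascript
const postFrag = /* glsl */ `
precision highp float;
uniform sampler2D inputTexture; // the frame rendered in pass 1 (assumed name)
uniform vec2 resolution;

void main() {
  vec2 uv = gl_FragCoord.xy / resolution.xy;
  // Chromatic offset: sample each channel slightly apart
  vec2 off = (uv - 0.5) * 0.005;
  vec3 color = vec3(texture2D(inputTexture, uv + off).r,
                    texture2D(inputTexture, uv).g,
                    texture2D(inputTexture, uv - off).b);
  // Vignette: darken toward the edges
  float vignette = 1.0 - 0.5 * length(uv - 0.5);
  gl_FragColor = vec4(color * vignette, 1.0);
}`;
```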
• Raymarching: In raymarching we “march” a point along the
ray until we find that the point intersects an object.
• Distance Field: A distance field is a function that takes in a
point as an input and returns the shortest distance from that
point to the surface of any object in the scene. It limits how
often we need to sample when marching along the ray.
• Why it is useful:
• It allows you to render complex shapes without using geometry
• You cannot raytrace through volumetric materials such as
clouds and water, but you can raymarch to create
volumetric effects like fire, water, clouds, etc.
Going further:
Raymarching using
distance fields
[Figures: raytracing vs. a raymarcher using signed distance fields]
1. Drawing using the fragment shader
A basic implementation of a raymarcher with a fixed marching interval is sketched below.
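A hedged sketch of that fixed-interval raymarcher, with a single sphere as the distance field (the camera setup and constants are assumptions):

```javascript
const raymarchFrag = /* glsl */ `
precision highp float;
uniform vec2 resolution;

// Signed distance to a unit sphere at the origin
float sceneSDF(vec3 p) { return length(p) - 1.0; }

void main() {
  vec2 uv = (2.0 * gl_FragCoord.xy - resolution) / resolution.y;
  vec3 ro = vec3(0.0, 0.0, 3.0);       // ray origin (camera)
  vec3 rd = normalize(vec3(uv, -1.5)); // ray through this pixel

  // March the point along the ray by a fixed interval until the
  // distance field reports we have reached (or crossed) a surface.
  float t = 0.0;
  bool hit = false;
  for (int i = 0; i < 128; i++) {
    if (sceneSDF(ro + rd * t) < 0.001) { hit = true; break; }
    t += 0.05; // fixed marching interval
  }

  // Shade by distance traveled; black where the ray escaped
  gl_FragColor = hit ? vec4(vec3(1.0 - t * 0.15), 1.0)
                     : vec4(0.0, 0.0, 0.0, 1.0);
}`;
```

In practice the fixed interval is usually replaced by stepping the distance returned by the field itself (sphere tracing), which is what lets distance fields limit how often you need to sample along the ray.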
• You can combine distance fields using union or
intersection to create complex shapes (sketched below)
• You can transform them using domain
transformations
• You can analytically calculate the surface normal
using the gradient of the distance field and
generate shading with a shading model
• You can also analytically calculate the UVs to map a
volume texture
Distance field
1. Drawing using the fragment shader
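A sketch of those operations: two primitive distance fields, union/intersection combinators, a mod()-based domain repetition, and the normal taken as the gradient of the field via central differences. The names are illustrative:

```javascript
const sdfLib = /* glsl */ `
float sdSphere(vec3 p, float r) { return length(p) - r; }

float sdBox(vec3 p, vec3 b) {
  vec3 q = abs(p) - b;
  return length(max(q, 0.0)) + min(max(q.x, max(q.y, q.z)), 0.0);
}

// Combining fields: union keeps the closest surface, intersection the farthest
float opUnion(float a, float b)        { return min(a, b); }
float opIntersection(float a, float b) { return max(a, b); }

float map(vec3 p) {
  // Domain transformation: mod() repeats the shapes through space
  vec3 q = mod(p, 4.0) - 2.0;
  return opUnion(sdSphere(q, 1.0),
                 sdBox(q - vec3(1.5, 0.0, 0.0), vec3(0.5)));
}

// Surface normal as the (numerical) gradient of the distance field
vec3 calcNormal(vec3 p) {
  vec2 e = vec2(0.001, 0.0);
  return normalize(vec3(map(p + e.xyy) - map(p - e.xyy),
                        map(p + e.yxy) - map(p - e.yxy),
                        map(p + e.yyx) - map(p - e.yyx)));
}`;
```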
• Understand the way GPUs work: Understanding what the
GPU is doing when you use a fragment shader is important
and helps guide your imagination when generating your
creation.
• Iterative process: It’s an iterative process. You start with
something simple and build toward the final vision through
modifications and experiments (or let the power of the GPU
surprise you while you “make mistakes”.)
• Performance: There are many methods and tricks for
creating effects that will get you to your final product, but
everything comes at a price. Always think in terms of
performance first.
It’s All About Pixel
Evaluation & Love
1. Drawing using the fragment shader
Production
Usages
• The idea: we came up with the original idea
of rendering the DOM elements into a
texture, allowing us to use them within the
browser with graphics acceleration while
creating unique visuals using fragment
shaders, even on the DOM elements.
• About the effects: Smoke, lightning, noise,
glitches, graphic bending and displacement
all worked together to create the illusion of
3D effects without the weight of complex
geometry. All done with a simple quad and
the fragment shader pixel evaluation.
Our Approach
• Why this approach? Using the GPU’s
extremely powerful and fast parallel
architecture let us add many visual effects
to the site, all rendered in real time.
• Add Interactivity: Because the effects
were procedurally generated, they were
able to react in real-time when moused
over/clicked/keyed — an impossible task
for pre-rendered or video-based assets.
• Ultra quick loading: The site loads quickly
because everything was generated through
code with a small amount of 2D assets —
no video or 3D files that would normally
slow down loading time and functionality.
2. Apply this method to production
How we set up our WebGL world
2. Apply this method to production
Some Wizardry:
DOM to WebGL Texture.
01
We create an SVG with a foreign
object containing our markup and
styles.
02 Convert the SVG data to a blob
03 Convert the blob to a base64 data URL
04 Use the base64 data to generate an image
05 Apply the image to a WebGL texture (the whole flow is sketched below)
2. Apply this method to production
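A hedged JavaScript sketch of those five steps (the function and parameter names are assumptions; the markup must carry inline styles, since external stylesheets do not apply inside a foreignObject):

```javascript
function domNodeToTexture(gl, node, width, height, onReady) {
  // 01: wrap the markup and styles in an SVG foreignObject
  const xml = new XMLSerializer().serializeToString(node);
  const svg =
    `<svg xmlns="http://www.w3.org/2000/svg" width="${width}" height="${height}">` +
    `<foreignObject width="100%" height="100%">${xml}</foreignObject></svg>`;

  // 02: convert the SVG data to a blob
  const blob = new Blob([svg], { type: 'image/svg+xml;charset=utf-8' });

  // 03: convert the blob to a base64 data URL
  const reader = new FileReader();
  reader.onload = () => {
    // 04: generate an image from the base64 data
    const img = new Image();
    img.onload = () => {
      // 05: upload the image into a WebGL texture
      const texture = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, texture);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
      // Non-power-of-two-safe settings for WebGL 1
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
      onReady(texture);
    };
    img.src = reader.result;
  };
  reader.readAsDataURL(blob);
}
```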
There’s More to WebGL
GPGPU in
WebGL
1. Particle Animation
2. Geometry Generation
3. Raytracing
4. Photon-Mapping
GPGPU WebGL
• Steering behaviors use and combine simple forces to
produce lifelike animations.
• They simulate repulsion, attraction, alignment forces and path
following, among others.
• They have O(n²) complexity, which means using lots of particles is
really slow without an acceleration structure.
• Flocking animations are based on steering behaviors.
• Combining the different forces introduces a simple blending
function based on the distance between particles. A GPGPU
sketch of such a pass follows.
Particles Animations
(Steering Behaviors)
1. Particle Animation
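A sketch of a GPGPU steering pass showing where the O(n²) comes from: each fragment owns one particle and loops over every other particle to accumulate forces. A 64×64 particle texture is assumed (GLSL ES 1.0 needs constant loop bounds), and the force constants are illustrative:

```javascript
const steeringFrag = /* glsl */ `
precision highp float;
const float SIZE = 64.0;     // assumed 64x64 particle texture (4096 particles)
uniform sampler2D positions; // one particle per texel (xyz in rgb)
uniform sampler2D velocities;
uniform vec3  target;        // attraction point
uniform float dt;

void main() {
  vec2 uv  = gl_FragCoord.xy / SIZE;
  vec3 pos = texture2D(positions,  uv).xyz;
  vec3 vel = texture2D(velocities, uv).xyz;

  // Attraction: steer toward the target
  vec3 force = normalize(target - pos) * 0.5;

  // Repulsion: the O(n^2) part, one texture read per other particle,
  // blended by distance so only nearby particles contribute
  for (float y = 0.0; y < SIZE; y++) {
    for (float x = 0.0; x < SIZE; x++) {
      vec3 diff = pos - texture2D(positions, (vec2(x, y) + 0.5) / SIZE).xyz;
      float d = length(diff);
      if (d > 0.0001 && d < 1.0) force += (diff / d) * (1.0 - d) * 0.05;
    }
  }

  // Output the new velocity; a second pass integrates the positions
  gl_FragColor = vec4(vel + force * dt, 1.0);
}`;
```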
Example of an SPH animation (skull falling).
• Smoothed Particle Hydrodynamics (SPH) is a Lagrangian
method used to numerically solve the Navier–Stokes equations.
• It simulates weakly compressible fluids with pressure,
viscosity, surface tension and gravity effects.
• It uses a different blending (kernel) function for each force,
dependent on the distance between particles; one common
kernel is sketched below.
• It also has O(n²) complexity.
Particles Animations
(SPH)
1. Particle Animation
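For reference, one commonly used blending (smoothing) kernel in SPH is the poly6 kernel from Müller et al.; a GLSL sketch:

```javascript
const sphKernels = /* glsl */ `
// Poly6 kernel, typically used for the density estimate:
// W(r, h) = 315 / (64 * pi * h^9) * (h^2 - r^2)^3   for 0 <= r <= h
float wPoly6(float r, float h) {
  if (r > h) return 0.0;
  float k = 315.0 / (64.0 * 3.14159265 * pow(h, 9.0));
  float d = h * h - r * r;
  return k * d * d * d;
}`;
```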
• Blending functions limit the interaction between
particles to those close to the evaluated one.
• Grid space partitioning is used to locate close
particles (neighborhood search) and reduce the O(n²)
complexity.
• Space partitioning is not trivial on the GPU. Grid cells
can’t save/allocate a variable amount of particles
(really hard in WebGL).
Particles Animations
(neighborhood
search)
1. Particle Animation
• Harada et al. proposed a neighborhood generation that
takes 4 GPU draw calls, allocating up to 4 particles in
each cell of the grid.
• The technique is useful for weakly compressible
fluids (SPH).
• This grid partitioning method changes the
complexity of the animations from O(n²) to O(kn).
Particles Animations
(GPU neighborhood
search)
1. Particle Animation
• Use representative (impostor) particles: only one particle is saved
per cell.
• The position/velocity/density of the impostor particle is the median
value of all the particles allocated in the corresponding cell.
• Each cell should also save the total number of particles allocated
inside it. That count scales the forces between the particle being
evaluated and the impostor particle.
• The O(kn) complexity is reduced further, since only one particle is
evaluated per cell.
• There’s a trade-off between speed and precision in the
animations.
Particles Animations
(faster neighborhood
generation)
1. Particle Animation
• An algorithm to generate a triangle mesh from a potential field.
• It is evaluated on a discrete grid, where the potential is sampled
at each corner of the cell being processed.
• If the potential changes sign across the corners, triangles are
generated in the evaluated cell.
• The triangles to generate in each cell are defined in a lookup table;
there are 256 different configurations, with up to 5 triangles per cell.
Geometry Generation
(Marching Cubes)
2. Geometry Generation
Geometry Generation
(Marching Cubes)
2. Geometry Generation
• Pack the potential field into the 4 RGBA channels,
each one representing a depth range (a packing
sketch follows):
• R: [0 - 64)
• G: [64 - 128)
• B: [128 - 192)
• A: [192 - 256)
• A 3D blur is done over the packed potential field texture.
There’s a 4x speed-up in the blur process, since it runs
on the four channels (all the depth ranges) at the same
time.
• After the blur, the resulting texture is expanded back into a
conventional one-channel texture.
Marching Cubes
(Faster Potential
Generation)
2. Geometry Generation
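A sketch of the channel selection when writing a voxel’s potential, following the depth ranges above (the helper name is an assumption):

```javascript
const packPotential = /* glsl */ `
// Route a voxel's value into the RGBA channel that matches its depth slice:
// R: [0-64), G: [64-128), B: [128-192), A: [192-256)
vec4 packByDepth(float depth, float value) {
  vec4 texel = vec4(0.0);
  if      (depth <  64.0) texel.r = value;
  else if (depth < 128.0) texel.g = value;
  else if (depth < 192.0) texel.b = value;
  else                    texel.a = value;
  return texel;
}`;
```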
• Working with scattered data on the GPU brings poor
performance.
• Stream compaction takes a sparsely populated array and
packs all the elements together.
• Use histopyramids for 3D stream compaction.
Marching Cubes
(Stream Compaction)
2. Geometry Generation
• The histopyramid algorithm is separated into two steps: reduction and
expansion (with offsets).
• The reduction process starts with a binary representation of the
potential field (1 where there’s data, 0 where there’s none). On each
reduction step, each new pixel is the sum of the 4 pixels of the level
below, much like mip mapping (a reduction pass is sketched below).
• The highest level (1x1) contains the total number of active cells. Don’t
use the gl.readPixels function to read that number back in Javascript (quite
slow).
• The final result is saved in a single texture with all the levels allocated
next to each other.
Marching Cubes
(Histopyramids)
2. Geometry Generation
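A sketch of one reduction pass, rendered into the next (half-size) level of the pyramid; the uniform names are assumptions:

```javascript
const reduceFrag = /* glsl */ `
precision highp float;
uniform sampler2D previousLevel; // the finer level being reduced
uniform float texelSize;         // 1.0 / width of the previous level

// Each output texel sums the 2x2 block beneath it, like building a mip
// chain; after log2(n) passes, the 1x1 top level holds the total count.
void main() {
  vec2 base = (floor(gl_FragCoord.xy) * 2.0 + 0.5) * texelSize;
  float sum = texture2D(previousLevel, base).r
            + texture2D(previousLevel, base + vec2(texelSize, 0.0)).r
            + texture2D(previousLevel, base + vec2(0.0, texelSize)).r
            + texture2D(previousLevel, base + vec2(texelSize, texelSize)).r;
  gl_FragColor = vec4(sum, 0.0, 0.0, 1.0);
}`;
```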
• Once the reduction is done, the expansion process
walks over the pyramid texture to reposition the
scattered data.
• The resulting texture contains all the data grouped
together.
• Compaction is done in the fragment shader, where
every fragment finds its corresponding scattered
data in the pyramid using a unique 1D index
derived from the 2D position of the fragment.
Marching Cubes
(Histopyramids)
2. Geometry Generation
Marching Cubes
(Histopyramids)
• Compacting the amount of active voxels:
• Generate a binary texture from the potential field.
• Apply the reduction process; the highest level represents the total number of cells to evaluate (where triangles will be generated).
• Preallocate up to 15 vertices (5 triangles max) for each cell in the compacted texture (15 fragments per active cell). Discard the unneeded fragments if the
tables define fewer than five triangles.
• Generate the vertex positions and normals from the resulting compacted texture; the discarded fragments among the 15 preallocated ones
can be removed with a second histopyramid.
• Compacting the amount of vertices needed:
• In the discrimination texture, each active cell should hold the number of vertices needed to generate the triangles defined in the tables. Values won’t be
binary; instead, each cell holds a count between 0 and 15 vertices.
• Apply the reduction process; the highest level represents the total number of vertices to generate.
• The expansion process offsets all the needed vertices for each active cell. Vertex positions and normals can be generated while doing the compaction
(in the same shader).
• This process avoids preallocation, since all the data is compacted with no empty fragments in between (no need for a second histopyramid). Use the highest
level of the pyramid in the rasterizer to discard the unneeded fragments.
2. Geometry Generation
Marching Cubes
(GPU Steps)
1. Allocate the particles inside a 3D grid texture. Remember to pack the data depending on depth using the RGBA channels.
Particle size can be handled with multiple passes (the number of draw calls depends on the size of the particle, usually 1 to 3).
2. Generate a potential using a 3D blur; the blur can be done separately on each axis (3 draw calls). Expand the resulting packed
data.
3. Generate a texture with the values of the corners for each cell. This can be done by evaluating the median value of all the adjacent
cells for each corner; use the fragment shader for this (1 draw call).
4. Calculate the active cells over the grid, using the fragment shader to evaluate the whole 3D space (the whole texture). Each active cell
should output the number of vertices needed; use the marching cubes tables for this (1 draw call).
5. Reduce the output of the previous step (vertex count per cell), generating a pyramid texture to use for the compaction with
offsets (13 draw calls).
6. Run the stream compaction of the vertices; in the same shader, generate the position and normal for each relocated vertex.
Use the highest-level texture of the pyramid to discard the fragments that are not needed (1 draw call).
7. Use the vertex positions and normals to render the triangles with any type of shading (1 draw call).
2. Geometry Generation
Show the 256^3 bunny with 2048 screen space in real time.
• It’s an algorithm that evaluates the color of each pixel by
launching rays from the camera. The ray passes “through”
the pixel and finds what objects are intersected in the
space along that vector. The point of intersection is used to
compute the shading that gives the color to render.
• Two types of rays are handled in a raytracer: primary rays
(launched from the camera) and secondary rays
(generated by the effects of reflection and transmission).
• It’s a very effective technique for rendering complex effects
like depth of field, transparency, reflections and accurate
shadows.
• Performance depends on the screen resolution, and since
the rays are launched at a discrete resolution, aliasing
problems arise.
Raytracing
3. Raytracing
• Water or glass materials are hard to render correctly
using the rasterizer. Rendering them requires many bounces
and the generation of a ray tree.
• The marching cubes meshes are well suited to
acceleration, since they’re allocated inside a 3D grid.
• The raytracer only handles secondary rays (for reflection
and refraction effects). They are emitted from deferred
g-buffers.
• The ray-tree greedy model and the two-level grid
acceleration technique are two suitable methods to
implement in WebGL.
Raytracing dielectric
materials (from Marching
Cubes Meshes)
3. Raytracing
• Dielectric materials generate a ray tree whose complexity
is difficult to handle on the GPU, and building it is a
recursive process.
• The greedy model uses only two initial rays (one for
reflection and one for refraction); the ray directions are
modified depending on the interactions with the objects.
Avoiding the recursion makes the process suitable for
the GPU.
• The Fresnel equations can be used to follow the paths
that give more energy to each ray (a common
approximation is sketched below).
Raytracing dielectric
materials (Ray Tree
Greedy Model)
3. Raytracing
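Schlick’s approximation is the usual cheap stand-in for the full Fresnel equations when deciding how much energy each branch of the ray tree carries; a sketch (it ignores total internal reflection for brevity):

```javascript
const fresnelGLSL = /* glsl */ `
// Schlick's approximation of the Fresnel reflectance.
// 'incident' points toward the surface, 'normal' points away from it;
// n1/n2 are the indices of refraction on each side of the interface.
float fresnelSchlick(vec3 incident, vec3 normal, float n1, float n2) {
  float r0 = (n1 - n2) / (n1 + n2);
  r0 *= r0;
  float cosTheta = clamp(-dot(incident, normal), 0.0, 1.0);
  return r0 + (1.0 - r0) * pow(1.0 - cosTheta, 5.0); // reflected fraction
}`;
```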
• Evaluating ray collisions against objects means that
all the triangles of the mesh have to be evaluated
(really slow).
• The marching cubes triangles are allocated inside a
3D grid; this data arrangement is well suited to an
acceleration structure based on the 3DDDA algorithm.
• A second low resolution grid is used to avoid
traversing the high resolution grid when there is no
information in the low resolution cell.
Raytracing dielectric
materials (Two-Level
Grid Acceleration)
3. Raytracing
• Apply the required animations to the particles and update their positions.
• Generate the mesh using the marching cubes algorithm.
• Allocate the triangle indexes in the high-resolution grid. Use the resulting texture to generate a low-resolution binary representation of where the high-resolution
cells containing information are placed in the low-resolution grid.
• Use a for loop to run the 3DDDA algorithm. Rays start from the positions defined in the deferred g-buffer and traverse the low-resolution grid; switch to the
high-resolution grid when the current low-resolution voxel contains information cells.
• When a high-resolution cell is found, change the 3DDDA parameters to work at the high resolution, and keep running the 3DDDA to evaluate which cells contain triangles.
• When triangles are found, use the Möller–Trumbore triangle intersection algorithm (sketched below); it’s a fast way to evaluate ray/triangle collisions. Change the ray
direction depending on the greedy model (reflected rays are always reflected, refracted rays are always refracted if possible), or use the Fresnel equation on every
bounce to decide the new direction (finds the maximum energy per bounce).
• Raytracing execution can be stopped using the following rules:
• The for loop caps the maximum number of steps the ray can use for the 3DDDA algorithm. This is useful to avoid rays marching too many steps.
• The ray hits a maximum number of bounces. Useful when there are many bounces among geometry.
• The ray reaches the limits of the high/low-resolution grid structure.
• The ray uses more steps than expected between two bounces. Useful if the user does not want to show far-away geometry.
Raytracing dielectric materials
(from Marching Cubes Meshes)
3. Raytracing
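The Möller–Trumbore test mentioned above, as a GLSL sketch:

```javascript
const intersectGLSL = /* glsl */ `
// Möller–Trumbore ray/triangle intersection.
// Returns the distance t along the ray, or -1.0 on a miss.
float intersectTriangle(vec3 ro, vec3 rd, vec3 v0, vec3 v1, vec3 v2) {
  vec3 e1 = v1 - v0;
  vec3 e2 = v2 - v0;
  vec3 p  = cross(rd, e2);
  float det = dot(e1, p);
  if (abs(det) < 1e-6) return -1.0; // ray is parallel to the triangle
  float invDet = 1.0 / det;
  vec3 tv = ro - v0;
  float u = dot(tv, p) * invDet;    // first barycentric coordinate
  if (u < 0.0 || u > 1.0) return -1.0;
  vec3 q = cross(tv, e1);
  float v = dot(rd, q) * invDet;    // second barycentric coordinate
  if (v < 0.0 || u + v > 1.0) return -1.0;
  return dot(e2, q) * invDet;       // distance along the ray
}`;
```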
Show the 128^3 SPH simulation over a 1024^2 screen space.
• Technique used to render indirect illumination.
• Similar to raytracing, but rays are emitted from light
sources.
• The algorithm works in two steps: photon emission
and final gathering (radiance evaluation).
• Really good for evaluating caustics.
Photon Mapping
4. Photon Mapping
Caustics in
WebGL
Show an example of the bounding box shadow in the plane.
• Once the ray directions are defined, the raytracing algorithm is run in a fragment shader to determine where the rays hit the plane
(the shader is quite heavy to compile in a vertex shader on PC).
• The positions are used in a vertex shader to scatter particles over the plane, generating the caustics. Particles are blended using
WebGL’s blending function.
• The color of the particles is defined by the absorption color of the geometry. A dispersion effect can be made by separating the
particles into three different colors (R, G, B). Each color gets a slightly different index of refraction, and the resulting blending
generates the dispersion effect.
• The final gathering step (radiance evaluation) is done over the plane using a Gaussian filter (a blur). The resulting texture is used
in the final composition.
• Since caustics are calculated using texture maps, the size of the texture defines the quality of the lighting effect generated. This
allows us to adapt the quality of the caustics to the GPU’s processing power.
• Caustics can be calculated over several frames using an accumulator (remember that we are splitting and blending over a
texture), or evaluated in a single frame. This way, really complex caustics can be calculated over 3-10 frames (when light
sources and geometry won’t change, and the quality should be really good). Alternatively, use a low-resolution texture and only
one frame to evaluate simple caustics for moving lights/objects.
4. Photon Mapping
Caustics in WebGL
Show an example of caustics working in real time, show how they are modified by the absorption color, show the dispersion
effect.
MTN DEW BALL
Our Vision
• Raymarching and 2D quad rendering techniques and resources:
• Inigo Quilez’s blog: a resource on raymarching distance fields.
• This article by 9bit Science: a writeup on the theory behind raymarching.
• This Gamedev StackExchange thread: information about how raymarching shaders work fundamentally.
• Shadertoy: with Shadertoy you can experiment with screen-space shader development online. From simple texture deformations to complex
raytracers and distance-field raymarchers, you have the complete power of the GPU to experiment with all sorts of techniques through GLSL
shaders.
• DOM to WebGL:
• Morgan’s technical review of Mountain Dew x Titanfall: rendering the DOM within WebGL
• Drawing DOM objects into a canvas (Mozilla Developer Network)
• Raymarching examples:
• 80’s raymarching
• Fibonacci Ballet
Resources
Particles Animations:
• Steering Behaviors For Autonomous Characters: http://www.red3d.com/cwr/steer/
• Lagrangian Fluid Dynamics Using Smoothed Particle Hydrodynamics: http://glowinggoo.com/sph/bin/kelager.06.pdf
• Advanced Procedural Rendering in DirectX 11: https://directtovideo.wordpress.com/2012/03/15/get-my-slides-from-gdc2012/
• Real-Time Rigid Body Simulation on GPUs: http://http.developer.nvidia.com/GPUGems3/gpugems3_ch29.html
Marching Cubes:
• Polygonising a Scalar Field (Marching Cubes): http://paulbourke.net/geometry/polygonise/
• Histopyramid Stream Compaction and Expansion: https://folk.uio.no/erikd/histo/hpmarchertalk.pdf
• DirectToVideo: https://directtovideo.wordpress.com/2011/05/03/numb-res/
Raytracing:
• Real Time Ray Tracing part 2: https://directtovideo.wordpress.com/2013/05/08/real-time-ray-tracing-part-2/
• Ray Tracing From The Ground Up: http://www.raytracegroundup.com/
• Realistic Reflections and Refractions on Graphics Hardware With Hybrid Rendering and Layered Environment Maps: https://www.microsoft.com/en-us/research/wp-content/uploads/2017/01/hybrid.pdf
• Two-Level Grids for Ray Tracing on GPUs: http://www.kalojanov.com/data/two_level_grids.pdf
Caustics:
• Progressive Photon Mapping: http://www.ci.i.u-tokyo.ac.jp/~hachisuka/ppm.pdf
Resources
Thanks.
@VilledieuMorgan
@hector_arellano
@firstborn_nyc
