Basic shader scene
My go-to tool for creative coding is canvas-sketch. It offers a utility function that creates a full-screen GLSL shader renderer using regl. You pass in your shader code and uniforms and it takes care of the rest. Here’s an example of a shader that renders a gradient.
const canvasSketch = require('canvas-sketch');
const createShader = require('canvas-sketch-util/shader');
const glsl = require('glslify');
const settings = {
  dimensions: [1080, 1080],
  context: 'webgl',
  animate: true,
};

const frag = glsl(`
  precision highp float;
  uniform float time;
  varying vec2 vUv;

  void main () {
    vec3 col = 0.5 + 0.5 * cos(time + vUv.xyx + vec3(0, 2, 4));
    gl_FragColor = vec4(col, 1.0);
  }
`);

const sketch = ({ gl, canvas }) => {
  return createShader({
    gl,
    frag,
    uniforms: {
      resolution: ({ width, height }) => [width, height],
      time: ({ time }) => time,
      playhead: ({ playhead }) => playhead,
    },
  });
};

canvasSketch(sketch, settings);
A couple of things to note here. createShader bootstraps a default vertex shader (see below) that provides a varying vUv, which essentially maps the pixel coordinates to a value between 0 and 1. You can override this by specifying a custom vertex shader, but for most cases the default is sufficient.
vert.glsl
precision highp float;
attribute vec3 position;
varying vec2 vUv;

void main () {
  gl_Position = vec4(position.xyz, 1.0);
  vUv = gl_Position.xy * 0.5 + 0.5;
}
I’m also using a tool called glslify to wrap the shader code. This enables us to import GLSL modules into our shader. We’ll use it to import SDF functions and other raymarching utilities.
The Raymarching Algorithm
Below is an implementation of the raymarching algorithm. The camera is positioned at the rayOrigin and pointed towards the rayTarget, the center of the scene.
The rayDirection is a vector that points from the origin towards a pixel on the screen, while accounting for the camera’s orientation and field of view. It takes a bit of fancy math to figure out this direction, so we’ll be using the glsl-camera-ray module to run that calculation.
Once we obtain the ray direction, we march along it, checking for collisions. If a collision is detected, the distance to the surface is returned; otherwise, we return -1.0 to signify that no collision was found.
precision highp float;
varying vec2 vUv;
uniform float lensLength;
#pragma glslify: camera = require('glsl-camera-ray')
float sdSphere(vec3 point, float radius) {
  return length(point) - radius;
}

const int steps = 90;
const float maxdist = 20.0;
const float precis = 0.001;

float raymarch(vec3 rayOrigin, vec3 rayDir) {
  float latest = precis * 2.0;
  float dist = 0.0;
  float res = -1.0;

  for (int i = 0; i < steps; i++) {
    if (latest < precis || dist > maxdist) break;
    // Assign to the outer `latest` here; re-declaring it with
    // `float latest = …` would shadow it and break the loop's exit condition.
    latest = sdSphere(rayOrigin + rayDir * dist, 1.0);
    dist += latest;
  }

  if (dist < maxdist) {
    res = dist;
  }
  return res;
}
void main() {
  vec3 color = vec3(0.0);
  vec3 rayOrigin = vec3(3.5, 0.0, 3.5);
  vec3 rayTarget = vec3(0.0, 0.0, 0.0);
  vec2 screenPos = vUv * 2.0 - 1.0;
  vec3 rayDirection = camera(rayOrigin, rayTarget, screenPos, lensLength);

  float collision = raymarch(rayOrigin, rayDirection);

  if (collision > -0.5) {
    color = vec3(0.678, 0.106, 0.176);
  }

  gl_FragColor = vec4(color, 1.0);
}
lensLength here determines the field of view. Try changing it to see how it affects the scene.
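To see the numbers at work outside the shader, here’s an illustrative port of the raymarch loop to plain JavaScript (vectors as ordinary arrays; this isn’t part of the sketch itself). It marches a ray from a hypothetical camera at z = 5 straight at a unit sphere and reports a hit at distance 4.

```javascript
// Tiny vector helpers — plain [x, y, z] arrays stand in for vec3
const add = (a, b) => a.map((v, i) => v + b[i]);
const scale = (a, s) => a.map((v) => v * s);
const length = (a) => Math.hypot(...a);

// Signed distance to a sphere of the given radius, centred at the origin
const sdSphere = (point, radius) => length(point) - radius;

function raymarch(rayOrigin, rayDir, { steps = 90, maxdist = 20, precis = 0.001 } = {}) {
  let latest = precis * 2;
  let dist = 0;
  for (let i = 0; i < steps; i++) {
    if (latest < precis || dist > maxdist) break;
    latest = sdSphere(add(rayOrigin, scale(rayDir, dist)), 1.0);
    dist += latest;
  }
  // -1 signals "no collision", just like the GLSL version
  return dist < maxdist ? dist : -1;
}

// A ray fired from (0, 0, 5) towards the origin hits the unit sphere at dist = 4
console.log(raymarch([0, 0, 5], [0, 0, -1])); // 4
// A ray fired sideways misses entirely
console.log(raymarch([0, 0, 5], [0, 1, 0])); // -1
```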
Using GLSL modules for raymarching
Implementing your own raymarching function is cool. It’s especially useful when you want to tweak the inner workings to achieve a specific effect. However, in most cases, you can probably just use an off-the-shelf module.
Below, I’ve updated the sketch to use the glsl-raytrace module. Additionally, I’m using the glsl-sdf-primitives module to generate a torus and glsl-rotate to rotate it.
The mechanics remain largely the same. The key difference is that the geometry is now defined within a function called doModel, and raymarch returns a vec2 containing the distance and a material index. This is useful if you want to render multiple types of objects in a scene.
precision highp float;
varying vec2 vUv;
uniform float lensLength;
uniform float time;
vec2 doModel(vec3 p);
#pragma glslify: camera = require('glsl-camera-ray')
#pragma glslify: raymarch = require('glsl-raytrace', map = doModel, steps = 90)
#pragma glslify: sdTorus = require('glsl-sdf-primitives/sdTorus')
#pragma glslify: rotate = require('glsl-rotate/rotate')
vec2 doModel(vec3 p) {
  p.xy = rotate(p.xy, time);
  p.yz = rotate(p.yz, time);

  float d = sdTorus(p, vec2(0.75, 0.35));
  return vec2(d, 0.0);
}

void main() {
  vec3 color = vec3(0.0);
  vec3 rayOrigin = vec3(3.5, 0.0, 3.5);
  vec3 rayTarget = vec3(0.0, 0.0, 0.0);
  vec2 screenPos = vUv * 2.0 - 1.0;
  vec3 rayDirection = camera(rayOrigin, rayTarget, screenPos, lensLength);

  vec2 collision = raymarch(rayOrigin, rayDirection);

  if (collision.x > -0.5) {
    color = vec3(0.678, 0.106, 0.176);
  }

  gl_FragColor = vec4(color, 1.0);
}
Check it out! We’ve got a spinning donut 🍩 But it looks kinda flat. Let’s add some depth to the scene.
Calculating normals
For the classic material and lighting combination, we need to calculate surface normals. That is, a vector that points away from the surface at a given point.
With SDFs, we calculate the normal by taking the gradient of the SDF function (f) at a specific point, denoted as ∇f. I don’t know about you, but the last time I took a gradient was in MEC E 537 – Aerodynamics. And that was a while ago 😅
Luckily for us, we can use the glsl-sdf-normal module to compute normals for us. The module uses the same doModel function that we defined for raymarching. If you’re curious about the underlying math, check out Jamie Wong’s explanation.
#pragma glslify: normal = require('glsl-sdf-normal', map = doModel)

if (collision.x > -0.5) {
  vec3 pos = rayOrigin + rayDirection * collision.x;
  vec3 nor = normal(pos);
  color = nor * 0.5 + 0.5;
}
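Under the hood, the gradient is approximated with central differences: sample the SDF slightly to either side of the point along each axis and normalize the result. Here’s an illustrative JavaScript sketch of that idea (not the module’s actual code), using a unit sphere as the model:

```javascript
// Central-difference normal estimation for an SDF.
// The normal is the direction in which the distance field grows fastest.
const EPS = 0.0001;
const sdSphere = (p, radius) => Math.hypot(...p) - radius;

function estimateNormal([x, y, z]) {
  const d = (p) => sdSphere(p, 1.0); // the "doModel" of this sketch

  const n = [
    d([x + EPS, y, z]) - d([x - EPS, y, z]),
    d([x, y + EPS, z]) - d([x, y - EPS, z]),
    d([x, y, z + EPS]) - d([x, y, z - EPS]),
  ];

  const len = Math.hypot(...n);
  return n.map((v) => v / len); // normalize
}

// On a unit sphere the normal at (1, 0, 0) points straight along +x
console.log(estimateNormal([1, 0, 0])); // [1, 0, 0]
```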
Phong lighting
My personal philosophy is very much:
It’s important to understand how things work, but I’m less focused on implementing everything from scratch and more intrigued by applying those concepts to create my own sketches and scenes. That’s why I was super excited to come across stack.gl/packages.
The stackgl ecosystem is full of little GLSL modules that you can glue together to create all kinds of effects.
Interested in adding lighting to the scene? What type would you prefer? Lambert, Phong, Beckmann, or Specular? Just grab the associated module and plug it into the scene.
I chose glsl-specular-blinn-phong:
#pragma glslify: blinnPhongSpec = require('glsl-specular-blinn-phong')

vec3 lightPos = vec3(1, 1, 1);
vec3 tint = vec3(0.05, 0.0, 0.97);

vec2 collision = raymarch(rayOrigin, rayDirection);

if (collision.x > -0.5) {
  vec3 pos = rayOrigin + rayDirection * collision.x;
  vec3 nor = normal(pos);
  vec3 eyeDirection = normalize(rayOrigin - pos);
  vec3 lightDirection = normalize(lightPos - pos);

  float power = blinnPhongSpec(lightDirection, eyeDirection, nor, 0.5);
  color = power * tint;
}
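The math behind the Blinn-Phong specular term is compact: build the half-vector between the light and eye directions, then raise its alignment with the surface normal to the shininess power. An illustrative JavaScript sketch of that math (not the module’s source):

```javascript
// Blinn-Phong specular: highlight strength from the angle between the
// surface normal and the half-vector between light and eye directions.
const dot = (a, b) => a.reduce((sum, v, i) => sum + v * b[i], 0);
const normalize = (a) => {
  const len = Math.hypot(...a);
  return a.map((v) => v / len);
};

function blinnPhongSpec(lightDir, eyeDir, normal, shininess) {
  const halfVector = normalize(lightDir.map((v, i) => v + eyeDir[i]));
  return Math.pow(Math.max(0, dot(normal, halfVector)), shininess);
}

// Light and eye both aligned with the normal: maximum highlight
console.log(blinnPhongSpec([0, 0, 1], [0, 0, 1], [0, 0, 1], 0.5)); // 1
```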
Iridescent material
Stackgl isn’t the only place where you can find useful code. My other favourite option is Shadertoy. I’m not going to lie, most things on Shadertoy were too daunting for me. I couldn’t even begin to figure out what the code was doing.
That is, until I discovered that most work on Shadertoy uses a combo of raymarching + SDFs. This was certainly a lightbulb moment for me. It’s like suddenly this cryptic code was deciphered and I could understand what it said.
I’ve been obsessed with iridescence and have been bookmarking cool shaders. Once I learnt the raymarching technique, that was it. I could revisit these shaders and try to understand how they work.
One such shader was Thomas Hooper’s Crystals. It’s way more complex than our scene, but the general structure is the same. There’s a function for generating the geometry, there’s a raymarching loop, and after the collision check comes the bit where the iridescence effect is applied.
Let’s add that to our scene.
vec3 pal(in float t, in vec3 a, in vec3 b, in vec3 c, in vec3 d) {
  return a + b * cos(6.28318 * (c * t + d));
}

vec3 spectrum(float n) {
  return pal(n, vec3(0.5, 0.5, 0.5), vec3(0.5, 0.5, 0.5), vec3(1.0, 1.0, 1.0), vec3(0.0, 0.33, 0.67));
}

const float GAMMA = 2.2;

vec3 gamma(vec3 color, float g) {
  return pow(color, vec3(g));
}

vec3 linearToScreen(vec3 linearRGB) {
  return gamma(linearRGB, 1.0 / GAMMA);
}
if (collision.x > -0.5) {
  vec3 pos = rayOrigin + rayDirection * collision.x;
  vec3 nor = normal(pos);
  vec3 eyeDirection = normalize(rayOrigin - pos);
  vec3 lightDirection = normalize(lightPos - pos);
  vec3 reflection = reflect(rayDirection, nor);
  vec3 dome = vec3(0.0, 1.0, 0.0);

  // Base layer: spectrum gradient, perturbed by position
  vec3 perturb = sin(pos * 10.);
  color = spectrum(dot(nor + perturb * .05, eyeDirection) * 2.);

  // Specular: concentric light bands
  float specular = clamp(dot(reflection, lightDirection), 0., 1.);
  specular = pow((sin(specular * 20. - 3.) * .5 + .5) + .1, 32.) * specular;
  specular *= .1;
  specular += pow(clamp(dot(reflection, lightDirection), 0., 1.) + .3, 8.) * .1;

  // Shadow: darken surfaces facing away from the sky dome
  float shadow = pow(clamp(dot(nor, dome) * .5 + 1.2, 0., 1.), 3.);

  color = color * shadow + specular;
  color = linearToScreen(color);
}
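The pal function above is Inigo Quilez’s cosine palette trick: each channel is a cosine wave with its own frequency and phase, which produces smooth, cyclic color gradients from a single input. A quick JavaScript sketch of the same math (illustrative only; the GLSL version uses 6.28318 as an approximation of 2π):

```javascript
// Cosine palette: each channel is offset + amplitude * cos(2π * (freq * t + phase))
function pal(t, a, b, c, d) {
  return a.map((av, i) => av + b[i] * Math.cos(2 * Math.PI * (c[i] * t + d[i])));
}

// Same parameters as the spectrum() function in the shader
function spectrum(n) {
  return pal(n, [0.5, 0.5, 0.5], [0.5, 0.5, 0.5], [1, 1, 1], [0.0, 0.33, 0.67]);
}

// At n = 0 the red channel peaks: 0.5 + 0.5 * cos(0) = 1
console.log(spectrum(0)[0]); // 1
```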
There are three layers to the iridescent material: the base layer (the funky gradients), a little bit of shadow, and specular (the concentric light bands). Try toggling them on and off with the slider to see their effects.
Mix Phong and Iridescence
One last little tweak to the lighting. We can actually blend the Phong and iridescence effects, which lets you have tinted iridescent objects.
There’s not a whole lot to it: calculate the colors for the two effects, then blend them with the mix function.
if (collision.x > -0.5) {
  float power = blinnPhongSpec(lightDirection, eyeDirection, nor, 0.5);
  vec3 baseColor = power * tint;

  color = color * shadow + specular;
  color = mix(baseColor, color, mixBaseAndIridescent);
  color = linearToScreen(color);
}
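For reference, GLSL’s mix is just per-channel linear interpolation: mix(a, b, t) = a * (1 - t) + b * t. An illustrative JavaScript version, with a hypothetical iridescent color standing in for the computed one:

```javascript
// Per-channel linear interpolation, same as GLSL's mix()
const mix = (a, b, t) => a.map((v, i) => v * (1 - t) + b[i] * t);

const baseColor = [0.05, 0.0, 0.97]; // the Blinn-Phong tint from the shader
const iridescent = [1.0, 0.5, 0.2];  // hypothetical iridescence result

console.log(mix(baseColor, iridescent, 0.0)); // [0.05, 0, 0.97] — pure base color
console.log(mix(baseColor, iridescent, 0.5)); // an even blend of the two
```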
Crystal geometry
We’ve nailed the look, but what about the crystal shape?
You can file this under “stuff I don’t quite understand, but that’s not going to stop me from using it.” The crystal geometry is a Rhombic Triacontahedron, which I discovered in a tutorial by The Art of Code.
This shape is created by folding a plane onto itself using some “magic numbers” and along a “magic direction.” We repeat the process a few times until we achieve the desired crystal shape.
Try using the slider to observe how the shape changes with each fold.
float sdCrystal(vec3 p) {
  float c = cos(3.1415 / 5.), s = sqrt(0.75 - c * c);
  vec3 n = vec3(-0.5, -c, s);

  p = abs(p);
  p -= 2. * min(0., dot(p, n)) * n;

  p.xy = abs(p.xy);
  p -= 2. * min(0., dot(p, n)) * n;

  p.xy = abs(p.xy);
  p -= 2. * min(0., dot(p, n)) * n;

  float d = p.z - 1.;
  return d;
}
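If you want to poke at the folding numerically, here’s an illustrative JavaScript port of sdCrystal. As with any SDF, negative values mean the point is inside the shape and positive values mean it’s outside:

```javascript
// JavaScript port of sdCrystal for experimentation, mirroring the GLSL above
const dot3 = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

function sdCrystal(p) {
  // The "magic numbers": fold direction derived from cos(π/5)
  const c = Math.cos(Math.PI / 5);
  const s = Math.sqrt(0.75 - c * c);
  const n = [-0.5, -c, s];

  // Reflect p across the plane with normal n, but only when p is behind it
  const fold = (q) => {
    const k = 2 * Math.min(0, dot3(q, n));
    return q.map((v, i) => v - k * n[i]);
  };

  p = p.map(Math.abs);
  p = fold(p);
  p = [Math.abs(p[0]), Math.abs(p[1]), p[2]];
  p = fold(p);
  p = [Math.abs(p[0]), Math.abs(p[1]), p[2]];
  p = fold(p);

  return p[2] - 1; // distance to the final folded plane
}

// The origin sits inside the crystal (negative distance)...
console.log(sdCrystal([0, 0, 0])); // -1
// ...while a point far along z is outside (positive distance)
console.log(sdCrystal([0, 0, 3]) > 0); // true
```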