Bake a pretty — or computationally challenging — shader into a texture (Unity)

Sneha Belkhale
Aug 7, 2019


Texture baking is a very common technique in computer graphics for transferring the details of your shader into a texture. This is useful if your shader is computationally heavy but produces a static result, e.g. complex noise.

Many modeling applications have this functionality built in. If you have a pipeline of transferring assets from Blender/Houdini/Maya, then you may want to do the texture baking there. However, if you primarily work out of Unity, this tutorial is for you. With this technique, you can effectively save a custom shader’s results that were manipulated in real time, and use the generated texture in any other application with a default shader (Sketchfab, etc.).

So first, let’s start by laying out the problems:

  • Unwrap a mesh, with its shader applied, into UV coordinate space.
  • Render the unwrapped mesh to a texture.
  • Save the texture to disk.

FYI, if you just wanna see some code (as I always do), it’s here.

Unwrapping a mesh

Unwrapping a mesh is the process of mapping its 3D vertex coordinates to 2D UV coordinate space. To do this in a vertex shader, we can set the x and y components of each vertex to its UV coordinates. Setting the z component to 0 will leave us with an unwrapped mesh in the XY plane, at z = 0.

To try this out, duplicate the shader you want to bake, and adjust the vertex positions with the following line:

v.vertex = float4(v.uv.xy, 0.0, 1.0);

Applying this new shader to your original mesh will show you the unwrapped mesh.

Unwrapped Unity sphere, with an FBM noise shader applied
Unwrapped cool tree from Sketchfab, with an FBM noise shader applied

How to render this unwrapped mesh to a texture?

Orthographic cameras! With orthographic projection, there are no depth cues: an object’s size in the rendered image stays constant regardless of its distance from the camera. This allows us to draw a perfect 2D UV grid without distortion.

Perspective vs. Orthographic projection

To start the baking process (we add two cups of flour?), we need to attach a script to the camera that is going to handle the baking logic. We attach it to a camera because we want to draw our mesh, using Graphics.DrawMeshNow, after the scene has finished rendering. For this, we need access to the OnPostRender function.
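As a sketch, a minimal ShaderBaker skeleton might look like the following. The field names (objectToBake, uvMaterial) match the snippet below, but this layout is an assumption; the project’s actual ShaderBaker.cs may be organized differently.

using UnityEngine;

// Hypothetical skeleton; the project's ShaderBaker.cs may be organized differently.
[RequireComponent(typeof(Camera))]
public class ShaderBaker : MonoBehaviour
{
    public GameObject objectToBake; // the object whose shader we want to bake
    public Material uvMaterial;     // the UV-unwrapping version of that shader
    public int width = 1024;
    public int height = 1024;

    bool bakeRequested;

    void Update()
    {
        // queue a bake; the draw itself must wait until the scene has rendered
        if (Input.GetKeyDown(KeyCode.M))
            bakeRequested = true;
    }

    void OnPostRender()
    {
        // OnPostRender only fires on scripts attached to a camera
        if (!bakeRequested) return;
        bakeRequested = false;
        Bake(); // the baking logic shown below
    }

    void Bake()
    {
        // ... baking logic from the next snippet ...
    }
}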

In the example project, I have the following code happen in ShaderBaker.cs on the “M” key down event (because binding things to random keys is just so easy lol).

Mesh M = objectToBake.GetComponent<MeshFilter>().mesh;
// create a new render texture
RenderTexture rt = RenderTexture.GetTemporary(width, height);

// set the active render target
Graphics.SetRenderTarget(rt);
// save the current matrix state
GL.PushMatrix();
// load an orthographic projection
GL.LoadOrtho();
// set the active material to be the unwrapping material we made earlier
uvMaterial.SetPass(0);
// draw the mesh; the matrix does not matter because we are not using it for any projection in the shader
Graphics.DrawMeshNow(M, Matrix4x4.identity);

// ** save to disk here **

// reset state
Graphics.SetRenderTarget(null);
RenderTexture.ReleaseTemporary(rt);
GL.PopMatrix();

There is a lot to say about what’s going on above, but how to say it depends on your familiarity with the graphics pipeline in general… I guess the two most interesting things here are loading the orthographic projection with GL.LoadOrtho and activating the proper UV-unwrapping material for rendering using SetPass.

The UV-unwrapping shader will be identical to the shader you want to bake, except for two line changes in the vertex shader. The first change, mentioned earlier, is the remapping of vertex coordinates to UV space.

v.vertex = float4(v.uv.xy, 0.0, 1.0);

Secondly, we will need to replace the standard line:

o.vertex = UnityObjectToClipPos(v.vertex);

with:

o.vertex = mul(UNITY_MATRIX_P, v.vertex);

UnityObjectToClipPos essentially multiplies the vertex position by the model, view, and projection matrices (clip = P × V × M × v). We do not care about the model matrix of this object, since we are rendering raw UV coordinates now. However, we do still need to project the UV coordinates to screen space, which we can do with just the projection matrix, UNITY_MATRIX_P. Since we called GL.LoadOrtho earlier, this projection maps the (0, 0) to (1, 1) range, exactly the UV square, onto the full render target.

Another thing to note is that the Unity docs specify that DrawMeshNow does not include lighting information, which means this implementation will only work with unlit shaders. If you want to bake a shader with full integration of lighting and shadows, you would have to look into using the DrawMesh function.

Save texture to disk

At this point, we should have the baked shader texture map in the render texture. A few googles were all I needed to figure this out, as this part of the process is quite common. The steps involve converting the RenderTexture to a Texture2D, and then encoding the result to a PNG. This code can be found in ShaderBaker.cs.
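For reference, here is a minimal sketch of what such a SaveTexture helper might look like; the actual version in ShaderBaker.cs may differ. It assumes the render texture is still valid when called.

using System.IO;
using UnityEngine;

// Hypothetical sketch of a SaveTexture helper; ShaderBaker.cs may differ.
static void SaveTexture(RenderTexture rt, string name)
{
    // copy the render texture into a readable Texture2D
    Texture2D tex = new Texture2D(rt.width, rt.height, TextureFormat.RGBA32, false);
    RenderTexture.active = rt;
    tex.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
    tex.Apply();
    RenderTexture.active = null;

    // encode the result to a PNG and write it to disk
    byte[] png = tex.EncodeToPNG();
    File.WriteAllBytes(Application.dataPath + "/" + name + ".png", png);
}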

Woo! Now we have a .png texture representing the unlit shader of choice. However, as usual, fundamental problems arise. If you try using your new texture as _MainTex on any default Unity material, you may see black artifacts around the edges of the UV islands:

Artifacts caused by blackness around the exact UV seams

This is because the texture was baked using exact UV coordinates, and does not account for the sampling variance that occurs when the texture is sampled at island edges. To combat this problem, I found that other modeling software implements “island border expansion” on newly baked textures, which is simply a blending operation that expands the island borders.

I decided to go about this “island border expansion” by implementing a second dilation pass on the output texture.

Dilation is a method of expanding shapes that was originally defined for binary (black or white) images.

The algorithm takes a structuring element, in this case a 3x3 tile, and iterates over every pixel of the image, centering the tile at that pixel. If there is a white pixel within the tile, we convert the center pixel to white.

To extend this to colored images, we need a different conversion rule. I decided to have the structuring element operate on the difference between each pixel’s color and a predefined background color. If there is a pixel within the structuring element whose color value is far enough away from the background color, we can convert the given pixel to that color.
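To make that rule concrete, here is a hypothetical CPU-side equivalent of a single dilation pass. The project performs this on the GPU in Dilate.shader; the background and threshold parameters here are assumptions for illustration.

using UnityEngine;

// Hypothetical CPU-side equivalent of one dilation pass; the project
// implements this on the GPU in Dilate.shader.
static Color[] DilatePass(Color[] src, int w, int h, Color background, float threshold)
{
    Color[] dst = new Color[src.Length];
    for (int y = 0; y < h; y++)
    {
        for (int x = 0; x < w; x++)
        {
            Color c = src[y * w + x];
            dst[y * w + x] = c;
            // only pixels close to the background color are replaced
            if (ColorDistance(c, background) > threshold) continue;
            // scan the 3x3 structuring element for a non-background neighbor
            for (int dy = -1; dy <= 1; dy++)
            {
                for (int dx = -1; dx <= 1; dx++)
                {
                    int nx = Mathf.Clamp(x + dx, 0, w - 1);
                    int ny = Mathf.Clamp(y + dy, 0, h - 1);
                    Color n = src[ny * w + nx];
                    if (ColorDistance(n, background) > threshold)
                        dst[y * w + x] = n; // expand the island border by one pixel
                }
            }
        }
    }
    return dst;
}

static float ColorDistance(Color a, Color b)
{
    return Mathf.Abs(a.r - b.r) + Mathf.Abs(a.g - b.g) + Mathf.Abs(a.b - b.b);
}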

This implementation can be found in Dilate.shader. To incorporate this into our existing setup, we add the following lines to the baking code:

...
Graphics.DrawMeshNow(M, Matrix4x4.identity);
Graphics.SetRenderTarget(null);
...
// create a second render target
RenderTexture rt2 = RenderTexture.GetTemporary(width, height);
// use the dilate shader on our first render target, output to rt2
Graphics.Blit(rt, rt2, dilateMat);
// save rt2 to png
SaveTexture(rt2, objectToBake.name);
// reset
RenderTexture.ReleaseTemporary(rt);
RenderTexture.ReleaseTemporary(rt2);
GL.PopMatrix();

Before & after dilation

This expands the textures nicely! One caveat is that this implementation of dilation only works if the background color of the texture is distinct from the colors of the shader. I added a property in ShaderBaker.cs to set the background color :)

Also, there are still veryyy small artifacts at the seams, and I have yet to find out what’s going on there -__-

Since the code in this article is pretty choppy, I would really recommend looking through the full code in the GitHub project. Here is a summary to help navigate it:

UVUnwrap.shader — this shader is a duplicate of the one you want to bake, except with the vertex shader modified to render the vertices in UV space.

Dilate.shader — this shader is responsible for the dilation post processing of the output texture.

ShaderBaker.cs — attach this script to the camera; it is responsible for rendering the mesh to a texture. It has public fields where you should assign the object you want to bake, a material with the unwrapped version of the shader you want to bake (UVUnwrap.shader), and a material with the dilate shader (Dilate.shader). You should also set the backgroundColor property to a color that is distinct from the colors in your shader.

That should be all~~ this was a fun exploration for me and I hope it helps a dev out there :)

snayz

Check out other related repos @ https://github.com/sneha-belkhale

Other VR/Graphics projects @ https://codercat.tk

And renders / screenshots @ https://www.instagram.com/snayss/
