Three.js —— Texture

Mesh is an important concept in Three.js, serving as a key component for rendering three-dimensional objects in a scene. A Mesh creates a visual object that can be displayed in the renderer by combining Geometry and Material.

In Three.js, a Material can reference multiple Textures to describe the details of an object's surface: normal maps or displacement maps for surface bumps, environment maps for surface reflections, the kd (diffuse coefficient, which is also part of the material's properties), and shadows that do not require real-time calculation (an Ambient Occlusion map). All of these can be implemented with Textures.
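
As a quick refresher, here is a minimal sketch of that relationship in code (the box geometry, plain white material, and module-style import are placeholders for illustration, not part of any specific demo):

```typescript
import * as THREE from 'three';

// A Mesh is Geometry + Material: the geometry defines the shape,
// the material (and any textures it references) defines how the surface is shaded.
const geometry = new THREE.BoxGeometry(1, 1, 1);
const material = new THREE.MeshStandardMaterial({ color: 0xffffff });
const mesh = new THREE.Mesh(geometry, material);

const scene = new THREE.Scene();
scene.add(mesh);
```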

Common Types of Textures#

Textures can be classified into various types based on different usage scenarios. Below are some of the more common types.

Color Map#

Color Map is used to apply color information to the surface of a 3D model. As part of the object's material, it gives the model its base color and can also be used to simulate surface texture, patterns, and fine details.

Its functions include:

  • Base Color: The most common use is to provide a basic color for the object. You can apply a solid color texture to the object, giving it a specific appearance. This can be used to simulate various types of materials, such as metal, plastic, wood, etc.
  • Texture: You can apply a texture that contains patterns, textures, or details to the object, providing more control when simulating the visual details of the object's surface. This can make the object look more realistic, such as wood grain or stone patterns.
  • Color Variation: By using different color areas in the texture, you can make the object's color vary in different parts, achieving artistic effects or emphasizing specific parts of the object.
  • Custom Effects: You can use Color Texture to achieve various visual effects, such as adding a brand logo to a vehicle model or giving a character model a unique appearance.
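
A minimal sketch of wiring a color map into a material (the texture path is hypothetical; any RGB image works):

```typescript
import * as THREE from 'three';

// Hypothetical asset path: any ordinary RGB image can serve as a color map.
const colorMap = new THREE.TextureLoader().load('/textures/wood_color.jpg');

// The color map goes into the material's `map` slot; the sampled texel color
// is multiplied with `color`, so leaving `color` white shows the texture as-is.
const material = new THREE.MeshStandardMaterial({
  color: 0xffffff,
  map: colorMap,
});
```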

Alpha Map#

Alpha Map is used to specify the transparency value (Alpha channel) for each pixel on the surface of a 3D model. Transparency maps allow you to control the transparency of the object during rendering, achieving transparent, semi-transparent, and opaque effects.

image

As shown in the image, after the leaf texture on the left is processed with the black-and-white alpha texture (white areas remain visible, black areas become invisible), the black background is masked out, leaving the clean leaf cut-out on the right.

Its functions include:

  • Transparent Effects: Transparency maps can make certain areas of the object transparent, allowing you to achieve transparent effects like glass, water, smoke, etc.
  • Semi-Transparent Effects: With transparency maps, you can make certain areas of the object semi-transparent, visually simulating the semi-transparent characteristics of materials, such as thin fog or clouds.
  • Transparent Parts of Opaque Objects: Even opaque objects may have areas that need to be transparent, such as a glass window on an object.
  • Complex Patterns and Textures: Transparency maps can be combined with color maps to achieve complex patterns and textures, where some areas are transparent.
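
A minimal sketch of the leaf cut-out from the image above (texture paths are hypothetical):

```typescript
import * as THREE from 'three';

const loader = new THREE.TextureLoader();
const leafColor = loader.load('/textures/leaf_color.png'); // hypothetical paths
const leafAlpha = loader.load('/textures/leaf_alpha.png'); // white = visible, black = invisible

const material = new THREE.MeshStandardMaterial({
  map: leafColor,
  alphaMap: leafAlpha,
  transparent: true,      // without this the alpha map is ignored
  alphaTest: 0.5,         // optional: discard nearly transparent fragments for a hard cut-out
  side: THREE.DoubleSide, // thin foliage is usually visible from both sides
});

// A flat plane is enough for cut-out foliage like the leaf above.
const leaf = new THREE.Mesh(new THREE.PlaneGeometry(1, 1), material);
```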

Normal Map#

Normal Map is used to simulate surface details during rendering, enhancing the visual effects of the object. Normal maps do not change the actual geometric structure of the object but alter the lighting calculations by storing normal information at each pixel, making the object appear to have more detail and depth during rendering. Since it does not actually change the geometric structure, the shadows generated under real-time lighting will only reflect the original geometric structure.

Its functions include:

  • Adding Detail: Normal maps can add detail to the object's surface without increasing the polygon count. This makes the object look more realistic, as the lighting effects change due to the normals, creating a bump effect.
  • Simulating Bump Effects: Normal maps can be used to simulate bump effects on the object's surface, such as dents, protrusions, wrinkles, etc. These details will create shadows and highlights in lighting calculations, making the object appear more textured.
  • Reducing Polygon Count: Using normal maps can avoid the need for a large number of polygons to represent the object's details, thereby reducing computational load and improving performance.
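
A minimal sketch of applying a normal map (paths are hypothetical). Note that only the lighting calculation changes; the geometry itself stays low-poly:

```typescript
import * as THREE from 'three';

const loader = new THREE.TextureLoader();

const material = new THREE.MeshStandardMaterial({
  map: loader.load('/textures/brick_color.jpg'),        // hypothetical paths
  normalMap: loader.load('/textures/brick_normal.jpg'),
  // normalScale scales the bump effect along U and V; (1, 1) is the default.
  normalScale: new THREE.Vector2(1, 1),
});
```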

Ambient Occlusion Map#

Ambient Occlusion Map is used to simulate ambient occlusion effects during rendering. It is a grayscale image where each pixel represents the degree of occlusion or relative shading on the object's surface, allowing for adjustments to the lighting effects during rendering, enhancing the sense of detail and depth.

Its functions include:

  • Simulating Occlusion Effects: Ambient Occlusion Maps can simulate occlusion and shading in the environment, making dark corners and recessed areas appear darker. This increases the depth and visual detail of the object.
  • Enhancing Surface Texture: By storing per-pixel occlusion information in the ambient occlusion map, the object's surface can appear more textured and detailed.
  • Increasing Realism: Using ambient occlusion maps can enhance the realism of the rendering, making the object appear closer to the lighting and occlusion effects found in the real world.
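
A minimal sketch of plugging in an ambient occlusion map (the path is hypothetical). Older three.js releases sample the aoMap from a second UV set, so the usual trick is to reuse the first one:

```typescript
import * as THREE from 'three';

const aoMap = new THREE.TextureLoader().load('/textures/crate_ao.jpg'); // hypothetical path

const geometry = new THREE.BoxGeometry(1, 1, 1);
// Older three.js versions read the aoMap from the "uv2" attribute;
// recent versions fall back to the first UV set automatically.
geometry.setAttribute('uv2', geometry.attributes.uv);

const material = new THREE.MeshStandardMaterial({
  aoMap: aoMap,
  aoMapIntensity: 1.0, // 0 disables the effect, 1 applies it fully
});

const crate = new THREE.Mesh(geometry, material);
```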

Metalness Map#

Metalness Map is used to control the metallic property of the object's surface during rendering. Metalness is a property that determines whether an object is metallic or non-metallic (insulating). Metalness maps are very common in PBR (Physically Based Rendering) to achieve more realistic rendering effects.

  • Controlling Metalness: Each pixel of the metalness map can represent the metalness property of the corresponding area. For metallic objects, the metalness value is high, while for non-metallic objects, the metalness value is low.
  • Influencing Reflection: Metalness affects how the object reflects light. Metallic objects have a high reflectivity, while non-metallic objects have relatively lower reflectivity.
  • Increasing Realism: By using metalness maps, you can make different parts of the object exhibit different metallic properties, thereby increasing the realism of the rendering.
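
A minimal sketch, assuming a grayscale metalness texture at a hypothetical path (three.js samples the blue channel of the map and multiplies it by the scalar `metalness`):

```typescript
import * as THREE from 'three';

const metalnessMap = new THREE.TextureLoader().load('/textures/metalness.jpg'); // hypothetical path

const material = new THREE.MeshStandardMaterial({
  metalnessMap: metalnessMap,
  // Keep the scalar at 1 so the map alone decides which areas are metallic.
  metalness: 1.0,
});
// Metallic reflections only become visible with an environment,
// e.g. via material.envMap or scene.environment.
```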

Roughness Map#

Roughness Map is used to control the roughness property of the object's surface during rendering. Roughness is a property that determines the smoothness of the object's surface; rough surfaces scatter light, making reflections blurrier, while smooth surfaces produce sharper reflections.

Its functions include:

  • Controlling Smoothness: Each pixel of the roughness map can represent the roughness property of the corresponding area. A high roughness value means a rougher surface, while a low roughness value indicates a smoother surface.
  • Influencing Reflection: The roughness of the object's surface affects the degree of light scattering, thus influencing the object's reflection behavior. Smooth surfaces produce clear reflections, while rough surfaces produce blurred reflections.
  • Increasing Realism: By using roughness maps, you can set different roughness levels for different parts of the object, thereby enhancing the realism and visual detail of the rendering.
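
A minimal sketch along the same lines (hypothetical path; three.js samples the green channel of the map and multiplies it by the scalar `roughness`):

```typescript
import * as THREE from 'three';

const roughnessMap = new THREE.TextureLoader().load('/textures/roughness.jpg'); // hypothetical path

const material = new THREE.MeshStandardMaterial({
  roughnessMap: roughnessMap,
  // Keep the scalar at 1 so the map controls the result:
  // dark texels -> smooth, sharp reflections; bright texels -> rough, blurred reflections.
  roughness: 1.0,
});
```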

Code#

Here I wrote a demo to get a better feel for how the corresponding textures behave. Below are some important points I ran into while learning.

Texture Magnification & Texture Minification#

Before discussing texture magnification/minification, it is important to clarify a premise: the smallest unit of a texture, the texel, is not the same thing as a pixel. The former is determined by the texture's own resolution, while the latter is determined by the physical display, so one texel may cover multiple pixels, or one pixel may cover multiple texels. In other words, texels and pixels do not necessarily line up one-to-one, which can lead to artifacts (such as distortion or moiré patterns) when a texture is applied to the model's surface.

image

Texture Magnification#

For example, if a 200 * 200 texture image needs to be applied to a 500 * 500 plane, distortion is inevitable. Keep in mind that each texel is mapped onto the model through a (u, v) transformation; in this case the texture gets stretched, and the content of a single texel ends up covering about 6.25 screen pixels (500² / 200² = 6.25), meaning those pixels all share the same properties, whether color or normal direction. This obviously makes the texture look coarse and blurry (as shown in the left part of the image below).

image

Although the image above is not from my example, the effect on the left is exactly what happens when multiple pixels reuse the same texel value. In Three.js, the corresponding option is THREE.NearestFilter.

NearestFilter returns the value of the texture element that is nearest (in Manhattan distance) to the specified texture coordinates.

The effect of NearestFilter is visibly poor, but fortunately, it does not require additional calculations, making it suitable for some unimportant content.

In addition, Three.js also provides the option of LinearFilter for texture magnification, which is based on Bilinear Interpolation.

LinearFilter is the default and returns the weighted average of the four texture elements that are closest to the specified texture coordinates, and can include items wrapped or repeated from other parts of a texture, depending on the values of wrapS and wrapT, and on the exact mapping.

As the quoted description says, the value at the target pixel is the weighted average of the four nearest texels, which is exactly the idea of bilinear interpolation (a detailed introduction can be found in GAMES101 p9, 0:28; it is not expanded on here). This method gives the image a softer gradient (see the middle part of the image above).
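
A minimal sketch of switching the magnification filter (the texture path is hypothetical):

```typescript
import * as THREE from 'three';

const texture = new THREE.TextureLoader().load('/textures/pixel_art.png'); // hypothetical path

// magFilter is used when one texel covers several screen pixels (magnification).
texture.magFilter = THREE.NearestFilter;   // blocky, but no extra filtering cost
// texture.magFilter = THREE.LinearFilter; // default: bilinear, softer gradients

const material = new THREE.MeshBasicMaterial({ map: texture });
```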

Texture Minification and Mipmap#

In contrast to magnification, minification occurs when multiple texels have to be rendered within a single pixel (as illustrated in the image below).

image

From the image, it can be seen that multiple texels fall within one pixel. If, as before, we simply take the texel nearest to the pixel center (THREE.NearestFilter), most of that information is discarded and the result is distortion. So, can bilinear interpolation (THREE.LinearFilter) solve the problem? If there are only about 4 texels inside a pixel, it can; but once a pixel covers more than 4 texels, the distortion returns unless we keep increasing the number of texels that participate in the interpolation. Bicubic interpolation, for example, improves precision by averaging the surrounding 16 samples, but it is easy to see that fighting distortion by piling on computation places a significant demand on performance. Therefore, to reduce this runtime overhead, graphics introduces a technique called mipmapping: pre-filtered, downsampled copies of the texture are computed ahead of time and stored alongside it, so that during minification these precomputed levels can be sampled directly instead of averaging many texels on the fly. This is a classic space-for-time trade-off (a detailed introduction can be found in GAMES101 p9, 0:43, or in the reference Real-Time Rendering 4th).

image

In Three.js, the options related to mipmap include:

  • THREE.NearestMipmapNearestFilter: Selects the mipmap that best matches the size of the pixel to be shaded and uses NearestFilter conditions (the nearest texel to the pixel center) to generate the texture value.

  • THREE.NearestMipmapLinearFilter: Selects the two mipmaps that best match the size of the pixel to be textured and uses NearestFilter standards to generate texture values from each mipmap. The final texture value is a weighted average of these two values.

  • THREE.LinearMipmapNearestFilter: Selects the mipmap that best matches the size of the pixel to be textured and uses LinearFilter standards (the weighted average of the four texels nearest to the pixel center) to generate the texture value.

  • THREE.LinearMipmapLinearFilter: (default) Selects the two mipmaps that best match the size of the pixel to be textured and uses LinearFilter standards to generate texture values from each mipmap. The final texture value is a weighted average of these two values.
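
A minimal sketch of choosing a minification filter (hypothetical path); the mipmap-based filters only work while generateMipmaps stays enabled:

```typescript
import * as THREE from 'three';

const texture = new THREE.TextureLoader().load('/textures/checker.png'); // hypothetical path

// minFilter is used when many texels fall inside one screen pixel (minification).
texture.minFilter = THREE.LinearMipmapLinearFilter; // default, often called trilinear filtering
// texture.minFilter = THREE.NearestFilter;         // cheapest, but tends to shimmer/alias
texture.generateMipmaps = true; // default; required by the mipmap-based filters

// If the filter is changed after the texture has already been uploaded to the GPU:
texture.needsUpdate = true;
```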

Three.js also provides official examples of texture magnification/minification filtering that are worth referring to.

References#

GAMES101 - Introduction to Modern Computer Graphics - Yan Lingqi

Discussing Texel Density that is Easily Overlooked

Notes on "Real-Time Rendering 4th" - Chapter 6 Texture Mapping

Direct3D Graphics Learning Guide
