Get started with WebGL

Use WebGL to create incredibly fast graphics.

WebGL basics

Using the WebGL API, you can create high-performance 2D and 3D graphics by directly programming and controlling the computer's graphics processing unit (GPU). WebGL renders on a canvas element as a drawing context, much like the 2D context, but provides very low-level access to your computer's GPU.
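For example, here's a minimal sketch of getting a WebGL drawing context from a canvas element (the "glCanvas" id is illustrative); the WebGL context and setup topic walks through this in detail:

    // A minimal sketch, assuming the page contains <canvas id="glCanvas">.
    var canvas = document.getElementById("glCanvas");

    // Older browsers exposed WebGL under the "experimental-webgl" name.
    var gl = canvas.getContext("webgl") || canvas.getContext("experimental-webgl");

    if (!gl) {
        console.log("WebGL isn't available in this browser.");
    }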

WebGL is a little different from traditional web programming, because you use two languages to write every app. Part of the code is written in JavaScript, and the other part is written in GLSL (OpenGL Shading Language), a low-level C-like language. The JavaScript portion is where images are loaded, colors are set, and objects are described. The GLSL code translates the images, colors, and vectors to run on the GPU through shader programs. The combination of the two gives WebGL incredibly fast graphics.
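As a sketch of how the two languages meet, the GLSL source is typically kept in JavaScript strings and compiled at runtime. The shaders below are illustrative, not from the Warp source:

    // GLSL vertex shader: runs once per vertex.
    var vertexShaderSource =
        "attribute vec3 position;\n" +
        "void main() {\n" +
        "    gl_Position = vec4(position, 1.0);\n" +
        "}";

    // GLSL fragment shader: colors each pixel (solid red here).
    var fragmentShaderSource =
        "precision mediump float;\n" +
        "void main() {\n" +
        "    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);\n" +
        "}";

    // JavaScript compiles the GLSL and hands it to the GPU.
    var vertexShader = gl.createShader(gl.VERTEX_SHADER);
    gl.shaderSource(vertexShader, vertexShaderSource);
    gl.compileShader(vertexShader);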

The example we're using doesn't contain any WebGL libraries. In practice, most developers use libraries for tough or repetitive tasks, such as matrix math or creating basic geometric shapes. For example, WebGL doesn't offer matrix functions like scale or rotate. As an essential part of using WebGL to create 3D graphics, the glMatrix library provides standard matrix transformations. As another example, you describe shapes in 3D graphics as a series of triangles. For a sphere, you need to create an array of coordinates for every vertex point to describe the triangles, calculating each point using trigonometry. Using a general-purpose library such as Three.js (used by approximately half of all WebGL webpages), you create a sphere by simply specifying the radius and number of triangles to use.
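For instance, here's a sketch of creating a sphere with Three.js, assuming the library is loaded and a THREE.Scene named scene already exists; one call replaces the trigonometry needed to place every vertex by hand:

    // Radius 1, with 20 width and 20 height segments controlling the triangle count.
    var geometry = new THREE.SphereGeometry(1, 20, 20);
    var material = new THREE.MeshBasicMaterial({ color: 0x2194ce });
    var sphere = new THREE.Mesh(geometry, material);

    scene.add(sphere); // assumes a THREE.Scene named "scene" exists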

Many developers use libraries and write only the specialized shader code that the libraries don't provide. See the WebGL demos, references, and resources section for links to libraries, tutorials, and other info.

The Warp example

The example we dig into is called Warp. Warp displays a photo that you import, and stretches and squeezes the image based on input from your mouse or touch. The example's fun to play with and the code gives you a basic jump-start into using WebGL.

Warp introduces you to:

  • Basic WebGL setup in an app.
  • How to create an array of vector points representing triangles and lines, and apply a color or a photo to the surface.
  • How to use mouse events and a little trigonometry to achieve a cool effect when stretching or squeezing the photo.

While WebGL can model 3D objects, this example uses only 2D objects for the sake of simplicity.

WebGL describes shapes as arrays of vertices, or coordinates, that represent the object broken down into a set of triangles.

WebGL graphics can range from a simple 2D geometric shape to a complex rendering such as a realistic image of an automobile, skyscraper, or anatomical surface. The surface of the objects can be rendered with a few or with many triangles, with more triangles giving greater detail. It's only a matter of the size and number of triangles. The Warp example uses 800 triangles, or 400 squares arranged into a 20 x 20 grid. The size gives good resolution for the distortion, but isn't too complex. Experiment with the grid size to see the effect on the results.
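For illustration, a grid like Warp's could be generated with a helper along these lines (makeGrid is a hypothetical function, not taken from the Warp source); each square becomes two triangles in clip-space coordinates:

    function makeGrid(rows, cols) {
        var vertices = [];
        for (var row = 0; row < rows; row++) {
            for (var col = 0; col < cols; col++) {
                // Corner coordinates of this square, mapped into -1 to 1.
                var x0 = -1 + 2 * col / cols, x1 = -1 + 2 * (col + 1) / cols;
                var y0 = -1 + 2 * row / rows, y1 = -1 + 2 * (row + 1) / rows;
                // First triangle of the square (x, y pairs).
                vertices.push(x0, y0, x1, y0, x0, y1);
                // Second triangle of the square.
                vertices.push(x1, y0, x1, y1, x0, y1);
            }
        }
        return new Float32Array(vertices);
    }

    // 20 x 20 squares = 800 triangles = 2,400 vertices (4,800 coordinates).
    var gridVertices = makeGrid(20, 20);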

Warp applies the photo as a texture. In WebGL, textures are defined as images, colors, or patterns used to cover the surface of the vector objects you create.
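Here's a sketch of loading a photo into a WebGL texture, assuming gl is a WebGL context; the image URL is a placeholder:

    var texture = gl.createTexture();
    var image = new Image();

    image.onload = function () {
        gl.bindTexture(gl.TEXTURE_2D, texture);
        // Copy the loaded photo into the bound texture.
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
        // These parameters let WebGL accept photos whose dimensions aren't powers of two.
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    };

    image.src = "photo.jpg"; // placeholder URL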

Get started writing WebGL apps

An app passes a shape to the GPU as a vector array, which typically represents a collection of triangles. Triangles can be described to the GPU:

  • as individual triangles, with three vertices per triangle
  • as triangle strips, which after the first triangle add only one vertex per additional triangle

You can also describe lines, line strips, or points to the GPU. When you pass a vector array to the GPU, you specify how the array should be read: as individual triangles, lines, or strips.
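For example, this sketch uploads a vertex array to a GPU buffer and shows the modes you can pass to gl.drawArrays to say how the array should be read (gridVertices is the hypothetical array built earlier):

    var buffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    gl.bufferData(gl.ARRAY_BUFFER, gridVertices, gl.STATIC_DRAW);

    var vertexCount = gridVertices.length / 2; // two coordinates per vertex here

    // The mode tells the GPU how to interpret the vertices
    // (after a shader program is compiled and its position attribute points at the buffer):
    gl.drawArrays(gl.TRIANGLES, 0, vertexCount);        // every 3 vertices form a triangle
    // gl.drawArrays(gl.TRIANGLE_STRIP, 0, vertexCount); // each vertex after the first 2 adds a triangle
    // gl.drawArrays(gl.LINES, 0, vertexCount);          // every 2 vertices form a line
    // gl.drawArrays(gl.POINTS, 0, vertexCount);         // each vertex is a point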

To describe triangles and other shapes, you set vertices for each point using x, y, and z coordinates. The GPU uses a right-handed 3D Cartesian coordinate system of floating-point numbers that range from -1 to 1, regardless of the size of the canvas. Zero is the origin, or center, of the display area, with three axes: x, y, and z. A right-handed coordinate system is one where positive X values go to the right, positive Y values go up, and positive Z values come out toward the viewer. It's called a right-handed system because you can hold your right hand with your thumb along the positive X axis, index finger along the positive Y axis, and the rest of your fingers curling toward the positive Z axis (toward the viewer).
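Because clip space always runs from -1 to 1, an app that responds to the mouse, like Warp, has to convert pixel coordinates into this range. Here's a sketch of that conversion (toClipSpace is a hypothetical helper):

    function toClipSpace(canvas, pixelX, pixelY) {
        var x = (pixelX / canvas.width) * 2 - 1;   // 0..width  maps to -1..1
        var y = 1 - (pixelY / canvas.height) * 2;  // 0..height maps to 1..-1 (y is up in WebGL)
        return { x: x, y: y };
    }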

(Diagram: the right-handed Cartesian coordinate system, with x to the right, y up, and z toward the viewer.)

The WebGL rendering pipeline

Modern GPUs use a programmable rendering pipeline. Early graphics cards had built-in functions for rotating and scaling an object, but wouldn't let you change them. The programmable rendering pipeline makes it possible to write your own functions to control how shapes and images are rendered, using vertex and fragment shaders. A vertex shader controls the points, or vertices, on a shape. To rotate an object in space, your vertex shader is responsible for applying a matrix (that you provide) to rotate coordinates. To add an image to the shape, the fragment shader (also called a pixel shader) controls how the image is applied to the shape in relation to info it gets from the vertex shader. The GPU provides the pipeline that routes data in and out of the shaders, and does the final rendering on the screen.
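As a sketch, a vertex shader that rotates a shape just multiplies each vertex by a matrix the app supplies (for example, one built with glMatrix); the names below are illustrative:

    var rotatingVertexShaderSource =
        "attribute vec3 position;\n" +
        "uniform mat4 rotation;\n" +       // the matrix the app provides
        "void main() {\n" +
        "    gl_Position = rotation * vec4(position, 1.0);\n" +
        "}";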

The rendering pipeline uses these general steps:

  • An app passes in coordinates with a vector array that points to a vector buffer. The vector coordinates are passed one at a time to a vertex shader.
  • The vertex shader processes each vertex in relation to other vertices, moving coordinates, adding color references, and performing other actions.
  • The triangles are assembled and passed to the rasterizer, which interpolates the pixels between the vertices in the triangles.
  • The depth testing operation verifies that the pixel is viewable. Pixels (and objects) can be out of the viewing area, too far forward or back (based on their Z coordinate), or blocked by another object. If they're not visible, they're discarded.
  • The fragment shader colors the pixels (see the sketch after these steps). Colors or image references can be passed into the fragment shader from the vertex shader, or set from within the fragment shader itself.
  • Finally, the pixel is sent to the framebuffer, where it displays on the screen.
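To make the fragment shader step concrete, here's a minimal sketch of a GLSL fragment shader that colors each pixel by sampling a photo texture; the names are illustrative:

    var texturingFragmentShaderSource =
        "precision mediump float;\n" +
        "uniform sampler2D photo;\n" +   // the texture bound in JavaScript
        "varying vec2 texCoord;\n" +     // passed from the vertex shader, interpolated by the rasterizer
        "void main() {\n" +
        "    gl_FragColor = texture2D(photo, texCoord);\n" +
        "}";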

This is a simplified version of everything the GPU does, but it gives you an idea of the process of creating a graphic.

Up next

In WebGL context and setup, you'll see how to get a WebGLRenderingContext from the canvas element, and the basic setup needed to render your graphics.

The rest of this series covers:

  • WebGL context and setup
  • Shader programs
  • Loading photos into a texture
  • UI support
  • WebGL demos, references, and resources