WebGPU is the new WebGL. That means it is the new way to draw 3D in web browsers. It is, in my opinion, very good actually. It is so good I think it will also replace Canvas and become the new way to draw 2D in web browsers. In fact it is so good I think it will replace Vulkan as well as normal OpenGL, and become just the standard way to draw, in any kind of software, from any programming language. This is pretty exciting to me. WebGPU is a little bit irritating— but only a little bit, and it is massively less irritating than any of the things it replaces.

WebGPU goes live… today, actually. Chrome 113 shipped in the final minutes of me finishing this post and should be available in the “About Chrome” dialog right this second. If you click here, and you see a rainbow triangle, your web browser has WebGPU. By the end of the year WebGPU will be everywhere, in every browser. (All of this refers to desktop computers. On phones, it won’t be in Chrome until later this year; and Apple I don’t know. Maybe one additional year after that.)

If you are not a programmer, this probably doesn’t affect you. It might get us closer to a world where you can just play games in your web browser as a normal thing like you used to be able to with Flash. But probably not, because WebGL wasn’t the only problem there.

If you are a programmer, let me tell you what I think this means for you.

Sections below:

  • A history of graphics APIs (You can skip this)
  • What’s it like?
  • How do I use it?
    • Typescript / NPM world
    • I don’t know what a NPM is I Just wanna write CSS and my stupid little script tags
    • Rust / C++ / Posthuman Intersecting Tetrahedron

A history of graphics APIs (You can skip this)

Yo! Yogi

1991

Back in the dawn of time there were two ways to make 3D on a computer: You did a bunch of math; or you bought an SGI machine. SGI were the first people who were designing circuitry to do the rendering parts of a 3D engine for you. They had this C API for describing your 3D models to the hardware. At some point it became clear that people were going to start making plugin cards for regular desktop computers that could do the same acceleration as SGI’s big UNIX boxes, so SGI released a public version of their API so it would be possible to write code that would work both on the UNIX boxes and on the hypothetical future PC cards. This was OpenGL. `color()` and `rectf()` in IRIS GL became `glColor()` and `glRectf()` in OpenGL.

1995

When the PC 3D cards actually became a real thing you could buy, things got real messy for a bit. Instead of signing on with OpenGL Microsoft had decided to develop their own thing (Direct3D) and some of the 3D card vendors also developed their own API standards, so for a while certain games were only accelerated on certain graphics cards and people writing games had to write their 3D pipelines like four times, once as a software renderer and a separate one for each card type they wanted to support. My perception is it was Direct3D, not OpenGL, which eventually managed to wrangle all of this into a standard, which really sucked if you were using a non-Microsoft OS at the time. It really seemed like DirectX (and the “X Box” standalone console it spawned) were an attempt to lock game companies into Microsoft OSes by getting them to wire Microsoft exclusivity into their code at the lowest level, and for a while it really worked.

Shrek

2000

It is the case though it wasn’t very long into the Direct3D lifecycle before you started hearing from Direct3D users that it was much, much nicer to use than OpenGL, and OpenGL quickly got to a point where it was literally years behind Direct3D in terms of implementing critical early features like shaders, because the Architecture Review Board of card vendors that defined OpenGL would spend forever bickering over details whereas Microsoft could just implement stuff and expect the card vendor to work it out.

Let’s talk about shaders. The original OpenGL was a “fixed function renderer”, meaning someone had written down the steps in a 3D renderer and it performed those steps in order.

[API] → Primitive Processing → (1) Transform and Lighting → Primitive Assembly → Rasterizer → (2) Texture Environment → (2) Color sum → (2) Fog → (2) Alpha Test → Depth/Stencil → Color-buffer Blend → Dither → [Frame Buffer]

Modified Khronos Group image

Each box in the “pipeline” had some dials on the side so you could configure how each feature behaved, but you were pretty much limited to the features the card vendor gave you. If you had shadows, or fog, it was because OpenGL or an extension had exposed a feature for drawing shadows or fog. What if you want some other feature the ARB didn’t think of, or want to do shadows or fog in a unique way that makes your game look different from other games? Sucks to be you. This was obnoxious, so eventually “programmable shaders” were introduced. Notice some of the boxes in the diagram above are numbered? Those boxes became replaceable. The (1) boxes got collapsed into the “Vertex Shader”, and the (2) boxes became the “Fragment Shader”². The software would upload a computer program in a simple C-like language (upload the actual text of the program, you weren’t expected to compile it like a normal program)³ into the video driver at runtime, and the driver would convert that into configurations of ALUs (or whatever the card was actually doing on the inside) and your program would become that chunk of the pipeline. This opened things up a lot, but more importantly it set card design on a kinda strange path. Suddenly video cards weren’t specialized rendering tools anymore. They ran software.
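To make “upload the actual text of the program” concrete: the WebGL descendant of this API still works the same way as the classic glShaderSource/glCompileShader calls, and it’s literally handing the driver a string at runtime. A hedged TypeScript sketch (the `<canvas>` element is assumed):

```ts
// Sketch: the driver receives the *text* of the shader at runtime and compiles it itself.
const canvas = document.querySelector("canvas")!; // assumed to exist on the page
const gl = canvas.getContext("webgl")!;

const vertexSource = `
  attribute vec4 position;
  void main() { gl_Position = position; }
`;

const shader = gl.createShader(gl.VERTEX_SHADER)!;
gl.shaderSource(shader, vertexSource); // hand the raw source text to the driver
gl.compileShader(shader);              // the driver compiles it for its own hardware
if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
  throw new Error(gl.getShaderInfoLog(shader) ?? "shader compile failed");
}
```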

Time Magazine,

2004

Pretty shortly after this was another change. Handheld devices were starting to get to the point it made sense to do 3D rendering on them (or at least, to do 2D compositing using 3D video card hardware like desktop machines had started doing). DirectX was never in the running for these applications. But implementing OpenGL on mid-00s mobile silicon was rough. OpenGL was kind of… large, at this point. It had all these leftover functions from the SGI IRIX era, and then it had this new shiny OpenGL 2.0 way of doing things with the shaders and everything and not only did this mean you basically had two unrelated APIs sitting side by side in the same API, but also a lot of the OpenGL 1.x features were traps. The spec said that every video card had to support every OpenGL feature, but it didn’t say it had to support them in hardware, so there were certain early-90s features that 00s card vendors had decided nobody really uses, and so if you used those features the driver would render the screen, copy the entire screen into regular RAM, perform the feature on the CPU and then copy the results back to the video card. Accidentally activating one of these trap features could easily move you from 60 FPS to 1 FPS. All this legacy baggage promised a lot of extra work for the manufacturers of the new mobile GPUs, so to make it easier Khronos (which is what the ARB had become by this point) introduced an OpenGL “ES”, which stripped out everything except the features you absolutely needed. Instead of being able to call a function for each polygon or each vertex you had to use the newer API of giving OpenGL a list of coordinates in a block in memory⁴, you had to use either the fixed function or the shader pipeline with no mixing (depending on whether you were using ES 1.x or ES 2.x), etc. This partially made things simpler for programmers, and partially prompted some annoying rewrites. But as with shaders, what’s most important is the long-term strange-ing this change presaged:

Starting at this point, the decisions of Khronos increasingly were driven entirely by the needs and wants of hardware manufacturers, not programmers.

The Apple iPhone

2008

With OpenGL ES devices in the world, OpenGL started to graduate from being “that other graphics API that exists, I guess” and actually take off. The iPhone, which used OpenGL ES, gave a solid mass-market reason to learn and use OpenGL. Nintendo consoles started to use OpenGL or something like it. OpenGL had more or less caught up with DirectX in features, especially if you were willing to use extensions. Browser vendors, in that spurt of weird hubris that gave us the original WebAudio API, adapted OpenGL ES into JavaScript as “WebGL”, which makes no sense because as mentioned OpenGL ES was all about packing bytes into arrays full of geometry and JavaScript doesn’t have direct memory access or even integers, but they added packed binary arrays to the language and did it anyway. So with all this activity, sounds like things are going great, right?

Steven Universe

2013

No! Everything was terrible! As it matured, OpenGL fractured into a variety of slightly different standards with varying degrees of cross-compatibility. OpenGL ES 2.0 was the same as OpenGL 3.3, somehow. WebGL 2.0 is very almost OpenGL ES 3.0 but not quite. Every attempt to resolve OpenGL’s remaining early mistakes seemed to wind up duplicating the entire API as new functions with slightly different names and slightly different signatures. A big usability issue with OpenGL was even after the 2.0 rework it had a lot of shared global state, but the add-on systems that were supposed to resolve this (VAOs and VBOs) only wound up being even more global state you had to keep track of. A big trend in the 10s was “GPGPU” (General Purpose GPU); programmers started to realize that graphics cards worked as well as, but were slightly easier to program than, a CPU’s vector units, so they just started accelerating random non-graphics programs by doing horrible hacks like stuffing them in pixel shaders and reading back a texture containing an encoded result. Before finally resolving on compute shaders (in other words: before giving up and copying DirectX’s solution), Khronos’s original steps toward actually catering to this were either poorly adopted (OpenCL) or just plain bad ideas (geometry shaders). It all built up. Just like in the pre-ES era, OpenGL had basically become several unrelated APIs sitting in the same header file, some of which only worked on some machines. Worse, nothing worked quite as well as you wanted it to; different video card vendors botched the complexity, implementing features slightly differently (especially tragically, implementing slightly different versions of the shader language) or just badly, especially in the infamously bad Windows OpenGL drivers.

The way out came from, this is how I see it anyway, a short-lived idea called “AZDO“. This technically consisted of a single GDC talk⁵, and I have no reason to believe the GDC talk originated the idea, but what the talk did do is give a name to an idea incidentally underlying Vulkan, DirectX 12, and Metal. “Approaching Zero Driver Overhead”. Here is the idea: By 2015 video cards had pretty much standardized on a particular way of working and that way was known and that way wasn’t expected to change for ten years at least. Graphics APIs were originally designed around the functionality they exposed, but that functionality hadn’t been a 1:1 map to how GPUs look on the inside for ten years at least. Drivers had become complex beasts that rather than just doing what you told them tried to intuit what you were trying to do and then do that in the most optimized way, but often they guessed wrong, leaving software authors in the ugly position of trying to intuit what the driver would intuit in any one scenario. AZDO was about threading your way through the needle of the graphics API in such a way your function calls happened to align precisely with what the hardware was actually doing, such that the driver had nothing to do and stuff just happened.

Star Wars: The Force Awakens

2016

Or we could just design the graphics API to be AZDO from the start. That’s Vulkan. (And DirectX 12, and Metal.) The modern generation of graphics APIs are about basically throwing out the driver, or rather, letting your program be the driver. The API primitives map directly to GPU internal functionality⁶, and the GPU does what you ask without second guessing. This gives you an incredible amount of power and control. Remember that “pipeline” diagram up top? The modern APIs let you define “pipeline objects”; while graphics shaders let you replace boxes within the diagram, and compute shaders let you replace the diagram with one big shader program, pipeline objects let you draw your own diagram. You decide what blocks of GPU memory are the sources, and which are the destinations, and how they are interpreted, and what the GPU does with them, and what shaders get called. All the old sources of confusion get resolved. State is bound up in neatly defined objects instead of being global. Card vendors always designed their shader compilers different, so we’ll replace the textual shader language with a bytecode format that’s unambiguous to implement and easier to write compilers for. Vulkan goes so far as to allow⁷ you to write your own allocator/deallocator for GPU memory.

So this is all very cool. There is only one problem, which is that with all this fine-grained complexity, Vulkan winds up being basically impossible for humans to write. Actually, that’s not really fair. DX12 and Metal offer more or less the same degree of fine-grained complexity, and by all accounts they’re not so bad to write. The actual problem is that Vulkan is not designed for humans to write. Literally. Khronos does not want you to write Vulkan, or rather, they don’t want you to write it directly. I was in the room when Vulkan was announced, across the street from GDC in 2015, and what they explained to our faces was that game developers were increasingly not actually targeting the graphics API itself, but rather targeting high-level middleware, Unity or Unreal or whatever, and so Vulkan was an API designed for writing middleware. The middleware developers were also in the room at the time, the Unity and Epic and Valve guys. They were beaming as the Khronos guy explained this. Their lives were about to get much, much easier.

My life was about to get harder. Vulkan is weird— but it’s weird in a way that makes a certain sort of horrifying machine sense. Every Vulkan call involves passing in one or two huge structures which are themselves a forest of other huge structures, and every structure and sub-structure begins with a little protocol header explaining what it is and how big it is. Before you allocate memory you have to fill out a structure to get back a structure that tells you what structure you’re supposed to structure your memory allocation request in. None of it makes any sense— unless you’ve designed a programming language before, in which case everything you’re reading jumps out to you as “oh, this is contrived like this because it’s designed to be easy to bind to from languages with weird memory-management techniques” “this is a way of designing a forward-compatible ABI while making no assumptions about programming language” etc. The docs are written in a sort of alien English that fosters no understanding— but it’s also written exactly the way a hardware implementor would want in order to remove all ambiguity about what a function call does. In short, Vulkan is not for you. It is a byzantine contract between hardware manufacturers and middleware providers, and people like… well, me, are just not part of the transaction.

Khronos did not forget about you and me. They just made a judgement, and this actually does make a sort of sense, that they were never going to design the perfectly ergonomic developer API anyway, so it would be better to not even try and instead make it as easy as possible for the perfectly ergonomic API to be written on top, as a library. Khronos thought within a few years of Vulkan⁸ being released there would be a bunch of high-quality open source wrapper libraries that people would use instead of Vulkan directly. These libraries basically did not materialize. It turns out writing software is work and open source projects do not materialize just because people would like them to⁹.

Star Wars: The Rise of Skywalker

2019

This leads us to the other problem, the one Vulkan developed after the fact. The Apple problem. The theory on Vulkan was it would change the balance of power where Microsoft continually released a high-quality cutting-edge graphics API and OpenGL was the sloppy open-source catch up. Instead, the GPU vendors themselves would provide the API, and Vulkan would be the universal standard while DirectX would be reduced to a platform-specific oddity. But then Apple said no. Apple (who had already launched their own thing, Metal) announced not only would they never support Vulkan, they would not support OpenGL, anymore¹⁰. From my perspective, this is just DirectX again; the dominant OS vendor of our era, as Microsoft was in the 90s, is pushing proprietary graphics tech to foster developer lock-in. But from Apple’s perspective it probably looks like— well, the way DirectX probably looked from Microsoft’s perspective in the 90s. They’re ignoring the jagged-metal thing from the hardware vendors and shipping something their developers will actually want to use.

With Apple out, the scene looked different. Suddenly there was a next-gen API for Windows, a next-gen API for Mac/iPhone, and a next-gen API for Linux/Android. Except Linux has a severe driver problem with Vulkan and a lot of the Linux devices I’ve been checking out don’t support Vulkan even now after it’s been out seven years. So really the only platform where Vulkan runs natively is Android. This isn’t that bad. Vulkan does work on Windows and there are mostly no problems, though people who have the resources to write a DX12 backend seem to prefer doing so. The entire point of these APIs is that they’re flyweight things resting very lightly on top of the hardware layer, which means they aren’t really that different, to the extent that a Vulkan-on-Metal emulation layer named MoltenVK exists and reportedly adds almost no overhead. But if you’re an open source kind of person who doesn’t have the resources to pay three separate people to write vaguely-similar platform backends, this isn’t great. Your code can technically run on all platforms, but you’re writing in the least pleasant of the three APIs to work with and you get the advantage of using a true-native API on neither of the two major platforms. You might even have an easier time just writing DX12 and Metal and forgetting Vulkan (and Android) altogether. In short, Vulkan solves all of OpenGL’s problems at the cost of making something that no one wants to use and no one has a reason to use.

The way out turned out to be something called ANGLE. Let me back up a bit.

Super Meat Boy

2010, again

WebGL was designed around OpenGL ES. But it was never exactly the same as OpenGL ES, and also technically OpenGL ES never really ran on desktops, and also regular OpenGL on desktops had Problems. So the browser people eventually realized that if you wanted to ship an OpenGL compatibility layer on Windows, it was actually easier to write an OpenGL emulator in DirectX than it was to use OpenGL directly and have to negotiate the various incompatibilities between OpenGL implementations of different video card drivers. The browser people also realized that if slight compatibility differences between different OpenGL drivers was hell, slight incompatibility differences between four different browsers times three OSes times different graphics card drivers would be the worst thing ever. From what I can only assume was desperation, the most successful example I’ve ever seen of true cross-company open source collaboration emerged: ANGLE, a BSD-licensed OpenGL emulator originally written by Google but with honest-to-goodness contributions from both Firefox and Apple, which is used for WebGL support in literally every web browser.

But nobody actually wants to use WebGL, right? We want a “modern” API, one of those AZDO thingies. So a W3C working group sat down to make Web Vulkan, which they named WebGPU. I’m not sure my perception of events is to be trusted, but my perception of how this went from afar was that Apple was the most demanding participant in the working group, and also the participant everyone would naturally by this point be most afraid of just spiking the entire endeavor, so reportedly Apple just got absolutely everything they asked for and WebGPU really looks a lot like Metal. But Metal was always reportedly the nicest of the three modern graphics APIs to use, so that’s… good? Encouraged by the success with ANGLE (which by this point was starting to see use as a standalone library in non-web apps¹¹), and mindful people would want to use this new API with WebASM, they took the step of defining the standard simultaneously as a JavaScript IDL and a C header file, so non-browser apps could use it as a library.

WGPU

2023

WebGPU is the child of ANGLE and Metal. WebGPU is the missing open-source “ergonomic layer” for Vulkan. WebGPU is in the web browser, and Microsoft and Apple are on the browser standards committee, so they’re “bought in”: not only does WebGPU work good-as-native on their platforms, but anything WebGPU can do will remain perpetually feasible on their OSes regardless of future developer lock-in efforts. (You don’t have to worry about feature drift like we’re already seeing with MoltenVK.) WebGPU will be available, on day one (today), with perfectly equal compatibility for JavaScript/TypeScript (because it was designed for JavaScript in the first place), for C++ (because the Chrome implementation is in C++, and it’s open source) and for Rust (because the Firefox implementation is in Rust, and it’s open source).

I feel like WebGPU is what I’ve been waiting for this entire time.


What’s it like?

I can’t compare to DirectX or Metal, as I’ve personally used neither. But especially compared to OpenGL and Vulkan, I find WebGPU really refreshing to use. I have tried, really tried, to write Vulkan, and been defeated by the complexity each time. By contrast WebGPU does a good job of adding complexity only when the complexity adds something. There are a lot of different objects to keep track of, especially during initialization (see below), but every object represents some Real Thing that I don’t think you could eliminate from the API without taking away a useful ability. (And there is at least the nice property that you can stuff all the complexity into init time and make the process of actually drawing a frame very terse.) WebGPU caters to the kind of person who thinks it might be fun to write their own raymarcher, without requiring every programmer to be the kind of person who thinks it would be fun to write their own implementation of malloc.

The Problems

There are three Problems. I will summarize them thusly:

  • Text
  • Lines
  • The Abomination

Text and lines are basically the same problem. WebGPU kind of doesn’t… have them. It can draw lines, but they’re only really for debugging– single-pixel width and you don’t have control over antialiasing. So if you want a “normal looking” line you’re going to be doing some complicated stuff with small bespoke meshes and an SDF shader. Similarly with text, you will be getting no assistance– you will be parsing OTF font files yourself and writing your own MSDF shader, or more likely finding a library that does text for you.

This (no lines or text unless you implement it yourself) is a totally normal situation for a low-level graphics API, but it’s a little annoying to me because the web browser already has a sophisticated anti-aliased line renderer (the original Canvas API) and the most advanced text renderer in the world. (There is some way to render text into a Canvas API texture and then transfer the Canvas contents into WebGPU as a texture, which should help for some purposes.)
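For what it’s worth, here’s a hedged sketch of that Canvas-to-WebGPU path in TypeScript. It assumes you already have a `device` from the init steps described further down; the sizes, font, and text are just placeholders.

```ts
// Render text with the 2D canvas, then copy the result into a GPUTexture.
const textCanvas = document.createElement("canvas");
textCanvas.width = 256;
textCanvas.height = 64;
const ctx = textCanvas.getContext("2d")!;
ctx.font = "32px sans-serif";
ctx.fillStyle = "white";
ctx.fillText("Hello, WebGPU", 8, 40);

const textTexture = device.createTexture({
  size: [textCanvas.width, textCanvas.height],
  format: "rgba8unorm",
  usage: GPUTextureUsage.TEXTURE_BINDING |
         GPUTextureUsage.COPY_DST |
         GPUTextureUsage.RENDER_ATTACHMENT, // required by copyExternalImageToTexture
});

device.queue.copyExternalImageToTexture(
  { source: textCanvas },            // the already-rendered 2D canvas
  { texture: textTexture },          // the destination GPU texture
  [textCanvas.width, textCanvas.height],
);
```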

Then there’s WGSL, or as I think of it, The Abomination. You will probably not be as annoyed by this as I am. Basically: One of the benefits of Vulkan is that you aren’t required to use a particular shader language. OpenGL uses GLSL, DirectX uses HLSL. Vulkan used a bytecode, called SPIR-V, so you could target it from any shader language you wanted. WebGPU was going to use SPIR-V, but then Apple said no¹². So now WebGPU uses WGSL, a new thing developed just for WebGPU, as its only shader language. As far as shader languages go, it is fine. Maybe it is even good. I’m sure it’s better than GLSL. For pure JavaScript users, it’s probably objectively an improvement to be able to upload shaders as text files instead of having to compile to bytecode. But gosh, it would have been nice to have that choice! (The “desktop” versions of WebGPU still keep SPIR-V as an option.)
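If you’re curious what WGSL looks like, here is a minimal sketch: a hard-coded triangle and a flat color, uploaded as a plain string via `device.createShaderModule`. The `device` is assumed, and `vs_main`/`fs_main` are just names I picked.

```ts
const shaderModule = device.createShaderModule({
  code: `
    @vertex
    fn vs_main(@builtin(vertex_index) i : u32) -> @builtin(position) vec4<f32> {
      // A hard-coded triangle, so no vertex buffer is needed.
      var pos = array<vec2<f32>, 3>(
        vec2<f32>(0.0, 0.5),
        vec2<f32>(-0.5, -0.5),
        vec2<f32>(0.5, -0.5)
      );
      return vec4<f32>(pos[i], 0.0, 1.0);
    }

    @fragment
    fn fs_main() -> @location(0) vec4<f32> {
      return vec4<f32>(1.0, 0.0, 1.0, 1.0); // magenta
    }
  `,
});
```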


How do I use it?

You have three choices for using WebGPU: Use it in JavaScript in the browser, use it in Rust/C++ in WebASM inside the browser, or use it in Rust/C++ in a standalone app. The Rust/C++ APIs are as close to the JavaScript version as language differences will allow; the in-browser/out-of-browser APIs for Rust and C++ are identical (except for standalone-specific features like SPIR-V). In standalone apps you embed the WebGPU components from Chrome or Firefox as a library; your code doesn’t need to know if the WebGPU library is a real library or if it’s just routing through your calls to the browser.

Regardless of language, the official WebGPU spec document on w3.org is a clear, readable reference guide to the API, suitable for just reading in a way standard specifications sometimes aren’t. (I haven’t spent as much time looking at the WGSL spec but it seems about the same.) If you get lost while writing WebGPU, I really do recommend checking the spec.

Most of the “work” in WebGPU, other than writing shaders, consists of the construction (when your program/scene first boots) of one or more “pipeline” objects, one per “pass”, which describe “what shaders am I running, and what kind of data can get fed into them?”¹³. You can chain pipelines end-to-end within a queue: have a compute pass generate a vertex buffer, have a render pass render into a texture, do a final render pass which renders the computed vertices with the rendered texture.
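As a hedged sketch of what building one of those pipeline objects looks like (reusing the `shaderModule` from the WGSL sketch above; `presentationFormat` would come from `navigator.gpu.getPreferredCanvasFormat()`):

```ts
const pipeline = device.createRenderPipeline({
  layout: "auto",                   // let WebGPU infer the bind group layouts
  vertex: {
    module: shaderModule,
    entryPoint: "vs_main",
    buffers: [],                    // this demo shader generates its own vertices
  },
  fragment: {
    module: shaderModule,
    entryPoint: "fs_main",
    targets: [{ format: presentationFormat }],
  },
  primitive: { topology: "triangle-list" },
});
```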

Here, in diagram form, are all the things you need to create to initially set up WebGPU and then draw a frame. This might look a little overwhelming. Don’t worry about it! In practice you’re just going to be copying and pasting a big block of boilerplate from some sample code. However at some point you’re going to need to go back and change that copypasted boilerplate, and then you’ll want to come back and look up what the difference between any of these objects is.

At init:

  • Context: One <canvas> or window. Exists at boot.
  • WebGPU instance: navigator.gpu. Exists at boot.
  • Adapter: If there’s more than one video card, you can pick one. Feed this to Canvas Configuration. Vends a Device. Vends a Queue.
  • Canvas Configuration: You make this. Feed to Context.
  • Queue: Executes work batches in order. You’ll use this later.
  • Device: An open connection to the adapter. Gives color format to the Canvas Configuration. Vends Buffers, Textures, and Pipelines and compiles code to Shaders.
  • Buffer: A chunk of GPU memory. You’ll use this later.
  • Texture: GPU memory formatted as an image. You’ll use this later.
  • Shader: Vertex, Fragment, or Compute program. Feed to Pipeline.
  • Buffer Layout: Describes how to interpret bytes in a Buffer. Like a C Struct definition. Describes a Buffer. Feed to Pipeline.
  • Vertex Layout: Buffer layout specialized for meshes/triangle lists. Describes a Buffer. Feed to Pipeline.

For each frame:

[Diagram of the objects created each frame.]
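Here is roughly what that boilerplate looks like in TypeScript, as a hedged sketch rather than canonical sample code. The names are mine, and `pipeline` is the render pipeline from the earlier sketch.

```ts
// Init: context, adapter, device, canvas configuration.
const canvas = document.querySelector("canvas")!;
const context = canvas.getContext("webgpu") as GPUCanvasContext;

const adapter = await navigator.gpu.requestAdapter();
if (!adapter) throw new Error("This browser has no WebGPU adapter");
const device = await adapter.requestDevice();

const presentationFormat = navigator.gpu.getPreferredCanvasFormat();
context.configure({ device, format: presentationFormat, alphaMode: "opaque" });

// ...at this point you'd build shader modules, buffers, and pipeline objects...

// Each frame: one command encoder, one render pass, submit the batch to the queue.
function frame() {
  const encoder = device.createCommandEncoder();
  const pass = encoder.beginRenderPass({
    colorAttachments: [{
      view: context.getCurrentTexture().createView(), // this frame's canvas texture
      clearValue: { r: 0, g: 0, b: 0, a: 1 },
      loadOp: "clear",
      storeOp: "store",
    }],
  });
  pass.setPipeline(pipeline);
  pass.draw(3); // three vertices: the hard-coded triangle from the WGSL sketch
  pass.end();
  device.queue.submit([encoder.finish()]);
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```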

Some observations in no particular order:

  • When describing a “mesh” (a 3D model to draw), a “vertex” buffer is the list of points in space, and the “index” is an optional buffer containing the order in which to draw the points. Not sure if you knew that.
  • Right now the “queue” object seems a little pointless because there’s only ever one global queue. But someday WebGPU will add threading and then there might be more than one.
  • A command encoder can only be working on one pass at a time; you have to mark one pass as complete before you request the next one. But you can make more than one command encoder and submit them all to the queue at once.
  • Back in OpenGL when you wanted to set a uniform, attribute, or texture on a shader, you did it by name. In WebGPU you have to assign these things numbers in the shader and you address them by number (see the sketch after this list).¹⁴
  • Although textures and buffers are two different things, you can instruct the GPU to just turn a texture into a buffer or vice versa.
  • I do not list “pipeline layout” or “bind group layout” objects above because I honestly don’t understand what they do. I’ve only ever set them to default/blank.
  • In the Rust API, a “Context” is called a “Surface”. I don’t know if there’s a difference.
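To illustrate the binding-by-number point from the list above, a hedged sketch: the WGSL side names slots with @group/@binding indices, and the JavaScript side fills those same numbered slots. The uniform, names, and sizes here are made up for illustration, and `device` is assumed.

```ts
const tintShader = device.createShaderModule({
  code: `
    @group(0) @binding(0) var<uniform> tint : vec4<f32>;

    @fragment
    fn fs_tint() -> @location(0) vec4<f32> {
      return tint;
    }
  `,
});

const uniformBuffer = device.createBuffer({
  size: 16, // one vec4<f32> = 4 floats = 16 bytes
  usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
});

// An explicit layout for group 0 (a pipeline with layout: "auto" can also infer this).
const bindGroupLayout = device.createBindGroupLayout({
  entries: [{ binding: 0, visibility: GPUShaderStage.FRAGMENT, buffer: { type: "uniform" } }],
});

const bindGroup = device.createBindGroup({
  layout: bindGroupLayout,
  entries: [{ binding: 0, resource: { buffer: uniformBuffer } }],
});

// Later, inside a render pass: pass.setBindGroup(0, bindGroup);
```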

Getting a little more platform-specific:

TypeScript / NPM world

The best way to learn WebGPU for TypeScript I know is Alain Galvin’s “Raw WebGPU” tutorial. It is a little friendlier to someone who hasn’t used a low-level graphics API before than my sandbag introduction above, and it has a list of further resources at the end.

Since code snippets don’t get you something runnable, Alain’s tutorial links a completed source repo with the tutorial code, and also I have a sample repo which is based on Alain’s tutorial code and adds simple animation as well as Preact¹⁵. Both my and Alain’s examples use NPM and WebPack¹⁶.

If you don’t like TypeScript: I would recommend using TypeScript anyway for WebGPU. You don’t actually have to add types to anything except your WebGPU calls, you can type everything “any”. But building that pipeline object involves big trees of descriptors containing other descriptors, and it’s all just plain JavaScript dictionaries, which is nice, until you misspell a key, or forget a key, or accidentally pass the GPUPrimitiveState table where it wanted the GPUVertexState table. Your choices are to let TypeScript tell you what errors you made, or be forced to reload over and over watching things break one at a time.

I don’t know what a NPM is I Just wanna write CSS and my stupid little script tags

If you’re writing simple JS embedded in web pages rather than joining the NPM hivemind, honestly you might be happier using something like three.js¹⁷ in the first place, instead of putting up with WebGPU’s (relatively speaking) hyper-low-level verbosity. You can include three.js directly in a script tag using existing CDNs (although I would recommend putting in a subresource SHA hash to protect yourself from the CDN going rogue).

But! If you want to use WebGPU, Alain Galvin’s tutorial, or renderer.ts from his sample code, still gets you what you want. Just go through and anytime there’s a little : GPUBlah wart on a variable delete it and the TypeScript is now JavaScript. And as I’ve said, the complexity of WebGPU is mostly in pipeline init. So I could imagine writing a single
