Pic related, animation generated live from code at
https://www.shadertoy.com/view/XsBXWt

In modern versions of OpenGL, you start with a load of data points, and transfer them all to your graphics card. These are called vertices. These points usually contain an x, a y, and a z co-ordinate in 3d space, but they're not just limited to that - they could, in addition, store the temperature of a room at that point, or the direction and intensity of the flow of air.
Once we've uploaded our data to the graphics card we can run a program on each of the individual data points. Programs that run on the graphics card are called shaders and have to be written in a special shader language. In the case of OpenGL, that language is called GLSL.
A program that modifies individual vertices is called a vertex shader. Imagine one that randomly changes the x, y, and z values of each point ever so slightly. If you ran it on a 3d model of a ball (constructed out of triangles), the surface of the ball would get all bumpy as the points drifted out of line.
This program is run once on every data point.
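In GLSL, that bumpy-ball vertex shader might look roughly like this - a sketch, not a complete program (you'd still have to supply the model and the matrix from the CPU side), with the usual fract(sin(...)) trick standing in for a proper random number:

#version 330 core

uniform mat4 modelViewProjection; // supplied by the host program
in vec3 position;                 // the x, y, z of this vertex
// Any extra per-point data (temperature, air flow, ...) would simply
// arrive as more "in" variables here.

// Cheap pseudo-random number derived from the point itself.
float hash(vec3 p) {
    return fract(sin(dot(p, vec3(12.9898, 78.233, 37.719))) * 43758.5453);
}

void main() {
    // Nudge each co-ordinate ever so slightly in a "random" direction.
    vec3 jitter = vec3(hash(position.xyz),
                       hash(position.yzx),
                       hash(position.zxy)) - 0.5;
    gl_Position = modelViewProjection * vec4(position + 0.02 * jitter, 1.0);
}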
But I care more about the second type of program you can run on a graphics card - fragment shaders.
At some point in the process of going from data on the graphics card to an image on your screen, all the perfectly straight lines and perfectly smooth circles that you might have told the computer to draw have to be drawn on a square pixel grid. This is called rasterization.
In the process of going from basic input data to pixels on a screen, there's an intermediate stage where data is transformed into units containing all the information required to rasterize that part of the screen correctly. These are called fragments.
But we don't even care about that part. Because here's what you can do: tell your graphics card to draw a rectangle whose corners are in the top left, top right, bottom left, and bottom right corners of your screen. That rectangle covers everything, so the rasterizer ends up generating a fragment for every single pixel on the screen. Then you can write a program that takes the position of the fragment, along with a few other variables, such as the time that's passed since you started running, or perhaps a microphone input, and spits out the value of the pixel to be drawn at that point. Set up that way, each fragment corresponds to exactly one pixel on your screen, which is why fragment shaders are sometimes known by another name - "pixel shaders".
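Stripped right down, such a program looks something like this in plain GLSL - just a sketch, and the uniform names (resolution, time) are whatever your host program chooses to feed in:

#version 330 core

uniform vec2 resolution; // width and height of the screen, in pixels
uniform float time;      // seconds since the program started

out vec4 fragColor;

void main() {
    // gl_FragCoord.xy is the position of the pixel currently being drawn.
    vec2 uv = gl_FragCoord.xy / resolution;
    // A simple animated gradient, just to show the shape of the thing:
    // position in, colour out, once per pixel.
    fragColor = vec4(uv, 0.5 + 0.5 * sin(time), 1.0);
}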
So now, with the advent of WebGL, a graphics API that runs in the browser, anyone can write programs that generate colorful and interesting animated visuals and share them with other people. If you can express the effect you want as a single function that takes a position and a time and outputs the r, g, b values of the pixel at that point in space and time, you get to run that program in real time and watch the effect happen before your eyes.
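On shadertoy all of that setup is already done for you; you only write the per-pixel function. Something like this (essentially the default starter shader, not the one in the pic) pastes straight into the editor and animates - fragCoord is the pixel position, and iResolution and iTime are uniforms shadertoy provides:

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    // Normalise the pixel position to the 0..1 range.
    vec2 uv = fragCoord / iResolution.xy;

    // A colour that varies with position and with time.
    vec3 col = 0.5 + 0.5 * cos(iTime + uv.xyx + vec3(0.0, 2.0, 4.0));

    fragColor = vec4(col, 1.0);
}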
So maybe check out pixel shaders over at shadertoy (https://www.shadertoy.com/). You can output sound as well, but I haven't tried that yet.
Demoscene in general is also on topic.