Pic related, animation generated live from code at
https://www.shadertoy.com/view/XsBXWt

In modern versions of OpenGL, you start with a load of data points, and transfer them all to your graphics card. These are called vertices. These points usually contain an x, a y, and a z co-ordinate in 3d space, but they're not just limited to that - they could, in addition, store the temperature of a room at that point, or the direction and intensity of the flow of air.
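As a rough sketch, that kind of per-vertex data might be declared in GLSL like this (the names and layout locations are made up for illustration):

```glsl
// Hypothetical per-vertex inputs - each vertex carries a position
// plus whatever extra data you uploaded alongside it.
layout(location = 0) in vec3 position;     // x, y, z in 3d space
layout(location = 1) in float temperature; // extra scalar data per vertex
layout(location = 2) in vec3 airflow;      // direction + intensity of air flow
```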
Once we've uploaded our data to the graphics card we can run a program on each of the individual data points. Programs that run on the graphics card are called shaders and have to be written in a special shader language. In the case of OpenGL, that language is called GLSL.
A program that modifies individual vertices is called a vertex shader. Imagine one that randomly changes the x, y, and z values of each point ever so slightly. If you ran it on a 3d model of a ball (constructed out of triangles), the surface of the ball would get all bumpy as the points shifted out of alignment.
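A minimal sketch of that jittering vertex shader could look like this - the `rand` helper, the `mvp` uniform, and the 0.02 scale are all invented for illustration:

```glsl
#version 330 core
layout(location = 0) in vec3 position;
uniform mat4 mvp; // hypothetical model-view-projection matrix

// cheap pseudo-random value derived from the vertex position
float rand(vec3 p) {
    return fract(sin(dot(p, vec3(12.9898, 78.233, 37.719))) * 43758.5453);
}

void main() {
    // nudge each coordinate by a small pseudo-random offset
    vec3 jitter = vec3(rand(position.xyz),
                       rand(position.yzx),
                       rand(position.zxy)) - 0.5;
    gl_Position = mvp * vec4(position + 0.02 * jitter, 1.0);
}
```

Because the offset depends only on the vertex's own position, every vertex of the ball gets a different but stable displacement, which is what makes the surface look bumpy.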
This program is run once on every data point.
But I care more about the second type of program you can run on a graphics card - fragment shaders.
At some point in the process of going from data on the graphics card to an image on your screen, all the perfectly straight lines and perfectly smooth circles that you might have told the computer to draw have to be drawn on a square pixel grid. This is called rasterization.
In the process of going from basic input data to pixels on a screen, there's an intermediate stage where data is transformed into units containing all the information required to rasterize that part of the screen correctly. These are called fragments.
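A fragment shader runs once per fragment and decides its colour. As a sketch (assuming the vertex shader passed a `temperature` value along, which the rasterizer then interpolates across each triangle), one might look like:

```glsl
#version 330 core
// interpolated across the triangle from its three vertices
in float temperature;
out vec4 fragColor;

void main() {
    // map temperature in [0, 1] to a blue-to-red gradient
    fragColor = vec4(temperature, 0.0, 1.0 - temperature, 1.0);
}
```

The interesting part is the interpolation: each fragment receives a blend of the values at the triangle's vertices, so smooth gradients across a surface come for free.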