I got a real pet peeve in contemporary media about the term GPU.
For context, I grew up using computers right from the Commodore 64, and I had an 8088 IBM PC that I eventually got a 5 MB hard drive for, so I've been here for the whole ride.
The thing is, the term GPU basically came about because 3D accelerator cards gained an important new feature. Previously, 3D accelerator cards solely accelerated the rasterization of 3D triangles, usually with textures and some texture filtering. With the release of the GeForce 256 came an important new capability: hardware transform and lighting (T&L). Now you can argue that "GPU" is just a marketing term, but it's a meaningful one. There was a distinct fault line between cards that had GPU features, specifically hardware T&L at that point, and those that did not. Within a few years, there were video games you simply could not play on a 3D accelerator card. You required a GPU with hardware transform and lighting.
I saw one person claiming that a GPU is a device that does 3D and 2D at once. That's absurd and breaks down immediately. Early 3D accelerator cards such as the S3 ViRGE or the Matrox Millennium did both 2D and 3D, and anyone who knows what they're talking about would never call those cards GPUs. They just aren't.
Hypothetically, you can split the features into three separate buckets: 2D cards, which exclusively do 2D; 3D accelerator cards, which accelerate 3D rasterization; and GPUs, which additionally accelerate parts of the mathematics in the pipeline before rasterization. You could imagine an architecture with three separate chips doing three separate functions: one handling 2D, a second handling 3D acceleration, and a third handling GPU functions like hardware T&L, later pixel and vertex shaders, and today completely wide-open pipelines that can be used for general computing. It is simply because of the way the devices evolved that today every 3D accelerator card does 2D, and every GPU does 3D and 2D.
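To make "the mathematics in the pipeline before rasterization" concrete, here's a minimal sketch of the kind of per-vertex math hardware T&L took over: transforming a vertex to clip space and computing a diffuse lighting term. This is illustrative pseudomath in plain Python, not any real graphics API; before hardware T&L, the CPU churned through this for every vertex, every frame.

```python
# Rough sketch of per-vertex "transform and lighting" math (illustrative,
# not any real API). Hardware T&L moved this work off the CPU.

def mat_vec(m, v):
    """Multiply a 4x4 matrix (list of rows) by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def transform_and_light(position, normal, mvp, light_dir):
    """Transform one vertex to clip space and compute a Lambertian
    (diffuse) intensity: N dot L, clamped to zero."""
    clip = mat_vec(mvp, position + [1.0])  # homogeneous transform
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return clip, max(0.0, n_dot_l)

# Example: identity transform, light shining straight along the normal
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
clip, intensity = transform_and_light([1.0, 2.0, 3.0], [0.0, 0.0, 1.0],
                                      identity, [0.0, 0.0, 1.0])
```

The rasterizer on a plain 3D accelerator only ever saw the already-transformed, already-lit triangles; this step either happened on the CPU or, after the GeForce 256, in dedicated hardware (and later in vertex shaders).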
There are chips today that are 3D accelerators but not GPUs. As an example, Matrox licenses out its G200 graphics core, which at its release was a low-end 3D accelerator with exceptional 2D performance. It does not accelerate any of the mathematics prior to rasterization, however, and therefore is not a GPU. In fact, the fight to build a GPU effectively took Matrox and many other 3D accelerator manufacturers out of the race. That was the moment many previously marginally competitive 3D accelerator companies left the business: they could make 3D accelerators, but they could not make GPUs. It was a mass extinction event for 3D accelerator companies.
As a point of comparison, the memory controller used to be a discrete component, and after a certain point all contemporary CPUs came to integrate it because it simply made more sense that way. But that doesn't mean you go back and say earlier CPUs had memory controllers, because they didn't. That was a capability added later for technical reasons.
Anyway, it's just a pet peeve, and a little of it is because I made the mistake of buying a 3D accelerator card instead of a GPU in the era when it mattered, and that was really frustrating at the time.
@sj_zero The difference is fixed-function vs. programmable shader pipelines. The GeForce 3 was the first consumer card to let you run programmable shaders, which is how all GPUs work now and is what enables things like compute shaders. Earlier cards had a single fixed pipeline implemented in hardware; all they would do was that one rasterization process.