15+… I was there, Gandalf… We had these kinds of setups 25+ years ago. How time flies.
Before that, it was often X terminal-style systems. The local machine only booted an X server and then connected to a central UNIX system. All programs ran on the UNIX server and were rendered on the X terminal/X server you were sitting at.
The original XServer systems were efficient enough to run over serial lines, not just Ethernet.
Another setup was to put multiple monitors/keyboards/mice on a single UNIX/Linux tower and have it launch multiple X server sessions, so one computer could seat up to six people at once.
I also managed a Rembo lab for a bit. It used a PXE shim OS to get a menu from the Rembo server. From there, you could boot the main OS or download a new hard drive image from the server. I would build new drive images and upload them to the server; updating the lab then just meant rebooting the computers and clicking a “grab latest” button. It actually worked very well for distributing OSes. We had both Linux and Windows images students could pull down.
Lab management at scale is a continual struggle to keep everything functional and patched.
I haven’t looked into it too hard yet. I saw some design that would allow remote GUI rendering for Wayland, but it likely won’t be the all-in design for network transparency that X11 had (has).
I use SSH with X forwarding for all kinds of system maintenance and demos in my CS courses.
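For anyone who hasn’t tried it, here’s roughly what that looks like. A minimal sketch (the hostname is just a placeholder; it assumes the server allows X11 forwarding and that you have an X server running locally):

```
# Server side: X11 forwarding has to be enabled in /etc/ssh/sshd_config:
#   X11Forwarding yes
# Client side: connect with forwarding enabled (hostname is a placeholder).
ssh -X user@lab-server
# ssh sets DISPLAY on the remote end, so GUI programs display on your local screen.
xclock &
# If an app misbehaves under -X's restrictions, trusted forwarding with -Y
# usually works, at the cost of giving the remote side full access to your X session.
```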