I am familiar with the laptop + docking concept: you arrive at your regular place of work and connect a better keyboard, bigger or additional screens, a better mouse, and possibly an external drive for backup. To make this easier, many vendors sell proprietary docking stations, so I see the advantage of standardizing on USB-C and using it as your only docking cable; it is a little less proprietary.
The Linux approach, a distributed computing model, has its appeal though: it uses the network as the transport medium rather than specialist ports and cables, so it is even less tied to any specific technology.
Like most things Linux, others “invented” the same ideas years to decades later, Apple Sidecar being one example.
So in the Linux ecosystem, say you’re on your main, processor-powerful laptop: you’d just set your DISPLAY variable to point at your low-power device, such as a Raspberry Pi, and then launch apps. The transport mechanism is the network, whether Wi-Fi or wired Ethernet.
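A minimal sketch of the idea, assuming the Pi sits at 192.168.1.50 (a placeholder address) and runs an X server that accepts network connections from the laptop:

    # On the powerful laptop: route X11 output to the Pi's screen.
    # The Pi must first allow this host, e.g. by running "xhost +laptop"
    # (or the blunter "xhost +") in a session on the Pi itself.
    export DISPLAY=192.168.1.50:0
    firefox &   # runs on the laptop's CPU, draws on the Pi's monitor

One caveat: most modern distributions start X with -nolisten tcp, so plain network X11 may need to be re-enabled, or tunnelled over SSH instead.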
What I do is not quite that: I keep two or three computers running in parallel, each doing different things.
Say in Linux I wanted to make a smart TV. There are many methods, but one is to get a second-hand monitor that supports HDMI and a Raspberry Pi that likewise supports HDMI, install a flavour of Linux (Raspbian) along with browsers and media players, then remote-desktop into it and tell it to start something. I can connect using, say, VNC from my phone, so my phone becomes the remote control. I’d then in effect have a second monitor, or a smart TV, costing very little.
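As a rough sketch of that setup, assuming a stock Raspbian image (the media file and stream URL below are placeholders):

    # On the Pi, once: enable the bundled VNC server
    sudo raspi-config nonint do_vnc 0   # 0 = enable
    # Then, from a VNC client on the phone, open a terminal on the Pi
    # and start something, e.g. a local file or a stream in kiosk mode:
    vlc --fullscreen ~/Videos/film.mkv
    # ...or...
    chromium-browser --kiosk https://example.com/stream

Since the phone only sends clicks and keystrokes over VNC, any player or browser installed on the Pi becomes a “channel” on this improvised smart TV.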
While it sounds complicated, this method is more resilient because it is a shared-nothing architecture: if your main system breaks, you still have a fully independent backup system.