It's not a workaround.
In the old days, if you had two services that were hard-coded to use the same network port, you needed virtualization or a separate server, and you had to make sure the networking for those was set up correctly.
Network ports allow multiple services to share the same network adapter, since a port acts like a "sub" address.
Docker being able to remap host network ports to container ports is a huge feature.
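A quick sketch of what I mean with the plain docker CLI (image and ports are just placeholders): both containers listen on 80 internally, and the host side is remapped so they don't collide.

```sh
# Two containers both listening on port 80 inside, remapped to
# different ports on the host so they don't clash with each other.
docker run -d --name web-a -p 8080:80 nginx
docker run -d --name web-b -p 8081:80 nginx
```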
If a container doesn't need to be accessed outside of the docker network, you don't need to expose the port.
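Rough compose sketch of that (service names and images are made up): only "web" is published to the host, while "db" is reachable only from other containers on the compose network, by its service name.

```yaml
services:
  web:
    image: nginx
    ports:
      - "8080:80"        # host 8080 -> container 80
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
    # no "ports:" entry, so it's only reachable from inside the
    # compose network (e.g. as hostname "db"), never from the host
```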
The only way to have multiple services on the same port is to put something in front of them: either a load balancer (for multiple instances of the same service) or an application-aware reverse proxy (nginx, haproxy, caddy, etc. for web things; I'm sure there are other application-aware reverse proxies for other protocols).
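Rough nginx sketch of the reverse proxy idea (hostnames and upstream container names are invented): one listener on port 80, routed to different backends by the Host header.

```nginx
# Inside the http {} block: one port, two services, routed by hostname.
server {
    listen 80;
    server_name app1.example.com;
    location / {
        proxy_pass http://app1:3000;   # container "app1" on the docker network
    }
}
server {
    listen 80;
    server_name app2.example.com;
    location / {
        proxy_pass http://app2:3000;   # container "app2" on the docker network
    }
}
```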
Sure, but what you are describing is the problem that k8s solves.
I've run plenty of production things with docker compose. Auto scaling hasn't been a requirement, and HA was built into the application (so two separate VMs each running the compose stack). Docker was perfect for it, and k8s would've been a sledgehammer.