Maybe it's from a time when servers had fewer resources. If your server would run out of memory with 20 concurrent login attempts, you may want that set to 20, else you could end up with a bigger DoS than if attackers were merely able to block ssh logins. The same goes for bandwidth. On dialup, for example, you'd want to make sure the connections weren't overwhelming your link: if you can only support 20 concurrent interactive sessions, capping there keeps the service from degrading badly. Not that it's too popular to run services on a bandwidth-capped 2G internet connection...
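For reference, the option under discussion reads like sshd's `MaxStartups` (an assumption on my part, but it's the knob that caps concurrent unauthenticated connections). The single-value form is a plain hard cap; 20 here just mirrors the hypothetical number above:

```
# /etc/ssh/sshd_config
# Drop new unauthenticated connections once 20 are already pending,
# until one of them authenticates or its LoginGraceTime expires.
MaxStartups 20
```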
Sitting at a login prompt on my ARM NAS gives me this (sizes in KB):

```
sshd:
Virt: 11672  Res: 5280  Shared: 4584
```
So it seems that I'm wasting 5MB of my NAS's 256MB of memory when somebody just hangs at the login prompt. I could fairly easily DoS my little NAS by just running `while [ 1 ]; do ssh <box> & done`, couldn't I?
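If you want to check the numbers on your own box, a quick sketch (assuming a Linux system with procps; column names vary a bit between `ps` implementations):

```
# List every sshd process with its virtual and resident set sizes (in KB).
# Each child sitting at a login prompt shows up as its own line.
ps -C sshd -o pid,vsz,rss,args

# Count how many sshd processes are currently alive.
pgrep -c sshd
```

Shared pages mean the real marginal cost per connection is somewhat below the full Res figure, but on a 256MB box it still adds up fast.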
Are you really using more than 10 connections sitting at a login prompt? What are they being used for? Is the system powerful enough to handle many concurrent logins?
I think that today you should probably have this set up as a combination of dynamic blacklists (BlockHosts or sshguard) and this option, if your server truly needs a higher limit. That way the failed logins don't eat up your cap for long. The nice thing about the percent number is that it gives you, as an administrator, at least some chance of logging in before the system is so overloaded that you could never get in and start killing things.
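For what it's worth, the percent number comes from the three-field `start:rate:full` form of `MaxStartups` (again assuming that's the option being discussed). The OpenSSH default looks like this:

```
# /etc/ssh/sshd_config
# Once 10 unauthenticated connections are pending, start randomly
# refusing new ones with 30% probability; the probability increases
# linearly, reaching 100% (refuse everything) at 100 pending connections.
MaxStartups 10:30:100
```

That linear ramp is exactly what gives the admin a fighting chance: early in an attack, most of your own login attempts still get through.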
It could be that, as ssh becomes more of a transport layer than just a remote tty, this option is less useful. At GitHub, for example, most unauthenticated connections should just be dropped quickly, because nobody there uses password authentication.