Maybe it's from a time when servers had fewer resources.  If your server would run out of memory at 20 concurrent attempted logins, you'd want this set to 20; otherwise you could end up with a bigger DoS than if you had merely been unable to accept ssh logins.  The same goes for bandwidth.  If you were on dialup, for example, and could only support 20 concurrent real-time logins, you'd want to cap connections there rather than let service degrade badly for everyone.  Not that it's too popular these days to run services over a bandwidth-capped 2G internet connection...

Sitting at a login prompt on my ARM NAS gives me this:
sshd:
  Virt: 11672 KB  Res: 5280 KB  Shared: 4584 KB

So it seems that I'm giving up about 5MB of my NAS's 256MB of memory every time somebody just hangs at the login prompt.  I could fairly easily DoS my little NAS by running `while true; do ssh <box> & done`, couldn't I?
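
(Here's a rough sketch of that attack done a bit more deliberately -- run
it only against your own box.  "mynas" is a hypothetical hostname, and it
assumes a netcat that holds the connection open as long as stdin is:)

  #!/bin/sh
  # Each "sleep | nc" pair holds one TCP connection to sshd open without
  # ever authenticating, occupying one MaxStartups slot (and ~5MB here)
  # until the sleep ends or LoginGraceTime expires on the server.
  for i in $(seq 1 25); do
      sleep 120 | nc mynas 22 &
  done
  wait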

Are you really using more than 10 connections sitting at a login prompt?  What are they being used for?  Is the system powerful enough to handle many concurrent logins?
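
(If you want to check, something like this on Linux counts the current TCP
connections to sshd, authenticated or not -- a sketch using iproute2's ss:)

  # Count established connections to port 22; tail strips the header line.
  ss -tn state established '( sport = :22 )' | tail -n +2 | wc -l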

I think that today you should probably set this up as a combination of dynamic blacklists (BlockHosts or sshguard) and this option, if your server is genuinely resource-constrained.  That way failed logins don't eat up your cap for long.  The nice thing about the random-early-drop percentages is that they give you, as an administrator, at least some chance of logging in before the system is so overloaded that you could never get in and start killing things.
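
Something like this, say (illustrative values only, not a recommendation):

  # /etc/ssh/sshd_config
  MaxStartups 10:30:100    # start dropping at 10 unauthenticated, drop all at 100
  LoginGraceTime 30        # reap idle pre-auth connections faster (default is 120)

with sshguard or BlockHosts watching the logs and firewalling repeat
offenders before they can occupy slots at all.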

It could be that this option is less useful now that ssh is becoming more of a transport layer than just a remote tty.  At GitHub, for example, most unauthenticated connections can be dropped quickly because nobody uses password authentication there.
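
(Going password-free is just a couple of sshd_config lines, for anyone
curious -- a sketch; the second line also shuts off keyboard-interactive
password prompts:)

  PasswordAuthentication no
  ChallengeResponseAuthentication no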


On Wed, Feb 11, 2015 at 9:46 AM, Jamie Guinan <guinan@bluebutton.com> wrote:

Hi Group!

I'm seeking opinions about an sshd feature.

I was doing some work on a remote system (I did not set it up) that
seemed to be refusing ssh and scp connections randomly.

I looked into it, and finally stumbled across this from sshd_config(5),

     MaxStartups
             Specifies the maximum number of concurrent unauthenticated connections to the SSH daemon.
             Additional connections will be dropped until authentication succeeds or the LoginGraceTime
             expires for a connection.  The default is 10.

             Alternatively, random early drop can be enabled by specifying the three colon separated
             values “start:rate:full” (e.g. "10:30:60").  sshd(8) will refuse connection attempts
             with a probability of “rate/100” (30%) if there are currently “start” (10)
             unauthenticated connections.  The probability increases linearly and all connection
             attempts are refused if the number of unauthenticated connections reaches “full” (60).

Indeed, the server had this setting,

  MaxStartups 10:50:20
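
If I read the man page right, that means sshd starts refusing 50% of
new connections once 10 of them are sitting unauthenticated, ramping
linearly up to refusing everything at 20.  A quick sanity check of the
ramp (assuming plain linear interpolation, which is what the text
describes):

  # Drop probability at n unauthenticated connections, for start:rate:full:
  #   p = rate + (n - start) * (100 - rate) / (full - start)
  awk -v s=10 -v r=50 -v f=20 -v n=15 \
      'BEGIN { print r + (n - s) * (100 - r) / (f - s) "%" }'    # prints 75%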

This just seems like a terrible option.  You can DoS their server just
by making 20 connections and leaving them hanging.  And it breaks any
automated scripts relying on ssh or scp (unless you wrap them in a
loop until they succeed, ugh).

My guess is that the reasoning behind this feature is that it will filter
out a number of automated attacks.  That seems no better than
security-by-obscurity, and I know how some of you feel about that.  :)

Can you think of any sane reason to enable this feature, or for it to
even exist at all?

Best regards,
-Jamie
_______________________________________________
Wlug mailing list
Wlug@mail.wlug.org
http://mail.wlug.org/mailman/listinfo/wlug