I must say I am puzzled about this. In general I understand the reason behind FAILED_LOGIN_ATTEMPTS - it protects against brute-force password attacks. On the other hand it means that some 'forgotten' application host with a stale password becomes a potential DoS vector. Which is better (or worse) is hard to tell.
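For reference, the limit in question lives in the database profiles; a quick way to see what is currently in force (the view and column names are standard Oracle, the output of course depends on the instance):

```sql
-- Show the FAILED_LOGIN_ATTEMPTS limit for every profile;
-- in a stock installation the DEFAULT profile carries a finite value
-- (10 in recent versions), which is what locks an account out.
SELECT profile, limit
  FROM dba_profiles
 WHERE resource_name = 'FAILED_LOGIN_ATTEMPTS';
```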
Usually a database sits behind a firewall (or two), as this is quite a deep layer in the application stack. The databases I usually work with hold application schemas, so a client application sits between the human and the database: passwords are stored in the application configuration, and direct access to the database itself is strictly limited. On the other hand there are users (though not numerous) who are allowed to connect directly, and a 'malicious' one may be hidden among them.
So, how do I imagine dealing with the configuration?
I believe it is better to create a separate profile for application accounts that sets FAILED_LOGIN_ATTEMPTS to UNLIMITED, because it is not so rare that some forgotten application or script with a stale password locks the schema and thus practically disables the application. Of course there is a monitoring system, but the delay before a human gets the alert is usually ~5 minutes. Other issues come up as well (multiple application hosts, only one of them with a wrong password; several applications sharing the same schema; scripts run from cron under different shell accounts; etc.), and we end up with a noticeable outage in an application that was possibly meant to run in 24x7 mode. This can happen quite frequently, and no malice is needed.
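A minimal sketch of such a profile (the profile and user names here are hypothetical examples, not anything from a real system):

```sql
-- Dedicated profile for application schemas: no lockout on failed logins,
-- so a host with a stale password cannot DoS the whole application.
CREATE PROFILE app_profile LIMIT
  FAILED_LOGIN_ATTEMPTS UNLIMITED;

-- Assign it to the application account (APP_OWNER is a placeholder).
ALTER USER app_owner PROFILE app_profile;
```

Human accounts stay on the DEFAULT (or a stricter) profile, so the brute-force protection still applies where an interactive attacker could actually sit.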
Further, it would be reasonable to move direct users to other databases, connecting them through a mix of additional schemas and/or database links, so that they would not be able to connect directly to the database holding the application schemas.
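One way this indirection could look, as a sketch only (link name, account names, and the 'APPDB' TNS alias are all assumptions): on the user-facing database, a link points at a restricted account in the application database, and synonyms hide the remote hop from the human users.

```sql
-- On the user-facing database: link to a low-privilege account
-- in the application database (TNS alias 'APPDB' is hypothetical).
CREATE DATABASE LINK appdb_link
  CONNECT TO report_reader IDENTIFIED BY "some_password"
  USING 'APPDB';

-- Expose a remote table under a local name, so direct users
-- never hold credentials for the application database itself.
CREATE SYNONYM orders FOR app_owner.orders@appdb_link;
```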
The drawbacks?
- human users pay a performance penalty when routed through another database, or may try to break passwords if no such prevention measure exists - so on a database with plenty of directly connected human users this would not be such a great idea.
- 11g: multiple failed login attempts can block new application connections. In short, 11g adds another security feature against brute-force attacks: even with FAILED_LOGIN_ATTEMPTS set to UNLIMITED, after the first few failed attempts the error message is returned with a deliberate delay. Due to bug 7715339 such a delayed session keeps a library cache lock for the whole (prolonged) delay, and new sessions wait on this lock until the sessions/processes ceiling is hit. The delay feature can be disabled with event='28401 trace name context forever, level 1'.
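Setting that event persistently would look roughly like this (assuming the instance runs on an SPFILE; the event string itself is the one quoted above):

```sql
-- Disable the 11g failed-login delay (workaround for bug 7715339).
-- Requires an SPFILE and takes effect only after an instance restart.
ALTER SYSTEM SET EVENT = '28401 trace name context forever, level 1'
  SCOPE = SPFILE;
```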