The old security model for these things used to be “Trust your network” – i.e., lock them in a room somewhere behind a firewall and cross your fingers. Nowadays, however, bleeding-edge security features such as usernames and passwords have been implemented on many of these services’ administrative interfaces *gasp*.
On a recent penetration test, Juken (http://jstnkndy.github.io/) and I ran into a semi-locked-down Hadoop cluster. The HDFS file browsing web interfaces were still enabled and didn’t require authentication (e.g., http://<namenode host>:1022/browseDirectory.jsp), but we wanted shells, and lots of them.
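The unauthenticated HDFS browser can be walked directly over HTTP. As a rough sketch (the host name below is hypothetical; the port is the one from the engagement), this builds the listing URL for a given directory – fetch it with urllib.request.urlopen against a live target:

```python
from urllib.parse import urlencode

def browse_url(host: str, port: int, path: str) -> str:
    """URL for the namenode's browseDirectory.jsp listing of `path`."""
    return f"http://{host}:{port}/browseDirectory.jsp?{urlencode({'dir': path})}"

print(browse_url("namenode-host", 1022, "/"))
# -> http://namenode-host:1022/browseDirectory.jsp?dir=%2F
```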
So where to start? A quick portscan and httpscreenshot run showed a number of management and monitoring tools running. After some initial stumbling around and default-password checking on the web interfaces, we’d come up dry. In a short moment of brilliance, Juken decided to try the default DBMS credentials on the PostgreSQL database server backing the Ambari administrative tool – they worked.
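For reference, Ambari’s documented defaults for its backing Postgres database are user “ambari”, password “bigdata”, database “ambari”. A minimal sketch (the host name is hypothetical) that builds the libpq-style connection URI to try:

```python
def pg_uri(host: str, user: str = "ambari", password: str = "bigdata",
           db: str = "ambari", port: int = 5432) -> str:
    """libpq connection URI for Ambari's backing Postgres database."""
    return f"postgresql://{user}:{password}@{host}:{port}/{db}"

print(pg_uri("ambari-host.internal"))
# -> postgresql://ambari:bigdata@ambari-host.internal:5432/ambari
```

The resulting URI can be handed straight to `psql` or to any libpq-based client.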
Ambari is a provisioning tool for Hadoop clusters. With a few clicks, you can instruct it to install different packages like YARN, Hadoop, HDFS, etc. on the various nodes that it manages. Unfortunately, there is no official “feature” to send yourself a bash shell on the remote machines.
With credentials to the Postgres database, it was trivial to create a new admin user in Ambari with the password of “admin”. Passwords are stored as salted hashes (SHA-256 on the version we hit; newer versions use bcrypt), so the easiest way is to add a new user, or overwrite the existing admin’s password hash with one you’ve generated yourself, along the lines of:
update ambari.users set user_password = '<new password hash>' where user_name = 'admin';
Unfortunately, Ambari needs to be restarted for new users or changed passwords to take effect… so now you must wait.
Eventually the change was applied and we were in. After some time and much frustration with technology we weren’t exactly familiar with, we found the undocumented “shell” feature – it’s always there somewhere, you just have to look hard enough.