* 5-10 years of experience administering Big Data processing infrastructures handling hundreds of terabytes to petabytes of data
* Experience with private and public cloud environments (especially AWS), including AWS services such as Route 53, ELB, ADX, VPC, ElastiCache, RDS, and S3, and the AWS APIs.
* Hands-on experience in designing and implementing automation systems for configuration management and code deployment
* Experience working with geographically distributed systems and complex network topologies.
* Extensive experience with open-source tools and the OSS community.
* Networking knowledge (TCP/IP, firewall, load-balancing, etc.)
**Technical Skills:**
* Excellent Unix/Linux server administration skills, including package management, bare-metal installations, and virtualization.
* Data center build-outs and management (power and HVAC calculations, rack and stack, lights out management).
* Excellent systems automation experience with Ansible, Puppet, or Chef.
* Excellent scripting skills in shell, Python, Ruby, or Perl.
* Solid understanding of Java applications, memory usage, and JVM management.
* Knowledge of security best practices, policies, and procedures is a huge plus.
* Experience maintaining monitoring systems like Nagios and New Relic.
* MySQL administration experience.
* Systems and software agnostic: will use the best available tool for the job.
**Bonus Skills and Experience:**
* Experience with Kafka, ZooKeeper, Elasticsearch, Hadoop, or other NoSQL datastores is a huge plus.
* Familiar and comfortable with Apache, Nginx, Varnish, Django, MySQL, etc.
* Experience with Arista switches, Juniper, Junos OS, Cisco IOS, and network analysis tools.
* Experience designing and managing networks is a huge plus.
* Experience with continuous integration and version control systems (Jenkins, Git, etc.).