* Understand the end-to-end operation of complex Hadoop-based ecosystems, and manage and configure core technologies such as HDFS, MapReduce, YARN, HBase, ZooKeeper, and Kafka.
* Comprehensive systems, hardware, and network troubleshooting experience in physical, virtual, and cloud environments, including the operation and administration of virtualization and cloud infrastructure provider frameworks. Experience with at least one virtualization platform and one cloud provider (for example, VMware, plus AWS or Azure) is required.
* Experience with the design, development, and deployment of:
  * Configuration management tools such as Puppet, Chef, and Ansible
  * Infrastructure-layer deployment tools such as Terraform, Packer, CloudFormation, and Azure Resource Manager
  * Version control and continuous integration tools such as Git, Jenkins, and CircleCI; test frameworks such as Beaker and Test Kitchen; and unit testing with RSpec, Serverspec, Busser, Bash, etc.
* Be conversant in cloud architecture, service integrations, and operational visibility on common cloud platforms (AWS, Azure, and Google Cloud). An understanding of ecosystem deployment options and how to automate them via API calls is a huge asset.
* Ability to pick up new technologies and ecosystem components quickly, and to establish their relevance, architecture, and integration with existing systems.