* We are looking for candidates with a broad set of technology skills across these areas who can identify and apply Hadoop and NoSQL solutions to data challenges and deliver better data solutions across industries.
**Preferred Qualifications**
* Minimum 2 years of experience designing and implementing relational data models with RDBMSs
* Minimum 2 years of experience working with traditional as well as Big Data ETL tools
* Minimum 2 years of experience designing and building REST web services
* Experience designing and building statistical analysis, machine learning, and other analytical models on large data sets using technologies such as R, Spark MLlib, Mahout, and GraphX
* Minimum 1 year of experience implementing large scale cloud data solutions using AWS data services (e.g., EMR, Redshift)
* 2+ years of hands-on experience designing, implementing, and operationalizing production data solutions using emerging technologies such as the Hadoop ecosystem (e.g., MapReduce, Hive, HBase, Spark, Sqoop, Flume, Pig, Kafka), NoSQL stores (e.g., Cassandra, MongoDB), in-memory data technologies, and data munging technologies
* Architecting large-scale Hadoop/NoSQL operational environments for production deployments
* Designing and building different data access patterns from Hadoop/NoSQL data stores
* Managing and modeling data using Hadoop and NoSQL data stores
* Metadata management with Hadoop and NoSQL data in a hybrid environment
* Experience with data munging/data wrangling tools and technologies (see the sketch after this list)
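
To give a flavor of the data munging work called out above, here is a minimal PySpark sketch. The input path, column names, and output layout are hypothetical examples; a production pipeline would add schema enforcement and error handling:

```python
# Minimal PySpark data-wrangling sketch: load raw CSV events,
# clean and reshape them, and write the result as Parquet.
# The HDFS paths and column names are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("event-munging").getOrCreate()

# Read raw, semi-structured event data from HDFS.
raw = spark.read.csv("hdfs:///data/raw/events", header=True, inferSchema=True)

# Typical munging steps: drop malformed rows, normalize a timestamp,
# and derive a partition-friendly date column.
clean = (
    raw.dropna(subset=["user_id", "event_ts"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
)

# Persist in a columnar format, partitioned for downstream access patterns.
clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "hdfs:///data/clean/events"
)
```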
**Basic Qualifications**
* Bachelor's degree in Computer Science, Engineering, or a technical science, or 3 years of IT/programming experience
* Minimum 2 years of experience building and deploying Java applications in a Linux/Unix environment
* Minimum 1 year of experience designing and building large scale data loading, manipulation, processing, analysis, blending, and exploration solutions using Hadoop/NoSQL technologies (e.g., HDFS, Hive, Sqoop, Flume, Spark, Kafka, HBase, Cassandra, MongoDB)
* Minimum 1 year of experience architecting and organizing data at scale for Hadoop/NoSQL data stores
* Minimum 1 year of experience coding in Java (MapReduce), Spark, Pig, Hadoop Streaming, HiveQL, or Perl/Python/PHP for data analysis in production Hadoop/NoSQL applications (a minimal sketch follows this list)
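
For illustration, a minimal sketch of the kind of Spark/HiveQL analysis named in the last item. The Hive table (`events_clean`) and its columns are hypothetical, not part of any specific role:

```python
# Minimal sketch of Spark-based data analysis using HiveQL.
# The Hive table and column names are hypothetical examples.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("event-analysis")
    .enableHiveSupport()  # lets spark.sql() query tables in the Hive metastore
    .getOrCreate()
)

# A HiveQL aggregation: daily active users per event type.
daily_active = spark.sql("""
    SELECT event_date,
           event_type,
           COUNT(DISTINCT user_id) AS active_users
    FROM events_clean
    GROUP BY event_date, event_type
""")

daily_active.show(20)
```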