Big Data Platform Engineer

Verizon

(Palo Alto, California)
Full Time
Job Posting Details
About Verizon
Verizon is one of the largest communication technology companies in the world. Every day, we connect millions of people, companies and communities with our powerful technology. We’re using our award-winning network to make breakthroughs in interactive entertainment, digital media, the Internet of Things and broadband services for customers. Whatever the future looks like, whatever the digital world promises, we will deliver.
Summary
Verizon Corporate Technology's Big Data Group is looking for Big Data engineers with expert-level experience in architecting and building our new Hadoop, NoSQL, and in-memory platforms and data collectors. You will be part of the team building one of the world's largest Big Data platforms, capable of ingesting hundreds of terabytes of data that will be consumed for Business Analytics, Operational Analytics, Text Analytics, and Data Services, and of building Big Data solutions for various Verizon business units.
Responsibilities
* Architect, design, and build a big data platform, primarily based on the Hadoop ecosystem, that is fault-tolerant and scalable.
* Build a high-throughput messaging framework to transport high-volume data.
* Use different protocols as needed for different data services (NoSQL/JSON/REST/JMS).
* Develop a framework to deploy RESTful web services.
* Build ETL, distributed caching, transactional, and messaging services.
* Architect and build a security-compliant user management framework for a multi-tenant big data platform.
* Build high-availability (HA) architectures and deployments, primarily using big data technologies.
* Create and manage data pipelines.
Ideal Candidate
* Bachelor's degree in Computer Science, Management Information Systems, or equivalent is preferred.
* Experience building and managing complex products/solutions.
* Proven track record of architecting distributed solutions dealing with very high volumes of data.
* Strong understanding of virtual machine technologies, physical machines, networking, and storage systems.
* Experience with distributed, highly scalable, multi-node environments.
* Expert-level experience with Big Data technologies (Solr, Hive, HBase, Spark, Kafka, YARN, Storm, Splunk, Vertica), with an understanding of the concepts and technology ecosystem around both real-time and batch processing in Hadoop.
* Expert-level experience with Couchbase Server.
* Expert-level experience with GlusterFS.
* Experience with DevOps tooling (Puppet, Chef, Python).
* Experience developing RESTful web services in the Spring framework.
* Working knowledge of web technologies and protocols (NoSQL/JSON/REST/JMS).
