Big Data Architect - Monroe, LA
Location: Monroe, LA
Duration: 12+ Months
Designs, develops, and implements infrastructure to provide highly complex, reliable, and scalable databases that meet the organization's objectives and requirements. Analyzes the organization's business requirements for database design and executes changes to the database as required. Requires 6-8 years' experience.
As a Big Data (Hadoop) Architect, you will be responsible for Cloudera Hadoop development and high-speed querying; managing and deploying Flume, Hive, and Pig; testing prototypes and overseeing handover to operational teams; and proposing best practices and standards. Requires expertise in designing, building, installing, configuring, and developing a Cloudera Hadoop ecosystem.
• Work with cross-functional consulting teams within the data science and analytics team to design, develop, and execute solutions that derive business insights and solve clients' operational and strategic problems. Build the platform using cutting-edge capabilities and emerging technologies, including the Data Lake and the Cloudera data platform, which will be used by thousands of users.
• Work in a Scrum-based agile team environment using Hadoop. Install and configure the Hadoop and HDFS environment using the Cloudera data platform. Create ETL and data-ingest jobs using MapReduce, Pig, or Hive. Work with and integrate multiple types of data, including unstructured, structured, and streaming.
• Support the development of data science and analytics solutions and products that improve existing processes and decision-making.
• Build internal capabilities to better serve clients and demonstrate thought leadership in the latest innovations in data science, big data, and advanced analytics.
• Contribute to business and market development.
Specific skills and abilities:
• Strong computer science and programming background
• Deep experience in data modeling, EDW, star, snowflake, and other schemas, as well as cubing (OLAP) technologies
• Ability to design and build data models and semantic layers for accessing data sets
• Ability to own a complete functional area, from analysis through design, development, and full support
• Ability to translate high-level business requirements into detailed design
• Build integrations between data systems (RESTful APIs, micro-batch, streaming) using technologies such as SnapLogic (iPaaS), Spark SQL, HQL, Sqoop, Kafka, Pig, and Storm
• Hands-on experience working with the Cloudera Hadoop ecosystem and technologies
• Strong desire to learn a variety of technologies and processes with a "can do" attitude
• Experience guiding and mentoring 5-8 developers on various tasks
• Aptitude to identify, create, and use best practices and reusable elements
• Ability to solve practical problems and deal with a variety of concrete variables in situations where only limited standardization exists
Qualifications & Skills:
• Bachelor’s degree required; Master’s degree preferred.
• Expertise with HBase, NoSQL, HDFS, Java MapReduce for Solr indexing, data transformation, back-end programming, Java, JavaScript, Node.js, and OOAD.
• Hands-on experience in Scala and Python.
• 7+ years of experience in programming and data engineering, with a minimum of 2 years of experience in Cloudera Hadoop.
Job Type: Temp Position
Reference ID: 344118
Posted Date: 10/29/2018