Description

Dallas, TX. Implements complex big data projects with a focus on collecting, parsing, managing, analyzing, and visualizing large data sets to turn information into insights across multiple platforms. This position makes decisions regarding the hardware and software design needed for a project and acts on those decisions. The Data Engineer will develop prototypes and proofs of concept for the selected solutions.

Details

  • Location: Dallas, TX
  • Salary: $72,046.00 - $140,808.00 yearly
  • Deadline: 2018-09-13

Qualifications

Minimum qualifications

EDUCATION:

  • Bachelor's degree in information technology, math, engineering, computer science or business field.

EXPERIENCE:

  • Three (3) years of experience in designing, building, and maintaining data frameworks for data platforms, which must have included experience with big data technologies such as Hadoop, Mahout, Pig, Hive, HBase, ZooKeeper, or a similar big data technology.

OTHER REQUIREMENT(S):

  • No felony or Class A misdemeanor convictions.
  • No Class B misdemeanor convictions within the last ten (10) years.
  • No family violence convictions.
  • Cannot currently be on deferred adjudication for any felony, Class A misdemeanor, or Class B misdemeanor charge.

KNOWLEDGE, SKILLS, ABILITIES:

  • Effective oral and written communication skills.

EQUIVALENCIES:

  • A master's degree or higher in a specified field plus one (1) year of the required experience will meet the education and experience requirements.
  • A certification in Big Data Technology plus three (3) years of the required experience will meet the education and experience requirements. 

Preferred qualifications

PREFERENCES:

  • Experience with big data cluster administration.
  • Experience with data integration in traditional and open-source environments such as Hadoop.
  • Experience in information delivery, analytics, and business intelligence based on data from a hybrid of the Hadoop Distributed File System (HDFS), non-relational databases, and relational databases.
  • Experience with HDFS, MapReduce, HBase, and Hive, and handling real-time data streams using Kafka, Storm, or Spark.
  • Experience developing and maintaining cross-platform ETL/ELT processes.
