Company Details

Data Engineer

Job ID: 21078

Location: Irvine, CA, USA

Salary: $60 per hour

Job Views: 134

Posted: 10-10-2018

Key Skills

Hadoop, Spark, Kafka; SQL and NoSQL databases, including Postgres and Cassandra; data pipeline and workflow management tools: Azkaban, Luigi, Airflow; stream-processing systems: Storm, Spark-Streaming; object-oriented/object function scripting languages: Java

Job Description

Title: Data Engineer

Location: Irvine, CA

Employ your skills in designing, developing, and delivering world-class data algorithm artifacts, including documentation and code; coordinate data algorithm development with infrastructure development

Work closely with our engineering team to integrate your amazing innovations and algorithms into our products.

Research and apply advanced algorithms and methods involving data mining, statistical analysis and machine learning techniques

Process unstructured data into a form suitable for analysis – and then do the analysis (a brief sketch of this kind of work follows this list).

Support business decisions with ad hoc analysis as needed.

Master third-party systems and interfaces, including the data made available by those parties, the APIs used to obtain the data, and the limitations of these interfaces

Bring excellent subject matter expertise in designing algorithms and business logic to automate commerce process flows.

Apply your broad-based data development expertise to create practical and innovative solutions

Efficiently implement clean, maintainable, and testable data solutions with high availability, blazing performance, and fault tolerance.

Participate in agile project execution and provide accurate work effort estimates

Apply excellent communications skills, creativity and practical knowledge to benefit our customers
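A minimal sketch of the "process unstructured data, then analyze it" responsibility above, assuming hypothetical JSON log lines and the pandas library; the field names and data are illustrative only, not part of this role's actual stack.

```python
import json
import pandas as pd

# Hypothetical raw log lines: one JSON object per line, with an
# inconsistent schema (some fields missing, some lines malformed).
raw_lines = [
    '{"user": "a1", "event": "click", "ms": 120}',
    '{"user": "b2", "event": "purchase", "amount": 19.99}',
    'not valid json',
    '{"user": "a1", "event": "click"}',
]

records = []
for line in raw_lines:
    try:
        records.append(json.loads(line))
    except json.JSONDecodeError:
        continue  # skip malformed lines instead of failing the whole batch

# Flatten the parsed records into a tabular form suitable for analysis.
df = pd.json_normalize(records)

# A simple ad hoc analysis on the cleaned table: event counts per user.
print(df.groupby("user")["event"].count())
```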

 

What you bring to the role:

Bachelor's degree in Computer Science, Engineering, Science, Math, or a related technical discipline is required

Preferred: an MBA (or equivalent) from a top-tier institution, or equivalent business experience

7-10 years of technical experience, with at least 5 years of experience in web services development and middleware applications, or a Master’s degree plus 5-7 years of technical experience.

Advanced working knowledge of SQL, including experience with relational databases and query authoring, as well as working familiarity with a variety of databases.

Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.

Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.

Strong analytic skills related to working with unstructured datasets.

Build processes supporting data transformation, data structures, metadata, dependency and workload management.

A successful history of manipulating, processing and extracting value from large disconnected datasets.

Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.

Experience with big data tools: Hadoop, Spark, Kafka, etc.

Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.

Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.

Experience with stream-processing systems: Storm, Spark-Streaming, etc. (see the sketch after this list).

Experience with object-oriented/object function scripting languages: Java, C#, etc.
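As referenced in the stream-processing item above, here is a minimal sketch of a streaming aggregation using Spark Structured Streaming's Kafka source. The broker address, topic name, and event schema are assumptions for illustration, and running it assumes the spark-sql-kafka connector package is available.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, window
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

# Assumes a Kafka broker at localhost:9092 and a topic named
# "clickstream" (both hypothetical).
spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
    StructField("ts", TimestampType()),
])

# Read the Kafka topic as an unbounded stream and parse the JSON payload.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "clickstream")
    .load()
    .select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Windowed aggregation with a watermark for late-arriving events.
counts = (
    events
    .withWatermark("ts", "10 minutes")
    .groupBy(window(col("ts"), "5 minutes"), col("event_type"))
    .count()
)

# Write running counts to the console; a real job would target a sink
# such as a database or object store.
query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```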
