Senior Data Engineer Job at Svitla, Jalisco, Mexico

Senior Data Engineer

About Svitla:

Svitla Systems is a proven provider of custom software development and testing services. We deliver unparalleled value to our clients, who rely on our expertise in Managed Team Extension and AgileSquads.

Our main office is in the heart of Silicon Valley, with sales and development offices throughout the US, Mexico, and Europe (Ukraine, Montenegro, and Germany).

Our mission is to build a business that is not only profitable but also contributes to the well-being of our employees and their families, improves our communities, and makes a lasting difference in the world.

About the role:

You will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. You will be an expert data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. You will support our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure that optimal data delivery architecture is consistent across ongoing projects. You will need to be self-sufficient and comfortable supporting the data needs of multiple teams, systems, and products. If you get excited by the prospect of optimizing or even re-designing the company's data architecture to support the next generation of products and data initiatives, this opportunity is the right fit for you.

Responsibilities:
- Create and maintain optimal data pipeline architecture.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources (a minimal sketch follows this list).
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
- Create data tools for data analysts and data scientists.
- Work with data and analytics experts to strive for greater functionality in our data systems.
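For illustration, here is a minimal sketch in Python of the extraction/transformation/loading work described above. The file names and fields (orders.csv, user_id, amount, country) are hypothetical examples, not part of any actual Svitla project:

```python
import csv
import json
from pathlib import Path

def extract(source: Path) -> list[dict]:
    """Read raw rows from a CSV export (a stand-in for any upstream source)."""
    with source.open(newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[dict]:
    """Normalize types and drop records missing the join key."""
    clean = []
    for row in rows:
        if not row.get("user_id"):
            continue  # incomplete record; skip it
        clean.append({
            "user_id": row["user_id"],
            "amount": float(row.get("amount") or 0),
            "country": (row.get("country") or "unknown").lower(),
        })
    return clean

def load(rows: list[dict], sink: Path) -> None:
    """Write the cleaned records as newline-delimited JSON for downstream use."""
    with sink.open("w") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")

if __name__ == "__main__":
    load(transform(extract(Path("orders.csv"))), Path("orders.ndjson"))
```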

Requirements
Essential requirements:
- Bachelor's degree or equivalent experience.
- 5+ years of experience in a Data Engineer role.
- Experience building and optimizing 'big data' pipelines, architectures, and data sets.
- Experience with AWS cloud services: S3, EC2, EMR, RDS, Redshift.
- Experience with big data tools: Hadoop, Spark (a minimal sketch follows this list).
- Experience with object-oriented/functional scripting languages: Python, Java, Scala, C++.
- Advanced working knowledge of relational databases and query authoring (SQL), as well as working familiarity with a variety of databases.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Experience with Agile development (SAFe, Scrum).
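As a flavor of the Spark-on-AWS work implied by the list above, here is a minimal PySpark sketch, assuming a Spark runtime (e.g., EMR) with S3 access already configured; the bucket, paths, and column names are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-orders-rollup").getOrCreate()

# Read raw events from the data lake (bucket and path are hypothetical).
events = spark.read.parquet("s3a://example-data-lake/raw/events/")

# Aggregate to a daily per-country revenue table.
daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "country")
    .agg(F.sum("amount").alias("revenue"))
)

# Write back to the lake, partitioned by date for downstream consumers
# (e.g., Redshift Spectrum or analyst queries).
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://example-data-lake/curated/daily_revenue/"
)
```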
Desirable requirements:
- Experience with data pipeline and workflow management tools: Airflow, Azkaban, Luigi, etc. (a minimal Airflow sketch follows this list).
- Experience with stream-processing systems: Storm, Spark Streaming, etc.
- Data modeling.
- Experience with 3NF/5NF and star schemas (see the star-schema sketch after this list).
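For the workflow-management item, here is a minimal Airflow sketch, assuming Airflow 2.x; the DAG id, schedule, and task bodies are hypothetical placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from the source system")

def transform():
    print("clean and aggregate the raw data")

def load():
    print("publish the results to the warehouse")

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # `schedule` is the Airflow 2.4+ parameter name
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Run the three steps in order.
    t_extract >> t_transform >> t_load
```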
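And for the star-schema item, a minimal sketch using Python's built-in sqlite3 module so it runs anywhere; the table and column names are illustrative only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# A minimal star schema: one fact table keyed to two dimension tables.
conn.executescript("""
CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, full_date TEXT);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE fact_sales (
    date_key    INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    units       INTEGER,
    revenue     REAL
);
""")

# A typical star-schema query: join the fact table to its dimensions and aggregate.
rows = conn.execute("""
    SELECT d.full_date, p.category, SUM(f.revenue) AS revenue
    FROM fact_sales f
    JOIN dim_date d    ON d.date_key = f.date_key
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY d.full_date, p.category
""").fetchall()
print(rows)
```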