Responsibilities
Collaborate with cross-functional teams to identify data requirements and develop data processing pipelines.
Design, build, and maintain scalable data infrastructure and database systems.
Develop and implement data integration processes to consolidate data from various sources.
Extract, transform, and load (ETL) raw data into usable formats for analysis and reporting.
Build and maintain data quality checks and validation processes to ensure accurate and reliable data.
Troubleshoot data-related issues and propose solutions for performance optimization.
Collaborate with data scientists and analysts to understand their data needs and assist with data model development.
Stay up-to-date with emerging data engineering technologies and best practices.
Qualifications
Bachelor's degree in Computer Science, Information Systems, or a related field.
2+ years of professional experience building ETL/ELT pipelines or working in related data engineering roles.
Strong programming skills in languages such as Python, Java, or Scala.
Strong knowledge of SQL and experience working with relational databases.
Familiarity with distributed computing systems and big data technologies such as Hadoop, Spark, or Kafka.
Understanding of data modeling and schema design principles.
Experience with ETL tools and techniques.
Proficiency in data manipulation, cleansing, and integration.
Excellent problem-solving skills and attention to detail.
Strong communication and interpersonal skills to collaborate effectively with team members.
Preferred Qualifications
Additional experience in data engineering or a closely related field beyond the minimum is a plus.
Demonstrated ability to analyze and work with complex datasets.
Experience with cloud-based data platforms such as AWS or Azure is a plus.
Familiarity with data visualization tools and techniques is desirable.
Understanding of data governance and data security principles is advantageous.
Solid interpersonal skills and the ability to work effectively in a multicultural environment.