Posted June 25, 2021

Senior Data Engineer

Location: Vancouver
Job ID: 2020
Employment Type: Contract

Are you a Senior Data Engineer who is passionate about technology and enjoys working in a team environment? Do you want to work on interesting, innovative projects? Our client is a fast-paced, energetic software company that is a leader in its field and loves innovative ideas. We love this company because employees can influence its success and work with like-minded, smart people.

Our client has asked for our help in looking for a Senior Data Engineer for a 12-month contract to support Data Science initiatives.

As a senior member of the Data team, you will have significant responsibility and influence in shaping its future direction. This role is inherently cross-functional, and the ideal candidate will work across disciplines. You are able to iterate quickly on all stages of the data pipeline, and you will develop large-scale data pipelines and analytical solutions using Big Data (and streaming) technologies.

Here is what you'll be doing on a day to day basis:

  • Design, model and develop data sets to support reporting, analytics, and exploratory analysis.

  • Research and employ cutting edge techniques to build and design the data infrastructure for distributed processing, aggregation, and collection of streamed real-time data.

  • Architect and build data delivery solutions in a microservice environment.

  • Contribute to technical design and ongoing development of our custom ETL solutions and analytics platforms, and help improve design and delivery standards.

  • Focus on automation and optimization for all areas of DW/ETL maintenance and deployment.

  • Work with big data developers to build scalable and supportable infrastructure.

  • Employ a variety of languages and tools (e.g. scripting languages) to integrate systems.

  • Assess and recommend the implementation of the latest available big data technologies.

  • Recommend ways to improve data reliability, efficiency and quality.

  • Develop data architecture components that scale with the ever-evolving data needs of the entire company.

  • Solve big data warehousing problems on a massive scale and apply cloud-based services to solve challenging problems around big data processing, data warehouse design, and enabling self-service.

  • Collaborate effectively with other members of the team and the broader services group, including but not limited to the Product Team, Data Science Team, Development Teams, and Release and Operations Teams.

Here is the type of person we are looking for:

  • Bachelor's degree in Computer Science at minimum; a Master's degree in a data-related field is ideal.

  • 7+ years of Data Engineering or similar experience.

  • Experience in high-level programming languages such as Java, Scala, or Python.

  • Proficiency with databases and SQL is required.

  • Experience working with large data sets - both SQL and NoSQL databases (e.g. MySQL, PostgreSQL, DynamoDB, etc.).

  • Experience building ETLs and data pipelines using tools such as Apache Airflow and Spark.

  • Experience working with cloud technologies (Azure, AWS).

  • Demonstrated ETL/data programming skills (using scripts or products like Informatica, DataStage).

  • Experience with DevOps practices, CI/CD, managing production deployments, Git and GitHub.

  • Ability to communicate design, concepts and decisions both verbally and in writing.

  • Ability to mentor other data engineering talent in the team.

  • Experience with large-scale data warehousing, mining, or analytic systems.

  • Ability to work with analysts to gather requirements and translate them into data engineering tasks.

  • Awareness of security, performance, high-availability, and fault-tolerance best practices.

  • Aptitude to independently learn new technologies.

Nice to have skills:

  • Proficiency in data processing using technologies like Spark Streaming, Spark SQL, or Map/Reduce.

  • Experience building real-time data pipelines using Apache Kafka.

  • Experience with virtualization, containers, and orchestration (Docker, Kubernetes).

  • Knowledge of data visualization and reporting tools like Tableau.

  • Experience with AWS EMR, AWS DMS, Talend, Apache Airflow, Stitch.

  • Experience with Amazon Web Services - RDS, EC2, S3, Lambda, Amazon Redshift.

If you are interested in exploring this position, please apply now! 

Our client is an equal opportunity employer and values diversity at their company.

People are our passion. People are our profession. 

Since 2010, SIGnature Recruiting has been pairing exceptional people with short-term contracts and long-term careers in Vancouver’s flourishing IT industry.  We are specialists in IT Recruiting and pride ourselves in making valuable contributions to our clients and candidates.
