
Data Engineer

Poland · Full-time · Mid/Senior

About The Position

NeoGames is a leader in the iLottery and iGaming space, offering solutions that span game studios, game aggregation, lotteries, online casino, sportsbook, bingo, and managed services, all delivered through an industry-leading core platform.

The Data & BI team owns the group’s Data & Analytics platforms, spanning Data Engineering, Analytical Engineering, and Business Intelligence, and leads the group’s data-driven modernisation both internally and for its clients.

The Data Engineer will play a vital role in a cross-functional team, developing data pipelines to ingest, transform, distribute, and expose data from the group’s Core Data Lake for integration, reporting, analytics, and automation.

Responsibilities

  • Create data pipelines, both batch and real-time, to ingest data from disparate sources
  • Collaborate with the other teams to address data sourcing and provisioning requirements
  • Design and monitor robust, recoverable data pipelines that follow best practices, with an emphasis on performance and reliability
  • Innovation drives us: carry out research and development, and work on PoCs to propose, trial, and adopt new processes and technologies
  • Coordinate with the Product & Technology teams to ensure all platforms collect and provide appropriate data
  • Liaise with the other teams to ensure reporting and analytics needs can be addressed by the central data lake
  • Support the Data Quality and Security initiatives by building the necessary data access, integrity, and accuracy controls into the architecture

Requirements

  • 3+ years of experience in Data Engineering
  • Degree in Computer Science, Software Development or Engineering
  • Proficiency in Python; past exposure to Java will be considered an asset
  • Understanding of RDBMS, columnar, and NoSQL engines and their performance characteristics
  • Experience with cloud architecture and tools: Microsoft Azure, AWS, or GCP
  • Experience with orchestration and transformation tools such as Apache Airflow and dbt
  • Prior exposure to the Snowflake ecosystem will be considered an asset
  • Familiarity with Docker/Kubernetes and containerisation
  • Strong background in stream-processing technologies such as NiFi, Kinesis, or Kafka
  • A grasp of DevOps concepts and tools, including Terraform and Ansible, is an advantage
  • Understanding of distributed logging platforms — ideally the ELK stack


Skills

  • Fluency in spoken and written English is essential
  • Passionate about data and on the lookout for opportunities to optimise
  • Passionate about technology and eager to trial and recommend new tools or platforms
