Why Qubiz?
For more than 16 years, Qubiz has been a leading tech company with 400+ employees and a flat organisational structure.
By prioritising people, both colleagues and clients, we have created an environment focused on well-being, which allows us to deliver exceptional performance.
Most importantly, our 91–94% retention rate and the many colleagues who have been with us for 5, 10 and even 15 years are a testament to the fact that you can build a long and successful career with us.
Key responsibilities
- Design and implement end-to-end data pipelines (batch & streaming)
- Build and maintain data lake / lakehouse architectures using modern cloud tools
- Implement Medallion Architecture (Bronze, Silver, Gold layers) for scalable data processing
- Process and optimize high-volume datasets using Apache Spark (PySpark) or SQL
- Develop robust data models for analytics and reporting
- Ensure data quality, governance, lineage, and security
- Integrate and leverage AI tools (LLMs, embeddings, ML pipelines) where relevant
- Collaborate with analysts, data scientists, and stakeholders to deliver insights
- Optimize pipelines for performance, scalability, and cost efficiency
- Build automated data pipelines and workflows (DataOps mindset, CI/CD)
- Mentor junior engineers and drive best practices
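The Medallion layering named above can be sketched in plain Python. This is a simplified illustration only; in practice these layers would be Spark/Fabric tables, and the record fields and layer logic here are hypothetical:

```python
# Simplified sketch of the Medallion (Bronze/Silver/Gold) pattern.
# Real pipelines would run on PySpark or Microsoft Fabric; the field
# names ("id", "category") are illustrative assumptions.

def bronze(raw_records):
    """Bronze: land raw data as-is, only tagging each record's layer."""
    return [dict(r, _layer="bronze") for r in raw_records]

def silver(bronze_records):
    """Silver: clean and de-duplicate (drop records missing an id)."""
    seen, cleaned = set(), []
    for r in bronze_records:
        rid = r.get("id")
        if rid is not None and rid not in seen:
            seen.add(rid)
            cleaned.append(dict(r, _layer="silver"))
    return cleaned

def gold(silver_records):
    """Gold: aggregate into a reporting-ready shape (count per category)."""
    counts = {}
    for r in silver_records:
        counts[r["category"]] = counts.get(r["category"], 0) + 1
    return counts

raw = [
    {"id": 1, "category": "a"},
    {"id": 1, "category": "a"},     # duplicate, dropped in Silver
    {"id": None, "category": "b"},  # missing id, dropped in Silver
    {"id": 2, "category": "b"},
]
print(gold(silver(bronze(raw))))  # {'a': 1, 'b': 1}
```

The point of the pattern is that each layer has one job: Bronze preserves the raw input, Silver enforces quality rules, and Gold serves analytics, so downstream consumers never depend on messy source data.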
Required skills & experience
- 5+ years' experience in Data Engineering or similar roles
- Strong expertise in Python, Apache Spark (PySpark), SQL, and data warehousing concepts
- Proven experience with data lakes, large-scale data processing, and Medallion Architecture
- Hands-on experience with Microsoft ecosystem: Microsoft Fabric / Azure Data Services / Synapse / Data Factory
- Experience with Power BI or similar BI tools
- Strong understanding of ETL/ELT pipelines, data modeling, and performance optimization
- Experience with real-time or near real-time data pipelines
- Experience working with AI/ML-enabled data platforms
- Familiarity with LLMs (e.g. GPT-based tools), embeddings, or vector databases
Nice to have
- Experience with Databricks
- Experience with dbt
- Experience with streaming tools (Kafka, Event Hubs)
- Familiarity with automation of data pipelines (DataOps, orchestration tools)
- Infrastructure as Code (Terraform, ARM)
- Data governance tools
What's in it for you?
- Real responsibilities and challenging projects
- Friendly environment suited for professional and personal development
- Clear career paths and real support for your professional development
- Training and certifications
- Fun team-building sessions
- Private health insurance
- 50% discount on a gym or sports pass of your choice
- Fresh fruit on your table every day