Data Engineer Lead, Marketing in Warszawa | NatWest Group Careers
Join us as a Data Engineer Lead, Marketing
What you'll do
- This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences
- You'll be simplifying the bank by developing innovative, data-driven solutions, aspiring to be commercially successful through insight, and keeping our customers' and the bank's data safe and secure
- Participating actively in the data engineering community, you'll deliver opportunities to support our strategic direction while building your network across the bank
We'll look to you to lead and inspire a team of data engineers and drive value for the customer through modelling, sourcing and data transformation. You'll be working closely with core technology and architecture teams to deliver strategic data solutions, while driving Agile and DevOps adoption in the delivery of data engineering.
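The modelling, sourcing and transformation work described above follows the familiar extract-transform-load pattern. A minimal sketch in Python, using an in-memory SQLite target (the data, thresholds and table name here are purely illustrative, not part of the role description):

```python
import csv
import io
import sqlite3

# Hypothetical raw source data; in practice this would come from an upstream system.
RAW_CSV = """customer_id,spend
1,120.50
2,80.00
3,45.25
"""

def extract(raw: str) -> list[dict]:
    """Extract: parse the raw CSV feed into rows."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: cast types and keep customers above an illustrative spend threshold."""
    return [(int(r["customer_id"]), float(r["spend"]))
            for r in rows if float(r["spend"]) > 50.0]

def load(rows: list[tuple], conn: sqlite3.Connection) -> None:
    """Load: write the transformed rows into the target table."""
    conn.execute("CREATE TABLE IF NOT EXISTS high_spend (customer_id INTEGER, spend REAL)")
    conn.executemany("INSERT INTO high_spend VALUES (?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
result = conn.execute("SELECT COUNT(*) FROM high_spend").fetchone()[0]  # 2 rows pass the filter
```

In production this kind of pipeline would typically run on Spark, StreamSets or Informatica BDM rather than plain Python, but the extract/transform/load separation is the same.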
We'll also expect you to be:
- Delivering the automation of data engineering pipelines through the removal of manual stages
- Developing and sharing your knowledge of the bank's data structures and metrics, advocating change where needed for product development
- Developing a strategy for streaming data ingestion and transformations
- Educating and embedding new data techniques into the business through role modelling, training and experiment design oversight
The skills you'll need
To be successful in this role, you'll need to be an expert-level programmer and data engineer with a qualification in Computer Science or Software Engineering. You'll also need a strong understanding of data usage and dependencies with wider teams and the end customer, as well as extensive experience in extracting value and features from large-scale data. You'll need ETL experience, and StreamSets or Informatica BDM experience is desirable. Scala and AWS experience is also desirable.
You'll also demonstrate:
- Demonstrable experience of building and maintaining data pipelines
- Strong coding experience in one or more of the following - SQL, Python, PySpark
- Experience of ETL technical design, automated data quality testing, QA and documentation, data warehousing, data modelling and data wrangling
- Extensive experience using RDBMS, ETL pipelines, Python, Hadoop and SQL
- A good understanding of modern code development practices
- Good critical thinking and proven problem solving abilities
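The automated data quality testing mentioned above can be as simple as a set of named checks run over each batch before it is loaded. A minimal sketch in Python (the rule names, columns and sample rows are illustrative assumptions, not a prescribed framework):

```python
# Minimal automated data-quality check sketch; rule and column names are illustrative.
def check_not_null(rows, column):
    """Every row must have a non-null value in the given column."""
    return all(r.get(column) is not None for r in rows)

def check_unique(rows, column):
    """No duplicate values are allowed in the given column."""
    values = [r[column] for r in rows]
    return len(values) == len(set(values))

def run_quality_checks(rows):
    """Run all checks and return the names of any that failed."""
    checks = {
        "customer_id_not_null": check_not_null(rows, "customer_id"),
        "customer_id_unique": check_unique(rows, "customer_id"),
    }
    return [name for name, passed in checks.items() if not passed]

rows = [
    {"customer_id": 1, "spend": 120.5},
    {"customer_id": 2, "spend": 80.0},
    {"customer_id": 2, "spend": 45.25},  # duplicate id: should fail the uniqueness check
]
failures = run_quality_checks(rows)  # ["customer_id_unique"]
```

Tools such as Great Expectations or dbt tests package the same idea with richer reporting, but the principle is identical: every batch is gated by explicit, automated rules rather than manual review.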
It would be ideal if you have experience of using Oracle, Unix scripting, Java, Cloud, API, NoSQL and Kafka.