Job Description
- Design and implement scalable data pipelines to ingest, process, and store large volumes of data. Optimize and maintain existing data systems and infrastructure.
- Develop and maintain documentation related to data architecture, metadata and processes.
- Stay up to date with emerging technologies and industry trends to continuously improve data engineering practices.
- Contribute to the overall knowledge base of the bank as one of its key experts in data management.
- Participate in and contribute to design processes for initiatives in the bank.
- Initiate and propose continuous improvement programs within their domain.
- Provide L3 support for their specific domain, including incident management (for items not solvable by L1 and L2 support) and problem management with corresponding root cause analysis, and implement operationalization initiatives within that domain.
- Participate in proofs of concept implemented by the group.
Qualifications
Soft Skills:
Communication
- Employs multiple strategies for gathering information from key stakeholders.
- Communicates with stakeholders in a manner they understand, translating technical terms into plain language.
- Articulates their work clearly to the work group so the team understands the process.
Leadership
- Works closely with team members; engages and contributes suggestions for improvement.
- Manages themselves with a high level of accountability and ownership.
Stakeholder Management
- Listens to stakeholders and understands their perspective; states their own position clearly; works to identify possible compromises.
Analytical Ability
- Demonstrates the ability to assess the available information, make a decision and act on it.
Technical Skills:
- Proficiency in SQL and experience with relational databases.
- Strong programming skills in Python.
- Experience with ETL tools and data pipeline frameworks (e.g., Apache Airflow, Talend, Informatica).
- Knowledge of data warehousing solutions like Snowflake or Redshift.
- Familiarity with big data technologies such as Hadoop, Spark, or Kafka is preferred.
Application Experience:
Business and Industry Knowledge:
Knowledge of:
- Data protection requirements on PII
- Regulatory requirements on data localisation for the financial sector
- Industry data models such as IBM BDW and IIW, Teradata FSLDM, and SAS IIA.
Experience in:
- Agile Implementation Methodology
- Test-Driven Development