The Opportunity
Manulife's Digital Operations is seeking an experienced Data Engineer to join our team to provide data-driven solutions and support Manulife/John Hancock business units. We are looking for the right individual: someone eager to propose and develop data solutions, learn through team collaboration, drive and execute on our data strategy, help mature our agile practices, and embrace technical challenges as part of a self-organizing team.
On the job you will:
- Design and develop data pipelines and ETL jobs using big data technologies, based on functional and non-functional business requirements
- Design and implement data integration, ingestion, and extraction solutions based on high-level architecture designs
- Identify, design, and implement process improvements and delivery optimizations
- Collaborate with stakeholders, business analysts, and data architects to translate business requirements into technical solutions
- Develop big data and analytics solutions, leveraging new or existing technology, to advance all of Manulife's lines of business
- Perform exploratory data analysis: query and process on-premises or cloud-based data, provide reports, and summarize and visualize the data
- Design, upgrade, and implement new data workflows, automation, tools, and API integrations
- Perform proofs of concept (POCs) on new integration patterns and solutions
- Write and maintain technical documentation
- Perform unit tests and system integration tests
- Execute updates, patches, and other activities required to maintain and enhance the operation of on-premises or cloud-based environments
- Support the Agile delivery squads when required
We are looking for someone with:
- At least 2 years of experience as a Data Engineer, with a focus on big data processing and/or relational databases
- At least 2 years of experience working with the Microsoft Azure data platform, specifically Azure Data Lake, Azure Data Factory, and Azure Databricks
- Experience in any of the following programming/scripting languages: SQL, Python, Shell, Scala
- Experience creating data pipelines and developing complex, optimized queries
- Experience working with structured, semi-structured, and unstructured datasets
- Knowledge of any of the following big data tools and technologies: Hadoop, Spark, Hive, Sqoop, Kafka, NiFi
- Knowledge of relational SQL and NoSQL databases: MSSQL, Postgres, HBase (MongoDB is a plus)
- Experience with workflow management tools: Airflow, crontab, CA Workload Automation
- Knowledge of CI/CD tools
- Knowledge of, or at least basic familiarity with, data visualization in any of the following tools: Tableau, Power BI, QlikView/Qlik Sense
- Knowledge of collaboration tools (e.g., MS Teams/Skype, Confluence, JIRA)
- Experience with any SDLC methodology and familiarity with different Agile methodologies
- A demonstrated commitment to delivering excellent service, balanced with appropriate risk management
- The ability to monitor, validate, and continuously improve methods, and to propose enhancements to data sources that improve usability and results
- Good communication and presentation skills
- An analytical, structured, organized, and proactive approach
- Stakeholder and project management experience is a plus
Our commitment to you:
- Our mission: to be a part of making Decisions Easier and Lives Better
- A leadership team dedicated to your growth and success
- A bold ambition and set of goals to be a leader in driving transformation in our industry
- Our best. Every day.