Data Engineer - Hadoop Developer

Role Description

Looking for candidates with a strong Hadoop development background (functional or object-oriented programming) who understand ETL and the core fundamentals of data warehousing. Spark, Scala, and Python are pluses, since we are moving most of our ETL development to Hadoop/Spark. The client currently uses Teradata, so some background knowledge of Teradata is beneficial; the candidate will need SQL, but not Teradata-specific functions.


  • Analyze & translate functional specifications & change requests into technical specifications.
  • Design, develop, and implement the ETL framework for the Enterprise DW system.
  • Execute unit tests and validate expected results, iterating until test conditions have passed.
  • Translate business needs into end-user applications.
  • Ensure accuracy & integrity of data & applications through analysis, coding, clear documentation & problem resolution.
  • Troubleshoot and remediate issues impacting processes in ETL framework.
  • Modify existing code to fix defects in existing ETLs.
  • Coordinate and collaborate with the ETL team and business users to implement ETL procedures for all new projects, maintain awareness of production activities according to required standards, and provide support for existing applications.
  • Support the ETL schedule and maintain compliance with it; develop and maintain standards for ETL code and an effective project life cycle for all ETL processes.
  • Keep abreast of the tools, techniques and components being used in the industry through research, and apply this knowledge to the system(s) being developed.

For more information, please call Michael on +353 16146058.
