Applications are no longer being accepted for this position
Scala

Key responsibilities
- Develop Scala/Spark programs, scripts, and macros for data extraction, transformation and analysis
- Design and implement solutions to meet business requirements
- Support and maintain existing Hadoop applications and related technologies
- Develop and maintain metadata, user access and security controls
- Develop and maintain technical documentation, including data models, process flows and system diagrams

Requirements
- Minimum 3-5 years of experience in Scala/Spark related projects and/or engagements
- Create Scala/Spark jobs for data transformation and aggregation per complex business requirements
- Should be able to work in a challenging and agile environment with quick turnaround times and strict deadlines
- Perform unit tests of the Scala code
- Raise PRs, trigger builds, and release JAR versions for deployment via the Jenkins pipeline
- Should be familiar with CI/CD concepts and processes
- Peer review the code
- Perform root-cause analysis (RCA) of the bugs raised
- Should have an excellent understanding of the Hadoop ecosystem
- Should be well versed with the following technologies: Jenkins, HQL (Hive queries), Oozie, shell scripting, Git, Splunk

Preferred
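As an illustration of the kind of transformation and aggregation work the responsibilities above describe, here is a minimal sketch in plain Scala. It uses standard collections rather than a live Spark session so it runs standalone; in an actual Spark job the same shape would typically use `DataFrame.groupBy`/`agg`. All record and field names here are hypothetical, not taken from the posting.

```scala
// Hypothetical input record for the sketch.
case class Sale(region: String, amount: Double)

object SalesAgg {
  // Aggregate total sales per region, keeping only regions whose total
  // exceeds a threshold. The collection operations mirror the shape of a
  // Spark aggregation: groupBy ~ DataFrame.groupBy, the summing step ~
  // agg(sum(...)), and the filter ~ a post-aggregation where clause.
  def totalsAbove(sales: Seq[Sale], threshold: Double): Map[String, Double] =
    sales
      .groupBy(_.region)
      .view
      .mapValues(_.map(_.amount).sum)
      .filter { case (_, total) => total > threshold }
      .toMap
}
```

A unit test for such a function, as called for in the requirements, would simply assert on the returned map for a small fixed input.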