Responsibilities:
- Integrate Terraform Cloud with CI/CD pipelines for automated infrastructure deployment
- Monitor and optimize Terraform Cloud performance, scalability, and reliability
- Proactively manage projects and support project implementation
- Create high-quality technical documentation
- Support out-of-hours work when the schedule requires it
- Develop and maintain Big Data solutions using the Hadoop and Spark frameworks – Hive QL, HDFS, YARN, Spark Core, Spark SQL, Sqoop jobs for data transfer, HBase data loads, and PySpark programs (see the PySpark sketch after this list)
- Engage in cloud security activities
- Architect the cloud data platform to scale with data volume while maintaining optimal performance
- Deliver complex, high-quality solutions to clients in response to changing business requirements, following the Software Development Life Cycle (SDLC) model
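
For illustration only, a minimal PySpark sketch of the kind of Spark SQL / Hive work described above. The job name, HDFS path, and database/table names are placeholders, not details of this role's actual systems.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-orders-load")   # hypothetical job name
    .enableHiveSupport()            # enables spark.sql / saveAsTable against the Hive metastore
    .getOrCreate()
)

# Read raw data landed on HDFS (e.g. by a Sqoop import job); path is a placeholder
orders = spark.read.parquet("hdfs:///data/raw/orders/")

# Simple aggregation with Spark SQL functions
daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("*").alias("order_count"),
    )
)

# Persist as a partitioned Hive table for downstream Hive QL queries; name is a placeholder
(
    daily.write
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("analytics.daily_orders")
)

spark.stop()
```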
Requirements:
- Resourceful and self-driven individual with a sense of urgency and commitment
- Ability to work effectively in a fast-paced implementation environment
- Minimum of 5 years of working experience in a related field is required for this position
- Preferably Executives specializing in AWS Cloud, Linux, Java, Python, and scripting
- Must have experience with Azure Delta Lake, Azure Synapse Analytics, Azure Cosmos DB
- Must have experience with Terraform, Bitbucket, JIRA, Azure DevOps, and Docker (a sketch of triggering Terraform Cloud runs from CI follows this list)
- Microsoft Certified: Azure Data Engineer Associate certification
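
As a rough sketch of the Terraform Cloud / CI integration mentioned above, the snippet below queues a Terraform Cloud run from a pipeline step via the Runs API. The workspace ID, environment variable names, and run message are assumptions for illustration, not this role's actual configuration.

```python
import os
import requests

TFC_TOKEN = os.environ["TFC_TOKEN"]            # API token supplied as a pipeline secret (assumed name)
WORKSPACE_ID = os.environ["TFC_WORKSPACE_ID"]  # e.g. "ws-XXXXXXXX" (placeholder)

headers = {
    "Authorization": f"Bearer {TFC_TOKEN}",
    "Content-Type": "application/vnd.api+json",
}

# JSON:API payload that asks Terraform Cloud to start a run in the given workspace
payload = {
    "data": {
        "type": "runs",
        "attributes": {"message": "Triggered from CI/CD pipeline"},
        "relationships": {
            "workspace": {"data": {"type": "workspaces", "id": WORKSPACE_ID}}
        },
    }
}

resp = requests.post(
    "https://app.terraform.io/api/v2/runs",
    json=payload,
    headers=headers,
    timeout=30,
)
resp.raise_for_status()
print("Queued Terraform Cloud run:", resp.json()["data"]["id"])
```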