Position Overview
Autodesk is looking for diverse engineering candidates to join the Licensing and Compliance team, which builds the systems that detect and expose non-compliant users of Autodesk software. This is a key strategic initiative for the company!
As a Data Engineer, you will work to rapidly improve critical data processing and analytics pipelines. You will tackle hard problems to improve the platform’s reliability, resiliency, and scalability.
We are looking for someone who thrives on autonomy and has experience driving long-term projects to completion. You are detail- and quality-oriented, and excited about the prospect of having a big impact with data at Autodesk. Our tech stack includes Hive, Spark, Presto, Jenkins, Snowflake, Power BI, Looker, and various AWS services.
Responsibilities
- Bring a product-focused mindset: understand business requirements and architect systems that scale and extend to accommodate those needs
- Break down complex problems, document technical solutions and sequence work to make fast, iterative improvements
- Build and scale data infrastructure that powers batch and real-time data processing of billions of records
- Automate cloud infrastructure, services, and observability
- Develop CI/CD pipelines and testing automation
- Interface with data engineers, data scientists, product managers and all data stakeholders to understand their needs and promote best practices
- You have a growth mindset. You will identify business challenges and opportunities for improvement, and address them using data analysis and data mining to make strategic or tactical recommendations
- You will support analytics and provide critical insights around product usage, campaign performance, funnel metrics, segmentation, conversion, and revenue growth
- You will deliver ad-hoc analyses, long-term projects, reports, and dashboards to surface new insights and measure progress on key initiatives
- You will work closely with business stakeholders to understand and maintain focus on their analytical needs, including identifying critical metrics and KPIs
- You will partner with different teams within the organization to understand business needs and requirements
- You will deliver presentations that will distill complex problems into clear insights
Minimum Qualifications
- 4-7 years of relevant industry experience in big data systems, data processing and SQL databases
- 3+ years of coding experience with Spark DataFrames, Spark SQL, and PySpark
- 3+ years of hands-on programming skills, able to write modular, maintainable code, preferably in Python and SQL
- Good understanding of SQL, dimensional modeling, and analytical big data warehouses like Hive and Snowflake
- Familiarity with ETL workflow management tools like Airflow
- 2+ years of building reports and dashboards in BI tools; knowledge of Looker a plus
- Experience with version control and CI/CD tools like Git and Jenkins
- Experience working with and analyzing data in notebook environments like Jupyter, EMR Notebooks, and Apache Zeppelin
- Problem solver with excellent written and interpersonal skills; ability to make sound, complex decisions in a fast-paced, technical environment
- Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent training, fellowship, or work experience