About Data Axle:
Data Axle Inc. has been an industry leader in data, marketing solutions, sales, and research in the US for over 50 years. Data Axle has set up a strategic global centre of excellence in Pune. This centre delivers mission-critical data services to its global customers, powered by its proprietary cloud-based technology platform and leveraging proprietary business and consumer databases. Data Axle is headquartered in Dallas, TX, USA.
Roles & Responsibilities:
We are looking for a Senior Data Engineer who will:
- Design, implement, and support an analytical data infrastructure that provides ad-hoc access to large datasets and computing power.
- Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL and AWS big data technologies.
- Create and support real-time data pipelines built on AWS technologies including Glue, Redshift/Spectrum, Kinesis, EMR, and Athena.
- Continually research the latest big data and visualization technologies to provide new capabilities and increase efficiency.
- Work closely with team members to drive real-time model implementations for monitoring and alerting on risk systems.
- Collaborate with other tech teams to implement advanced analytics algorithms that exploit our rich datasets for statistical analysis, prediction, clustering, and machine learning.
- Help continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
Qualifications:
- 7+ years of industry experience in software development, data engineering, business intelligence, data science, or related field with a track record of manipulating, processing, and extracting value from large datasets.
- Bachelor’s degree in Computer Science, Engineering, Mathematics, or a related technical discipline.
- Demonstrated strength in data modeling, ETL development, and data warehousing.
- Experience with big data processing technologies such as Spark.
- Knowledge of data management fundamentals and data storage principles.
- Experience using business intelligence reporting tools (Tableau, Business Objects, Cognos, Power BI etc.).
- Experience working with AWS big data technologies (Redshift, S3, EMR, Spark).
- Experience building and operating highly available, distributed systems for the extraction, ingestion, and processing of large datasets.
- Experience working with distributed systems for data storage and computing.
- Knowledge of software engineering best practices across the development lifecycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations.