Resource Quick
Python/PySpark Developer - Data Engineering
Job Location
Bangalore, India
Job Description
Key Responsibilities:
- Develop robust, scalable Python and PySpark applications to support data engineering and analytics initiatives.
- Design and deploy cloud-native solutions on AWS, leveraging services such as EC2, Lambda, S3, RDS, Redshift, and Glue.
- Manage and transform large datasets, ensuring efficient data ingestion and migration through well-designed pipelines.
- Use SQL Server and stored procedures for effective data management and transformation.
- Develop, deploy, and manage scalable ETL and orchestration solutions.

Required Skills and Experience:
- Strong proficiency in Python and PySpark.
- Hands-on experience with AWS services (EC2, Lambda, S3, RDS, Redshift, Glue).
- Solid understanding of data engineering principles and practices.
- Experience with SQL Server and stored procedures.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration skills.

Preferred Skills:
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Knowledge of data warehousing and data lake concepts.
- Experience with CI/CD pipelines and DevOps practices.
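For candidates gauging fit, the extract-transform-load pipeline work described above can be sketched in miniature. This is a stdlib-only illustration of the ETL shape (in production the role would use PySpark, Glue, and SQL Server instead); the table, column names, and sample data are hypothetical, not taken from the posting.

```python
import csv
import io
import sqlite3

# Hypothetical raw input standing in for an ingestion source;
# column names and values are illustrative only.
RAW_CSV = """order_id,amount,region
1,120.50,south
2,85.00,north
3,42.25,south
"""

def extract(raw: str) -> list[dict]:
    """Extract: parse raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: cast types and normalize the region field."""
    return [(int(r["order_id"]), float(r["amount"]), r["region"].upper())
            for r in rows]

def load(records: list[tuple], conn: sqlite3.Connection) -> None:
    """Load: write transformed records into a warehouse-style table."""
    conn.execute("CREATE TABLE IF NOT EXISTS orders "
                 "(order_id INTEGER, amount REAL, region TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", records)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)  # 247.75
```

In PySpark the same extract/transform/load stages would typically map onto `spark.read`, DataFrame transformations, and `DataFrame.write`, with Glue or another orchestrator scheduling the job.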
Location: Bangalore, IN
Posted Date: 11/14/2024
Contact Information
Contact: Human Resources, Resource Quick