ATech
Hadoop Administrator - Big Data
Job Location
India
Job Description
Job Profile: Hadoop Administrator, WFH and WFO (hiring offices: Jaipur and Ahmedabad)
Role: Big Data Engineer
Industry Type: IT Services & Consulting
Job description:
- Good understanding of SDLC and agile methodologies.
- Installation and configuration of Hadoop clusters, including HDFS, MapReduce, Hive, Pig, HBase, and other related tools.
- Managing and monitoring Hadoop clusters to ensure high availability and performance.
- Planning and implementing data backup and disaster recovery strategies for Hadoop clusters.
- Proactively monitoring and tuning Hadoop cluster performance to optimize resource utilization and prevent bottlenecks.
- Providing technical support to developers and end users as needed.
- Awareness of the latest technologies and trends.
- Logical thinking and problem-solving skills, along with an ability to collaborate.
- Work with customers and project management teams to gather requirements and translate them into technical specifications.
- Manage the day-to-day ticketing queue for clients, ensuring completion in line with client needs and expectations.
- Help prepare recurring internal meetings and advise on timeline progression.
- Ensure all tickets are up to date; responsible for ticket escalation as needed.
- Build weekly client slides providing a snapshot of projects and their status.
- Participate in QA, UAT, and production launch support related to various customer engagements.
- Collaborate with the product development team to help support platform functionality.
- Provide Tier 2/3 technical support for custom integrations.
- Working knowledge of support processes and experience working in a support environment.
- Drive resolution of both routine and non-routine problems and hold resources accountable for providing permanent solutions designed to improve the customer experience.
- Strong Linux administration and troubleshooting skills to analyze logs and identify the root cause of issues.
- Troubleshoot all cluster-related issues.
- Perform health checks of all cluster nodes (via logs as well as physical inspection).
- Manage and monitor HDFS and YARN (resources, tuning, benchmarking).
- Ability to communicate comfortably with all levels of management and with vendors.
- Ability to work closely with and develop relationships with clients.
- Manage Hadoop Distributed File System (HDFS) cluster users and permissions (see the sketch after this list).
- Analyze system failures, identify root causes, and recommend a course of action; document system processes and procedures for future reference.
- Troubleshoot application errors and ensure they do not recur.
- Configure the NameNode to ensure high availability.
- Analyze storage data volume and assign space in HDFS.
- Manage software and hardware deployments in the Hadoop ecosystem and the expansion of existing clusters.
- Implement and maintain Hadoop clusters.
- Deploy and manage Hadoop infrastructure on an ongoing basis.
(ref:hirist.tech)
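As an illustration of two of the duties listed above (analyzing storage data volume and managing HDFS users and permissions), the following is a minimal, hypothetical sketch using Hadoop's public Java FileSystem API; the directory path, owner, and group names are placeholders, and a real cluster would supply its own configuration files and credentials.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class HdfsAdminSketch {
    public static void main(String[] args) throws Exception {
        // Picks up core-site.xml / hdfs-site.xml from the client classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Analyze storage data volume: report cluster capacity, usage, and remaining space.
        FsStatus status = fs.getStatus();
        System.out.printf("Capacity: %d bytes, Used: %d bytes, Remaining: %d bytes%n",
                status.getCapacity(), status.getUsed(), status.getRemaining());

        // Manage users and permissions on a project directory
        // (path, owner, and group below are hypothetical examples).
        Path projectDir = new Path("/data/projects/example");
        if (!fs.exists(projectDir)) {
            fs.mkdirs(projectDir);
        }
        fs.setOwner(projectDir, "etl_user", "analytics");             // requires HDFS superuser privileges
        fs.setPermission(projectDir, new FsPermission((short) 0750)); // rwxr-x---

        fs.close();
    }
}
```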
Location: India
Posted Date: 11/23/2024
Contact Information
Contact: Human Resources, ATech