Listing Currently Running Hadoop Jobs and Killing Running Jobs

When you have jobs running in Hadoop, you can use the MapReduce web view to list the currently running jobs. But what if you need to kill a running job because it has started malfunctioning or, in the worst case, is stuck in an infinite loop? I have seen several scenarios where a submitted job got stuck in a problematic state due to a code defect in the MapReduce job or in the Hadoop cluster itself. In any such situation, you need to manually kill the job that has already started.

 

To kill a currently running Hadoop job, you first look up its Job ID and then kill the job, using the following two commands:

  • hadoop job -list
  • hadoop job -kill <JobID>

To list the currently running jobs from the Hadoop command shell, use the following command:

 

 On Linux:      $ bin/hadoop job -list
 On Windows:    HADOOP_HOME = C:\Apps\Dist\
                HADOOP_HOME\bin\hadoop job -list

 

The above command returns the job details shown below. In the State column, 1 means the job is RUNNING:

 

 [Linux]
 1 jobs currently running
 JobId                   State   StartTime       UserName
 job_201203293423_0001   1       1334506474312   avkash

 [Windows]
 c:\apps\dist>hadoop job -list
 1 jobs currently running
 JobId                   State   StartTime       UserName   Priority   SchedulingInfo
 job_201204011859_0002   1       1333307249654   avkash     NORMAL     NA
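If you want to capture the JobId in a script rather than copying it by hand, you can parse the -list output. Here is a minimal sketch; it assumes the column layout shown above (JobId in column 1, UserName in column 4, after a two-line header) and filters on the example user name, so adjust both to match your cluster:

 $ bin/hadoop job -list | awk 'NR > 2 && $4 == "avkash" { print $1 }'
 job_201203293423_0001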

Once you have the JobID, you can use the following command to kill the job:

  
 On Linux:      $ bin/hadoop job -kill <JobID>
 On Windows:    HADOOP_HOME = C:\Apps\Dist\
                HADOOP_HOME\bin\hadoop job -kill <JobID>
  
 [Windows]
 c:\apps\dist>hadoop job -kill job_201204011859_0002
 Killed job job_201204011859_0002
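
Putting both commands together, the small hypothetical script below kills every currently running job owned by the user passed as its first argument. It relies on the same assumed -list column layout as the sketch above, so verify the output format on your cluster before running it:

 #!/bin/sh
 # Kill all running jobs owned by the user named in the first argument.
 # Assumes "hadoop job -list" prints a two-line header followed by one job
 # per line, with JobId in column 1 and UserName in column 4 (as shown above).
 for jobid in $(bin/hadoop job -list | awk -v u="$1" 'NR > 2 && $4 == u { print $1 }'); do
     echo "Killing $jobid"
     bin/hadoop job -kill "$jobid"
 done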
