ml.shifu.guagua.hadoop.io
Class GuaguaInputSplit

java.lang.Object
  extended by org.apache.hadoop.mapreduce.InputSplit
      extended by ml.shifu.guagua.hadoop.io.GuaguaInputSplit
All Implemented Interfaces:
org.apache.hadoop.io.Writable

public class GuaguaInputSplit
extends org.apache.hadoop.mapreduce.InputSplit
implements org.apache.hadoop.io.Writable

InputSplit implementation in Guagua. A mapper whose isMaster flag is true is the master task, and the master's FileSplit is null.
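To illustrate the master/worker convention described above, here is a minimal self-contained sketch. `SplitSketch` is a hypothetical stand-in for GuaguaInputSplit (plain strings replace FileSplit objects, and the field layout is an assumption), showing only that the master split carries no file splits:

```java
// Illustrative sketch only: a simplified stand-in for GuaguaInputSplit.
// Field names and layout are assumptions, not the real implementation.
public class SplitSketch {
    private final boolean isMaster;
    private final String[] filePaths; // stands in for FileSplit[]; null for the master

    public SplitSketch(boolean isMaster, String... filePaths) {
        this.isMaster = isMaster;
        // Per the class description, the master split has no file splits.
        this.filePaths = isMaster ? null : filePaths;
    }

    public boolean isMaster() { return isMaster; }
    public String[] getFileSplits() { return filePaths; }

    public static void main(String[] args) {
        SplitSketch master = new SplitSketch(true);
        SplitSketch worker = new SplitSketch(false, "part-00000");
        System.out.println(master.isMaster() && master.getFileSplits() == null);   // prints true
        System.out.println(!worker.isMaster() && worker.getFileSplits().length == 1); // prints true
    }
}
```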


Constructor Summary
GuaguaInputSplit()
          Default constructor with no settings.
GuaguaInputSplit(boolean isMaster, org.apache.hadoop.mapreduce.lib.input.FileSplit... fileSplits)
          Constructor with isMaster and fileSplits settings.
GuaguaInputSplit(boolean isMaster, org.apache.hadoop.mapreduce.lib.input.FileSplit fileSplit)
          Constructor with isMaster and one FileSplit setting.
 
Method Summary
 org.apache.hadoop.mapreduce.lib.input.FileSplit[] getFileSplits()
           
 long getLength()
          For the master split, use Long.MAX_VALUE as its length so that it becomes the first task of the Hadoop job.
 String[] getLocations()
          Data locality function: returns all hosts of all file splits.
 boolean isMaster()
           
 void readFields(DataInput in)
           
 void setFileSplits(org.apache.hadoop.mapreduce.lib.input.FileSplit[] fileSplits)
           
 void setMaster(boolean isMaster)
           
 String toString()
           
 void write(DataOutput out)
           
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
 

Constructor Detail

GuaguaInputSplit

public GuaguaInputSplit()
Default constructor with no settings.


GuaguaInputSplit

public GuaguaInputSplit(boolean isMaster,
                        org.apache.hadoop.mapreduce.lib.input.FileSplit... fileSplits)
Constructor with isMaster and fileSplits settings.

Parameters:
isMaster - Whether the input split is master split.
fileSplits - File splits used for mapper task.

GuaguaInputSplit

public GuaguaInputSplit(boolean isMaster,
                        org.apache.hadoop.mapreduce.lib.input.FileSplit fileSplit)
Constructor with isMaster and one FileSplit setting.

Parameters:
isMaster - Whether the input split is master split.
fileSplit - File split used for mapper task.
Method Detail

write

public void write(DataOutput out)
           throws IOException
Specified by:
write in interface org.apache.hadoop.io.Writable
Throws:
IOException

readFields

public void readFields(DataInput in)
                throws IOException
Specified by:
readFields in interface org.apache.hadoop.io.Writable
Throws:
IOException
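The write/readFields pair above follows Hadoop's Writable contract: fields are written to a DataOutput and read back from a DataInput in the same order. The sketch below demonstrates that round-trip pattern with plain java.io streams; the exact field order used by GuaguaInputSplit (master flag first, then the file-split payload) is an assumption for illustration:

```java
import java.io.*;

// Hypothetical sketch of a Writable-style round trip, not the real
// GuaguaInputSplit serialization. The assumed layout: boolean master
// flag first, then the number of file splits.
public class WritableRoundTrip {
    static byte[] write(boolean isMaster, int numFileSplits) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bytes);
            out.writeBoolean(isMaster);   // master flag written first (assumed layout)
            out.writeInt(numFileSplits);  // file-split payload would follow here
            return bytes.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static boolean readMasterFlag(byte[] data) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
            return in.readBoolean();      // fields must be read back in write order
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(readMasterFlag(write(true, 2))); // prints true
    }
}
```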

getLength

public long getLength()
               throws IOException,
                      InterruptedException
For the master split, use Long.MAX_VALUE as its length so that it becomes the first task of the Hadoop job. This also makes it convenient for users to identify the master in the Hadoop UI.

Specified by:
getLength in class org.apache.hadoop.mapreduce.InputSplit
Throws:
IOException
InterruptedException
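The reason Long.MAX_VALUE makes the master the first task, per the description above, is that Hadoop's job client sorts input splits by length, largest first, before creating tasks. A small self-contained sketch of that ordering (the worker-split lengths are made up for illustration):

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch of split ordering: Hadoop schedules larger splits first, so a
// split reporting Long.MAX_VALUE always sorts to the front.
public class SplitOrderSketch {
    static long firstScheduledLength(long[] splitLengths) {
        Long[] sorted = Arrays.stream(splitLengths).boxed().toArray(Long[]::new);
        Arrays.sort(sorted, Comparator.reverseOrder()); // biggest split first
        return sorted[0];
    }

    public static void main(String[] args) {
        // Two hypothetical worker splits plus the master reporting MAX_VALUE.
        long[] lengths = {64L << 20, Long.MAX_VALUE, 128L << 20};
        System.out.println(firstScheduledLength(lengths) == Long.MAX_VALUE); // prints true
    }
}
```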

getLocations

public String[] getLocations()
                      throws IOException,
                             InterruptedException
Data locality function: returns all hosts of all file splits.

Specified by:
getLocations in class org.apache.hadoop.mapreduce.InputSplit
Throws:
IOException
InterruptedException

isMaster

public boolean isMaster()

setMaster

public void setMaster(boolean isMaster)

getFileSplits

public org.apache.hadoop.mapreduce.lib.input.FileSplit[] getFileSplits()

setFileSplits

public void setFileSplits(org.apache.hadoop.mapreduce.lib.input.FileSplit[] fileSplits)

toString

public String toString()
Overrides:
toString in class Object


Copyright © 2015. All Rights Reserved.