java.lang.Object
  org.apache.hadoop.mapreduce.InputSplit
    ml.shifu.guagua.mapreduce.GuaguaInputSplit
public class GuaguaInputSplit
extends org.apache.hadoop.mapreduce.InputSplit
implements org.apache.hadoop.io.Writable

An InputSplit implementation in guagua for Hadoop MapReduce jobs.

If a mapper's isMaster is true, it is the master; for the master, fileSplits is null so far.

For a worker, the input fileSplits are included; a FileSplit array is used so that guagua can combine multiple FileSplits into one task.
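The master/worker distinction above can be sketched with a simplified stand-in class. This is not the real GuaguaInputSplit: FileSplit is replaced by a plain path string so the sketch runs with only the JDK.

```java
import java.util.Arrays;

// Simplified stand-in for GuaguaInputSplit: FileSplit is replaced
// by a plain path string to keep the example self-contained.
public class SplitSketch {
    static class Split {
        final boolean isMaster;
        final String[] fileSplits; // null for the master split

        Split(boolean isMaster, String... fileSplits) {
            this.isMaster = isMaster;
            // For the master, no file splits are attached so far.
            this.fileSplits = isMaster ? null : fileSplits;
        }
    }

    public static void main(String[] args) {
        // One master split with no input files attached.
        Split master = new Split(true);
        // One worker split combining two file splits into a single task.
        Split worker = new Split(false, "/data/part-00000", "/data/part-00001");

        System.out.println(master.isMaster + " " + master.fileSplits); // true null
        System.out.println(worker.isMaster + " " + Arrays.toString(worker.fileSplits));
    }
}
```

Combining several FileSplits in one worker split is what lets guagua run fewer, larger tasks than one-split-per-file would produce.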
| Constructor Summary | |
|---|---|
| GuaguaInputSplit() | Default constructor without any settings. |
| GuaguaInputSplit(boolean isMaster, org.apache.hadoop.mapreduce.lib.input.FileSplit... fileSplits) | Constructor with isMaster and fileSplits settings. |
| GuaguaInputSplit(boolean isMaster, org.apache.hadoop.mapreduce.lib.input.FileSplit fileSplit) | Constructor with isMaster and one FileSplit setting. |
| Method Summary | |
|---|---|
| org.apache.hadoop.mapreduce.lib.input.FileSplit[] | getFileSplits() |
| long | getLength() For the master split, Long.MAX_VALUE is used as its length so that it becomes the first task of the Hadoop job. |
| String[] | getLocations() This is just a mock. |
| boolean | isMaster() |
| void | readFields(DataInput in) |
| void | setFileSplits(org.apache.hadoop.mapreduce.lib.input.FileSplit[] fileSplits) |
| void | setMaster(boolean isMaster) |
| String | toString() |
| void | write(DataOutput out) |
| Methods inherited from class java.lang.Object |
|---|
| clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait |
| Constructor Detail |
|---|

public GuaguaInputSplit()

Default constructor without any settings.

public GuaguaInputSplit(boolean isMaster,
                        org.apache.hadoop.mapreduce.lib.input.FileSplit... fileSplits)

Constructor with isMaster and fileSplits settings.

Parameters:
isMaster - whether the input split is the master split.
fileSplits - file splits used for the mapper task.

public GuaguaInputSplit(boolean isMaster,
                        org.apache.hadoop.mapreduce.lib.input.FileSplit fileSplit)

Constructor with isMaster and one FileSplit setting.

Parameters:
isMaster - whether the input split is the master split.
fileSplit - file split used for the mapper task.

| Method Detail |
|---|
|---|
public void write(DataOutput out)
           throws IOException

Specified by: write in interface org.apache.hadoop.io.Writable
Throws: IOException

public void readFields(DataInput in)
           throws IOException

Specified by: readFields in interface org.apache.hadoop.io.Writable
Throws: IOException
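The write/readFields pair follows Hadoop's Writable contract: every field written by write must be read back by readFields in exactly the same order. A minimal, self-contained sketch of that round-trip pattern, using a simplified stand-in class rather than the real GuaguaInputSplit:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

// Simplified stand-in for GuaguaInputSplit: a master flag plus a path list.
class MockSplit {
    boolean isMaster;
    String[] paths = new String[0];

    // Write fields in a fixed order, as a Hadoop Writable would.
    void write(DataOutput out) throws IOException {
        out.writeBoolean(isMaster);
        out.writeInt(paths.length);
        for (String p : paths) {
            out.writeUTF(p);
        }
    }

    // Read fields back in exactly the order they were written.
    void readFields(DataInput in) throws IOException {
        isMaster = in.readBoolean();
        paths = new String[in.readInt()];
        for (int i = 0; i < paths.length; i++) {
            paths[i] = in.readUTF();
        }
    }
}

public class RoundTrip {
    public static void main(String[] args) throws IOException {
        MockSplit original = new MockSplit();
        original.isMaster = false;
        original.paths = new String[] { "/data/part-00000", "/data/part-00001" };

        // Serialize, then deserialize into a fresh instance.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        original.write(new DataOutputStream(buffer));
        MockSplit copy = new MockSplit();
        copy.readFields(new DataInputStream(new ByteArrayInputStream(buffer.toByteArray())));

        System.out.println(copy.isMaster);     // false
        System.out.println(copy.paths.length); // 2
    }
}
```

Hadoop uses this mechanism to ship each split from the job client to the task that will process it, so a mismatch between the two methods corrupts every task's input.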
public long getLength()
           throws IOException,
                  InterruptedException

For the master split, Long.MAX_VALUE is used as its length so that it becomes the first task of the Hadoop job. This makes it convenient for users to find the master in the Hadoop UI.

Specified by: getLength in class org.apache.hadoop.mapreduce.InputSplit
Throws: IOException, InterruptedException
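Hadoop sorts input splits by reported length, largest first, before numbering tasks, so a split that reports Long.MAX_VALUE comes out as task 0. A self-contained sketch of that ordering effect (a simplified comparator over stand-in splits, not Hadoop's actual scheduling code):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class SplitOrdering {
    // Minimal stand-in: each split just reports a label and a length.
    record Split(String label, long length) {}

    public static void main(String[] args) {
        List<Split> splits = new ArrayList<>();
        splits.add(new Split("worker-1", 128L * 1024 * 1024));
        splits.add(new Split("worker-2", 64L * 1024 * 1024));
        // The master split reports Long.MAX_VALUE as its length.
        splits.add(new Split("master", Long.MAX_VALUE));

        // Sort by length, largest first, as Hadoop does before numbering tasks.
        splits.sort(Comparator.comparingLong(Split::length).reversed());

        // The master split now comes out as the first task.
        System.out.println(splits.get(0).label()); // master
    }
}
```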
public String[] getLocations()
           throws IOException,
                  InterruptedException

This is just a mock.

Specified by: getLocations in class org.apache.hadoop.mapreduce.InputSplit
Throws: IOException, InterruptedException

public boolean isMaster()

public void setMaster(boolean isMaster)

public org.apache.hadoop.mapreduce.lib.input.FileSplit[] getFileSplits()

public void setFileSplits(org.apache.hadoop.mapreduce.lib.input.FileSplit[] fileSplits)

public String toString()

Overrides: toString in class Object