public class BlockPlacementPolicyWithNodeGroup extends org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault
Fields inherited from class org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault: clusterMap, considerLoad, heartbeatInterval, host2datanodeMap, tolerateHeartbeatMultiplier

| Modifier | Constructor and Description |
|---|---|
| protected | BlockPlacementPolicyWithNodeGroup() |
| protected | BlockPlacementPolicyWithNodeGroup(Configuration conf, org.apache.hadoop.hdfs.server.blockmanagement.FSClusterStats stats, NetworkTopology clusterMap, org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager datanodeManager) |
| Modifier and Type | Method and Description |
|---|---|
| protected int | addToExcludedNodes(org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor chosenNode, Set<Node> excludedNodes): Find other nodes in the same nodegroup as localMachine and add them to excludedNodes, since replicas should not be duplicated on nodes within the same nodegroup. |
| protected DatanodeStorageInfo | chooseLocalRack(Node localMachine, Set<Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<StorageType,Integer> storageTypes): Choose one node from the rack that localMachine is on. |
| protected DatanodeStorageInfo | chooseLocalStorage(Node localMachine, Set<Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<StorageType,Integer> storageTypes, boolean fallbackToLocalRack): Choose the local node of localMachine as the target. |
| protected void | chooseRemoteRack(int numOfReplicas, org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor localMachine, Set<Node> excludedNodes, long blocksize, int maxReplicasPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<StorageType,Integer> storageTypes): Choose numOfReplicas nodes from racks that localMachine is NOT on. |
| protected String | getRack(org.apache.hadoop.hdfs.protocol.DatanodeInfo cur): Get the rack string from a datanode. |
| void | initialize(Configuration conf, org.apache.hadoop.hdfs.server.blockmanagement.FSClusterStats stats, NetworkTopology clusterMap, org.apache.hadoop.hdfs.server.blockmanagement.Host2NodesMap host2datanodeMap): Used to set up a BlockPlacementPolicy object. |
| Collection<DatanodeStorageInfo> | pickupReplicaSet(Collection<DatanodeStorageInfo> first, Collection<DatanodeStorageInfo> second, Map<String,List<DatanodeStorageInfo>> rackMap): Pick up the replica set from which to delete a replica of an over-replicated block. |
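The nodegroup-aware exclusion described for addToExcludedNodes can be illustrated with a minimal, hypothetical sketch. It is not the Hadoop implementation: nodes are modeled here as plain topology path strings of the form "/rack/nodegroup/host" rather than DatanodeDescriptor objects, and the cluster is a simple collection. The idea shown is the documented one: once a node is chosen, every node sharing its nodegroup joins the excluded set, so no two replicas land in one nodegroup.

```java
import java.util.*;

// Simplified, illustrative model (not Hadoop's API): nodes are topology
// paths like "/rack1/ng1/host1". On a four-layer topology, replicas must
// not share a nodegroup, so choosing a node excludes its whole nodegroup.
public class NodeGroupExclusionSketch {

    // Nodegroup prefix of a path, e.g. "/rack1/ng1/host1" -> "/rack1/ng1".
    static String nodeGroup(String path) {
        return path.substring(0, path.lastIndexOf('/'));
    }

    // Mimics addToExcludedNodes: add every cluster node in chosenNode's
    // nodegroup (including chosenNode itself) to excludedNodes; return
    // the number of nodes newly added.
    static int addToExcludedNodes(String chosenNode,
                                  Collection<String> clusterNodes,
                                  Set<String> excludedNodes) {
        String group = nodeGroup(chosenNode);
        int added = 0;
        for (String node : clusterNodes) {
            if (nodeGroup(node).equals(group) && excludedNodes.add(node)) {
                added++;
            }
        }
        return added;
    }

    public static void main(String[] args) {
        List<String> cluster = Arrays.asList(
                "/rack1/ng1/host1", "/rack1/ng1/host2",
                "/rack1/ng2/host3", "/rack2/ng3/host4");
        Set<String> excluded = new HashSet<>();
        int added = addToExcludedNodes("/rack1/ng1/host1", cluster, excluded);
        System.out.println(added + " " + excluded.contains("/rack1/ng1/host2"));
        // prints "2 true": host1 and its nodegroup peer host2 are excluded
    }
}
```

The return value matches the summary's `protected int` signature, which lets the caller adjust how many more candidates it still needs to find.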
Methods inherited from class org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault: chooseDataNode, chooseRandom, chooseRandom, chooseReplicasToDelete, chooseReplicaToDelete, chooseTarget, verifyBlockPlacement

Methods inherited from class org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: adjustSetsWithChosenReplica, getInstance, splitNodesWithRack

protected BlockPlacementPolicyWithNodeGroup(Configuration conf, org.apache.hadoop.hdfs.server.blockmanagement.FSClusterStats stats, NetworkTopology clusterMap, org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager datanodeManager)

protected BlockPlacementPolicyWithNodeGroup()
public void initialize(Configuration conf, org.apache.hadoop.hdfs.server.blockmanagement.FSClusterStats stats, NetworkTopology clusterMap, org.apache.hadoop.hdfs.server.blockmanagement.Host2NodesMap host2datanodeMap)

Used to set up a BlockPlacementPolicy object.
Specified by: initialize in class org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
Overrides: initialize in class org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault
Parameters: conf - the configuration object; stats - retrieve cluster status from here; clusterMap - cluster topology

protected DatanodeStorageInfo chooseLocalStorage(Node localMachine, Set<Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<StorageType,Integer> storageTypes, boolean fallbackToLocalRack) throws org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.NotEnoughReplicasException
Overrides: chooseLocalStorage in class org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault
Throws: org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.NotEnoughReplicasException

protected DatanodeStorageInfo chooseLocalRack(Node localMachine, Set<Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<StorageType,Integer> storageTypes) throws org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.NotEnoughReplicasException
Choose one node from the rack that localMachine is on.
Overrides: chooseLocalRack in class org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault
Throws: org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.NotEnoughReplicasException

protected void chooseRemoteRack(int numOfReplicas, org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor localMachine, Set<Node> excludedNodes, long blocksize, int maxReplicasPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<StorageType,Integer> storageTypes) throws org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.NotEnoughReplicasException
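To make chooseRemoteRack's contract concrete, here is a minimal, hypothetical sketch under the same simplified topology-path model as above (plain "/rack/nodegroup/host" strings, not Hadoop's DatanodeDescriptor or NetworkTopology types): candidates are nodes whose rack differs from localMachine's and that are not already excluded.

```java
import java.util.*;

// Illustrative only: a toy version of remote-rack selection over
// "/rack/nodegroup/host" path strings. The real method also balances
// load, storage types, and staleness, none of which is modeled here.
public class RemoteRackSketch {

    // Rack is the first path component, e.g. "/rack1/ng1/host1" -> "/rack1".
    static String rack(String path) {
        return path.substring(0, path.indexOf('/', 1));
    }

    // Return up to numOfReplicas non-excluded nodes from racks that
    // localMachine is NOT on, in cluster iteration order.
    static List<String> chooseRemoteRack(int numOfReplicas,
                                         String localMachine,
                                         Collection<String> clusterNodes,
                                         Set<String> excludedNodes) {
        String localRack = rack(localMachine);
        List<String> chosen = new ArrayList<>();
        for (String node : clusterNodes) {
            if (chosen.size() == numOfReplicas) break;
            if (!rack(node).equals(localRack) && !excludedNodes.contains(node)) {
                chosen.add(node);
            }
        }
        return chosen;
    }

    public static void main(String[] args) {
        List<String> cluster = Arrays.asList(
                "/rack1/ng1/host1", "/rack2/ng2/host2", "/rack3/ng3/host3");
        System.out.println(chooseRemoteRack(1, "/rack1/ng1/host1",
                cluster, new HashSet<>()));
        // prints "[/rack2/ng2/host2]": the first node on a different rack
    }
}
```

When fewer than numOfReplicas remote nodes exist, the real method signals this via NotEnoughReplicasException; the sketch simply returns a shorter list.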
protected String getRack(org.apache.hadoop.hdfs.protocol.DatanodeInfo cur)
Overrides: getRack in class org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy

protected int addToExcludedNodes(org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor chosenNode, Set<Node> excludedNodes)
Overrides: addToExcludedNodes in class org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault

public Collection<DatanodeStorageInfo> pickupReplicaSet(Collection<DatanodeStorageInfo> first, Collection<DatanodeStorageInfo> second, Map<String,List<DatanodeStorageInfo>> rackMap)
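The priority that pickupReplicaSet documents, choosing where to delete a replica of an over-replicated block, can be sketched in a simplified form. This is a hypothetical illustration, not the Hadoop code: the real method also consults rackMap to split candidates by rack and nodegroup, which is omitted here. What remains is the documented fallback order: delete from the first candidate set when possible, and only fall back to the second when the first is empty.

```java
import java.util.*;

// Illustrative only: the two-tier fallback of pickupReplicaSet, with
// replicas modeled as path strings and the rackMap refinement omitted.
// "first" holds replicas on racks/nodegroups carrying more than one
// replica (safe to trim); "second" is the fallback set.
public class PickupReplicaSetSketch {

    static Collection<String> pickupReplicaSet(Collection<String> first,
                                               Collection<String> second) {
        return first.isEmpty() ? second : first;
    }

    public static void main(String[] args) {
        List<String> crowded = Arrays.asList("/rack1/ng1/host1", "/rack1/ng1/host2");
        List<String> lone = Arrays.asList("/rack2/ng2/host3");
        // Prefer trimming a rack that still keeps a replica afterwards.
        System.out.println(pickupReplicaSet(crowded, lone));
        // Fall back to the second set only when the first is empty.
        System.out.println(pickupReplicaSet(Collections.emptyList(), lone));
    }
}
```

The design rationale: deleting from a set that still retains a replica on the same rack preserves rack-level (and, in this policy, nodegroup-level) fault tolerance after the trim.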
Copyright © 2018 CERN. All Rights Reserved.