public class HbaseClient extends AbstractHbaseClient implements java.io.Closeable
| Constructor and Description |
|---|
| `HbaseClient(org.apache.hadoop.hbase.client.HConnection connection)`<br>Constructor |
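Since the constructor takes an existing `HConnection` and `HbaseClient` implements `java.io.Closeable`, setup typically means creating the connection first and closing both afterwards. A minimal sketch follows; it assumes a reachable cluster and the HBase 0.98-era `HConnectionManager` API, and the ZooKeeper quorum address is a placeholder:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;

public class ClientSetup {
    public static void main(String[] args) throws Exception {
        // Standard HBase configuration; picks up hbase-site.xml from the classpath.
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "zk-host:2181"); // placeholder address

        // HbaseClient wraps the HConnection it is given. Whether close()
        // also closes the connection is not specified here, so close both.
        HConnection connection = HConnectionManager.createConnection(conf);
        try (HbaseClient client = new HbaseClient(connection)) {
            // ... issue asynchronous requests here ...
        } finally {
            connection.close();
        }
    }
}
```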
| Modifier and Type | Method and Description |
|---|---|
| `<H extends ResponseHandler<org.apache.hadoop.hbase.client.Result>> H` | `append(org.apache.hadoop.hbase.TableName table, org.apache.hadoop.hbase.client.Append append, H handler)`<br>Appends values to one or more columns within a single row. |
| `<H extends ResponseHandler<java.lang.Boolean>> H` | `checkAndDelete(org.apache.hadoop.hbase.TableName table, byte[] row, byte[] family, byte[] qualifier, byte[] value, org.apache.hadoop.hbase.client.Delete delete, H handler)`<br>Atomically checks if a row/family/qualifier value matches the expected value. |
| `<H extends ResponseHandler<java.lang.Boolean>> H` | `checkAndMutate(org.apache.hadoop.hbase.TableName table, byte[] row, byte[] family, byte[] qualifier, org.apache.hadoop.hbase.filter.CompareFilter.CompareOp compareOp, byte[] value, org.apache.hadoop.hbase.client.RowMutations mutation, H handler)`<br>Atomically checks if a row/family/qualifier value matches the expected value. If it does, it performs the row mutations. |
| `<H extends ResponseHandler<java.lang.Boolean>> H` | `checkAndPut(org.apache.hadoop.hbase.TableName table, byte[] row, byte[] family, byte[] qualifier, byte[] value, org.apache.hadoop.hbase.client.Put put, H handler)`<br>Atomically checks if a row/family/qualifier value matches the expected value. |
| `void` | `close()` |
| `AsyncRpcChannel` | `coprocessorService(org.apache.hadoop.hbase.TableName table, byte[] row)`<br>Creates and returns a RpcChannel instance connected to the table region containing the specified row. |
| `<H extends ResponseHandler<java.lang.Void>> H` | `delete(org.apache.hadoop.hbase.TableName table, org.apache.hadoop.hbase.client.Delete delete, H handler)`<br>Deletes the specified cells/row. |
| `<H extends ResponseHandler<java.lang.Void>> H` | `delete(org.apache.hadoop.hbase.TableName table, java.util.List<org.apache.hadoop.hbase.client.Delete> deletes, H handler)`<br>Deletes the specified cells/rows in bulk. |
| `<H extends ResponseHandler<org.apache.hadoop.hbase.client.Result>> H` | `get(org.apache.hadoop.hbase.TableName table, org.apache.hadoop.hbase.client.Get get, H handler)`<br>Sends a Get. |
| `<H extends ResponseHandler<org.apache.hadoop.hbase.client.Result[]>> H` | `get(org.apache.hadoop.hbase.TableName table, java.util.List<org.apache.hadoop.hbase.client.Get> gets, H handler)`<br>Sends a list of Gets. |
| `AsyncResultScanner` | `getScanner(org.apache.hadoop.hbase.TableName table, org.apache.hadoop.hbase.client.Scan scan)`<br>Sends a scan and returns a cell scanner. |
| `<H extends ResponseHandler<org.apache.hadoop.hbase.client.Result>> H` | `increment(org.apache.hadoop.hbase.TableName table, org.apache.hadoop.hbase.client.Increment increment, H handler)`<br>Increments one or more columns within a single row. |
| `<H extends ResponseHandler<java.lang.Long>> H` | `incrementColumnValue(org.apache.hadoop.hbase.TableName table, byte[] row, byte[] family, byte[] qualifier, long amount, org.apache.hadoop.hbase.client.Durability durability, H handler)`<br>Atomically increments a column value. |
| `<H extends ResponseHandler<java.lang.Long>> H` | `incrementColumnValue(org.apache.hadoop.hbase.TableName table, byte[] row, byte[] family, byte[] qualifier, long amount, H handler)`<br>See `incrementColumnValue(TableName, byte[], byte[], byte[], long, Durability, ResponseHandler)`. The Durability defaults to `Durability.SYNC_WAL`. |
| `<H extends ResponseHandler<java.lang.Void>> H` | `mutateRow(org.apache.hadoop.hbase.TableName table, org.apache.hadoop.hbase.client.RowMutations rm, H handler)`<br>Performs multiple mutations atomically on a single row. |
| `<T> HbaseResponsePromise<T>` | `newPromise()`<br>Gets a new promise chained to the event loop of the internal Netty client. |
| `com.google.protobuf.RpcController` | `newRpcController(ResponseHandler<?> promise)`<br>Gets a new RpcController. |
| `<H extends ResponseHandler<java.lang.Void>> H` | `put(org.apache.hadoop.hbase.TableName table, java.util.List<org.apache.hadoop.hbase.client.Put> puts, H handler)`<br>Sends a list of Puts to the server. |
| `<H extends ResponseHandler<java.lang.Void>> H` | `put(org.apache.hadoop.hbase.TableName table, org.apache.hadoop.hbase.client.Put put, H handler)`<br>Sends a single Put. |
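The read and mutation methods above all follow the same shape: they take a `TableName`, a request object, and a `ResponseHandler`, and return the handler. A round-trip might look like the sketch below. Hedged assumptions: `client` is an already-constructed `HbaseClient`; the table, row, and column names are placeholders; and the handler used is the promise from `newPromise()`, which the generic bounds suggest implements `ResponseHandler` — how you then block on or subscribe to that promise depends on the `HbaseResponsePromise` API, which is not documented here:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class GetPutSketch {
    static void roundTrip(HbaseClient client) throws Exception {
        TableName table = TableName.valueOf("my_table"); // placeholder table name

        // Build and send a Put; the promise serves as the ResponseHandler.
        Put put = new Put(Bytes.toBytes("row-1"))
                .add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
        HbaseResponsePromise<Void> putDone = client.newPromise();
        client.put(table, put, putDone);

        // Build and send a Get the same way.
        Get get = new Get(Bytes.toBytes("row-1"));
        HbaseResponsePromise<Result> result = client.newPromise();
        client.get(table, get, result);
        // Awaiting or subscribing to putDone/result is promise-API specific
        // and intentionally not shown here.
    }
}
```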
**HbaseClient**

`public HbaseClient(org.apache.hadoop.hbase.client.HConnection connection) throws java.io.IOException`

Constructor.

Parameters:
- `connection` - connection to HBase

Throws:
- `java.io.IOException` - if the HConnection could not be set up

**get**

`public <H extends ResponseHandler<org.apache.hadoop.hbase.client.Result>> H get(org.apache.hadoop.hbase.TableName table, org.apache.hadoop.hbase.client.Get get, H handler)`

Sends a Get.

Parameters:
- `table` - table to run the Get on
- `get` - Get to fetch
- `handler` - handler for the response

**get**

`public <H extends ResponseHandler<org.apache.hadoop.hbase.client.Result[]>> H get(org.apache.hadoop.hbase.TableName table, java.util.List<org.apache.hadoop.hbase.client.Get> gets, H handler)`

Sends a list of Gets.

Parameters:
- `table` - table to run the Gets on
- `gets` - Gets to fetch
- `handler` - handler for the response

**getScanner**

`public AsyncResultScanner getScanner(org.apache.hadoop.hbase.TableName table, org.apache.hadoop.hbase.client.Scan scan)`

Sends a scan and returns a cell scanner.

Parameters:
- `table` - table to get the scanner from
- `scan` - scan to perform

**put**

`public <H extends ResponseHandler<java.lang.Void>> H put(org.apache.hadoop.hbase.TableName table, org.apache.hadoop.hbase.client.Put put, H handler)`

Sends a single Put.

Type Parameters:
- `H` - type of handler

Parameters:
- `table` - table to send the Put to
- `put` - Put to send
- `handler` - handler for exceptions

**put**

`public <H extends ResponseHandler<java.lang.Void>> H put(org.apache.hadoop.hbase.TableName table, java.util.List<org.apache.hadoop.hbase.client.Put> puts, H handler)`

Sends a list of Puts to the server.

Type Parameters:
- `H` - handler for any exceptions

Parameters:
- `table` - table to send the Puts to
- `puts` - Puts to send
- `handler` - handler for exceptions

**checkAndPut**

`public <H extends ResponseHandler<java.lang.Boolean>> H checkAndPut(org.apache.hadoop.hbase.TableName table, byte[] row, byte[] family, byte[] qualifier, byte[] value, org.apache.hadoop.hbase.client.Put put, H handler)`

Atomically checks if a row/family/qualifier value matches the expected value.

Type Parameters:
- `H` - handler for any exceptions

Parameters:
- `table` - table to check on and send the Put to
- `row` - row to check
- `family` - column family to check
- `qualifier` - column qualifier to check
- `value` - the expected value
- `put` - data to put if the check succeeds
- `handler` - handler for exceptions

**delete**

`public <H extends ResponseHandler<java.lang.Void>> H delete(org.apache.hadoop.hbase.TableName table, org.apache.hadoop.hbase.client.Delete delete, H handler)`

Deletes the specified cells/row.

Type Parameters:
- `H` - handler for any exceptions

Parameters:
- `table` - table to send the Delete to
- `delete` - the object that specifies what to delete
- `handler` - handler for exceptions

**delete**

`public <H extends ResponseHandler<java.lang.Void>> H delete(org.apache.hadoop.hbase.TableName table, java.util.List<org.apache.hadoop.hbase.client.Delete> deletes, H handler)`

Deletes the specified cells/rows in bulk.

Type Parameters:
- `H` - handler for any exceptions

Parameters:
- `table` - table to send the Deletes to
- `deletes` - list of things to delete. The list is modified by this method (in particular it is re-ordered, so the order in which the elements are inserted in the list gives no guarantee as to the order in which the Deletes are executed).
- `handler` - handler for exceptions

**checkAndDelete**

`public <H extends ResponseHandler<java.lang.Boolean>> H checkAndDelete(org.apache.hadoop.hbase.TableName table, byte[] row, byte[] family, byte[] qualifier, byte[] value, org.apache.hadoop.hbase.client.Delete delete, H handler)`

Atomically checks if a row/family/qualifier value matches the expected value.

Type Parameters:
- `H` - handler for any exceptions

Parameters:
- `table` - table to send the check-and-delete to
- `row` - row to check
- `family` - column family to check
- `qualifier` - column qualifier to check
- `value` - the expected value
- `delete` - data to delete if the check succeeds
- `handler` - handler for exceptions

**mutateRow**

`public <H extends ResponseHandler<java.lang.Void>> H mutateRow(org.apache.hadoop.hbase.TableName table, org.apache.hadoop.hbase.client.RowMutations rm, H handler)`

Performs multiple mutations atomically on a single row. Put and Delete are supported.

Type Parameters:
- `H` - handler for any exceptions

Parameters:
- `table` - table to mutate the row on
- `rm` - object that specifies the set of mutations to perform atomically
- `handler` - handler for exceptions

**append**

`public <H extends ResponseHandler<org.apache.hadoop.hbase.client.Result>> H append(org.apache.hadoop.hbase.TableName table, org.apache.hadoop.hbase.client.Append append, H handler)`

Appends values to one or more columns within a single row.

Type Parameters:
- `H` - handler for any exceptions

Parameters:
- `table` - table to append to
- `append` - object that specifies the columns and values to be used for the append operations
- `handler` - handler for exceptions

**increment**

`public <H extends ResponseHandler<org.apache.hadoop.hbase.client.Result>> H increment(org.apache.hadoop.hbase.TableName table, org.apache.hadoop.hbase.client.Increment increment, H handler)`

Increments one or more columns within a single row.

Type Parameters:
- `H` - handler for any exceptions

Parameters:
- `table` - table to increment on
- `increment` - object that specifies the columns and amounts to be used for the increment operations
- `handler` - handler for exceptions

**incrementColumnValue**

`public <H extends ResponseHandler<java.lang.Long>> H incrementColumnValue(org.apache.hadoop.hbase.TableName table, byte[] row, byte[] family, byte[] qualifier, long amount, H handler)`

See `incrementColumnValue(TableName, byte[], byte[], byte[], long, Durability, ResponseHandler)`. The Durability defaults to `Durability.SYNC_WAL`.

Type Parameters:
- `H` - handler for any exceptions

Parameters:
- `table` - table to increment the column value on
- `row` - the row that contains the cell to increment
- `family` - the column family of the cell to increment
- `qualifier` - the column qualifier of the cell to increment
- `amount` - the amount to increment the cell with (or decrement, if the amount is negative)
- `handler` - handler for the response

**incrementColumnValue**

`public <H extends ResponseHandler<java.lang.Long>> H incrementColumnValue(org.apache.hadoop.hbase.TableName table, byte[] row, byte[] family, byte[] qualifier, long amount, org.apache.hadoop.hbase.client.Durability durability, H handler)`

Atomically increments a column value. If the column value does not yet exist, it is initialized to `amount` and written to the specified column. Setting durability to `Durability.SKIP_WAL` means that in a failure scenario you will lose any increments that have not been flushed.

Type Parameters:
- `H` - handler for any exceptions

Parameters:
- `table` - table to increment the column value on
- `row` - the row that contains the cell to increment
- `family` - the column family of the cell to increment
- `qualifier` - the column qualifier of the cell to increment
- `amount` - the amount to increment the cell with (or decrement, if the amount is negative)
- `durability` - the persistence guarantee for this increment
- `handler` - handler for the response

**checkAndMutate**

`public <H extends ResponseHandler<java.lang.Boolean>> H checkAndMutate(org.apache.hadoop.hbase.TableName table, byte[] row, byte[] family, byte[] qualifier, org.apache.hadoop.hbase.filter.CompareFilter.CompareOp compareOp, byte[] value, org.apache.hadoop.hbase.client.RowMutations mutation, H handler)`
Atomically checks if a row/family/qualifier value matches the expected value. If it does, it performs the row mutations.

Type Parameters:
- `H` - handler for any exceptions

Parameters:
- `table` - table to check and mutate
- `row` - row to check
- `family` - column family to check
- `qualifier` - column qualifier to check
- `compareOp` - the comparison operator
- `value` - the expected value
- `mutation` - mutations to perform if the check succeeds
- `handler` - handler for the response

**coprocessorService**

`public AsyncRpcChannel coprocessorService(org.apache.hadoop.hbase.TableName table, byte[] row) throws java.io.IOException`

Creates and returns a RpcChannel instance connected to the table region containing the specified row. The row given does not actually have to exist. Whichever region would contain the row based on start and end keys will be used. Note that the `row` parameter is also not passed to the coprocessor handler registered for this protocol, unless the row is separately passed as an argument in the service request. The parameter here is only used to locate the region used to handle the call.

The obtained RpcChannel instance can be used to access a published coprocessor Service using standard protobuf service invocations:

```java
CoprocessorRpcChannel channel = myTable.coprocessorService(rowkey);
MyService.BlockingInterface service = MyService.newBlockingStub(channel);
MyCallRequest request = MyCallRequest.newBuilder()
    ...
    .build();
MyCallResponse response = service.myCall(null, request);
```

Parameters:
- `table` - table to get the service from
- `row` - the row key used to identify the remote region location

Throws:
- `java.io.IOException` - when there was an error creating the connection or getting the location

**newPromise**

`public <T> HbaseResponsePromise<T> newPromise()`

Gets a new promise chained to the event loop of the internal Netty client.

Type Parameters:
- `T` - type of response to return

**close**

`public void close() throws java.io.IOException`

Specified by:
- `close` in interface `java.io.Closeable`
- `close` in interface `java.lang.AutoCloseable`

Throws:
- `java.io.IOException`

**newRpcController**

`public com.google.protobuf.RpcController newRpcController(ResponseHandler<?> promise)`

Gets a new RpcController.

Parameters:
- `promise` - promise to handle the result
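As a concrete use of the counter methods, the sketch below bumps a counter cell with an explicit durability choice. The same hedges apply: `client` is a live `HbaseClient`, the table, row, and column names are placeholders, and the promise from `newPromise()` is assumed usable as the `ResponseHandler`:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.util.Bytes;

public class CounterSketch {
    static void bumpCounter(HbaseClient client) {
        TableName table = TableName.valueOf("metrics"); // placeholder table name
        HbaseResponsePromise<Long> newValue = client.newPromise();

        // SYNC_WAL matches the documented default; SKIP_WAL would be faster
        // but can lose unflushed increments on region server failure.
        client.incrementColumnValue(table,
                Bytes.toBytes("page-42"),   // row: placeholder
                Bytes.toBytes("cf"),        // column family: placeholder
                Bytes.toBytes("hits"),      // qualifier: placeholder
                1L,                         // increment by one
                Durability.SYNC_WAL,
                newValue);                  // resolves with the new counter value
    }
}
```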