Index
All Classes and Interfaces|All Packages|Constant Field Values|Serialized Form
C
- canEqual(Object) - Method in class org.anchoranalysis.inference.concurrency.WithPriority
- close() - Method in class org.anchoranalysis.inference.concurrency.ConcurrentModelPool
-
Close all models, to indicate they are no longer in use, and to perform tidy-up.
- close() - Method in interface org.anchoranalysis.inference.InferenceModel
-
Indicates that the model will no longer be used, and does appropriate tidying up and freeing of resources.
- compareTo(WithPriority<T>) - Method in class org.anchoranalysis.inference.concurrency.WithPriority
-
Orders so that gpu==true has higher priority in the queue than gpu==false.
- ConcurrencyPlan - Class in org.anchoranalysis.inference.concurrency
-
How many allocated CPUs and GPUs can be used concurrently for inference.
- ConcurrentModel<T> - Class in org.anchoranalysis.inference.concurrency
-
An instance of model that can be used concurrently for inference.
- ConcurrentModel(T, boolean) - Constructor for class org.anchoranalysis.inference.concurrency.ConcurrentModel
-
Creates a new ConcurrentModel instance.
- ConcurrentModelException - Exception Class in org.anchoranalysis.inference.concurrency
-
This exception indicates that an error occurred when performing inference from a model concurrently.
- ConcurrentModelException(Throwable) - Constructor for exception class org.anchoranalysis.inference.concurrency.ConcurrentModelException
-
Creates with a cause.
- ConcurrentModelPool<T> - Class in org.anchoranalysis.inference.concurrency
-
Keeps concurrent copies of a model to be used by different threads.
- ConcurrentModelPool(ConcurrencyPlan, CreateModelForPool<T>, Logger) - Constructor for class org.anchoranalysis.inference.concurrency.ConcurrentModelPool
-
Creates with a particular plan and function to create models.
- create(boolean) - Method in interface org.anchoranalysis.inference.concurrency.CreateModelForPool
-
Creates a model.
- CreateModelFailedException - Exception Class in org.anchoranalysis.inference.concurrency
-
When creating a model to be used for inference fails.
- CreateModelFailedException(Throwable) - Constructor for exception class org.anchoranalysis.inference.concurrency.CreateModelFailedException
-
Creates with a cause only.
- CreateModelForPool<T> - Interface in org.anchoranalysis.inference.concurrency
-
Creates a model to use in the pool.
D
- DEFAULT_NUMBER_GPUS - Static variable in class org.anchoranalysis.inference.concurrency.ConcurrencyPlan
-
The default number of GPUs used for numberGPUs in certain constructors.
- disableGPUs() - Method in class org.anchoranalysis.inference.concurrency.ConcurrencyPlan
-
Derive a ConcurrencyPlan that preserves the number of CPUs but disables all GPUs.
E
- equals(Object) - Method in class org.anchoranalysis.inference.concurrency.ConcurrentModel
- equals(Object) - Method in class org.anchoranalysis.inference.concurrency.WithPriority
- executeOrWait(CheckedFunction<ConcurrentModel<T>, S, ConcurrentModelException>) - Method in class org.anchoranalysis.inference.concurrency.ConcurrentModelPool
-
Execute on the next available model (or wait until one becomes available).
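The pool entries above can be combined into a short usage sketch. The ConcurrencyPlan, ConcurrentModelPool, executeOrWait, getModel and close names come from this index; MyModel, Result, createMyModel, infer, input and logger are hypothetical stand-ins for user code:

```java
import org.anchoranalysis.inference.concurrency.ConcurrencyPlan;
import org.anchoranalysis.inference.concurrency.ConcurrentModelPool;

// Sketch only: MyModel, Result, createMyModel, infer, input and logger are hypothetical.
ConcurrencyPlan plan = ConcurrencyPlan.multipleProcessors(4, 1);
ConcurrentModelPool<MyModel> pool =
        new ConcurrentModelPool<>(plan, useGpu -> createMyModel(useGpu), logger);

// Waits until a model in the pool is free, runs the function on it,
// then returns the model to the pool for other threads.
Result result = pool.executeOrWait(model -> model.getModel().infer(input));

pool.close(); // close all models once inference is finished
```

The function passed to executeOrWait receives a ConcurrentModel<T>, so the underlying model is reached via getModel().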
G
- get() - Method in class org.anchoranalysis.inference.concurrency.WithPriority
-
Gets the underlying element stored in the structure.
- getModel() - Method in class org.anchoranalysis.inference.concurrency.ConcurrentModel
-
The underlying model.
H
- hashCode() - Method in class org.anchoranalysis.inference.concurrency.ConcurrentModel
- hashCode() - Method in class org.anchoranalysis.inference.concurrency.WithPriority
I
- InferenceModel - Interface in org.anchoranalysis.inference
-
A model used for inference.
- isGpu() - Method in class org.anchoranalysis.inference.concurrency.ConcurrentModel
-
Whether the model is using the GPU or not.
- isGPU() - Method in class org.anchoranalysis.inference.concurrency.WithPriority
-
Is the element returned by WithPriority.get() associated with a GPU?
M
- multipleProcessors(int, int) - Static method in class org.anchoranalysis.inference.concurrency.ConcurrencyPlan
-
Creates a plan for multiple CPU-processors.
N
- noCPUProcessor() - Static method in class org.anchoranalysis.inference.concurrency.ConcurrencyPlan
-
Creates a plan for no CPU processors with the default number of GPUs.
- numberCPUs() - Method in class org.anchoranalysis.inference.concurrency.ConcurrencyPlan
-
The number of CPUs to be used in the plan.
- numberGPUs() - Method in class org.anchoranalysis.inference.concurrency.ConcurrencyPlan
-
The number of GPUs to be used in the plan.
O
- org.anchoranalysis.inference - package org.anchoranalysis.inference
-
High-level classes for performing machine learning model inference.
- org.anchoranalysis.inference.concurrency - package org.anchoranalysis.inference.concurrency
-
Specifying how many CPUs and GPUs can be allocated for some purpose.
S
- singleCPUProcessor() - Static method in class org.anchoranalysis.inference.concurrency.ConcurrencyPlan
-
Creates a plan for a single-CPU processor with the default number of GPUs.
- singleCPUProcessor(int) - Static method in class org.anchoranalysis.inference.concurrency.ConcurrencyPlan
-
Creates a plan for a single-CPU processor, with a maximum number of GPUs.
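A plan is typically built from the static factory methods listed above and then queried or derived; a minimal sketch, in which the meaning of the two arguments to multipleProcessors (CPU count, then maximum GPU count) is an assumption based on the surrounding entries:

```java
import org.anchoranalysis.inference.concurrency.ConcurrencyPlan;

// Assumption: first argument is the CPU count, second the maximum GPU count.
ConcurrencyPlan multi = ConcurrencyPlan.multipleProcessors(8, 2);
ConcurrencyPlan single = ConcurrencyPlan.singleCPUProcessor(); // default number of GPUs
ConcurrencyPlan cpuOnly = multi.disableGPUs(); // same CPUs, all GPUs disabled

System.out.printf("%d CPUs, %d GPUs%n", cpuOnly.numberCPUs(), cpuOnly.numberGPUs());
```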
T
- toString() - Method in class org.anchoranalysis.inference.concurrency.ConcurrentModel
W
- WithPriority<T> - Class in org.anchoranalysis.inference.concurrency
-
Wraps an element of type T to ensure priority is given when the flag gpu==true.
- WithPriority(T, boolean) - Constructor for class org.anchoranalysis.inference.concurrency.WithPriority
-
Creates a new WithPriority instance.
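Because WithPriority defines compareTo (see above), it can back an ordered queue in which GPU-associated elements surface first; a minimal sketch, assuming WithPriority implements Comparable as its compareTo entry suggests:

```java
import java.util.concurrent.PriorityBlockingQueue;
import org.anchoranalysis.inference.concurrency.WithPriority;

PriorityBlockingQueue<WithPriority<String>> queue = new PriorityBlockingQueue<>();
queue.add(new WithPriority<>("cpu-model", false));
queue.add(new WithPriority<>("gpu-model", true));

// Per compareTo, gpu==true has higher priority than gpu==false,
// so the GPU-backed element is taken from the queue first.
WithPriority<String> head = queue.poll();
System.out.println(head.isGPU()); // should print true, per the documented ordering
```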