java.lang.Object
physx.NativeObject
physx.common.PxCudaContextManager
Manages thread locks and task scheduling for a CUDA context.

A PxCudaContextManager manages access to a single CUDA context, allowing it to be shared between multiple scenes. The context must be acquired from the manager before using any CUDA APIs, unless stated otherwise.

The PxCudaContextManager is based on the CUDA driver API and explicitly does not support the CUDA runtime API (aka CUDART).
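The acquire/release contract described above suggests a try/finally idiom, so the context is released even when the guarded work fails. The sketch below is a hypothetical model: `ContextManagerLike` and `CudaContextScope` are illustrative stand-ins, not part of the physx-jni API; only the method names `acquireContext()`/`releaseContext()` come from this class.

```java
// Illustrative model of the documented acquire/release contract.
// ContextManagerLike and CudaContextScope are hypothetical helpers,
// not part of the physx-jni API.
interface ContextManagerLike {
    void acquireContext();
    void releaseContext();
}

final class CudaContextScope {
    // Runs the given work while the CUDA context is held,
    // releasing it even if the work throws.
    static void withContext(ContextManagerLike mgr, Runnable work) {
        mgr.acquireContext();
        try {
            work.run();
        } finally {
            // Release as soon as practical so other CPU threads can proceed.
            mgr.releaseContext();
        }
    }
}
```

With the real binding, the same shape applies: acquire before touching CUDA, release in a finally block.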
Nested Class Summary
Nested classes/interfaces inherited from class physx.NativeObject
NativeObject.Allocator<T>
Field Summary
Fields inherited from class physx.NativeObject
address, isExternallyAllocated, SIZEOF_BYTE, SIZEOF_DOUBLE, SIZEOF_FLOAT, SIZEOF_INT, SIZEOF_LONG, SIZEOF_POINTER, SIZEOF_SHORT
Constructor Summary
protected PxCudaContextManager()
protected PxCudaContextManager(long address)
Method Summary
void acquireContext(): Acquire the CUDA context for the current thread. Acquisitions are allowed to be recursive within a single thread.
static PxCudaContextManager arrayGet(long baseAddress, int index)
boolean canMapHostMemory(): true if the GPU can map host memory to GPU (0-copy)
boolean contextIsValid(): Context manager has a valid CUDA context. Should be called after creating a PxCudaContextManager, especially if the manager allocated its own CUDA context (desc.ctx == NULL).
int getClockRate(): returns cached value of SM clock frequency
getContext(): Return the CUcontext
getCudaContext(): Return the CudaContext
getCuModules(): Get the CUDA modules that have been loaded into this context on construction
getDevice(): returns device handle retrieved from driver
getDeviceName(): returns the device name
long getDeviceTotalMemBytes(): returns cached value of device memory size
int getDriverVersion(): returns cached value of cuGetDriverVersion()
int getMaxThreadsPerBlock(): returns the maximum number of threads per block
int getMultiprocessorCount(): returns cached value of SM unit count
int getSharedMemPerBlock(): returns total amount of shared memory available per block in bytes
int getSharedMemPerMultiprocessor(): returns total amount of shared memory available per multiprocessor in bytes
boolean getUsingConcurrentStreams(): true if GPU work can run in concurrent streams
boolean isIntegrated(): true if the GPU is an integrated (MCP) part
void release(): Release the PxCudaContextManager. If it created the CUDA context it was responsible for, it also frees that context.
void releaseContext(): Release the CUDA context from the current thread.
void setUsingConcurrentStreams(boolean flag): turn on/off using concurrent streams for GPU work
boolean supportsArchSM10(): G80
boolean supportsArchSM11(): G92
boolean supportsArchSM12(): GT200
boolean supportsArchSM13(): GT260
boolean supportsArchSM20(): GF100
boolean supportsArchSM30(): GK100
boolean supportsArchSM35(): GK110
boolean supportsArchSM50(): GM100
boolean supportsArchSM52(): GM200
boolean supportsArchSM60(): GP100
int usingDedicatedGPU(): Determine if the user has configured a dedicated PhysX GPU in the NV Control Panel.
static PxCudaContextManager wrapPointer(long address)
Methods inherited from class physx.NativeObject
checkNotNull, equals, getAddress, hashCode
Field Details
SIZEOF
public static final int SIZEOF
ALIGNOF
public static final int ALIGNOF
Constructor Details
PxCudaContextManager
protected PxCudaContextManager()
PxCudaContextManager
protected PxCudaContextManager(long address)
Method Details
wrapPointer
public static PxCudaContextManager wrapPointer(long address)
arrayGet
public static PxCudaContextManager arrayGet(long baseAddress, int index)
acquireContext
public void acquireContext()
Acquire the CUDA context for the current thread. Acquisitions are allowed to be recursive within a single thread: you can acquire the context multiple times, so long as you release it the same number of times. The context must be acquired before using most CUDA functions.
releaseContext
public void releaseContext()
Release the CUDA context from the current thread. The CUDA context should be released as soon as practically possible, to allow other CPU threads to work efficiently.
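Because acquisitions are recursive within a single thread, each acquireContext() must be matched by a releaseContext(), and the context only becomes free for other threads once the count returns to zero. The class below is a toy counting model of that contract, not the actual native implementation; its names are illustrative.

```java
// Toy model of recursive context acquisition; not the real native implementation.
final class RecursiveContextModel {
    private int depth = 0;

    // Each acquisition nests: the same thread may acquire repeatedly.
    void acquireContext() { depth++; }

    // Each release must match a prior acquisition.
    void releaseContext() {
        if (depth == 0) {
            throw new IllegalStateException("releaseContext() without matching acquireContext()");
        }
        depth--;
    }

    // The context is held by the thread while the nesting depth is above zero.
    boolean isHeld() { return depth > 0; }
}
```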
getContext
Returns the CUcontext.
getCudaContext
Returns the CudaContext.
contextIsValid
public boolean contextIsValid()
True if the context manager has a valid CUDA context. This method should be called after creating a PxCudaContextManager, especially if the manager was responsible for allocating its own CUDA context (desc.ctx == NULL).
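A typical pattern after construction is to verify the context and dispose of the manager when creation failed. The helper below is a hypothetical sketch written against a minimal interface that mirrors only the documented contextIsValid() and release() methods; it is not the real physx-jni type.

```java
// Minimal stand-in mirroring the documented methods; not the real physx-jni type.
interface ManagerLike {
    boolean contextIsValid();
    void release();
}

final class CudaManagerCheck {
    // Returns the manager if its CUDA context is valid;
    // otherwise releases it and returns null so callers can fall back to CPU.
    static ManagerLike validateOrRelease(ManagerLike mgr) {
        if (mgr == null || mgr.contextIsValid()) {
            return mgr;
        }
        mgr.release();
        return null;
    }
}
```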
supportsArchSM10
public boolean supportsArchSM10()
G80 (SM 1.0)
supportsArchSM11
public boolean supportsArchSM11()
G92 (SM 1.1)
supportsArchSM12
public boolean supportsArchSM12()
GT200 (SM 1.2)
supportsArchSM13
public boolean supportsArchSM13()
GT260 (SM 1.3)
supportsArchSM20
public boolean supportsArchSM20()
GF100 (SM 2.0)
supportsArchSM30
public boolean supportsArchSM30()
GK100 (SM 3.0)
supportsArchSM35
public boolean supportsArchSM35()
GK110 (SM 3.5)
supportsArchSM50
public boolean supportsArchSM50()
GM100 (SM 5.0)
supportsArchSM52
public boolean supportsArchSM52()
GM200 (SM 5.2)
supportsArchSM60
public boolean supportsArchSM60()
GP100 (SM 6.0)
isIntegrated
public boolean isIntegrated()
True if the GPU is an integrated (MCP) part.
canMapHostMemory
public boolean canMapHostMemory()
True if the GPU can map host memory into its address space (zero-copy).
getDriverVersion
public int getDriverVersion()
Returns the cached value of cuGetDriverVersion().
getDeviceTotalMemBytes
public long getDeviceTotalMemBytes()
Returns the cached value of the total device memory size.
getMultiprocessorCount
public int getMultiprocessorCount()
Returns the cached value of the SM unit count.
getClockRate
public int getClockRate()
Returns the cached value of the SM clock frequency.
getMaxThreadsPerBlock
public int getMaxThreadsPerBlock()
Returns the maximum number of threads per block.
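The threads-per-block limit is typically used to size CUDA launches: the number of blocks needed for n work items is the ceiling of n divided by the threads per block. The helper below is illustrative, not part of the binding; getMaxThreadsPerBlock() would supply the second argument in practice.

```java
final class LaunchSizing {
    // Computes how many blocks are needed to cover n work items, given the
    // device's maximum threads per block (e.g. the value reported by
    // getMaxThreadsPerBlock()). Uses integer ceiling division.
    static int blocksFor(int n, int maxThreadsPerBlock) {
        if (maxThreadsPerBlock <= 0) {
            throw new IllegalArgumentException("maxThreadsPerBlock must be positive");
        }
        return (n + maxThreadsPerBlock - 1) / maxThreadsPerBlock;
    }
}
```

For example, 1025 items with a 1024-thread limit need 2 blocks.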
getDeviceName
Returns the device name.
Returns:
WebIDL type: DOMString [Const]
getDevice
Returns the device handle retrieved from the driver.
setUsingConcurrentStreams
public void setUsingConcurrentStreams(boolean flag)
Turns concurrent streams for GPU work on or off.
getUsingConcurrentStreams
public boolean getUsingConcurrentStreams()
True if GPU work can run in concurrent streams.
usingDedicatedGPU
public int usingDedicatedGPU()
Determine if the user has configured a dedicated PhysX GPU in the NV Control Panel.
Note: if using CUDA Interop, this will always return false.
Returns: 1 if there is a dedicated GPU, 0 if there is not a dedicated GPU, -1 if the routine is not implemented.
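Since the three documented return codes are easy to confuse with a boolean, a small decoder can make call sites self-explanatory. DedicatedGpuStatus is a hypothetical helper, not part of the API; the code values and their meanings come straight from the documentation above.

```java
final class DedicatedGpuStatus {
    // Maps the documented return codes of usingDedicatedGPU()
    // to a human-readable status string.
    static String describe(int code) {
        switch (code) {
            case 1:  return "dedicated PhysX GPU configured";
            case 0:  return "no dedicated PhysX GPU";
            case -1: return "query not implemented";
            default: return "unexpected value: " + code;
        }
    }
}
```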
getCuModules
Get the CUDA modules that have been loaded into this context on construction.
Returns: pointer to the CUDA modules.
release
public void release()
Release the PxCudaContextManager. If the PxCudaContextManager created the CUDA context it was responsible for, it also frees that context.
Do not release the PxCudaContextManager if there are any scenes using it. Those scenes must be released first.