| Class and Description |
|---|
| org.bytedeco.cuda.cusparse.cusparseHybMat |
| org.bytedeco.cuda.cusparse.cusparseSolveAnalysisInfo |
| org.bytedeco.cuda.nvml.nvmlEccErrorCounts_t
Different GPU families can have different memory error counters.
See \ref nvmlDeviceGetMemoryErrorCounter.
|
| Field and Description |
|---|
| org.bytedeco.cuda.global.cudart.cudaDeviceBlockingSync
This flag was deprecated as of CUDA 4.0 and
replaced with ::cudaDeviceScheduleBlockingSync.
|
| org.bytedeco.cuda.global.nvml.nvmlClocksThrottleReasonUserDefinedClocks
Renamed to \ref nvmlClocksThrottleReasonApplicationsClocksSetting
as the name describes the situation more accurately.
|
| Method and Description |
|---|
| org.bytedeco.cuda.global.cudart.cuCtxAttach(CUctx_st, int)
Note that this function is deprecated and should not be used.
Increments the usage count of the context and passes back a context handle
in \p *pctx that must be passed to ::cuCtxDetach() when the application is
done with the context. ::cuCtxAttach() fails if there is no context current
to the thread.
Currently, the \p flags parameter must be 0.
|
| org.bytedeco.cuda.global.cudart.cuCtxDetach(CUctx_st)
Note that this function is deprecated and should not be used.
Decrements the usage count of the context \p ctx, and destroys the context
if the usage count goes to 0. The context must be a handle that was passed
back by ::cuCtxCreate() or ::cuCtxAttach(), and must be current to the
calling thread.
|
| org.bytedeco.cuda.global.cudart.cudaBindSurfaceToArray(surfaceReference, cudaArray, cudaChannelFormatDesc)
Binds the CUDA array \p array to the surface reference \p surfref.
\p desc describes how the memory is interpreted when fetching values from
the surface. Any CUDA array previously bound to \p surfref is unbound.
|
| org.bytedeco.cuda.global.cudart.cudaBindTexture(SizeTPointer, textureReference, Pointer, cudaChannelFormatDesc, long)
Binds \p size bytes of the memory area pointed to by \p devPtr to the
texture reference \p texref. \p desc describes how the memory is interpreted
when fetching values from the texture. Any memory previously bound to
\p texref is unbound.
Since the hardware enforces an alignment requirement on texture base
addresses,
\ref ::cudaBindTexture(size_t*, const struct textureReference*, const void*, const struct cudaChannelFormatDesc*, size_t) "cudaBindTexture()"
returns in \p *offset a byte offset that
must be applied to texture fetches in order to read from the desired memory.
This offset must be divided by the texel size and passed to kernels that
read from the texture so it can be applied to the ::tex1Dfetch() function.
If the device memory pointer was returned from ::cudaMalloc(), the offset is
guaranteed to be 0 and NULL may be passed as the \p offset parameter.
The total number of elements (or texels) in the linear address range
cannot exceed ::cudaDeviceProp::maxTexture1DLinear[0].
The number of elements is computed as (\p size / elementSize),
where elementSize is determined from \p desc.
|
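The offset arithmetic described above (byte offset to texel offset, and the size-to-texel-count limit) can be sketched in plain Java. This is only an illustration of the arithmetic; the helper names are hypothetical and not part of the CUDA or JavaCPP API:

```java
// Sketch of the cudaBindTexture() offset arithmetic (no CUDA binding
// required). Helper names are illustrative only.
public class TexelOffset1D {
    // Converts the byte offset returned in *offset by cudaBindTexture()
    // into the texel offset a kernel would add before tex1Dfetch().
    static long texelOffset(long byteOffsetFromBind, long elementSizeBytes) {
        return byteOffsetFromBind / elementSizeBytes;
    }

    // Number of texels in a linear range of sizeBytes bytes; this value
    // must not exceed cudaDeviceProp::maxTexture1DLinear[0].
    static long texelCount(long sizeBytes, long elementSizeBytes) {
        return sizeBytes / elementSizeBytes;
    }

    public static void main(String[] args) {
        // e.g. a float4 texture (16-byte texels) bound at a 64-byte offset
        System.out.println(texelOffset(64, 16));   // 4
        System.out.println(texelCount(1024, 16));  // 64
    }
}
```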
| org.bytedeco.cuda.global.cudart.cudaBindTexture2D(SizeTPointer, textureReference, Pointer, cudaChannelFormatDesc, long, long, long)
Binds the 2D memory area pointed to by \p devPtr to the
texture reference \p texref. The size of the area is constrained by
\p width in texel units, \p height in texel units, and \p pitch in byte
units. \p desc describes how the memory is interpreted when fetching values
from the texture. Any memory previously bound to \p texref is unbound.
Since the hardware enforces an alignment requirement on texture base
addresses, ::cudaBindTexture2D() returns in \p *offset a byte offset that
must be applied to texture fetches in order to read from the desired memory.
This offset must be divided by the texel size and passed to kernels that
read from the texture so it can be applied to the ::tex2D() function.
If the device memory pointer was returned from ::cudaMalloc(), the offset is
guaranteed to be 0 and NULL may be passed as the \p offset parameter.
\p width and \p height, which are specified in elements (or texels), cannot
exceed ::cudaDeviceProp::maxTexture2DLinear[0] and ::cudaDeviceProp::maxTexture2DLinear[1]
respectively. \p pitch, which is specified in bytes, cannot exceed
::cudaDeviceProp::maxTexture2DLinear[2].
The driver returns ::cudaErrorInvalidValue if \p pitch is not a multiple of
::cudaDeviceProp::texturePitchAlignment.
|
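The pitch constraint in the last sentence reduces to a divisibility check. A minimal sketch, with `pitchValid` as a hypothetical helper rather than a binding method:

```java
// Sketch of the cudaBindTexture2D() pitch constraint: the driver returns
// cudaErrorInvalidValue when the pitch is not a multiple of
// cudaDeviceProp::texturePitchAlignment.
public class Pitch2DCheck {
    static boolean pitchValid(long pitchBytes, long texturePitchAlignment) {
        return pitchBytes % texturePitchAlignment == 0;
    }

    public static void main(String[] args) {
        System.out.println(pitchValid(512, 32)); // true
        System.out.println(pitchValid(500, 32)); // false
    }
}
```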
| org.bytedeco.cuda.global.cudart.cudaBindTextureToArray(textureReference, cudaArray, cudaChannelFormatDesc)
Binds the CUDA array \p array to the texture reference \p texref.
\p desc describes how the memory is interpreted when fetching values from
the texture. Any CUDA array previously bound to \p texref is unbound.
|
| org.bytedeco.cuda.global.cudart.cudaBindTextureToMipmappedArray(textureReference, cudaMipmappedArray, cudaChannelFormatDesc)
Binds the CUDA mipmapped array \p mipmappedArray to the texture reference \p texref.
\p desc describes how the memory is interpreted when fetching values from
the texture. Any CUDA mipmapped array previously bound to \p texref is unbound.
|
| org.bytedeco.cuda.global.cudart.cudaGetSurfaceReference(PointerPointer, Pointer)
Returns in \p *surfref the structure associated with the surface reference
defined by symbol \p symbol.
|
| org.bytedeco.cuda.global.cudart.cudaGetTextureAlignmentOffset(SizeTPointer, textureReference)
Returns in \p *offset the offset that was returned when texture reference
\p texref was bound.
|
| org.bytedeco.cuda.global.cudart.cudaGetTextureReference(PointerPointer, Pointer)
Returns in \p *texref the structure associated with the texture reference
defined by symbol \p symbol.
|
| org.bytedeco.cuda.global.cudart.cudaMemcpyArrayToArray(cudaArray, long, long, cudaArray, long, long, long) |
| org.bytedeco.cuda.global.cudart.cudaMemcpyArrayToArray(cudaArray, long, long, cudaArray, long, long, long, int)
Copies \p count bytes from the CUDA array \p src starting at the upper
left corner (\p wOffsetSrc, \p hOffsetSrc) to the CUDA array \p dst
starting at the upper left corner (\p wOffsetDst, \p hOffsetDst) where
\p kind specifies the direction of the copy, and must be one of
::cudaMemcpyHostToHost, ::cudaMemcpyHostToDevice, ::cudaMemcpyDeviceToHost,
::cudaMemcpyDeviceToDevice, or ::cudaMemcpyDefault. Passing
::cudaMemcpyDefault is recommended, in which case the type of transfer is
inferred from the pointer values. However, ::cudaMemcpyDefault is only
allowed on systems that support unified virtual addressing.
|
| org.bytedeco.cuda.global.cudart.cudaMemcpyFromArray(Pointer, cudaArray, long, long, long, int)
Copies \p count bytes from the CUDA array \p src starting at the upper
left corner (\p wOffset, \p hOffset) to the memory area pointed to by \p dst,
where \p kind specifies the direction of the copy, and must be one of
::cudaMemcpyHostToHost, ::cudaMemcpyHostToDevice, ::cudaMemcpyDeviceToHost,
::cudaMemcpyDeviceToDevice, or ::cudaMemcpyDefault. Passing
::cudaMemcpyDefault is recommended, in which case the type of transfer is
inferred from the pointer values. However, ::cudaMemcpyDefault is only
allowed on systems that support unified virtual addressing.
|
| org.bytedeco.cuda.global.cudart.cudaMemcpyFromArrayAsync(Pointer, cudaArray, long, long, long, int) |
| org.bytedeco.cuda.global.cudart.cudaMemcpyFromArrayAsync(Pointer, cudaArray, long, long, long, int, CUstream_st)
Copies \p count bytes from the CUDA array \p src starting at the upper
left corner (\p wOffset, \p hOffset) to the memory area pointed to by \p dst,
where \p kind specifies the direction of the copy, and must be one of
::cudaMemcpyHostToHost, ::cudaMemcpyHostToDevice, ::cudaMemcpyDeviceToHost,
::cudaMemcpyDeviceToDevice, or ::cudaMemcpyDefault. Passing
::cudaMemcpyDefault is recommended, in which case the type of transfer is
inferred from the pointer values. However, ::cudaMemcpyDefault is only
allowed on systems that support unified virtual addressing.
::cudaMemcpyFromArrayAsync() is asynchronous with respect to the host, so
the call may return before the copy is complete. The copy can optionally
be associated with a stream by passing a non-zero \p stream argument. If \p
kind is ::cudaMemcpyHostToDevice or ::cudaMemcpyDeviceToHost and \p stream
is non-zero, the copy may overlap with operations in other streams.
|
| org.bytedeco.cuda.global.cudart.cudaMemcpyToArray(cudaArray, long, long, Pointer, long, int)
Copies \p count bytes from the memory area pointed to by \p src to the
CUDA array \p dst starting at the upper left corner
(\p wOffset, \p hOffset), where \p kind specifies the direction
of the copy, and must be one of ::cudaMemcpyHostToHost,
::cudaMemcpyHostToDevice, ::cudaMemcpyDeviceToHost,
::cudaMemcpyDeviceToDevice, or ::cudaMemcpyDefault. Passing
::cudaMemcpyDefault is recommended, in which case the type of transfer is
inferred from the pointer values. However, ::cudaMemcpyDefault is only
allowed on systems that support unified virtual addressing.
|
| org.bytedeco.cuda.global.cudart.cudaMemcpyToArrayAsync(cudaArray, long, long, Pointer, long, int) |
| org.bytedeco.cuda.global.cudart.cudaMemcpyToArrayAsync(cudaArray, long, long, Pointer, long, int, CUstream_st)
Copies \p count bytes from the memory area pointed to by \p src to the
CUDA array \p dst starting at the upper left corner
(\p wOffset, \p hOffset), where \p kind specifies the
direction of the copy, and must be one of ::cudaMemcpyHostToHost,
::cudaMemcpyHostToDevice, ::cudaMemcpyDeviceToHost,
::cudaMemcpyDeviceToDevice, or ::cudaMemcpyDefault. Passing
::cudaMemcpyDefault is recommended, in which case the type of transfer is
inferred from the pointer values. However, ::cudaMemcpyDefault is only
allowed on systems that support unified virtual addressing.
::cudaMemcpyToArrayAsync() is asynchronous with respect to the host, so
the call may return before the copy is complete. The copy can optionally
be associated with a stream by passing a non-zero \p stream argument. If \p
kind is ::cudaMemcpyHostToDevice or ::cudaMemcpyDeviceToHost and \p stream
is non-zero, the copy may overlap with operations in other streams.
|
| org.bytedeco.cuda.global.cudart.cudaSetDoubleForDevice(double[]) |
| org.bytedeco.cuda.global.cudart.cudaSetDoubleForDevice(DoubleBuffer) |
| org.bytedeco.cuda.global.cudart.cudaSetDoubleForDevice(DoublePointer)
This function is deprecated as of CUDA 7.5.
Converts the double value of \p d to an internal float representation if
the device does not support double arithmetic. If the device does natively
support doubles, then this function does nothing.
|
| org.bytedeco.cuda.global.cudart.cudaSetDoubleForHost(double[]) |
| org.bytedeco.cuda.global.cudart.cudaSetDoubleForHost(DoubleBuffer) |
| org.bytedeco.cuda.global.cudart.cudaSetDoubleForHost(DoublePointer)
This function is deprecated as of CUDA 7.5.
Converts the double value of \p d from a potentially internal float
representation if the device does not support double arithmetic. If the
device does natively support doubles, then this function does nothing.
|
| org.bytedeco.cuda.global.cudart.cudaThreadExit()
Note that this function is deprecated because its name does not
reflect its behavior. Its functionality is identical to the
non-deprecated function ::cudaDeviceReset(), which should be used
instead.
Explicitly destroys and cleans up all resources associated with the current
device in the current process. Any subsequent API call to this device will
reinitialize the device.
Note that this function will reset the device immediately. It is the caller's
responsibility to ensure that the device is not being accessed by any
other host threads from the process when this function is called.
|
| org.bytedeco.cuda.global.cudart.cudaThreadGetCacheConfig(int[]) |
| org.bytedeco.cuda.global.cudart.cudaThreadGetCacheConfig(IntBuffer) |
| org.bytedeco.cuda.global.cudart.cudaThreadGetCacheConfig(IntPointer)
Note that this function is deprecated because its name does not
reflect its behavior. Its functionality is identical to the
non-deprecated function ::cudaDeviceGetCacheConfig(), which should be
used instead.
On devices where the L1 cache and shared memory use the same hardware
resources, this returns through \p pCacheConfig the preferred cache
configuration for the current device. This is only a preference. The
runtime will use the requested configuration if possible, but it is free to
choose a different configuration if required to execute functions.
This will return a \p pCacheConfig of ::cudaFuncCachePreferNone on devices
where the size of the L1 cache and shared memory are fixed.
The supported cache configurations are:
- ::cudaFuncCachePreferNone: no preference for shared memory or L1 (default)
- ::cudaFuncCachePreferShared: prefer larger shared memory and smaller L1 cache
- ::cudaFuncCachePreferL1: prefer larger L1 cache and smaller shared memory
|
| org.bytedeco.cuda.global.cudart.cudaThreadGetLimit(SizeTPointer, int)
Note that this function is deprecated because its name does not
reflect its behavior. Its functionality is identical to the
non-deprecated function ::cudaDeviceGetLimit(), which should be used
instead.
Returns in \p *pValue the current size of \p limit. The supported
::cudaLimit values are:
- ::cudaLimitStackSize: stack size of each GPU thread;
- ::cudaLimitPrintfFifoSize: size of the shared FIFO used by the
::printf() device system call;
- ::cudaLimitMallocHeapSize: size of the heap used by the
::malloc() and ::free() device system calls.
|
| org.bytedeco.cuda.global.cudart.cudaThreadSetCacheConfig(int)
Note that this function is deprecated because its name does not
reflect its behavior. Its functionality is identical to the
non-deprecated function ::cudaDeviceSetCacheConfig(), which should be
used instead.
On devices where the L1 cache and shared memory use the same hardware
resources, this sets through \p cacheConfig the preferred cache
configuration for the current device. This is only a preference. The
runtime will use the requested configuration if possible, but it is free to
choose a different configuration if required to execute the function. Any
function preference set via
\ref ::cudaFuncSetCacheConfig(const void*, enum cudaFuncCache) "cudaFuncSetCacheConfig (C API)"
or
\ref ::cudaFuncSetCacheConfig(T*, enum cudaFuncCache) "cudaFuncSetCacheConfig (C++ API)"
will be preferred over this device-wide setting. Setting the device-wide
cache configuration to ::cudaFuncCachePreferNone will cause subsequent
kernel launches to prefer to not change the cache configuration unless
required to launch the kernel.
This setting does nothing on devices where the size of the L1 cache and
shared memory are fixed.
Launching a kernel with a different preference than the most recent
preference setting may insert a device-side synchronization point.
The supported cache configurations are:
- ::cudaFuncCachePreferNone: no preference for shared memory or L1 (default)
- ::cudaFuncCachePreferShared: prefer larger shared memory and smaller L1 cache
- ::cudaFuncCachePreferL1: prefer larger L1 cache and smaller shared memory
|
| org.bytedeco.cuda.global.cudart.cudaThreadSetLimit(int, long)
Note that this function is deprecated because its name does not
reflect its behavior. Its functionality is identical to the
non-deprecated function ::cudaDeviceSetLimit(), which should be used
instead.
Setting \p limit to \p value is a request by the application to update
the current limit maintained by the device. The driver is free to
modify the requested value to meet hardware requirements (this could be
clamping to minimum or maximum values, rounding up to the nearest element
size, etc.). The application can use ::cudaThreadGetLimit() to find out
exactly what the limit has been set to.
Setting each ::cudaLimit has its own specific restrictions, so each is
discussed here.
- ::cudaLimitStackSize controls the stack size of each GPU thread.
- ::cudaLimitPrintfFifoSize controls the size of the shared FIFO
used by the ::printf() device system call.
Setting ::cudaLimitPrintfFifoSize must be performed before
launching any kernel that uses the ::printf() device
system call, otherwise ::cudaErrorInvalidValue will be returned.
- ::cudaLimitMallocHeapSize controls the size of the heap used
by the ::malloc() and ::free() device system calls. Setting
::cudaLimitMallocHeapSize must be performed before launching
any kernel that uses the ::malloc() or ::free() device system calls,
otherwise ::cudaErrorInvalidValue will be returned.
|
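The adjustment the driver is allowed to make (clamping and rounding up to a granularity) can be sketched as simple arithmetic. This is one plausible form of that adjustment, not the driver's actual algorithm; the names and the granularity value are assumptions:

```java
// Sketch of a driver-style limit adjustment: round the requested value up
// to the nearest multiple of a granularity, then clamp into [min, max].
public class DeviceLimitRequest {
    static long adjust(long requested, long granularity, long min, long max) {
        long rounded = ((requested + granularity - 1) / granularity) * granularity;
        return Math.max(min, Math.min(max, rounded));
    }

    public static void main(String[] args) {
        // A request of 1000 bytes with 256-byte granularity becomes 1024.
        System.out.println(adjust(1000, 256, 512, 8192)); // 1024
        // A request below the minimum is clamped up to it.
        System.out.println(adjust(100, 256, 512, 8192));  // 512
    }
}
```

This is why the docs recommend reading the limit back with ::cudaThreadGetLimit() (or ::cudaDeviceGetLimit()) after setting it: the effective value may differ from the requested one.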
| org.bytedeco.cuda.global.cudart.cudaThreadSynchronize()
Note that this function is deprecated because its name does not
reflect its behavior. Its functionality is similar to the
non-deprecated function ::cudaDeviceSynchronize(), which should be used
instead.
Blocks until the device has completed all preceding requested tasks.
::cudaThreadSynchronize() returns an error if one of the preceding tasks
has failed. If the ::cudaDeviceScheduleBlockingSync flag was set for
this device, the host thread will block until the device has finished
its work.
|
| org.bytedeco.cuda.global.cudart.cudaUnbindTexture(textureReference)
Unbinds the texture bound to \p texref. If \p texref is not currently bound, no operation is performed.
|
| org.bytedeco.cuda.global.cudart.cuDeviceComputeCapability(int[], int[], int) |
| org.bytedeco.cuda.global.cudart.cuDeviceComputeCapability(IntBuffer, IntBuffer, int) |
| org.bytedeco.cuda.global.cudart.cuDeviceComputeCapability(IntPointer, IntPointer, int)
This function was deprecated as of CUDA 5.0 and its functionality superseded
by ::cuDeviceGetAttribute().
Returns in \p *major and \p *minor the major and minor revision numbers that
define the compute capability of the device \p dev.
|
| org.bytedeco.cuda.global.cudart.cuDeviceGetProperties(CUdevprop, int)
This function was deprecated as of CUDA 5.0 and replaced by ::cuDeviceGetAttribute().
Returns in \p *prop the properties of device \p dev. The ::CUdevprop
structure contains the following fields:
- ::maxThreadsPerBlock is the maximum number of threads per block;
- ::maxThreadsDim[3] is the maximum sizes of each dimension of a block;
- ::maxGridSize[3] is the maximum sizes of each dimension of a grid;
- ::sharedMemPerBlock is the total amount of shared memory available per
block in bytes;
- ::totalConstantMemory is the total amount of constant memory available on
the device in bytes;
- ::SIMDWidth is the warp size;
- ::memPitch is the maximum pitch allowed by the memory copy functions that
involve memory regions allocated through ::cuMemAllocPitch();
- ::regsPerBlock is the total number of registers available per block;
- ::clockRate is the clock frequency in kilohertz;
- ::textureAlign is the alignment requirement; texture base addresses that
are aligned to ::textureAlign bytes do not need an offset applied to
texture fetches. |
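The field list above can be mirrored as a plain Java class for illustration. Note this sketch is not the actual binding (that is org.bytedeco.cuda.cudart.CUdevprop); it only restates the fields and units listed above:

```java
// Plain-Java mirror of the CUdevprop fields listed above (illustrative
// only; not the org.bytedeco.cuda binding class).
public class DevPropSketch {
    int maxThreadsPerBlock;
    int[] maxThreadsDim = new int[3]; // max block dimensions
    int[] maxGridSize = new int[3];   // max grid dimensions
    int sharedMemPerBlock;            // bytes
    int totalConstantMemory;          // bytes
    int SIMDWidth;                    // warp size
    int memPitch;                     // max pitch in bytes
    int regsPerBlock;
    int clockRate;                    // kHz
    int textureAlign;                 // alignment in bytes

    public static void main(String[] args) {
        DevPropSketch p = new DevPropSketch();
        p.SIMDWidth = 32; // warp size on current NVIDIA GPUs
        System.out.println(p.SIMDWidth);
    }
}
```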
| org.bytedeco.cuda.global.cudart.cuFuncSetBlockShape(CUfunc_st, int, int, int)
Specifies the \p x, \p y, and \p z dimensions of the thread blocks that are
created when the kernel given by \p hfunc is launched.
|
| org.bytedeco.cuda.global.cudart.cuFuncSetSharedSize(CUfunc_st, int)
Sets through \p bytes the amount of dynamic shared memory that will be
available to each thread block when the kernel given by \p hfunc is launched.
|
| org.bytedeco.cuda.global.cudart.cuLaunch(CUfunc_st)
Invokes the kernel \p f on a 1 x 1 x 1 grid of blocks. The block
contains the number of threads specified by a previous call to
::cuFuncSetBlockShape().
|
| org.bytedeco.cuda.global.cudart.cuLaunchGrid(CUfunc_st, int, int)
Invokes the kernel \p f on a \p grid_width x \p grid_height grid of
blocks. Each block contains the number of threads specified by a previous
call to ::cuFuncSetBlockShape().
|
| org.bytedeco.cuda.global.cudart.cuLaunchGridAsync(CUfunc_st, int, int, CUstream_st)
Invokes the kernel \p f on a \p grid_width x \p grid_height grid of
blocks. Each block contains the number of threads specified by a previous
call to ::cuFuncSetBlockShape().
|
| org.bytedeco.cuda.global.cudart.cuParamSetf(CUfunc_st, int, float)
Sets a floating-point parameter that will be specified the next time the
kernel corresponding to \p hfunc is invoked. \p offset is a byte offset.
|
| org.bytedeco.cuda.global.cudart.cuParamSeti(CUfunc_st, int, int)
Sets an integer parameter that will be specified the next time the
kernel corresponding to \p hfunc is invoked. \p offset is a byte offset.
|
| org.bytedeco.cuda.global.cudart.cuParamSetSize(CUfunc_st, int)
Sets through \p numbytes the total size in bytes needed by the function
parameters of the kernel corresponding to \p hfunc.
|
| org.bytedeco.cuda.global.cudart.cuParamSetTexRef(CUfunc_st, int, CUtexref_st)
Makes the CUDA array or linear memory bound to the texture reference
\p hTexRef available to a device program as a texture. In this version of
CUDA, the texture reference must be obtained via ::cuModuleGetTexRef() and
the \p texunit parameter must be set to ::CU_PARAM_TR_DEFAULT.
|
| org.bytedeco.cuda.global.cudart.cuParamSetv(CUfunc_st, int, Pointer, int)
Copies an arbitrary amount of data (specified in \p numbytes) from \p ptr
into the parameter space of the kernel corresponding to \p hfunc. \p offset
is a byte offset.
|
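With the old cuParamSeti()/cuParamSetf()/cuParamSetv() interface, each byte offset had to be aligned to the parameter's natural alignment, and the final offset was passed to cuParamSetSize(). A minimal sketch of that offset bookkeeping in plain Java (the ALIGN_UP-style helper is illustrative, not a binding method):

```java
// Sketch of parameter-space packing for the deprecated driver-API launch
// path: align each offset to the parameter's size, advance past it, and
// use the final offset as the total for cuParamSetSize().
public class ParamPacking {
    static int alignUp(int offset, int alignment) {
        return (offset + alignment - 1) & ~(alignment - 1);
    }

    public static void main(String[] args) {
        int offset = 0;
        offset = alignUp(offset, 4);             // an int: 4-byte aligned
        offset += 4;                             // ...cuParamSeti() at 0
        offset = alignUp(offset, 8);             // a double: 8-byte aligned
        offset += 8;                             // ...cuParamSetv() at 8
        System.out.println(offset);              // 16, for cuParamSetSize()
    }
}
```

The non-deprecated ::cuLaunchKernel() takes the kernel parameters directly and makes this manual packing unnecessary.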
| org.bytedeco.cuda.global.cusparse.cusparseCcsc2hyb(cusparseContext, int, int, cusparseMatDescr, float2, int[], int[], cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseCcsc2hyb(cusparseContext, int, int, cusparseMatDescr, float2, IntBuffer, IntBuffer, cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseCcsc2hyb(cusparseContext, int, int, cusparseMatDescr, float2, IntPointer, IntPointer, cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseCcsr2hyb(cusparseContext, int, int, cusparseMatDescr, float2, int[], int[], cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseCcsr2hyb(cusparseContext, int, int, cusparseMatDescr, float2, IntBuffer, IntBuffer, cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseCcsr2hyb(cusparseContext, int, int, cusparseMatDescr, float2, IntPointer, IntPointer, cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseCdense2hyb(cusparseContext, int, int, cusparseMatDescr, float2, int, int[], cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseCdense2hyb(cusparseContext, int, int, cusparseMatDescr, float2, int, IntBuffer, cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseCdense2hyb(cusparseContext, int, int, cusparseMatDescr, float2, int, IntPointer, cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseChyb2csc(cusparseContext, cusparseMatDescr, cusparseHybMat, float2, int[], int[]) |
| org.bytedeco.cuda.global.cusparse.cusparseChyb2csc(cusparseContext, cusparseMatDescr, cusparseHybMat, float2, IntBuffer, IntBuffer) |
| org.bytedeco.cuda.global.cusparse.cusparseChyb2csc(cusparseContext, cusparseMatDescr, cusparseHybMat, float2, IntPointer, IntPointer) |
| org.bytedeco.cuda.global.cusparse.cusparseChyb2csr(cusparseContext, cusparseMatDescr, cusparseHybMat, float2, int[], int[]) |
| org.bytedeco.cuda.global.cusparse.cusparseChyb2csr(cusparseContext, cusparseMatDescr, cusparseHybMat, float2, IntBuffer, IntBuffer) |
| org.bytedeco.cuda.global.cusparse.cusparseChyb2csr(cusparseContext, cusparseMatDescr, cusparseHybMat, float2, IntPointer, IntPointer) |
| org.bytedeco.cuda.global.cusparse.cusparseChyb2dense(cusparseContext, cusparseMatDescr, cusparseHybMat, float2, int) |
| org.bytedeco.cuda.global.cusparse.cusparseChybmv(cusparseContext, int, float2, cusparseMatDescr, cusparseHybMat, float2, float2, float2) |
| org.bytedeco.cuda.global.cusparse.cusparseChybsv_analysis(cusparseContext, int, cusparseMatDescr, cusparseHybMat, cusparseSolveAnalysisInfo) |
| org.bytedeco.cuda.global.cusparse.cusparseChybsv_solve(cusparseContext, int, float2, cusparseMatDescr, cusparseHybMat, cusparseSolveAnalysisInfo, float2, float2) |
| org.bytedeco.cuda.global.cusparse.cusparseCreateHybMat(cusparseHybMat) |
| org.bytedeco.cuda.global.cusparse.cusparseCreateSolveAnalysisInfo(cusparseSolveAnalysisInfo) |
| org.bytedeco.cuda.global.cusparse.cusparseDcsc2hyb(cusparseContext, int, int, cusparseMatDescr, double[], int[], int[], cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseDcsc2hyb(cusparseContext, int, int, cusparseMatDescr, DoubleBuffer, IntBuffer, IntBuffer, cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseDcsc2hyb(cusparseContext, int, int, cusparseMatDescr, DoublePointer, IntPointer, IntPointer, cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseDcsr2hyb(cusparseContext, int, int, cusparseMatDescr, double[], int[], int[], cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseDcsr2hyb(cusparseContext, int, int, cusparseMatDescr, DoubleBuffer, IntBuffer, IntBuffer, cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseDcsr2hyb(cusparseContext, int, int, cusparseMatDescr, DoublePointer, IntPointer, IntPointer, cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseDdense2hyb(cusparseContext, int, int, cusparseMatDescr, double[], int, int[], cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseDdense2hyb(cusparseContext, int, int, cusparseMatDescr, DoubleBuffer, int, IntBuffer, cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseDdense2hyb(cusparseContext, int, int, cusparseMatDescr, DoublePointer, int, IntPointer, cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseDestroyHybMat(cusparseHybMat) |
| org.bytedeco.cuda.global.cusparse.cusparseDestroySolveAnalysisInfo(cusparseSolveAnalysisInfo) |
| org.bytedeco.cuda.global.cusparse.cusparseDhyb2csc(cusparseContext, cusparseMatDescr, cusparseHybMat, double[], int[], int[]) |
| org.bytedeco.cuda.global.cusparse.cusparseDhyb2csc(cusparseContext, cusparseMatDescr, cusparseHybMat, DoubleBuffer, IntBuffer, IntBuffer) |
| org.bytedeco.cuda.global.cusparse.cusparseDhyb2csc(cusparseContext, cusparseMatDescr, cusparseHybMat, DoublePointer, IntPointer, IntPointer) |
| org.bytedeco.cuda.global.cusparse.cusparseDhyb2csr(cusparseContext, cusparseMatDescr, cusparseHybMat, double[], int[], int[]) |
| org.bytedeco.cuda.global.cusparse.cusparseDhyb2csr(cusparseContext, cusparseMatDescr, cusparseHybMat, DoubleBuffer, IntBuffer, IntBuffer) |
| org.bytedeco.cuda.global.cusparse.cusparseDhyb2csr(cusparseContext, cusparseMatDescr, cusparseHybMat, DoublePointer, IntPointer, IntPointer) |
| org.bytedeco.cuda.global.cusparse.cusparseDhyb2dense(cusparseContext, cusparseMatDescr, cusparseHybMat, double[], int) |
| org.bytedeco.cuda.global.cusparse.cusparseDhyb2dense(cusparseContext, cusparseMatDescr, cusparseHybMat, DoubleBuffer, int) |
| org.bytedeco.cuda.global.cusparse.cusparseDhyb2dense(cusparseContext, cusparseMatDescr, cusparseHybMat, DoublePointer, int) |
| org.bytedeco.cuda.global.cusparse.cusparseDhybmv(cusparseContext, int, double[], cusparseMatDescr, cusparseHybMat, double[], double[], double[]) |
| org.bytedeco.cuda.global.cusparse.cusparseDhybmv(cusparseContext, int, DoubleBuffer, cusparseMatDescr, cusparseHybMat, DoubleBuffer, DoubleBuffer, DoubleBuffer) |
| org.bytedeco.cuda.global.cusparse.cusparseDhybmv(cusparseContext, int, DoublePointer, cusparseMatDescr, cusparseHybMat, DoublePointer, DoublePointer, DoublePointer) |
| org.bytedeco.cuda.global.cusparse.cusparseDhybsv_analysis(cusparseContext, int, cusparseMatDescr, cusparseHybMat, cusparseSolveAnalysisInfo) |
| org.bytedeco.cuda.global.cusparse.cusparseDhybsv_solve(cusparseContext, int, double[], cusparseMatDescr, cusparseHybMat, cusparseSolveAnalysisInfo, double[], double[]) |
| org.bytedeco.cuda.global.cusparse.cusparseDhybsv_solve(cusparseContext, int, DoubleBuffer, cusparseMatDescr, cusparseHybMat, cusparseSolveAnalysisInfo, DoubleBuffer, DoubleBuffer) |
| org.bytedeco.cuda.global.cusparse.cusparseDhybsv_solve(cusparseContext, int, DoublePointer, cusparseMatDescr, cusparseHybMat, cusparseSolveAnalysisInfo, DoublePointer, DoublePointer) |
| org.bytedeco.cuda.global.cusparse.cusparseScsc2hyb(cusparseContext, int, int, cusparseMatDescr, float[], int[], int[], cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseScsc2hyb(cusparseContext, int, int, cusparseMatDescr, FloatBuffer, IntBuffer, IntBuffer, cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseScsc2hyb(cusparseContext, int, int, cusparseMatDescr, FloatPointer, IntPointer, IntPointer, cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseScsr2hyb(cusparseContext, int, int, cusparseMatDescr, float[], int[], int[], cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseScsr2hyb(cusparseContext, int, int, cusparseMatDescr, FloatBuffer, IntBuffer, IntBuffer, cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseScsr2hyb(cusparseContext, int, int, cusparseMatDescr, FloatPointer, IntPointer, IntPointer, cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseSdense2hyb(cusparseContext, int, int, cusparseMatDescr, float[], int, int[], cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseSdense2hyb(cusparseContext, int, int, cusparseMatDescr, FloatBuffer, int, IntBuffer, cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseSdense2hyb(cusparseContext, int, int, cusparseMatDescr, FloatPointer, int, IntPointer, cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseShyb2csc(cusparseContext, cusparseMatDescr, cusparseHybMat, float[], int[], int[]) |
| org.bytedeco.cuda.global.cusparse.cusparseShyb2csc(cusparseContext, cusparseMatDescr, cusparseHybMat, FloatBuffer, IntBuffer, IntBuffer) |
| org.bytedeco.cuda.global.cusparse.cusparseShyb2csc(cusparseContext, cusparseMatDescr, cusparseHybMat, FloatPointer, IntPointer, IntPointer) |
| org.bytedeco.cuda.global.cusparse.cusparseShyb2csr(cusparseContext, cusparseMatDescr, cusparseHybMat, float[], int[], int[]) |
| org.bytedeco.cuda.global.cusparse.cusparseShyb2csr(cusparseContext, cusparseMatDescr, cusparseHybMat, FloatBuffer, IntBuffer, IntBuffer) |
| org.bytedeco.cuda.global.cusparse.cusparseShyb2csr(cusparseContext, cusparseMatDescr, cusparseHybMat, FloatPointer, IntPointer, IntPointer) |
| org.bytedeco.cuda.global.cusparse.cusparseShyb2dense(cusparseContext, cusparseMatDescr, cusparseHybMat, float[], int) |
| org.bytedeco.cuda.global.cusparse.cusparseShyb2dense(cusparseContext, cusparseMatDescr, cusparseHybMat, FloatBuffer, int) |
| org.bytedeco.cuda.global.cusparse.cusparseShyb2dense(cusparseContext, cusparseMatDescr, cusparseHybMat, FloatPointer, int) |
| org.bytedeco.cuda.global.cusparse.cusparseShybmv(cusparseContext, int, float[], cusparseMatDescr, cusparseHybMat, float[], float[], float[]) |
| org.bytedeco.cuda.global.cusparse.cusparseShybmv(cusparseContext, int, FloatBuffer, cusparseMatDescr, cusparseHybMat, FloatBuffer, FloatBuffer, FloatBuffer) |
| org.bytedeco.cuda.global.cusparse.cusparseShybmv(cusparseContext, int, FloatPointer, cusparseMatDescr, cusparseHybMat, FloatPointer, FloatPointer, FloatPointer) |
| org.bytedeco.cuda.global.cusparse.cusparseShybsv_analysis(cusparseContext, int, cusparseMatDescr, cusparseHybMat, cusparseSolveAnalysisInfo) |
| org.bytedeco.cuda.global.cusparse.cusparseShybsv_solve(cusparseContext, int, float[], cusparseMatDescr, cusparseHybMat, cusparseSolveAnalysisInfo, float[], float[]) |
| org.bytedeco.cuda.global.cusparse.cusparseShybsv_solve(cusparseContext, int, FloatBuffer, cusparseMatDescr, cusparseHybMat, cusparseSolveAnalysisInfo, FloatBuffer, FloatBuffer) |
| org.bytedeco.cuda.global.cusparse.cusparseShybsv_solve(cusparseContext, int, FloatPointer, cusparseMatDescr, cusparseHybMat, cusparseSolveAnalysisInfo, FloatPointer, FloatPointer) |
| org.bytedeco.cuda.global.cusparse.cusparseZcsc2hyb(cusparseContext, int, int, cusparseMatDescr, double2, int[], int[], cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseZcsc2hyb(cusparseContext, int, int, cusparseMatDescr, double2, IntBuffer, IntBuffer, cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseZcsc2hyb(cusparseContext, int, int, cusparseMatDescr, double2, IntPointer, IntPointer, cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseZcsr2hyb(cusparseContext, int, int, cusparseMatDescr, double2, int[], int[], cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseZcsr2hyb(cusparseContext, int, int, cusparseMatDescr, double2, IntBuffer, IntBuffer, cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseZcsr2hyb(cusparseContext, int, int, cusparseMatDescr, double2, IntPointer, IntPointer, cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseZdense2hyb(cusparseContext, int, int, cusparseMatDescr, double2, int, int[], cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseZdense2hyb(cusparseContext, int, int, cusparseMatDescr, double2, int, IntBuffer, cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseZdense2hyb(cusparseContext, int, int, cusparseMatDescr, double2, int, IntPointer, cusparseHybMat, int, int) |
| org.bytedeco.cuda.global.cusparse.cusparseZhyb2csc(cusparseContext, cusparseMatDescr, cusparseHybMat, double2, int[], int[]) |
| org.bytedeco.cuda.global.cusparse.cusparseZhyb2csc(cusparseContext, cusparseMatDescr, cusparseHybMat, double2, IntBuffer, IntBuffer) |
| org.bytedeco.cuda.global.cusparse.cusparseZhyb2csc(cusparseContext, cusparseMatDescr, cusparseHybMat, double2, IntPointer, IntPointer) |
| org.bytedeco.cuda.global.cusparse.cusparseZhyb2csr(cusparseContext, cusparseMatDescr, cusparseHybMat, double2, int[], int[]) |
| org.bytedeco.cuda.global.cusparse.cusparseZhyb2csr(cusparseContext, cusparseMatDescr, cusparseHybMat, double2, IntBuffer, IntBuffer) |
| org.bytedeco.cuda.global.cusparse.cusparseZhyb2csr(cusparseContext, cusparseMatDescr, cusparseHybMat, double2, IntPointer, IntPointer) |
| org.bytedeco.cuda.global.cusparse.cusparseZhyb2dense(cusparseContext, cusparseMatDescr, cusparseHybMat, double2, int) |
| org.bytedeco.cuda.global.cusparse.cusparseZhybmv(cusparseContext, int, double2, cusparseMatDescr, cusparseHybMat, double2, double2, double2) |
| org.bytedeco.cuda.global.cusparse.cusparseZhybsv_analysis(cusparseContext, int, cusparseMatDescr, cusparseHybMat, cusparseSolveAnalysisInfo) |
| org.bytedeco.cuda.global.cusparse.cusparseZhybsv_solve(cusparseContext, int, double2, cusparseMatDescr, cusparseHybMat, cusparseSolveAnalysisInfo, double2, double2) |
| org.bytedeco.cuda.global.cudart.cuSurfRefGetArray(CUarray_st, CUsurfref_st)
Returns in \p *phArray the CUDA array bound to the surface reference
\p hSurfRef, or returns ::CUDA_ERROR_INVALID_VALUE if the surface reference
is not bound to any CUDA array.
|
| org.bytedeco.cuda.global.cudart.cuSurfRefSetArray(CUsurfref_st, CUarray_st, int)
Sets the CUDA array \p hArray to be read and written by the surface reference
\p hSurfRef. Any previous CUDA array state associated with the surface
reference is superseded by this function. \p Flags must be set to 0.
The ::CUDA_ARRAY3D_SURFACE_LDST flag must have been set for the CUDA array.
Any CUDA array previously bound to \p hSurfRef is unbound.
|
| org.bytedeco.cuda.global.cudart.cuTexRefCreate(CUtexref_st)
Creates a texture reference and returns its handle in \p *pTexRef. Once
created, the application must call ::cuTexRefSetArray() or
::cuTexRefSetAddress() to associate the reference with allocated memory.
Other texture reference functions are used to specify the format and
interpretation (addressing, filtering, etc.) to be used when the memory is
read through this texture reference.
|
| org.bytedeco.cuda.global.cudart.cuTexRefDestroy(CUtexref_st)
Destroys the texture reference specified by \p hTexRef.
|
| org.bytedeco.cuda.global.cudart.cuTexRefGetAddress(LongPointer, CUtexref_st)
Returns in \p *pdptr the base address bound to the texture reference
\p hTexRef, or returns ::CUDA_ERROR_INVALID_VALUE if the texture reference
is not bound to any device memory range.
|
| org.bytedeco.cuda.global.cudart.cuTexRefGetAddressMode(IntPointer, CUtexref_st, int)
Returns in \p *pam the addressing mode corresponding to the
dimension \p dim of the texture reference \p hTexRef. Currently, the only
valid values for \p dim are 0 and 1.
|
| org.bytedeco.cuda.global.cudart.cuTexRefGetArray(CUarray_st, CUtexref_st)
Returns in \p *phArray the CUDA array bound to the texture reference
\p hTexRef, or returns ::CUDA_ERROR_INVALID_VALUE if the texture reference
is not bound to any CUDA array.
|
| org.bytedeco.cuda.global.cudart.cuTexRefGetBorderColor(FloatPointer, CUtexref_st)
Returns in \p pBorderColor the values of the RGBA color used by
the texture reference \p hTexRef.
The color value is of type float and holds color components in
the following sequence:
pBorderColor[0] holds 'R' component
pBorderColor[1] holds 'G' component
pBorderColor[2] holds 'B' component
pBorderColor[3] holds 'A' component
|
| org.bytedeco.cuda.global.cudart.cuTexRefGetFilterMode(IntPointer, CUtexref_st)
Returns in \p *pfm the filtering mode of the texture reference
\p hTexRef.
|
| org.bytedeco.cuda.global.cudart.cuTexRefGetFlags(IntPointer, CUtexref_st)
Returns in \p *pFlags the flags of the texture reference \p hTexRef.
|
| org.bytedeco.cuda.global.cudart.cuTexRefGetFormat(IntPointer, IntPointer, CUtexref_st)
Returns in \p *pFormat and \p *pNumChannels the format and number
of components of the CUDA array bound to the texture reference \p hTexRef.
If \p pFormat or \p pNumChannels is NULL, it will be ignored.
|
| org.bytedeco.cuda.global.cudart.cuTexRefGetMaxAnisotropy(IntPointer, CUtexref_st)
Returns the maximum anisotropy in \p pmaxAniso that's used when reading memory through
the texture reference \p hTexRef.
|
| org.bytedeco.cuda.global.cudart.cuTexRefGetMipmapFilterMode(IntPointer, CUtexref_st)
Returns the mipmap filtering mode in \p pfm that's used when reading memory through
the texture reference \p hTexRef.
|
| org.bytedeco.cuda.global.cudart.cuTexRefGetMipmapLevelBias(FloatPointer, CUtexref_st)
Returns the mipmap level bias in \p pBias that's added to the specified mipmap
level when reading memory through the texture reference \p hTexRef.
|
| org.bytedeco.cuda.global.cudart.cuTexRefGetMipmapLevelClamp(FloatPointer, FloatPointer, CUtexref_st)
Returns the min/max mipmap level clamps in \p pminMipmapLevelClamp and \p pmaxMipmapLevelClamp
that's used when reading memory through the texture reference \p hTexRef.
|
| org.bytedeco.cuda.global.cudart.cuTexRefGetMipmappedArray(CUmipmappedArray_st, CUtexref_st)
Returns in \p *phMipmappedArray the CUDA mipmapped array bound to the texture
reference \p hTexRef, or returns ::CUDA_ERROR_INVALID_VALUE if the texture reference
is not bound to any CUDA mipmapped array.
|
| org.bytedeco.cuda.global.cudart.cuTexRefSetAddress(SizeTPointer, CUtexref_st, long, long)
Binds a linear address range to the texture reference \p hTexRef. Any
previous address or CUDA array state associated with the texture reference
is superseded by this function. Any memory previously bound to \p hTexRef
is unbound.
Since the hardware enforces an alignment requirement on texture base
addresses, ::cuTexRefSetAddress() passes back a byte offset in
\p *ByteOffset that must be applied to texture fetches in order to read from
the desired memory. This offset must be divided by the texel size and
passed to kernels that read from the texture, so it can be applied to the
::tex1Dfetch() function.
If the device memory pointer was returned from ::cuMemAlloc(), the offset
is guaranteed to be 0 and NULL may be passed as the \p ByteOffset parameter.
The total number of elements (or texels) in the linear address range
cannot exceed ::CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE1D_LINEAR_WIDTH.
The number of elements is computed as (\p bytes / bytesPerElement),
where bytesPerElement is determined from the data format and number of
components set using ::cuTexRefSetFormat().
|
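The byte-offset arithmetic described above can be sketched in plain Java. This is illustrative only, with hypothetical helper names; the actual offset and alignment values come from the driver at run time.

```java
// Sketch of the offset arithmetic described above; the class and method
// names are illustrative, not part of the CUDA or JavaCPP APIs.
public class TexOffsetMath {
    // Convert the byte offset passed back by cuTexRefSetAddress() into the
    // texel offset a kernel would add to its tex1Dfetch() index.
    static long texelOffset(long byteOffset, int bytesPerElement) {
        if (byteOffset % bytesPerElement != 0) {
            throw new IllegalArgumentException("offset not texel-aligned");
        }
        return byteOffset / bytesPerElement;
    }

    // Number of texels in a linear range: bytes / bytesPerElement, which
    // must not exceed CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE1D_LINEAR_WIDTH.
    static boolean fitsLinearWidth(long bytes, int bytesPerElement, long maxWidth) {
        return bytes / bytesPerElement <= maxWidth;
    }

    public static void main(String[] args) {
        // A float4 texel is 16 bytes, so a 256-byte offset is 16 texels.
        System.out.println(texelOffset(256, 16));
        System.out.println(fitsLinearWidth(1L << 20, 16, 1L << 27));
    }
}
```

Note that when the pointer came from ::cuMemAlloc(), the offset is guaranteed to be 0 and this conversion is unnecessary.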
| org.bytedeco.cuda.global.cudart.cuTexRefSetAddress2D(CUtexref_st, CUDA_ARRAY_DESCRIPTOR, long, long)
Binds a linear address range to the texture reference \p hTexRef. Any
previous address or CUDA array state associated with the texture reference
is superseded by this function. Any memory previously bound to \p hTexRef
is unbound.
Using a ::tex2D() function inside a kernel requires a call to either
::cuTexRefSetArray() to bind the corresponding texture reference to an
array, or ::cuTexRefSetAddress2D() to bind the texture reference to linear
memory.
Function calls to ::cuTexRefSetFormat() cannot follow calls to
::cuTexRefSetAddress2D() for the same texture reference.
It is required that \p dptr be aligned to the appropriate hardware-specific
texture alignment. You can query this value using the device attribute
::CU_DEVICE_ATTRIBUTE_TEXTURE_ALIGNMENT. If an unaligned \p dptr is
supplied, ::CUDA_ERROR_INVALID_VALUE is returned.
\p Pitch has to be aligned to the hardware-specific texture pitch alignment.
This value can be queried using the device attribute
::CU_DEVICE_ATTRIBUTE_TEXTURE_PITCH_ALIGNMENT. If an unaligned \p Pitch is
supplied, ::CUDA_ERROR_INVALID_VALUE is returned.
Width and Height, which are specified in elements (or texels), cannot exceed
::CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE2D_LINEAR_WIDTH and
::CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE2D_LINEAR_HEIGHT respectively.
\p Pitch, which is specified in bytes, cannot exceed
::CU_DEVICE_ATTRIBUTE_MAXIMUM_TEXTURE2D_LINEAR_PITCH.
|
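The two alignment requirements above can be checked with simple modular arithmetic. A minimal sketch, assuming the alignment values were already queried via the device attributes named above (the class, method names, and sample values are hypothetical):

```java
// Illustrative alignment checks mirroring cuTexRefSetAddress2D()'s
// requirements; not a driver call.
public class TexAlign {
    static boolean isAligned(long value, long alignment) {
        return value % alignment == 0;
    }

    // A 2D linear binding is valid only if both the base pointer and the
    // row pitch satisfy their respective hardware alignments; otherwise
    // the driver returns CUDA_ERROR_INVALID_VALUE.
    static boolean valid2DBinding(long dptr, long pitch,
                                  long texAlignment, long pitchAlignment) {
        return isAligned(dptr, texAlignment) && isAligned(pitch, pitchAlignment);
    }

    public static void main(String[] args) {
        // Hypothetical values: 512-byte texture alignment, 32-byte pitch alignment.
        System.out.println(valid2DBinding(0x200000L, 2048, 512, 32));
        System.out.println(valid2DBinding(0x200001L, 2048, 512, 32));
    }
}
```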
| org.bytedeco.cuda.global.cudart.cuTexRefSetAddressMode(CUtexref_st, int, int)
Specifies the addressing mode \p am for the given dimension \p dim of the
texture reference \p hTexRef. If \p dim is zero, the addressing mode is
applied to the first parameter of the functions used to fetch from the
texture; if \p dim is 1, the second, and so on. ::CUaddress_mode is defined
as:
Note that this call has no effect if \p hTexRef is bound to linear memory.
Also, if the flag, ::CU_TRSF_NORMALIZED_COORDINATES, is not set, the only
supported address mode is ::CU_TR_ADDRESS_MODE_CLAMP. |
| org.bytedeco.cuda.global.cudart.cuTexRefSetArray(CUtexref_st, CUarray_st, int)
Binds the CUDA array \p hArray to the texture reference \p hTexRef. Any
previous address or CUDA array state associated with the texture reference
is superseded by this function. \p Flags must be set to
::CU_TRSA_OVERRIDE_FORMAT. Any CUDA array previously bound to \p hTexRef is
unbound.
|
| org.bytedeco.cuda.global.cudart.cuTexRefSetBorderColor(CUtexref_st, FloatPointer)
Specifies the RGBA border color via \p pBorderColor for the texture reference
\p hTexRef. The color value is of type float and holds color components in
the following sequence:
pBorderColor[0] holds 'R' component
pBorderColor[1] holds 'G' component
pBorderColor[2] holds 'B' component
pBorderColor[3] holds 'A' component
Note that the color values can be set only when the address mode is set to
CU_TR_ADDRESS_MODE_BORDER using ::cuTexRefSetAddressMode.
Applications using integer border color values have to "reinterpret_cast" their values to float.
|
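The component ordering and the "reinterpret_cast" note above can be sketched in plain Java, where `Float.intBitsToFloat` plays the role of the bit-level cast (the `FloatPointer` and the actual driver call are not shown; the class and method names are illustrative):

```java
// Sketch of the RGBA layout and the bit-reinterpretation described above.
public class BorderColor {
    // Pack R, G, B, A in the index order cuTexRefSetBorderColor() expects.
    static float[] rgba(float r, float g, float b, float a) {
        return new float[] { r, g, b, a };
    }

    // Java analogue of reinterpret_cast for an integer border color value:
    // reinterpret the raw bits as a float without numeric conversion.
    static float reinterpretAsFloat(int rawBits) {
        return Float.intBitsToFloat(rawBits);
    }

    public static void main(String[] args) {
        float[] c = rgba(1.0f, 0.5f, 0.25f, 1.0f);
        System.out.println(c[1]); // the 'G' component
        // The raw bit pattern round-trips unchanged.
        System.out.println(
            Float.floatToRawIntBits(reinterpretAsFloat(0x3F800000)) == 0x3F800000);
    }
}
```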
| org.bytedeco.cuda.global.cudart.cuTexRefSetFilterMode(CUtexref_st, int)
Specifies the filtering mode \p fm to be used when reading memory through
the texture reference \p hTexRef. ::CUfilter_mode_enum is defined as:
Note that this call has no effect if \p hTexRef is bound to linear memory. |
| org.bytedeco.cuda.global.cudart.cuTexRefSetFlags(CUtexref_st, int)
Specifies optional flags via \p Flags to specify the behavior of data
returned through the texture reference \p hTexRef. The valid flags are:
- ::CU_TRSF_READ_AS_INTEGER, which suppresses the default behavior of
having the texture promote integer data to floating point data in the
range [0, 1]. Note that textures with a 32-bit integer format
are not promoted, regardless of whether this
flag is specified;
- ::CU_TRSF_NORMALIZED_COORDINATES, which suppresses the
default behavior of having the texture coordinates range
from [0, Dim) where Dim is the width or height of the CUDA
array. Instead, the texture coordinates [0, 1.0) reference
the entire breadth of the array dimension;
|
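The coordinate convention toggled by ::CU_TRSF_NORMALIZED_COORDINATES can be sketched as plain arithmetic (this is illustrative only, not a driver call; the class and method names are hypothetical):

```java
// Sketch of the two texture coordinate conventions described above.
public class TexCoords {
    // Default: unnormalized coordinates range over [0, dim).
    static float toUnnormalized(float normalizedCoord, int dim) {
        return normalizedCoord * dim;
    }

    // With CU_TRSF_NORMALIZED_COORDINATES set, [0, 1.0) spans the
    // whole dimension instead.
    static float toNormalized(float coord, int dim) {
        return coord / dim;
    }

    public static void main(String[] args) {
        int width = 1024;
        System.out.println(toNormalized(512f, width));
        System.out.println(toUnnormalized(0.5f, width));
    }
}
```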
| org.bytedeco.cuda.global.cudart.cuTexRefSetFormat(CUtexref_st, int, int)
Specifies the format of the data to be read by the texture reference
\p hTexRef. \p fmt and \p NumPackedComponents are exactly analogous to the
::Format and ::NumChannels members of the ::CUDA_ARRAY_DESCRIPTOR structure:
They specify the format of each component and the number of components per
array element.
|
| org.bytedeco.cuda.global.cudart.cuTexRefSetMaxAnisotropy(CUtexref_st, int)
Specifies the maximum anisotropy \p maxAniso to be used when reading memory through
the texture reference \p hTexRef.
Note that this call has no effect if \p hTexRef is bound to linear memory.
|
| org.bytedeco.cuda.global.cudart.cuTexRefSetMipmapFilterMode(CUtexref_st, int)
Specifies the mipmap filtering mode \p fm to be used when reading memory through
the texture reference \p hTexRef. ::CUfilter_mode_enum is defined as:
Note that this call has no effect if \p hTexRef is not bound to a mipmapped array. |
| org.bytedeco.cuda.global.cudart.cuTexRefSetMipmapLevelBias(CUtexref_st, float)
Specifies the mipmap level bias \p bias to be added to the specified mipmap level when
reading memory through the texture reference \p hTexRef.
Note that this call has no effect if \p hTexRef is not bound to a mipmapped array.
|
| org.bytedeco.cuda.global.cudart.cuTexRefSetMipmapLevelClamp(CUtexref_st, float, float)
Specifies the min/max mipmap level clamps, \p minMipmapLevelClamp and \p maxMipmapLevelClamp
respectively, to be used when reading memory through the texture reference
\p hTexRef.
Note that this call has no effect if \p hTexRef is not bound to a mipmapped array.
|
| org.bytedeco.cuda.global.cudart.cuTexRefSetMipmappedArray(CUtexref_st, CUmipmappedArray_st, int)
Binds the CUDA mipmapped array \p hMipmappedArray to the texture reference \p hTexRef.
Any previous address or CUDA array state associated with the texture reference
is superseded by this function. \p Flags must be set to ::CU_TRSA_OVERRIDE_FORMAT.
Any CUDA array previously bound to \p hTexRef is unbound.
|
| org.bytedeco.cuda.cudart.cudaPointerAttributes.isManaged()
Indicates whether this pointer points to managed memory.
|
| org.bytedeco.cuda.cudart.cudaPointerAttributes.memoryType()
The physical location of the memory, ::cudaMemoryTypeHost or
::cudaMemoryTypeDevice. Note that managed memory can return either
::cudaMemoryTypeDevice or ::cudaMemoryTypeHost regardless of its
physical location.
|
| org.bytedeco.cuda.global.nvml.NVML_DOUBLE_BIT_ECC()
Mapped to \ref NVML_MEMORY_ERROR_TYPE_UNCORRECTED
|
| org.bytedeco.cuda.global.nvml.NVML_SINGLE_BIT_ECC()
Mapped to \ref NVML_MEMORY_ERROR_TYPE_CORRECTED
|
| org.bytedeco.cuda.global.nvml.nvmlDeviceGetDetailedEccErrors(nvmlDevice_st, int, int, nvmlEccErrorCounts_t)
This API supports only a fixed set of ECC error locations.
On different GPU architectures, different locations are supported.
See \ref nvmlDeviceGetMemoryErrorCounter.
For Fermi™ or newer fully supported devices.
Only applicable to devices with ECC.
Requires \a NVML_INFOROM_ECC version 2.0 or higher to report aggregate location-based ECC counts.
Requires \a NVML_INFOROM_ECC version 1.0 or higher to report all other ECC counts.
Requires ECC mode to be enabled.
Detailed errors provide separate ECC counts for specific parts of the memory system.
Reports zero for unsupported ECC error counters when only a subset of ECC error counters is supported.
See \ref nvmlMemoryErrorType_t for a description of available bit types.
See \ref nvmlEccCounterType_t for a description of available counter types.
See \ref nvmlEccErrorCounts_t for a description of provided detailed ECC counts.
|
| org.bytedeco.cuda.global.nvml.nvmlDeviceGetHandleBySerial(BytePointer, nvmlDevice_st)
Since more than one GPU can exist on a single board, this function is deprecated in favor
of \ref nvmlDeviceGetHandleByUUID.
For dual-GPU boards this function will return NVML_ERROR_INVALID_ARGUMENT.
Starting from NVML 5, this API causes NVML to initialize the target GPU.
NVML may initialize additional GPUs as it searches for the target GPU.
|
Copyright © 2019. All rights reserved.