nvidia-cuda-proxy-control man page

nvidia-cuda-proxy-control - NVIDIA CUDA proxy management program

Synopsis

nvidia-cuda-proxy-control [-d]

Description

CUDA proxy is a feature that allows multiple CUDA processes to share a single GPU context. A CUDA program runs in proxy mode if the proxy control daemon is running on the system. When a CUDA program starts, it tries to connect to the daemon, which then creates a proxy server process for the connecting client if one does not already exist for the user (UID) who launched the client. Each user (UID) has its own proxy server process. The proxy server creates the shared GPU context, manages its clients, and issues work to the GPU on behalf of its clients. Proxy mode is intended to be transparent to CUDA programs.

Currently, proxy is available on 64-bit Linux only, and requires a device that supports Unified Virtual Addressing (UVA). Applications requiring pre-CUDA 4.0 APIs are not supported under proxy mode.

Options

-d

Start the proxy control daemon, assuming the user has sufficient privilege (e.g. root).

<no arguments>

Start the front-end management user interface to the daemon; the daemon must already be running. The front-end UI keeps reading commands from stdin until EOF. Commands are separated by the newline character. If an invalid command is issued and rejected, an error message is printed to stdout. The exit status of the front-end UI is zero if communication with the daemon is successful. A non-zero value is returned if the daemon is not found or the connection to the daemon breaks unexpectedly. See the "quit" command below for more information about the exit status.
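For example, a typical session might look like the following sketch (assumes the nvidia-cuda-proxy-control binary is installed and the invoking user has sufficient privilege):

```shell
# Start the proxy control daemon first (requires e.g. root):
sudo nvidia-cuda-proxy-control -d

# Issue a command to the daemon by piping it to the front-end UI
# on stdin, one command per line:
echo "get_server_list" | nvidia-cuda-proxy-control

# Alternatively, run the front-end UI interactively; it reads
# commands until EOF (Ctrl-D):
nvidia-cuda-proxy-control
```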

Commands supported by the proxy control daemon:

get_server_list

Print out a list of PIDs of all proxy server processes.

start_server -uid UID

Start a new proxy server process for the specified user (UID).

shutdown_server PID [-f]

Shut down the proxy server with the given PID. The proxy server stops accepting new client connections and exits once all current clients disconnect. With -f, the shutdown is forced and immediate. Because the proxy server creates and issues GPU work on behalf of its clients, a forced shutdown may be required if a client launches a faulty kernel that runs forever.
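For example, to force an unresponsive proxy server to shut down (the PID 1234 below is illustrative; obtain real PIDs from get_server_list):

```shell
# List the PIDs of all running proxy servers:
echo "get_server_list" | nvidia-cuda-proxy-control

# Force an immediate shutdown of the server with PID 1234:
echo "shutdown_server 1234 -f" | nvidia-cuda-proxy-control
```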

get_client_list PID

Print out a list of PIDs of all clients connected to the proxy server with the given PID.

quit [-t TIMEOUT]

Shut down the proxy control daemon process and all proxy servers. The proxy control daemon stops accepting new clients while waiting for current proxy servers and proxy clients to finish. If TIMEOUT is specified (in seconds), the daemon forces the proxy servers to shut down if they are still running after TIMEOUT seconds.

This command is synchronous: the front-end UI waits for the daemon to shut down, then returns the daemon's exit status. The exit status is zero if and only if all proxy servers exited gracefully.
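Because quit is synchronous and propagates the daemon's exit status, a script can check it directly. A sketch (the 10-second timeout is illustrative):

```shell
# Ask the daemon to shut down, forcing any remaining proxy
# servers after 10 seconds:
echo "quit -t 10" | nvidia-cuda-proxy-control

# A non-zero exit status means a server did not exit gracefully
# (or the daemon could not be reached):
if [ $? -ne 0 ]; then
    echo "proxy shutdown was not graceful" >&2
fi
```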

Environment

CUDA_PROXY_PIPE_DIRECTORY

Specify the directory that contains the named pipes used for communication among the proxy control daemon, proxy servers, and proxy clients. The value of this environment variable must be the same for the proxy control daemon and all proxy client processes. The default directory is /tmp/nvidia-proxy.

CUDA_PROXY_LOG_DIRECTORY

Specify the directory that contains the proxy log files. This variable is used by the proxy control daemon only. The default directory is /var/log/nvidia-proxy.
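For example, to run the daemon with non-default directories (the paths below are illustrative; both directories must already exist and be writable by the daemon):

```shell
# CUDA_PROXY_PIPE_DIRECTORY must be set identically for the
# daemon and every proxy client process; CUDA_PROXY_LOG_DIRECTORY
# is read by the daemon only.
export CUDA_PROXY_PIPE_DIRECTORY=/var/run/my-cuda-proxy
export CUDA_PROXY_LOG_DIRECTORY=/var/log/my-cuda-proxy

# -E preserves the exported variables across sudo:
sudo -E nvidia-cuda-proxy-control -d
```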

Files

Log files created by the proxy control daemon in the directory specified by CUDA_PROXY_LOG_DIRECTORY:

control.log

Records the startup and shutdown of the proxy control daemon, the user commands issued and their results, and the status of proxy servers.

server.log

Records the startup and shutdown of proxy servers, and the status of proxy clients.

Info

2012-04-10 nvidia-cuda-proxy-control NVIDIA