Run Bittensor via Docker

Because Docker containers encapsulate everything an application needs to run (and only those things), any host with the Docker runtime installed, whether a developer's laptop or a public cloud instance, can run a Docker container. This is useful when deploying a model to a cloud GPU or a remote peer, since it avoids setting up a Python environment on that host by hand. Once a model has been trained and is ready to be deployed onto the Bittensor network, you may therefore prefer to deploy it with Docker rather than a native Python call. For this purpose, Bittensor includes a bash script, `start_bittensor.sh`, that automatically pulls the latest image from Docker Hub and constructs a Docker container able to run a given model (a rough sketch of the underlying Docker commands appears after the table below). The parameters of `start_bittensor.sh` are summarized as follows:

| Flag | Description |
|------|-------------|
| `-n`, `--neuron` | Which model (or "neuron") in `examples/` to run, e.g. `mnist`, `cifar`, `bert`. |
| `-l`, `--logdir` | Logging directory. |
| `-p`, `--port` | Bind-side port for accepting requests. |
| `-c`, `--chain_endpoint` | Bittensor chain endpoint. |
| `-a`, `--axon_port` | Axon terminal bind port. |
| `-m`, `--metagraph_port` | Metagraph bind port. |
| `-s`, `--metagraph_size` | Metagraph cache size. |
| `-b`, `--bootstrap` | Metagraph boot peer. |
| `-k`, `--neuron_key` | Neuron key. |
| `-r`, `--remote_ip` | Remote serving IP. |
| `-mp`, `--model_path` | Path to a saved version of the model to resume training. |
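For reference, what the script automates is roughly a `docker pull` followed by a `docker run`. The sketch below is illustrative only, not the script itself: the image name `bittensor/bittensor:latest`, the container entrypoint, the port value `8091`, and the mount paths are all assumptions chosen for the example; only the flag names come from the table above.

```bash
# A minimal sketch of what start_bittensor.sh automates. The image name,
# entrypoint arguments, port number, and paths below are assumptions for
# illustration, not values taken from the script.

# Pull the latest image from Docker Hub.
docker pull bittensor/bittensor:latest

# Run the image as a detached container: publish the serving port,
# mount a host directory for logs, and pass the neuron to run.
docker run -d \
    --name bittensor-mnist \
    -p 8091:8091 \
    -v "$HOME/.bittensor/logs:/logs" \
    bittensor/bittensor:latest \
    --neuron mnist --logdir /logs --port 8091
```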

To start a dockerized version of a model, simply run the following:

```bash
./start_bittensor.sh
```

This uses the default values for all of the command-line arguments above and runs the `mnist` model under `examples/` on localhost.
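To run a different neuron or override the defaults, pass flags from the table above. For example, the following invocation uses the `cifar` example listed in the table; the port number and log directory here are illustrative values, not documented defaults:

```bash
# Run the cifar neuron instead of the default mnist, with an explicit
# serving port and log directory (both values are illustrative).
./start_bittensor.sh --neuron cifar --port 8091 --logdir ~/.bittensor/logs
```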