Run Bittensor via Python

First, ensure that you have Python 3.5 or above installed to run Bittensor. Once you have done so, move to the examples directory and pick a model to run. You can run the model directly by calling:


This runs a single instance of the model with default parameters; it will not communicate with any peers. The parameters for running a Bittensor model are as follows:

Flag              Description
--chain_endpoint  Bittensor chain endpoint.
--axon_port       Axon terminal bind port.
--metagraph_port  Metagraph bind port.
--metagraph_size  Metagraph cache size.
--bootstrap       Metagraph boot peer.
--neuron_key      Neuron key.
--remote_ip       Remote serving IP.
--model_path      Path to a saved model to resume training from.
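The flag table above can be sketched as an argparse configuration. This is an illustrative reconstruction, not Bittensor's actual argument parser; the default values shown are hypothetical and only the flag names and help strings come from the table.

```python
import argparse

# Illustrative sketch: how the flags above might be parsed with argparse.
# The defaults here are hypothetical, not Bittensor's actual defaults.
parser = argparse.ArgumentParser(description="Run a Bittensor example model.")
parser.add_argument("--chain_endpoint", type=str, default=None,
                    help="Bittensor chain endpoint.")
parser.add_argument("--axon_port", type=int, default=8091,
                    help="Axon terminal bind port.")
parser.add_argument("--metagraph_port", type=int, default=8120,
                    help="Metagraph bind port.")
parser.add_argument("--metagraph_size", type=int, default=10000,
                    help="Metagraph cache size.")
parser.add_argument("--bootstrap", type=str, default=None,
                    help="Metagraph boot peer.")
parser.add_argument("--neuron_key", type=str, default=None,
                    help="Neuron key.")
parser.add_argument("--remote_ip", type=str, default=None,
                    help="Remote serving IP.")
parser.add_argument("--model_path", type=str, default=None,
                    help="Path to a saved model to resume training from.")

# Parse an example command line rather than sys.argv so this runs standalone.
args = parser.parse_args(["--metagraph_port", "8120"])
print(args.metagraph_port)  # → 8120
```

Flags you do not pass fall back to their defaults, which is why the first example above can run with no arguments at all.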

To run a model on a given port (e.g. 8120) so that peers can connect to it, run:

python --metagraph_port 8120

Once this model is running, you can run another model that bootstraps itself to port 8120 as follows:

python --bootstrap ''

This starts a second instance of the model that connects to the first; the two instances will then begin sharing knowledge with each other.
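The --bootstrap flag points a new peer at an existing one. A minimal sketch of splitting such a peer address, assuming it is given as a host:port string (the exact format Bittensor expects is an assumption, not confirmed above):

```python
def parse_boot_peer(address: str) -> tuple:
    """Split a 'host:port' bootstrap address into its parts.

    Assumes the --bootstrap flag takes a host:port string; this is an
    illustrative helper, not part of Bittensor itself.
    """
    host, _, port = address.rpartition(":")
    return host, int(port)

print(parse_boot_peer("127.0.0.1:8120"))  # → ('127.0.0.1', 8120)
```

Using rpartition rather than split keeps the helper correct even if the host part itself contains a colon.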