Frequently Asked Questions

How do I change my parameters, such as payout address, allotted storage space, and bandwidth?

1. Stop the running Storage Node container:

docker stop -t 300 storagenode

2. Remove the existing container:

docker rm storagenode

3. Start your storage node again by running the following command after editing WALLET, EMAIL, ADDRESS, BANDWIDTH, STORAGE, <identity-dir>, and <storage-dir> (a filled-in example follows the command listings below):

Windows:

docker run -d --restart unless-stopped -p 28967:28967 -e WALLET="0xXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" -e EMAIL="user@example.com" -e ADDRESS="domain.ddns.net:28967" -e BANDWIDTH="20TB" -e STORAGE="2TB" --mount type=bind,source="<identity-dir>",destination=/app/identity --mount type=bind,source="<storage-dir>",destination=/app/config --name storagenode storjlabs/storagenode:alpha

Non-ARM based platforms:

docker run -d --restart unless-stopped -p 28967:28967 \
-e WALLET="0xXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" \
-e EMAIL="user@example.com" \
-e ADDRESS="domain.ddns.net:28967" \
-e BANDWIDTH="20TB" \
-e STORAGE="2TB" \
--mount type=bind,source="<identity-dir>",destination=/app/identity \
--mount type=bind,source="<storage-dir>",destination=/app/config \
--name storagenode storjlabs/storagenode:alpha

ARM-based platforms:

docker run -d --restart unless-stopped -p 28967:28967 \
-e WALLET="0xXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" \
-e EMAIL="user@example.com" \
-e ADDRESS="domain.ddns.net:28967" \
-e BANDWIDTH="20TB" \
-e STORAGE="2TB" \
--mount type=bind,source="<identity-dir>",destination=/app/identity \
--mount type=bind,source="<storage-dir>",destination=/app/config \
--name storagenode storjlabs/storagenode:arm
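
For example, on a non-ARM Linux machine the filled-in command might look like the following; the wallet address, email, DDNS hostname, and directory paths are illustrative only, so substitute your own values:

docker run -d --restart unless-stopped -p 28967:28967 \
-e WALLET="0x0000000000000000000000000000000000000000" \
-e EMAIL="user@example.com" \
-e ADDRESS="mynode.ddns.net:28967" \
-e BANDWIDTH="20TB" \
-e STORAGE="2TB" \
--mount type=bind,source="/home/user/.local/share/storj/identity/storagenode",destination=/app/identity \
--mount type=bind,source="/mnt/storj/storagenode",destination=/app/config \
--name storagenode storjlabs/storagenode:alpha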

How do I set a smaller or larger maximum number of concurrent uploads for my node?

Slower nodes, such as a Raspberry Pi 3, may have difficulties getting any data. In previous releases (up to 0.14.3) they accepted too many concurrent uploads but were unable to finish them in time. Releases after 0.14.3 include a configuration option to fine-tune the number of concurrent uploads. If your node is already running, first stop it with docker stop -t 300 storagenode, then edit your config.yaml file (on Windows, use Notepad++, not Notepad; on macOS, use TextEdit, not Notes) and add the following line at the end of the file:

storage2.max-concurrent-requests: 7

Save the config.yaml file and restart your node with docker restart -t 300 storagenode if it was already running. If this is the first time you are starting your node, please follow the instructions in the following sections to use the proper parameters with your docker run storagenode command.

This allows slow nodes to focus on a smaller number of uploads and finish them as fast as possible, while refusing uploads they could not have completed anyway. The value 7 is just an initial suggestion; adjust it up or down and monitor your node's performance until you find the number of concurrent requests that does not overwhelm your particular node. Faster nodes can handle a higher number of concurrent requests than slow nodes.
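
Putting the steps together, a typical tuning cycle looks like the sketch below; the config.yaml location is an assumption based on the <storage-dir> you mounted to /app/config, so adjust the path (or simply use an editor) to match your setup:

# stop the node before changing the configuration
docker stop -t 300 storagenode
# append the option to the end of config.yaml in your storage directory
echo 'storage2.max-concurrent-requests: 7' >> <storage-dir>/config.yaml
# restart the node so the new limit takes effect
docker restart -t 300 storagenode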

Many other storage node operators are working through this as well; see how they approach it.

How do I check my logs?

You can look at your logs to see if you have some errors indicating that something is not functioning properly:

docker logs storagenode
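
Since the log can be long, you can also filter it for error lines, for example with grep (the 2>&1 merges the container's stderr stream into stdout so grep sees every line):

docker logs storagenode 2>&1 | grep -i error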

Why is my log so long?

Use this command if you just want to see the last 20 lines of the log:

docker logs --tail 20 storagenode
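
If you want to keep watching new log lines as they arrive, you can add the --follow flag (press Ctrl+C to stop following):

docker logs --tail 20 --follow storagenode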

How do I redirect my logs to a file?

1. To redirect the logs to a file, stop your node:

docker stop -t 300 storagenode

2. Then edit your config.yaml (you can use an editor such as nano or vi) to add:

log.output: "/app/config/node.log"

3. Start your node again:

docker start storagenode

When you use this option, docker logs commands no longer show your node log. Use the file instead.
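
Because /app/config is mounted from your storage directory, the redirected log also ends up on the host. A minimal way to follow it, assuming the node.log path configured above (adjust <storage-dir> to your actual mount source):

tail -f <storage-dir>/node.log

Or from inside the container:

docker exec -it storagenode tail -f /app/config/node.log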

How do I shut down my node for maintenance on my system?

If you need to shut down the Storage Node for maintenance on your system, run:

docker stop -t 300 storagenode

After you have finished your maintenance, restart the node with:

docker start storagenode

How do I migrate my Node to a new Drive or Computer?

If you want to migrate your node to a new drive or computer, you need to move both your storage folder and your identity folder to the new location, and then update the corresponding --mount parameters (for both the storage and identity folders) in your docker run storagenode command.
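
A minimal sketch of such a migration on Linux, assuming both the old and new locations are reachable from the same machine (the paths are illustrative, and rsync is just one option; any tool that copies the directories completely will do):

# stop and remove the old container so files are not changing during the copy
docker stop -t 300 storagenode
docker rm storagenode
# copy the identity and storage folders to the new locations
rsync -aP /old/identity/storagenode/ /new/identity/storagenode/
rsync -aP /old/storagenode-storage/ /new/storagenode-storage/
# then run your docker run storagenode command again with the --mount source paths pointing at the new locations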

How do I estimate my Payouts per Satellite?

If you would like to estimate how much you can expect to be paid for running your node during a given month, please follow the instructions here. Note that the script will not give you exact values; your actual payout may differ slightly from what you calculate for each satellite. Also note that the script estimates your payout based on how long you have already been running the node on a satellite, taking into account the amount withheld in the initial months, which is not immediately paid out to the node operator. Please see more details about held amounts in this blog post.

What other commands can I run?

Run help to see other commands:

docker exec -it storagenode /app/storagenode help

Run the following to execute other commands:

docker exec -it storagenode /app/storagenode <<command>>
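
For example, assuming your image includes the version subcommand (this may vary between releases), you could check the version of the running node with:

docker exec -it storagenode /app/storagenode version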

What if I'm using a Remote Connection?

If you must use a remote connection, it is highly recommended to run the steps inside a virtual console such as tmux or screen, due to the length of time some of them take.
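
A minimal sketch using tmux, assuming it is installed on the remote machine: create a named session, run the long steps inside it, detach with Ctrl+b followed by d, and reattach later if the connection drops:

tmux new -s storagenode
# ... run the long-running steps here ...
tmux attach -t storagenode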

It is recommended to perform the next steps local to the machine, and not via a remote connection.

What if I'm using Windows or Mac OS?

Your node may require extra monitoring. You may have to frequently restart Docker from the Docker desktop application when the Last Contact time shown in the dashboard grows larger than a few seconds.

This is your Storage Node Dashboard. The Last Contact time may lag on Windows and Mac, requiring you to restart your node.