Hive Developer Portal - Quickstart
Getting started developing robust, feature-rich Hive applications couldn't be easier. There are several ways to access Hive data, depending on your infrastructure and objectives.
Applications that interface directly with the Hive blockchain will need to connect to a hived node. Developers may choose to use one of the available public API nodes, or run their own instance of a node.

While hived fully supports WebSockets (ws://), public nodes typically do not; all nodes listed use HTTPS (https://). If you require WebSockets for your solution, please consider setting up your own hived node, or proxy WebSockets to HTTPS using lineman.
For a report on the latest public full nodes, check the latest posts on @fullnodeupdate by @holger80.
The simplest way to get started is by deploying a pre-built dockerized container.
Dockerized p2p Node
To install a witness or seed node:
```
git clone https://github.com/someguy123/hive-docker.git
cd hive-docker

# If you don't already have a docker installation, this will install it for you
./run.sh install_docker

# This downloads/updates the low-memory docker image for Hive
./run.sh install

# If you are a witness, you need to adjust the configuration as needed
# e.g. witness name, private key, logging config, turn off p2p-endpoint etc.
# If you're running a seed, then don't worry about the config, it will just work
nano data/witness_node_data_dir/config.ini

# (optional) Setting the .env file up (see the env settings section of this readme)
# will help you to adjust settings for hive-in-a-box
nano .env

# Once you've configured your server, it's recommended to download the block log,
# as replays can be faster than p2p download
./run.sh dlblocks

# You'll also want to set the shared memory size (use sudo if not logged in as root).
# Adjust 64G to whatever size is needed for your type of server and make sure to
# leave growth room. Please be aware that the shared memory size changes constantly.
# Ask in a witness chatroom if you're unsure.
./run.sh shm_size 64G

# Then after you've downloaded the blockchain, you can start hived in replay mode
./run.sh replay

# If you DON'T want to replay, use "start" instead
./run.sh start
```
You may want to persist the /dev/shm size (shared memory) across reboots. To do this, you can edit /etc/fstab. Please be very careful: any mistake in this file can make your system unbootable.
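As a sketch, a tmpfs entry in /etc/fstab along these lines would fix the shared memory size at boot (64G here is just an example; match it to the shm_size you chose above):

```
# /etc/fstab — example entry; adjust size=64G to your server
tmpfs   /dev/shm   tmpfs   defaults,size=64G   0   0
```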
Dockerized Full Node
To install a full RPC node, follow the same steps as above, but use install_full instead of install. Remember to adjust the config; you'll need a higher shared memory size (potentially up to 1 TB) and various additional plugins.
For handling requests to your full node in docker, I recommend spinning up an nginx container, and connecting nginx to the Hive node using a docker network.
```
docker network create rpc_default
# Assuming your RPC container is called "rpc1" instead of witness/seed
docker network connect rpc_default rpc1
docker network connect rpc_default nginx
```
Nginx will now be able to access the rpc1 container via http://rpc1:8090 (assuming 8090 is the RPC port in your config). You can then set up SSL and container port forwarding as needed for nginx.
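A minimal nginx reverse-proxy configuration for this setup might look like the following sketch (the file path and server_name are assumptions; SSL and rate limiting are omitted for brevity):

```nginx
# Example: /etc/nginx/conf.d/rpc.conf inside the nginx container
server {
    listen 80;
    server_name _;

    location / {
        # rpc1 resolves via the rpc_default docker network
        proxy_pass http://rpc1:8090;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```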
Customized Docker Node
If the above options do not meet your needs, refer to Hive-in-a-box by @someguy123:
Building Without Docker
Full non-docker steps can be reviewed here:
Normally, syncing the blockchain starts from the very first block: block 0, the genesis block. It might take a long time to catch up with the live network, because the node connects to various p2p nodes in the Hive network and requests blocks from 0 to the head block. It stores blocks in the block log file and builds up the current state in the shared memory file. However, there is a way to bootstrap syncing by using a trusted block_log file. The block log is an external, append-only log of the blocks; because the log is append-only, blocks are only added to it after they are irreversible.
A trusted block log file helps to download blocks faster. Various operators provide a public block_log file, which can be downloaded from:

These block_log files are updated periodically; as of March 2021, the uncompressed block_log file is ~350 GB. (The Docker container on the stable branch of the Hive source code has an option, USE_PUBLIC_BLOCKLOG=1, to download the latest block log and start the Hive node with replay.)
The block log should be placed in the blockchain directory below data_dir, and the node should be started with --replay-blockchain to ensure the block log is valid and to continue syncing from the point of the snapshot. Replay uses the downloaded block log file to build up the shared memory file up to the highest block stored in that log, and then continues with sync up to the head block.
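The placement described above can be sketched as shell commands (paths are examples; for a non-docker setup, hived must already be built and on your PATH):

```shell
# Example layout only: place the downloaded block_log under <data_dir>/blockchain
DATA_DIR="${DATA_DIR:-$HOME/.hived}"
mkdir -p "$DATA_DIR/blockchain"

# Move the downloaded block_log into place (assumes it is in the current directory)
if [ -f block_log ]; then
  mv block_log "$DATA_DIR/blockchain/block_log"
fi

# Then start the node in replay mode (command sketched, not run here):
#   hived --data-dir="$DATA_DIR" --replay-blockchain
```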
Replay helps to sync the blockchain at a much faster rate, but as the blockchain grows in size, replay might also take some time to verify blocks.
There is another trick that might help speed up sync/replay on more modestly equipped servers:
```
while :
do
  dd if=blockchain/block_log iflag=nocache count=0
  sleep 60
done
```
The above bash script drops block_log from the OS cache, leaving more memory free for backing the blockchain database. It might also help while running live, but measurement would be needed to confirm this.
A few other tricks that might help:
For Linux users, the virtual memory subsystem writes dirty pages of the shared file out to disk more often than is optimal, which results in hived being slowed down by redundant IO operations. The following settings are recommended to optimize reindex time.
```
echo    75 | sudo tee /proc/sys/vm/dirty_background_ratio
echo  1000 | sudo tee /proc/sys/vm/dirty_expire_centisecs
echo    80 | sudo tee /proc/sys/vm/dirty_ratio
echo 30000 | sudo tee /proc/sys/vm/dirty_writeback_centisecs
```
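To make these values survive a reboot, they can also be placed in a sysctl drop-in file (the filename below is an example):

```
# /etc/sysctl.d/90-hived-reindex.conf — applied at boot, or via `sudo sysctl --system`
vm.dirty_background_ratio = 75
vm.dirty_expire_centisecs = 1000
vm.dirty_ratio = 80
vm.dirty_writeback_centisecs = 30000
```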
Another setting that can be changed is flush-state-interval, which specifies a target number of blocks to process before flushing the chain database to disk. This is needed on Linux machines, and a value of 100000 is recommended. It is not needed on OS X, but can be used if desired.
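In config.ini, that would look like the following fragment (the value is the recommendation from above):

```ini
# Flush the chain database to disk every 100000 blocks (Linux)
flush-state-interval = 100000
```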
The Hive blockchain software is written in C++, so in order to modify the source code you need some understanding of the C++ programming language. Each Hive node runs an instance of this software, so to test your changes you will need to know how to install its dependencies, which can be found in the Hive repo; some knowledge of system administration is also required. Running a testnet has multiple advantages: you can test your scripts or applications without extra spam on the live network, which allows much more flexibility to try new things. Having access to a testnet also helps you to work on new features and possibly submit new or improved pull requests to the official Hive GitHub repository.
The Hive Public Testnet is maintained to aid developers who want to rapidly test their applications. Unless your account was created very recently, you should be able to participate in the testnet using your own mainnet account and keys (though please be careful: if you leak your key on the testnet, your mainnet account will be compromised).
- Chain ID:
- Condenser: testblog.openhive.network
- Wallet: testwallet.openhive.network
Also see: hive.blog/hive-139531/@gtg/hf25-public-testnet-reloaded-rc2
Running a Private Testnet Node
Alternatively, if you would like to run a private local testnet, you can get up and running with docker:
docker run -d -p 8090:8090 inertia/tintoy:latest
For details on running a local testnet, see: Setting Up a Testnet
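Once the container is running, a quick way to confirm that the local testnet answers is to send it the same kind of JSON-RPC request used against public nodes (port 8090 matches the -p mapping in the docker run command above):

```shell
# Query the local testnet node; prints a JSON response once the node is up
curl -s http://localhost:8090 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"condenser_api.get_dynamic_global_properties","params":[],"id":1}'
```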