
Usage

You can check out our video on how to import the data.

Set up two nodes

In order to properly test the network you will need at least two nodes that will connect to each other.

Prerequisites

  1. One node must be set to run as the network bootstrap node. This means that this node will sit and wait for other nodes to connect. In the future we plan to provide several bootstrap nodes for convenience, but if you are testing on your own, just make sure that the BOOTSTRAP_NODE setting in .env is empty for the first node.

  2. Both nodes need to use different wallets; using the same wallet on both nodes may not work. Make sure that both wallets have some test ETH as well as some Alpha TRAC tokens (which can be obtained from us by sending a request to [email protected]).

  3. If you are running both nodes on the same local machine for testing, make sure to set different ports (NODE_PORT and NODE_RPC_PORT) in each node's .env.

  4. Every time you change your configuration in .env, don't forget to run npm run config to apply the changes.

  5. In order to make the initial import, your node must whitelist the IP of the machine requesting the import in .env, e.g. IMPORT_WHITELIST=127.0.0.1 if you are importing from localhost (see the example .env sketch after this list).
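As a minimal sketch, the first (bootstrap) node's .env could cover the settings above like this. Only the variables mentioned in this guide are shown; the port values are illustrative placeholders, and your file will contain other settings (wallet, database, etc.) that are omitted here:

TEST_NETWORK_ENABLED=1
BOOTSTRAP_NODE=
NODE_PORT=5278
NODE_RPC_PORT=8900
IMPORT_WHITELIST=127.0.0.1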

Starting nodes

  1. First, start the network bootstrap node. It will generate its identity on the first run. Since we are using the test network (TEST_NETWORK_ENABLED=1), the identity will be mined quickly. On a real network this takes some time, but only on the first run.

  2. Once the first node is running (you will see the message "Running in seed mode (waiting for connections)"), copy the identity of that node displayed in the terminal. Combine the IP address, port and identity in the following format, https://127.0.0.1:5278/#0bd885a50800346e5fbe777452a83a978d49cdcc, and set it as the BOOTSTRAP_NODE value in the second node's .env (a sketch of the second node's .env follows this list). Pay attention to put # before the identity.

  3. Once you have done that, apply the configuration by running npm run config and start the second node.
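A minimal sketch of the second node's .env, assuming both nodes run on the same machine: BOOTSTRAP_NODE uses the example identity string from step 2, and the port values are placeholders that simply need to differ from the first node's. Wallet and other settings are again omitted:

TEST_NETWORK_ENABLED=1
BOOTSTRAP_NODE=https://127.0.0.1:5278/#0bd885a50800346e5fbe777452a83a978d49cdcc
NODE_PORT=5279
NODE_RPC_PORT=8901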

The nodes will connect to each other, and then you can proceed with the import.

How to run the servers

The OT node consists of two servers: an RPC server and a Kademlia node. Both are started with a single command:

npm start

If you are having trouble executing this command, check this link and run the automatic installation and setup again. Problems can occur during the installation process if the Ubuntu server has a small amount of RAM assigned (512 MB, for example).
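If the installation fails because of low memory, one common workaround (not part of the official setup, so adapt it to your environment) is to add a swap file on the Ubuntu server before re-running the installation:

sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile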

Import data

In order to import data, it has to be properly formatted. A further description of the XML schema can be found in the Data Structure Guidelines on our wiki. Sample files can also be found in the installation folder of the node (/ot-node/importers/Transformation.xml and /ot-node/importers/Transport Ownership Observation.xml).

We have included example files in the project for your reference and testing. Please take a look here for an example of the XML schema.

To import data from the XML file into OriginTrail, send an HTTPS POST request containing the XML file to the following endpoint:

https://YOUR_RPC_NODE_URL:YOUR_RPC_NODE_PORT/import_gs1

The example cURL request is:

curl -v  -F importfile=@importers/Transformation.xml http://YOUR_RPC_NODE_URL:YOUR_RPC_NODE_PORT/import_gs1

Depending on the use case, it might make sense to set up a periodic import of the data into OriginTrail (e.g. a daily cron job exporting the data from your ERP in the requested XML format and POSTing it to your OT node).
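As a hypothetical sketch of such a cron job (the export path and schedule are placeholders, and the curl call mirrors the example above), a crontab entry posting a freshly exported file every day at 02:00 could look like this:

0 2 * * * curl -F importfile=@/path/to/daily_export.xml http://YOUR_RPC_NODE_URL:YOUR_RPC_NODE_PORT/import_gs1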

What happens after the successful import of the data?

You can perform the import on either of the two nodes. The node you import to will act as the DC (Data Creator) node, as we refer to it in our documentation, while the other one will act as a DH (Data Holder).

After you make the import, you will see the full process of the two nodes going through their routine. Simplified, this is what will happen:

  1. First, the DC node will fingerprint the data import on the blockchain, then create an offer and broadcast it to other nodes on the network.
  2. Other nodes (in our case the DH) will answer and send their own encrypted bids.
  3. Once the bidding process is over, the nodes will reveal their bids and the smart contract will choose the nodes selected to do the replication.
  4. Once a DH is chosen, it will receive a replication request containing the imported data.
  5. The DH will store the data, while the DC will generate and send random challenges, which the DH will answer as long as it still has the data.
  6. Once the job period is finished (or earlier, for a job already done), the DH may choose to claim its fee in Alpha TRAC.

This is a quite simplified flow, but you will be able to follow it in the terminal as it happens. You may also access the graph database directly and check whether both nodes have the imported data.
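Assuming your node uses ArangoDB as its graph database (the default in the alpha node; check your .env and the installation guide to confirm), one way to inspect the imported data is through ArangoDB's web interface, which by default listens on port 8529 of the machine running the node:

http://localhost:8529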

If the data is in the database and you have spent some Alpha TRAC (ATRAC) for DH services, you may consider your installation successful.