External Tutorial
This tutorial focuses on how to create an app on top of the OneLedger blockchain protocol. Such an app will be referred to as an "external app" throughout this tutorial.
Blockchain is a Distributed Ledger Technology (DLT) that removes the need for trust between two or more parties by providing a tamper-resistant, immutable, and trusted shared record. Any use case that involves trust between two or more parties is a candidate for blockchain, which is why it comes up regularly in Supply Chain and Logistics, Finance, Insurance, Litigation, Health Care and many other domains.
Technically, a blockchain is a state machine replicated across different nodes that stay in sync through a consensus mechanism.
An external app contains 3 layers in total to utilize the OneLedger blockchain network:
- Data Layer: stores data that will be used in the app
- Transaction Layer: every action in the blockchain system is executed as a transaction; each and every node in the blockchain network runs the transactions in the same order to reach a consistent state.
- RPC Layer: in order to interact with the blockchain network, for example to trigger the transactions in the transaction layer or to query data from the data layer, we need the RPC layer to accept RPC requests from outside the blockchain network.
💡Simply put, an external app uses different transactions to perform actions on the data in its store(s) to achieve certain functionality, and provides RPC functions to support querying the results from outside the blockchain.
💡Once our app is finished, a typical workflow for testing it looks like this:
- Store the data into blockchain network
- Use the SDK to generate and sign the transaction.
- Use the SDK to broadcast the transaction to a fullnode in the blockchain network
- This fullnode will do basic validation first to ensure the signature and address are valid, then pass the transaction to Tendermint
- Then the check stage will be called, which runs the transaction as a simulation to make sure nothing is wrong with it
- After this, the transaction will be added into mempool and broadcasted to the blockchain network
- A validator will select transactions and include them into a new proposed block
- Other validators will check whether the block is valid and try to reach consensus; consensus is reached when at least 2/3 of the validator nodes agree that the proposed block is valid
- Once consensus is reached, the block will be committed, and all transactions inside it will be run again in the deliver stage
- Retrieve the data
- Use the SDK to call the RPC query service and get data from the data store
- Introduction
- 0. Pre-Start
- 1. Create your external app folder
- 2. Create error codes package
- 3. Create data package
- 4. Create action package
- 5. Create RPC package
- 6. Register the app into OneLedger blockchain
- 7. Test our app
Please check this tutorial to set up the development environment: External-Tutorial-Setup
It takes 7 major steps to create an external app:
- Create your external app folder
- Create error codes package
- Create data package
- Create action(transaction) package
- Create RPC package
- Register the app into OneLedger Blockchain
- See if we can start the blockchain network locally without any errors
This tutorial will use an example app called "farm_produce" to show the steps you need to take to build an external app.
This example app will provide the functionality to:
- Insert data belonging to a batch of produce into the blockchain network
- Query data by batch ID
Inside the protocol folder, there is a folder named external_apps; everything that needs to be done will be done in this folder.
The structure inside external_apps is shown below. It includes a common utility folder, an example project folder and an initialization file.
external_apps
├── bid(example project folder)
├── common(common utility folder)
└── init.go
The first step for an external app is to create your own external app folder, just like the bid example. Let's create a folder called farm_produce:
external_apps
├── bid(example project folder)
├── common(common utility folder)
├── farm_produce <---
└── init.go
Create an error package and an error codes file to define all the error codes used in your external app. You will be provided with a range of error codes pre-allocated to this external app to avoid conflicts.
All external app error codes are six-digit numbers starting with 99, and each external app has 100 error codes available, for example 990100 to 990199.
All packages inside your app folder should follow the naming convention of app name + underscore + package name, such as farm_error.
external_apps
├── bid(example project folder)
├── common(common utility folder)
├── farm_produce
│ └── farm_error
│ └── codes.go <---
└── init.go
Later on we will be adding error codes into this file as below (using your dedicated error code range).
const (
ErrFailedInSerialization = 990101
ErrFailedInDeserialization = 990102
...
)
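For reference, here is a rough sketch of what the finished codes.go could look like for this example app, collecting the error codes that later steps of this tutorial refer to. The exact numeric assignments below are illustrative; use whatever codes fall inside the range allocated to your app.
const (
    ErrFailedInSerialization      = 990101
    ErrFailedInDeserialization    = 990102
    ErrFailedToUnmarshal          = 990103
    ErrInvalidBatchID             = 990104
    ErrInvalidFarmID              = 990105
    ErrGettingProduceStore        = 990106
    ErrBatchIDAlreadyExists       = 990107
    ErrInsertingProduce           = 990108
    ErrSettingRecord              = 990109
    ErrGettingRecord              = 990110
    ErrDeletingRecord             = 990111
    ErrGettingProduceBatchInQuery = 990112
)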
The data package takes care of storing and getting the data related to your external app. There will be data structs to represent a single data entry, and data stores to hold the data. You can use multiple data structs or stores if needed.
Data in all stores is saved in the same non-relational key-value (LevelDB) database; by using a different prefix in the key we can differentiate data entries belonging to different stores.
external_apps
├── bid(example project folder)
├── common(common utility folder)
├── farm_produce
│ ├── farm_data <---
│ └── farm_error
└── init.go
external_apps
├── bid(example project folder)
├── common(common utility folder)
├── farm_produce
│ ├── farm_data
│ │ ├── init.go <---
│ └── farm_error
└── init.go
We will define some constants as below; they will be used for ID length verification.
const (
BATCHIDLENGTH = 9
FARMIDLENGTH = 6
)
Create a new file called types.go
in your data package.
external_apps
├── bid(example project folder)
├── common(common utility folder)
├── farm_produce
│ ├── farm_data
│ │ ├── init.go
│ │ └── types.go <---
│ └── farm_error
└── init.go
This file stores basic types on which you want to build methods, such as a simple validation method to check whether a value is valid.
type (
BatchID string
FarmID string
)
Here we check whether their lengths are correct:
func (id BatchID) Err() error {
switch {
case len(id) == 0:
return errors.New("BatchID is empty")
case len(id) != BATCHIDLENGTH:
return errors.New("BatchID length is incorrect")
}
return nil
}
func (id FarmID) Err() error {
switch {
case len(id) == 0:
return errors.New("FarmID is empty")
case len(id) != FARMIDLENGTH:
return errors.New("FarmID length is incorrect")
}
return nil
}
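As a quick illustration of how these helpers are meant to be used (the wrapper function below is purely hypothetical, not part of the app):
// hypothetical example: validate a raw batch ID string before using it
func validateBatchID(raw string) error {
    // "100000001" has 9 characters, so it satisfies BATCHIDLENGTH
    return BatchID(raw).Err()
}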
Create a new file called errors.go
external_apps
├── bid(example project folder)
├── common(common utility folder)
├── farm_produce
│ ├── farm_data
│ │ ├── errors.go <---
│ │ ├── init.go
│ │ └── types.go
│ └── farm_error
└── init.go
Inside this file, we will define errors that can potentially be triggered in the data layer.
var (
ErrFailedInSerialization = codes.ProtocolError{farm_error.ErrFailedInSerialization, "failed to serialize"}
...
)
Here we define errors using the error codes from the farm_error package created in the last step, combining each code with an error message.
Later on we can add more errors to this file.
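As a rough sketch, an expanded errors.go covering the data-layer errors referenced later in this tutorial could look like the following (the messages are illustrative):
var (
    ErrFailedInSerialization   = codes.ProtocolError{farm_error.ErrFailedInSerialization, "failed to serialize"}
    ErrFailedInDeserialization = codes.ProtocolError{farm_error.ErrFailedInDeserialization, "failed to deserialize"}
    ErrSettingRecord           = codes.ProtocolError{farm_error.ErrSettingRecord, "failed to set produce record"}
    ErrGettingRecord           = codes.ProtocolError{farm_error.ErrGettingRecord, "failed to get produce record"}
    ErrDeletingRecord          = codes.ProtocolError{farm_error.ErrDeletingRecord, "failed to delete produce record"}
    ErrInvalidBatchID          = codes.ProtocolError{farm_error.ErrInvalidBatchID, "invalid batch ID"}
    ErrInvalidFarmID           = codes.ProtocolError{farm_error.ErrInvalidFarmID, "invalid farm ID"}
    ErrGettingProduceStore     = codes.ProtocolError{farm_error.ErrGettingProduceStore, "failed to get produce store"}
    ErrBatchIDAlreadyExists    = codes.ProtocolError{farm_error.ErrBatchIDAlreadyExists, "produce batch ID already exists"}
    ErrInsertingProduce        = codes.ProtocolError{farm_error.ErrInsertingProduce, "failed to insert produce batch"}
)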
These are the structs used to represent a single data entry in the external app.
For example, this would be a note if your app were a notebook, or a product if your app were a product management system.
In our farm produce app, this is the information about a batch of produce. Let's create a file called produce.go
external_apps
├── bid(example project folder)
├── common(common utility folder)
├── farm_produce
│ ├── farm_data
│ │ ├── errors.go
│ │ ├── init.go
│ │ ├── produce.go <---
│ │ └── types.go
│ └── farm_error
└── init.go
This file will contain at least a struct definition and a constructor, as below.
type Produce struct {
BatchID BatchID `json:"batchId"`
ItemType string `json:"itemType"`
FarmID FarmID `json:"farmId"`
FarmName string `json:"farmName"`
HarvestLocation string `json:"harvestLocation"`
HarvestDate int64 `json:"harvestDate"`
Classification string `json:"classification"`
Quantity int `json:"quantity"`
Description string `json:"description"`
}
func NewProduce(batchID BatchID, itemType string, farmID FarmID, farmName string, harvestLocation string, harvestDate int64, classification string, quantity int, description string) *Produce {
return &Produce{BatchID: batchID, ItemType: itemType, FarmID: farmID, FarmName: farmName, HarvestLocation: harvestLocation, HarvestDate: harvestDate, Classification: classification, Quantity: quantity, Description: description}
}
The batch ID is the only field that differentiates batches of produce, so this value needs to be unique.
The other fields describe the batch from different perspectives.
🛠If your data object is an entity designed to be owned by or traded/exchanged among users, you can use keys.Address as the data type for that field. This represents an address on the OneLedger blockchain network. The String() method of an address returns the string value of that address.
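For illustration only, a hypothetical owned object and a helper that reads back the owner address as a string might look like this (OwnedProduce and ownerString are not part of the example app):
// hypothetical example: a data object owned by an account on the OneLedger network
type OwnedProduce struct {
    BatchID BatchID      `json:"batchId"`
    Owner   keys.Address `json:"owner"`
}

// ownerString returns the readable form of the owner's address
func ownerString(p OwnedProduce) string {
    return p.Owner.String()
}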
This is the storage struct in which you store the data entries defined in the step above.
In the farm produce example app, the produce store is our data store.
external_apps
├── bid(example project folder)
├── common(common utility folder)
├── farm_produce
│ ├── farm_data
│ │ ├── errors.go
│ │ ├── init.go
│ │ ├── produce.go
│ │ ├── produce_store.go <---
│ │ └── types.go
│ └── farm_error
└── init.go
Inside produce_store.go, we first define our data store:
type ProduceStore struct {
state *storage.State
szlr serialize.Serializer
prefix []byte
}
- state: as mentioned before, the blockchain network is a state machine that runs on different nodes. Data in every store is saved in a key-value database in the state. We need this state to put our data store on.
- szlr: the serializer we use to handle (de)serialization. This is needed since we transmit data to/from the SDK and between different nodes. The serializer serializes objects into JSON (or another standard) strings when transmitting them, and deserializes the strings back into the original objects when receiving them.
- prefix: a byte string placed at the beginning of every key in this store; it is used to differentiate this store from others. You can choose your own prefix value.
And we add the constructor for this store:
func NewProduceStore(state *storage.State, prefix string) *ProduceStore {
return &ProduceStore{
state: state,
szlr: serialize.GetSerializer(serialize.PERSISTENT),
prefix: []byte(prefix),
}
}
For the serializer, we can just use serialize.PERSISTENT, which does the (de)serialization using the JSON standard.
Methods GetState and WithState are used to correctly get the state from, and pass the state to, the store.
func (ps *ProduceStore) GetState() *storage.State {
return ps.state
}
func (ps *ProduceStore) WithState(state *storage.State) data.ExtStore {
ps.state = state
return ps
}
Method Set puts data into the store:
func (ps *ProduceStore) Set(produce *Produce) error {
prefixed := append(ps.prefix, produce.BatchID...)
data, err := ps.szlr.Serialize(produce)
if err != nil {
return ErrFailedInSerialization.Wrap(err)
}
err = ps.state.Set(prefixed, data)
if err != nil {
return ErrSettingRecord.Wrap(err)
}
return nil
}
Here we first append the produce batch ID to the store prefix to form the key, then serialize the produce batch and save it into the store. You can design your own key pattern, as sketched below.
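For example, a hypothetical alternative key layout could also fold the farm ID into the key, so that all batches from one farm share a common key range (composeKey below is illustrative, not part of the example app):
// hypothetical alternative key pattern: prefix + farmId + "_" + batchId
func (ps *ProduceStore) composeKey(farmID FarmID, batchID BatchID) []byte {
    return []byte(string(ps.prefix) + string(farmID) + "_" + string(batchID))
}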
Method Get retrieves data from the store:
func (ps *ProduceStore) Get(batchId BatchID) (*Produce, error) {
produce := &Produce{}
prefixed := append(ps.prefix, []byte(batchId)...)
data, err := ps.state.Get(prefixed)
if err != nil {
return nil, ErrGettingRecord.Wrap(err)
}
err = ps.szlr.Deserialize(data, produce)
if err != nil {
return nil, ErrFailedInDeserialization.Wrap(err)
}
return produce, nil
}
First we create an empty produce batch object and construct the key, then get the corresponding data from the state. After this, we deserialize the data into our object.
Method Exists tells whether a data entry can be found in the data store:
func (ps *ProduceStore) Exists(key BatchID) bool {
prefix := append(ps.prefix, key...)
return ps.state.Exists(prefix)
}
Method Delete removes data from the store:
func (ps *ProduceStore) Delete(batchId BatchID) (bool, error) {
prefixed := append(ps.prefix, batchId...)
res, err := ps.state.Delete(prefixed)
if err != nil {
return false, ErrDeletingRecord.Wrap(err)
}
return res, err
}
The action package handles all the transactions in the app. In the blockchain network, every action is achieved by a transaction, such as sending tokens to another address, creating a domain, or adding data to stores.
When a fullnode in the blockchain network receives a transaction, it first does basic validation, then the transaction is passed into the network. Since the blockchain network is a decentralized system, the consensus established among all the nodes is essential.
After the transaction is passed into the network, it is executed on different nodes to reach consensus. That means EVERYTHING in the transaction must be deterministic: a given transaction must follow exactly the same steps on every node. No random numbers, no random ordering, and so on.
external_apps
├── bid(example project folder)
├── common(common utility folder)
├── farm_produce
│ ├── farm_action
│ │ └── init.go <---
│ ├── farm_data
│ └── farm_error
└── init.go
Inside this init.go, we first define some constants as the action types. You will be provided with 6-digit action type codes in the same range as the error codes, such as 990100 to 990199.
const (
FARM_INSERT_PRODUCE action.Type = 990101
)
And we need to register our action types in the init
function.
func init() {
action.RegisterTxType(FARM_INSERT_PRODUCE, "FARM_INSERT_PRODUCE")
}
Create a new file called errors.go
external_apps
├── bid(example project folder)
├── common(common utility folder)
├── farm_produce
│ ├── farm_action
│ │ ├── errors.go <---
│ │ └── init.go
│ ├── farm_data
│ └── farm_error
└── init.go
Inside this file, we will define errors that can potentially be triggered in the action layer.
var (
ErrFailedToUnmarshal = codes.ProtocolError{farm_error.ErrFailedToUnmarshal, "failed to unmarshal"}
...
)
Here we define errors using the error codes from the farm_error package, combining each code with an error message.
Later on we can add more errors to this file.
external_apps
├── bid(example project folder)
├── common(common utility folder)
├── farm_produce
│ ├── farm_action
│ │ ├── errors.go
│ │ ├── init.go
│ │ └── insert_batch.go <---
│ ├── farm_data
│ └── farm_error
└── init.go
First let's create a file called insert_batch.go. Inside this file we will create one transaction.
As mentioned before, a transaction will be validated first when received by a fullnode, then broadcasted into the blockchain network. This requires the transaction to be written following a specific pattern.
There will be two objects for this transaction; remember to put the correct JSON tags on the fields for deserialization.
type InsertProduce struct {
BatchId farm_data.BatchID `json:"batchId"`
ItemType string `json:"itemType"`
FarmID farm_data.FarmID `json:"farmId"`
FarmName string `json:"farmName"`
HarvestLocation string `json:"harvestLocation"`
HarvestDate int64 `json:"harvestDate"`
Classification string `json:"classification"`
Quantity int `json:"quantity"`
Description string `json:"description"`
Operator keys.Address `json:"operator"`
}
type InsertProduceTx struct {
}
var _ action.Msg = &InsertProduce{}
var _ action.Tx = &InsertProduceTx{}
We use var _ action.Msg = &InsertProduce{} and var _ action.Tx = &InsertProduceTx{} to check at compile time that these two objects implement the action.Msg and action.Tx interfaces respectively. Right now there will be compile errors about missing methods; we will add them below.
Method Signers specifies the signers of this transaction. Every transaction needs to be signed by an address, and the address needs to pay a small fee in OLT tokens to support the network. That means the InsertProduce struct must have at least one field of type keys.Address. Here we use Operator as the signer.
func (i InsertProduce) Signers() []action.Address {
return []action.Address{i.Operator}
}
Method Type
will return the action type we created for this transaction.
func (i InsertProduce) Type() action.Type {
return FARM_INSERT_PRODUCE
}
Method Tags returns a list of key-value pairs containing some of the chosen parameters; this list will be included in the transaction events for later use.
func (i InsertProduce) Tags() kv.Pairs {
tags := make([]kv.Pair, 0)
tag := kv.Pair{
Key: []byte("tx.batchId"),
Value: []byte(i.BatchId),
}
tag1 := kv.Pair{
Key: []byte("tx.type"),
Value: []byte(i.Type().String()),
}
tags = append(tags, tag, tag1)
return tags
}
Methods Marshal and Unmarshal provide (de)serialization in the action layer.
func (i InsertProduce) Marshal() ([]byte, error) {
return json.Marshal(i)
}
func (i *InsertProduce) Unmarshal(bytes []byte) error {
return json.Unmarshal(bytes, i)
}
Method Validate is needed to do basic validation when a fullnode receives the transaction. The receiver of this method is InsertProduceTx.
func (i InsertProduceTx) Validate(ctx *action.Context, signedTx action.SignedTx) (bool, error) {
insertProduce := InsertProduce{}
err := insertProduce.Unmarshal(signedTx.Data)
if err != nil {
return false, errors.Wrap(ErrFailedToUnmarshal, err.Error())
}
//validate basic signature
err = action.ValidateBasic(signedTx.RawBytes(), insertProduce.Signers(), signedTx.Signatures)
if err != nil {
return false, err
}
err = action.ValidateFee(ctx.FeePool.GetOpt(), signedTx.Fee)
if err != nil {
return false, err
}
//Check if batch ID is valid
err = insertProduce.BatchId.Err()
if err != nil {
return false, farm_data.ErrInvalidBatchID.Wrap(err)
}
//Check if farm ID is valid
err = insertProduce.FarmID.Err()
if err != nil {
return false, farm_data.ErrInvalidFarmID.Wrap(err)
}
//Check if operator address is valid oneLedger address
err = insertProduce.Operator.Err()
if err != nil {
return false, errors.Wrap(action.ErrInvalidAddress, err.Error())
}
return true, nil
}
In the Validate method, we first create an object of type InsertProduce and deserialize the transaction data into this object.
Then we validate the basic signatures so that we are sure this transaction is properly signed by the signer.
After that, we validate the fee to make sure the currency used for this transaction is OLT.
At the end, we do some basic validation related to our app logic, such as the address validation and the batch/farm ID validation.
⚠️ Do not do any complex validation that involves accessing a data store in the Validate method; it will raise concurrency problems in the app.
Method ProcessFee processes the fee paid by the transaction signer.
func (i InsertProduceTx) ProcessFee(ctx *action.Context, signedTx action.SignedTx, start action.Gas, size action.Gas) (bool, action.Response) {
return action.BasicFeeHandling(ctx, signedTx, start, size, 1)
}
Methods ProcessCheck and ProcessDeliver represent different stages of including the transaction in the blockchain. As mentioned before, to achieve consensus, a transaction is executed on different nodes.
func (i InsertProduceTx) ProcessCheck(ctx *action.Context, tx action.RawTx) (bool, action.Response) {
ctx.Logger.Debug("ProcessCheck CancelProposalTx transaction for CheckTx", tx)
return runInsertProduce(ctx, tx)
}
func (i InsertProduceTx) ProcessDeliver(ctx *action.Context, tx action.RawTx) (bool, action.Response) {
ctx.Logger.Debug("ProcessDeliver CancelProposalTx transaction for DeliverTx", tx)
return runInsertProduce(ctx, tx)
}
Inside these two methods, we use the function runInsertProduce to perform the actual transaction logic.
func runInsertProduce(ctx *action.Context, tx action.RawTx) (bool, action.Response) {
insertProduce := InsertProduce{}
err := insertProduce.Unmarshal(tx.Data)
if err != nil {
return helpers.LogAndReturnFalse(ctx.Logger, ErrFailedToUnmarshal, insertProduce.Tags(), err)
}
//1. get produce store
produceStore, err := GetProduceStore(ctx)
if err != nil {
return helpers.LogAndReturnFalse(ctx.Logger, farm_data.ErrGettingProduceStore, insertProduce.Tags(), err)
}
//2. check if there is produce batch with same batch ID
if produceStore.Exists(insertProduce.BatchId) {
return helpers.LogAndReturnFalse(ctx.Logger, farm_data.ErrBatchIDAlreadyExists, insertProduce.Tags(), err)
}
//3. construct new produce batch
produceBatch := farm_data.NewProduce(
insertProduce.BatchId,
insertProduce.ItemType,
insertProduce.FarmID,
insertProduce.FarmName,
insertProduce.HarvestLocation,
insertProduce.HarvestDate,
insertProduce.Classification,
insertProduce.Quantity,
insertProduce.Description,
)
//4. insert the produce batch
err = produceStore.Set(produceBatch)
if err != nil {
return helpers.LogAndReturnFalse(ctx.Logger, farm_data.ErrInsertingProduce, insertProduce.Tags(), err)
}
return helpers.LogAndReturnTrue(ctx.Logger, insertProduce.Tags(), "insert_produce_success")
}
First we deserialize the transaction into an InsertProduce struct, as done in the Validate method.
🛠 To return an error or info from the run function, we can utilize the LogAndReturnFalse and LogAndReturnTrue functions from the helpers package.
After this we get our store from the context. This part is wrapped in the GetProduceStore function in helper.go; you can follow the logic in this function to get your own external data store.
external_apps
├── bid(example project folder)
├── common(common utility folder)
├── farm_produce
│ ├── farm_action
│ │ ├── errors.go
│ │ ├── helper.go <---
│ │ ├── init.go
│ │ └── insert_batch.go
│ ├── farm_data
│ └── farm_error
└── init.go
func GetProduceStore(ctx *action.Context) (*farm_data.ProduceStore, error) {
store, err := ctx.ExtStores.Get("extProduceStore")
if err != nil {
return nil, err
}
produceStore, ok := store.(*farm_data.ProduceStore)
if !ok {
return nil, farm_data.ErrGettingProduceStore
}
return produceStore, nil
}
Store names for an external app start with ext; this will be elaborated in step 6 when we register the external app into the OneLedger blockchain main application.
Then we assert the store to our store type.
After this, let's come back to runInsertProduce: we check whether there is already a batch with the same ID; if there is, the transaction is aborted.
Then we construct our new batch object and insert it into the store.
The RPC package handles all the query requests pointing to the supported query services. We will add one query service in this package for our app.
external_apps
├── bid(example project folder)
├── common(common utility folder)
├── farm_produce
│ ├── farm_action
│ ├── farm_data
│ ├── farm_error
│ └── farm_rpc <---
└── init.go
external_apps
├── bid(example project folder)
├── common(common utility folder)
├── farm_produce
│ ├── farm_action
│ ├── farm_data
│ ├── farm_error
│ └── farm_rpc
│ └── farm_request_types.go <---
└── init.go
Inside this file you can define your request and reply types.
type GetBatchByIDRequest struct {
BatchID farm_data.BatchID `json:"batchId"`
}
type GetBatchByIDReply struct {
ProduceBatch farm_data.Produce `json:"produceBatch"`
Height int64 `json:"height"`
}
Here we use the batch ID to query the information about a specific batch, and return it along with the current block height.
external_apps
├── bid(example project folder)
├── common(common utility folder)
├── farm_produce
│ ├── farm_action
│ ├── farm_data
│ ├── farm_error
│ └── farm_rpc
│ ├── errors.go <---
│ └── farm_request_types.go
└── init.go
Inside this file (errors.go) you can define your RPC service errors, similar to those in the other layers.
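For instance, the one RPC error used later in this tutorial could be defined roughly as follows (the message is illustrative):
var (
    ErrGettingProduceBatchInQuery = codes.ProtocolError{farm_error.ErrGettingProduceBatchInQuery, "failed to get produce batch in query"}
)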
external_apps
├── bid(example project folder)
├── common(common utility folder)
├── farm_produce
│ ├── farm_action
│ ├── farm_data
│ ├── farm_error
│ └── farm_rpc
│ ├── errors.go
│ ├── farm_request_types.go
│ └── farm_rpc_query.go <---
└── init.go
In this farm_rpc_query.go file, we first need to define our service, its constructor, and the Name function, which will be used in the registration part in step 6.
type Service struct {
balances *balance.Store
currencies *balance.CurrencySet
logger *log.Logger
produceStore *farm_data.ProduceStore
}
func Name() string {
return "farm_query"
}
func NewService(balances *balance.Store, currencies *balance.CurrencySet,
logger *log.Logger, produceStore *farm_data.ProduceStore) *Service {
return &Service{
currencies: currencies,
balances: balances,
logger: logger,
produceStore: produceStore,
}
}
And we will create our query service.
func (svc *Service) GetBatchByID(req GetBatchByIDRequest, reply *GetBatchByIDReply) error {
batch, err := svc.produceStore.Get(req.BatchID)
if err != nil {
return ErrGettingProduceBatchInQuery.Wrap(err)
}
*reply = GetBatchByIDReply{
ProduceBatch: *batch,
Height: svc.produceStore.GetState().Version(),
}
return nil
}
In the service above, we use svc.produceStore.GetState().Version() to get the current block height and put it into the reply.
When we call our service using the customQuery supported by the SDK, the custom method name will be farm_query.GetBatchByID.
At this point, all the functionalities are done, but they are not connected with the main application yet. We need to register all the layers into it.
external_apps
├── bid(example project folder)
├── common(common utility folder)
├── farm_produce
│ ├── farm_action
│ ├── farm_data
│ ├── farm_error
│ ├── farm_rpc
│ └── init.go <---
└── init.go
Inside this init.go
(under farm_produce
directory) we will add a function called LoadAppData
func LoadAppData(appData *common.ExtAppData) {
logWriter := os.Stdout
logger := log.NewLoggerWithPrefix(logWriter, "extApp").WithLevel(log.Level(4))
//load txs
insertProduce := common.ExtTx{
Tx: farm_action.InsertProduceTx{},
Msg: &farm_action.InsertProduce{},
}
appData.ExtTxs = append(appData.ExtTxs, insertProduce)
//load stores
if dupName, ok := appData.ExtStores["extProduceStore"]; ok {
logger.Errorf("Trying to register external store %s failed, same name already exists", dupName)
return
} else {
appData.ExtStores["extProduceStore"] = farm_data.NewProduceStore(storage.NewState(appData.ChainState), "extFarmPrefix")
}
//load services
balances := balance.NewStore("b", storage.NewState(appData.ChainState))
olt := balance.Currency{Id: 0, Name: "OLT", Chain: chain.ONELEDGER, Decimal: 18, Unit: "nue"}
currencies := balance.NewCurrencySet()
err := currencies.Register(olt)
if err != nil {
logger.Errorf("failed to register currency %s", olt.Name, err)
return
}
appData.ExtServiceMap[farm_rpc.Name()] = farm_rpc.NewService(balances, currencies, logger, farm_data.NewProduceStore(storage.NewState(appData.ChainState), "extFarmPrefix"))
}
First we create a log writer and wrap it in a logger, so that we can use it to log info and errors in the app.
Then we create the transaction objects, wrap each pair of them into a common.ExtTx struct, and add them to appData.ExtTxs.
After this we load all our external stores into the appData.ExtStores map; the keys start with ext as mentioned before.
Next, we load our RPC services.
If your app needs to check balances in the query part, you can pull those stores from the chain state using balances := balance.NewStore("b", storage.NewState(appData.ChainState)). Here b is the fixed prefix for the balance store.
We also need to register OLT as the currency used in the external app.
And finally, in the init.go of external_apps, we need to add one line to the init function: common.Handlers.Register(farm_produce.LoadAppData). With this, the registration part is finished.
external_apps
├── bid(example project folder)
├── common(common utility folder)
├── farm_produce
│ ├── farm_action
│ ├── farm_data
│ ├── farm_error
│ ├── farm_rpc
│ └── init.go
└── init.go <---
func init() {
//register new external app handler function in the last line
common.Handlers.Register(farm_produce.LoadAppData)
}
Go to the protocol folder
cd $OLROOT/protocol
Compile the code; there should be no errors.
make install_c
Start the blockchain network
make reset
Go to the node folder
cd $OLDATA/devnet/0-Node
Check consensus log
tail -f consensus.log
Wait for a while; if you see the height in the latest messages increasing, that means the network is running.
This part is covered in the SDK tutorial.
Every successful transaction returns a unique transaction hash, which we get back after sending the transaction through the SDK.
We can query the transaction itself directly (this has to be done in bash, not zsh, because zsh treats the ? in the URL as a glob pattern):
curl localhost:26600/tx?hash=YourTxHash
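If you would rather stay in zsh, quoting the URL also works, since it stops the shell from expanding the ?:
curl "localhost:26600/tx?hash=YourTxHash"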
You will get a result similar to the one below:
{
"jsonrpc": "2.0",
"id": -1,
"result": {
"hash": "845E8126F6A242F1A77B873DA9BAA8D1950EEC1D61B80B3C823D353996FE9E42",
"height": "8",
"index": 1,
"tx_result": {
"code": 0,
"data": null,
"log": "",
"info": "",
"gasWanted": "400000",
"gasUsed": "18500",
"events": [
{
"type": "insert_produce_success",
"attributes": [
{
"key": "dHguYmF0Y2hJZA==",
"value": "MTAwMDAwMDAx"
},
{
"key": "dHgudHlwZQ==",
"value": "RkFSTV9JTlNFUlRfUFJPRFVDRQ=="
}
]
}
],
"codespace": ""
},
"tx": "eyJ0eXBlIjo5OTAxMDEsImRhdGEiOiJleUppWVhSamFFbGtJam9pTVRBd01EQXdNREF4SWl3aWFYUmxiVlI1Y0dVaU9pSmhjSEJzWlhNaUxDSm1ZWEp0U1dRaU9pSkdNVEl6TkRVaUxDSm1ZWEp0VG1GdFpTSTZJbk4xYm01NUlpd2lhR0Z5ZG1WemRFeHZZMkYwYVc5dUlqb2lhR2xuYUNCbmNtOTFibVFpTENKb1lYSjJaWE4wUkdGMFpTSTZNVFl3TURNMk1EYzJNU3dpWTJ4aGMzTnBabWxqWVhScGIyNGlPaUpCUVVFaUxDSnhkV0Z1ZEdsMGVTSTZNVEF3TENKa1pYTmpjbWx3ZEdsdmJpSTZJaUlzSW05d1pYSmhkRzl5SWpvaU1HeDBNREppWlRKaU5UUmtNR0ZtTWprMU5HVmtZMlEzTmpBek1XRmhPRFE1TlRabE5qYzRPRGsxWXlKOSIsImZlZSI6eyJwcmljZSI6eyJjdXJyZW5jeSI6Ik9MVCIsInZhbHVlIjoiMTAwMDAwMDAwMCJ9LCJnYXMiOjQwMDAwMH0sIm1lbW8iOiI2MDdjZjM5MC1mOWU1LTExZWEtODc1Zi0zMzZhODg0ODZjOWIiLCJzaWduYXR1cmVzIjpbeyJTaWduZXIiOnsia2V5VHlwZSI6ImVkMjU1MTkiLCJkYXRhIjoid2tpRE8wYzdjUWJPWmtFQTZIdTJHc2RKTXB5TUJneUJHRENjL3VvMmY5QT0ifSwiU2lnbmVkIjoiK1dVSEdkTjJvWVBveUlDbVhDZ09pTWhYUXczb1FISHM5Y24wY1gwdi8vZDExQ3FpRFA0a1VTdzNoYlZJNFJkaXlWU3FyTXdTdS9SZkZNMUczZ0hLQ3c9PSJ9XX0="
We can also decode the tx field (the last base64 part); the result will be similar to the below:
{"type":990101,"data":"eyJiYXRjaElkIjoiMTAwMDAwMDAxIiwiaXRlbVR5cGUiOiJhcHBsZXMiLCJmYXJtSWQiOiJGMTIzNDUiLCJmYXJtTmFtZSI6InN1bm55IiwiaGFydmVzdExvY2F0aW9uIjoiaGlnaCBncm91bmQiLCJoYXJ2ZXN0RGF0ZSI6MTYwMDM2MDc2MSwiY2xhc3NpZmljYXRpb24iOiJBQUEiLCJxdWFudGl0eSI6MTAwLCJkZXNjcmlwdGlvbiI6IiIsIm9wZXJhdG9yIjoiMGx0MDJiZTJiNTRkMGFmMjk1NGVkY2Q3NjAzMWFhODQ5NTZlNjc4ODk1YyJ9","fee":{"price":{"currency":"OLT","value":"1000000000"},"gas":400000},"memo":"607cf390-f9e5-11ea-875f-336a88486c9b","signatures":[{"Signer":{"keyType":"ed25519","data":"wkiDO0c7cQbOZkEA6Hu2GsdJMpyMBgyBGDCc/uo2f9A="},"Signed":"+WUHGdN2oYPoyICmXCgOiMhXQw3oQHHs9cn0cX0v//d11CqiDP4kUSw3hbVI4RdiyVSqrMwSu/RfFM1G3gHKCw=="}]}
And after decoding the data
field, we can get all the data in this transaction:
{"batchId":"100000001","itemType":"apples","farmId":"F12345","farmName":"sunny","harvestLocation":"high ground","harvestDate":1600360761,"classification":"AAA","quantity":100,"description":"","operator":"0lt02be2b54d0af2954edcd76031aa84956e678895c"}