overhide.io blog
— effortless login — from social to wallet — for your Web project
— authorize extras with hassle-free "in-app-purchases" (IAPs) in dollars and cryptos
— back-end code optional and front-end only OK... write minimal code once but support dollars and coins
— as unliable as possible — no custody of user data for logins and purchases
21 May 2019
by Jakub Ner

Reference Implementation Deployment as of May 2019

This is a discussion of overhide deployments for services running in support of https://overhide.io and https://ledger.overhide.io; as well as the future direction of components yet to be completed.

All overhide services are deployed on a Kubernetes cluster running in the cloud.

Remuneration API

The remuneration API implementations are simple, standalone cloud-native / 12-factor services. The API doesn’t require session state, and requests are round-robin load balanced. The APIs depend on underlying ledgers for transaction indexing; the services either use managed data stores or leverage third parties for that indexing, and are otherwise stateless.
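
To make the statelessness concrete, below is a minimal client-side sketch of calling such a service; the endpoint path, parameters, and response handling are illustrative assumptions, not the published API specification.

  import fetch from 'node-fetch';

  // Assumed base URL of one remuneration provider; any load-balanced instance
  // behind it can answer, since no request depends on server-side session state.
  const API_BASE = 'https://ledger.overhide.io';

  // Hypothetical call: list transactions from one address to another.
  async function getTransactions(fromAddress: string, toAddress: string): Promise<unknown> {
    const response = await fetch(`${API_BASE}/get-transactions/${fromAddress}/${toAddress}`);
    if (!response.ok) throw new Error(`remuneration API error: ${response.status}`);
    return response.json();
  }

Because every request is self-contained, round-robin routing and retries need no coordination between instances.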

overhide-ledger

The service is available at https://ledger.overhide.io.

The deployment is modelled below.

Any cloud-native / 12-factor Node.js instance (green) of the service can be hit with requests in any order, as the instances are round-robin load-balanced and keep no session state. The layer-7 load-balancer (nginx reverse-proxy) terminates SSL for incoming TCP and keeps persistent sessions on the cluster.

Each instance has simple HTTP liveness and readiness probing for Kubernetes cluster management.
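
As a rough illustration, the probe endpoints on each instance can be plain Express handlers along these lines; the paths and the dependency check are assumptions for this sketch, not the actual service code.

  import express from 'express';

  const app = express();

  // Hypothetical dependency check -- e.g. ping the managed database.
  async function dependenciesReachable(): Promise<boolean> {
    try { /* e.g. await db.query('SELECT 1') */ return true; }
    catch { return false; }
  }

  // Liveness probe: the process is up and serving HTTP.
  app.get('/status', (_req, res) => res.status(200).json({ healthy: true }));

  // Readiness probe: only take traffic once downstream dependencies respond.
  app.get('/ready', async (_req, res) => {
    res.status((await dependenciesReachable()) ? 200 : 503).end();
  });

  app.listen(8080);

The Kubernetes deployment then points its httpGet liveness and readiness probes at those paths.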

Overall liveness probing and alerting is done by https://uptimerobot.com, which sends email alerts should the cluster stop answering.

The cluster is instrumented with Prometheus; visualization for tuning and introspecting the cluster is done with Grafana (Azure Data Explorer as of 2020).
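
To give a sense of what that instrumentation looks like on the Node.js side, a typical approach is the prom-client package exposing a /metrics endpoint for Prometheus to scrape; the metric names below are illustrative assumptions, not the service's actual metrics.

  import express from 'express';
  import client from 'prom-client';

  const app = express();

  // Default Node.js process metrics: event loop lag, memory, GC, etc.
  client.collectDefaultMetrics();

  // Example custom counter; name and labels are illustrative assumptions.
  const requests = new client.Counter({
    name: 'overhide_requests_total',
    help: 'API requests handled by this instance',
    labelNames: ['route'],
  });

  app.use((req, _res, next) => { requests.inc({ route: req.path }); next(); });

  // Prometheus scrapes this endpoint; Grafana visualizes the stored series.
  app.get('/metrics', async (_req, res) => {
    res.set('Content-Type', client.register.contentType);
    res.end(await client.register.metrics());
  });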

The ledger data is written to a managed database independent of the cluster. Replication, backups, and availability guarantees are as per our agreement with the database provider.

Faded red indicates future considerations, should the need arise based on observing the cluster. Since overhide-ledger has a high ratio of reads to writes, it may make sense to add database read replicas fronted by a read cache.
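
A cache-aside read path under those considerations might look roughly like the sketch below; in practice the cache would be a shared store such as Redis rather than an in-process Map, and the key scheme and TTL are illustrative assumptions.

  // Cache-aside read path, assuming reads can fall through to a read replica.
  type DbReader = { fetchReceipt(id: string): Promise<string | null> };

  const TTL_MS = 5 * 60 * 1000;  // assumed 5-minute freshness window
  const readCache = new Map<string, { value: string; expires: number }>();

  async function getReceipt(receiptId: string, db: DbReader): Promise<string | null> {
    const hit = readCache.get(receiptId);
    if (hit && hit.expires > Date.now()) return hit.value;            // served from cache

    const value = await db.fetchReceipt(receiptId);                   // read replica / primary
    if (value !== null) {
      readCache.set(receiptId, { value, expires: Date.now() + TTL_MS });
    }
    return value;
  }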

Managed database backups may not be granular enough to fully guarantee that all ledger-processed transaction receipts are captured and none are lost during a database fail-over. As such, we run a logs server, sitting in our Kubernetes cluster, that writes out a log stream of all transactions. At the time of this writing, reconciling the database against the logs server in the event of an outage would be a manual process.
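
Roughly, the idea behind that log stream: every processed receipt is also appended to an append-only log that can be replayed line by line against the database after a fail-over. The file path and receipt shape below are illustrative assumptions.

  import { createWriteStream } from 'fs';

  // Append-only, newline-delimited JSON stream of processed transaction receipts.
  const receiptLog = createWriteStream('/var/log/overhide/receipts.ndjson', { flags: 'a' });

  function logReceipt(receipt: { id: string; from: string; to: string; amountCents: number; timestamp: string }) {
    // One JSON object per line keeps reconciliation a line-by-line comparison
    // against the database, even if that comparison is manual for now.
    receiptLog.write(JSON.stringify(receipt) + '\n');
  }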

Ethereum

The service is available at:

The deployment is modelled below.

Any cloud-native / 12-factor Node.js instance (green) of the service can be hit with requests in any order, as the instances are round-robin load-balanced and keep no session state. The layer-7 load-balancer (nginx reverse-proxy) terminates SSL for incoming TCP and keeps persistent sessions on the cluster.

Each instance has simple HTTP liveness and readiness probing for Kubernetes cluster management.

Overall liveness probing and alerting is done by https://uptimerobot.com, which sends email alerts should the cluster stop answering.

The cluster is instrumented with Prometheus; visualization for tuning and introspecting the cluster is done with Grafana (Azure Data Explorer as of 2020).

The service is a simple abstraction leveraging https://etherscan.io. There is no data stored by the service.
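
The gist of that abstraction, sketched under assumptions: query Etherscan's per-address transaction listing and filter for transfers to the other address. The module=account&action=txlist query is Etherscan's public API; the wrapping function and trimmed response shape are illustrative.

  import fetch from 'node-fetch';

  const ETHERSCAN = 'https://api.etherscan.io/api';
  const API_KEY = process.env.ETHERSCAN_API_KEY ?? '';

  // List value transfers from one address to another by filtering Etherscan's
  // per-address transaction list; the returned shape here is an illustrative trim.
  async function transactionsBetween(fromAddress: string, toAddress: string) {
    const url = `${ETHERSCAN}?module=account&action=txlist&address=${fromAddress}`
              + `&startblock=0&endblock=99999999&sort=asc&apikey=${API_KEY}`;
    const body = await (await fetch(url)).json() as
      { result: Array<{ to: string; value: string; timeStamp: string }> };
    return body.result
      .filter(tx => tx.to.toLowerCase() === toAddress.toLowerCase())
      .map(tx => ({ value: tx.value, timeStamp: tx.timeStamp }));
  }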

Scaling

As of this writing the Node.js instances go straight to https://etherscan.io. As such, this remuneration provider is limited by https://etherscan.io rate limits.

If these rate limits prove insufficient in the future, we will cache Etherscan results locally in our cluster: see the faded-red nodes in the diagram above. The cache database can be read-replicated as needed, based on observation, as the cluster scales.
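
A rough sketch of that local caching: key results by address pair and keep them for a short freshness window so repeated queries don't count against Etherscan's rate limit. The TTL and in-process storage below stand in for the cache database shown in the diagram.

  // Short-TTL cache keyed by address pair; `fetcher` stands in for the
  // Etherscan-backed lookup (e.g. transactionsBetween in the sketch above).
  const CACHE_TTL_MS = 30 * 1000;  // assumed 30-second freshness window
  const resultCache = new Map<string, { result: unknown; expires: number }>();

  async function cachedLookup(
      fromAddress: string,
      toAddress: string,
      fetcher: (from: string, to: string) => Promise<unknown>): Promise<unknown> {
    const key = `${fromAddress.toLowerCase()}:${toAddress.toLowerCase()}`;
    const hit = resultCache.get(key);
    if (hit && hit.expires > Date.now()) return hit.result;       // no upstream call

    const result = await fetcher(fromAddress, toAddress);
    resultCache.set(key, { result, expires: Date.now() + CACHE_TTL_MS });
    return result;
  }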