On the heels of our major update to the Pocket Portal, we’ve been sharing some of the key changes that we’re bringing to the table, including a new Pay As You Go plan, other flexible service options for developers, and POKT rewards that help eliminate sunk costs.
In this post, we’ll take a closer look at how the updated Pocket Portal and our “cherry picker” mechanism are also helping developers lock in low latency across multiple chains and regions.
The Pocket Portal is our one-stop shop for developers to access highly performant infrastructure for their dApps. As we’ve built and improved v0, and laid the groundwork for v1, latency has always been top of mind for us. Low latency plays a crucial role in delivering a positive experience for end users, and it depends on RPC infrastructure that can scale properly with your dApp.
Pocket’s “cherry picker” mechanism plays an important part in achieving the latency that developers expect from the best infrastructure solutions.
Optimizing the Portal API
The Portal API is the gateway that we use to connect dApps with nodes. Through the API, data relays are continuously routed to sessions that contain a collection of nodes to service the relays (sessions are about an hour long). In that process, the Portal API leverages specific Quality of Service (QoS) checks to ensure that nodes are staked correctly and on the latest blockheight for corresponding chains. This acts as one layer of “checks” to connect dApps with the healthiest Service Nodes to process their data relays.
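To make that first layer of checks concrete, here is a minimal sketch of session-level QoS filtering. The `Node` fields and the `qos_filter` helper are illustrative assumptions, not the Portal API's actual internals; the real checks cover more than stake status and block height.

```python
from dataclasses import dataclass

@dataclass
class Node:
    address: str
    staked: bool       # is the node staked correctly for this chain?
    block_height: int  # the node's reported block height

def qos_filter(session_nodes: list[Node], chain_height: int) -> list[Node]:
    """Keep only nodes that pass the basic QoS checks: staked
    correctly and synced to the chain's latest block height."""
    return [
        n for n in session_nodes
        if n.staked and n.block_height >= chain_height
    ]
```

In this sketch, any node that is unstaked or lagging behind the chain head is simply excluded from the session's candidate pool before relays are routed.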
In addition to QoS checks like these, we’ve designed our cherry picker to be another layer of checks that further sharpens the focus on latency. More specifically, the cherry picker targets three service problems:
- Underperforming nodes
- Slow nodes
- Nodes failing under higher load
To address these challenges, the cherry picker measures the latency of each node’s relay responses and compares it to the other nodes in the same session. Two data points are gathered: the latency of successful relays, and the number of failures over the previous 5 minutes. Based on this success time and success rate, a node’s ID is copied into a weighting array anywhere from 1 (least performant nodes) to 10 (most performant nodes) times – similar to a node entering a certain number of tickets into a “raffle” to earn the right to provide service. The cherry picker then uses this weighting array to pseudo-randomly select a node to service relays.
Generally speaking, slower nodes are pushed down the rankings and limited in terms of relays that are routed to them, while better optimized nodes are rewarded with a higher chance to service more relays during the session. There are also some more specific checks and balances in place: nodes with less than a 95% success rate are automatically given lower weighting, while nodes with latency under 150ms are automatically prioritized.
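The raffle described above can be sketched in a few lines of Python. The exact scoring internals aren't spelled out here, so the latency-to-weight mapping below is an assumption; only the endpoints follow the stated rules (under 95% success rate gets the lowest weighting, under 150ms latency gets the highest).

```python
import random

def weight_for(avg_latency_ms: float, success_rate: float) -> int:
    """Map a node's session metrics to 1..10 raffle tickets.
    The linear middle band is an assumed interpolation."""
    if success_rate < 0.95:      # unreliable nodes get the lowest weighting
        return 1
    if avg_latency_ms < 150:     # fast nodes are automatically prioritized
        return 10
    # Assumed: weight tapers off as latency grows beyond 150ms
    return max(1, min(10, int(10 - (avg_latency_ms - 150) / 100)))

def build_raffle(session_metrics: dict) -> list:
    """Copy each node's ID into the weighting array `weight` times."""
    raffle = []
    for node_id, (latency_ms, success_rate) in session_metrics.items():
        raffle.extend([node_id] * weight_for(latency_ms, success_rate))
    return raffle

def pick_node(raffle: list) -> str:
    """Pseudo-randomly select a node to service the next relay."""
    return random.choice(raffle)
```

With metrics like `{"nodeA": (120, 0.99), "nodeB": (400, 0.97), "nodeC": (200, 0.90)}`, nodeA holds 10 tickets, nodeB 7, and nodeC only 1 – so slower or failing nodes can still be picked, but far less often.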
Furthermore, the Portal API is deployed in 16 different regions, and each region runs its own cherry picker. In other words, a node that is closer to the region where the data is being requested will have lower latency, perform better in the weighting array, and therefore have a better chance to service relays.
Through these checks and the cherry picker’s regional focus, we’re able to ensure that developers get the latency and performance they demand through the Pocket Portal, and end users benefit from fast, uninterrupted access to their dApps.
Performance in Action
You can see the results from some of the recent speed tests we’ve run on several of our top relay chains, across some of the most trafficked regions.
For example, we’ve seen that relays in the US East and Germany regions typically clock in at less than 200ms, with US West and Singapore not far behind that performance (often around 200 to 300ms). Overall, the average of all of the chain/region results shown below is about 215 milliseconds – stacking up well against other, more centralized infrastructure providers.
Here’s a closer look at the numbers across different chains and regions. Additionally, our friends at Thunderhead have put together a handy tool to compare the latency of different RPC providers across different regions and chains.
Web3 developers can get RPC endpoints up and running in the Pocket Portal within minutes, and instantly tap into low latency performance across multiple chains and regions.
Take your dApp to the next level, with optimized performance built right into its foundation!