How Kuhirō CDN works


Kuhirō CDN processes end-user HTTPS requests with customer-defined functions that access a local data layer, and all of it happens at the CDN Edge
Below is a brief explanation of how we do it :)

# 1: End User HTTPS Requests

End-user devices (ranging from smartphones to IoT devices) make HTTPS requests to Kuhirō's distributed cloud presence
Kuhirō processes these requests and replies with an HTTPS response


# 2: Serverless Stack with local Data Layer

End-user requests are handled by customer-defined functions (referred to as serverless functions)
These functions access a local data layer to read and write state
100% of the processing is done locally at the Edge
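To make this concrete, here is a minimal sketch of what a customer-defined handler might look like. The function name, event shape, and `data` object are illustrative assumptions, not Kuhirō's actual API; `data` stands in for the local data layer at the Edge.

```python
# Hypothetical handler sketch (not Kuhirō's real API).
# 'event' carries the request; 'data' stands in for the local data layer.
def hello(event, data):
    # Read state from the local data layer, right at the Edge
    count = data.get("visits", 0)
    # Write updated state back, still locally -- no round trip to a central DC
    data["visits"] = count + 1
    # Return a response object that becomes the HTTPS response
    return {"status": 200, "body": f"Hello, visitor #{count + 1}"}
```

Both the read and the write happen against local state, which is what lets the whole request complete at the Edge.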


# 3: Request to Response Flow

Flow of an HTTPS request:
  1. The HTTPS request is received by Kuhirō's servers
  2. The customer is determined from the request hostname (e.g. customer: 'x')
  3. A lookup in the customer's handler table (keyed by the request's 'path') resolves to a handler function,
    e.g. path '/hello' resolves to handler 'hello.Go()'
  4. The request (path, query string, headers, POST body, etc.) is transformed into an event object
  5. The event object is passed to the handler function
  6. The handler function runs, accessing the local data layer for state
  7. The handler function returns a response object
  8. The response object is transformed into an HTTPS response
  9. The end user receives the HTTPS response
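The steps above can be sketched in a few lines. All names here (the handler table, `handle_request`, the event fields) are hypothetical stand-ins for the real dispatch logic:

```python
# Illustrative sketch of the request-to-response flow above.
HANDLERS = {  # step 3: per-customer handler table, keyed by path
    "x": {"/hello": lambda event: {"status": 200, "body": "hello"}},
}

def handle_request(hostname, path, headers, body):
    customer = hostname.split(".")[0]         # step 2: customer from hostname
    handler = HANDLERS[customer][path]        # step 3: path -> handler function
    event = {"path": path,                    # step 4: request -> event object
             "headers": headers,
             "body": body}
    response = handler(event)                 # steps 5-7: run handler, get response object
    return response                           # steps 8-9: serialized into the HTTPS response
```

A real dispatcher would also handle unknown customers and paths, TLS termination, and serialization, which are omitted here for brevity.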


# 4: Geographically Distributed Presence

Kuhirō has mini data centers all over the world
End users are directed to the lowest-latency MiniDC via latency-based DNS
NOTE: To avoid crowding, the graphic above shows only a small number of MiniDCs; Kuhirō has many, located in major population centers across the globe
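The routing decision itself is simple to picture: pick the MiniDC with the lowest measured latency for the requesting user. The function and the latency figures below are made up for illustration; the real work is done by latency-based DNS, not application code:

```python
# Illustrative latency-based routing: return the MiniDC with the
# lowest measured latency for this user (numbers are made up).
def nearest_minidc(latencies_ms):
    return min(latencies_ms, key=latencies_ms.get)

# e.g. a user in Europe sees these round-trip times:
choice = nearest_minidc({"us-east": 95, "eu-west": 12, "ap-south": 180})
# -> "eu-west"
```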


# 5: Synchronized Data Layer

What makes Kuhirō truly special is how data is synchronized between geographically distant MiniDCs without introducing latency
Kuhirō uses a newer form of replication based on CRDTs (Conflict-free Replicated Data Types)
The end result for the customer's business logic:
  1. No code changes needed (what runs centralized will run geographically distributed)
  2. Requests are processed very close to your global user base
  3. Real-time data can be accessed and modified at the edge
  4. This all adds up to amazingly low-latency request processing and fantastic robustness
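To give a flavor of why CRDTs avoid coordination latency, here is one of the simplest CRDTs, a grow-only counter (G-Counter), sketched in a few lines. This is a textbook example, not Kuhirō's implementation: each MiniDC increments only its own slot, and replicas merge by taking the element-wise maximum, so merges commute and all replicas converge without waiting on each other.

```python
# G-Counter: a simple, well-known CRDT. Each replica (MiniDC) owns one slot.
def increment(counter, minidc):
    # A MiniDC only ever increments its own slot
    counter[minidc] = counter.get(minidc, 0) + 1

def merge(a, b):
    # Element-wise max: commutative, associative, idempotent,
    # so replicas can sync in any order and still converge
    return {dc: max(a.get(dc, 0), b.get(dc, 0)) for dc in a.keys() | b.keys()}

def value(counter):
    # The counter's value is the sum over all replicas' slots
    return sum(counter.values())
```

Because `merge` gives the same result regardless of the order replicas exchange state, no MiniDC ever has to block a user request waiting for remote coordination.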


# 6: Game Changer

In the past, companies have relied on centralized architectures, which have shortcomings:
  • High latency: all requests go to a single physical location
  • Single point of failure: if the central data center is saturated or dies, you are dead in the water
Kuhirō's distributed architecture is the future:
  • Low latency: your global user base is always close to a Kuhirō mini data center
  • Highly robust: if a MiniDC is saturated or goes down, end users are directed to the next-closest MiniDC and business goes on uninterrupted
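The failover behavior described above can be sketched as: sort MiniDCs by latency and pick the first healthy one. The function and inputs are illustrative assumptions, not Kuhirō's actual routing code:

```python
# Illustrative failover: direct the user to the lowest-latency MiniDC
# that is currently healthy; fall through to the next-closest otherwise.
def route(latencies_ms, healthy):
    for dc in sorted(latencies_ms, key=latencies_ms.get):
        if healthy.get(dc):
            return dc
    return None  # no healthy MiniDC reachable
```

With the closest MiniDC marked unhealthy, the user is simply routed to the next-closest one, so a single MiniDC failure never takes the service down.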


