In the past week we faced an infrastructure challenge that brought our HoloPort auto-update service offline. We managed to resurrect the service in a way that made it more robust—and caused us to reexamine our knowledge-sharing practices to make sure our critical services remain robust.

The Holo Host Web SDK is getting ready for general use. It promises to make hApp development easier by letting front-end devs target native Holochain users and Holo-hosted users with one API.

In a previous Dev Pulse I shared an exciting sneak peek of an upcoming project—H-Wiki, from EYSS. It’s now ready to try; I’ll share installation instructions below.

And finally, the next HoloPort update is getting palpably closer—the dev teams are coordinating on a few small fixes that are resulting in decent performance improvements.


  • Facing the Hydra and winning—lessons in maintaining resilience
  • HoloPort update with admin dashboard and test HoloFuel getting closer to release
  • Download and try the H-Wiki hApp
  • New Holochain hApp developers’ blog!

Facing the Hydra and winning—lessons in maintaining resilience

As we’ve reported before, the various projects under our roof use Nix and NixOS heavily. It’s both a package manager and a Linux distribution, which means we can use the same tools to:

  • create consistent environments for our dev teams on Linux and macOS,
  • distribute a ready-to-use development environment for hApp devs, and
  • build and deploy the HoloPort.

We use one of the Nix project’s tools, Hydra, to deploy HoloPort updates. Hydra is a continuous integration server built on the Nix package manager, which means it inherits Nix’s promises of consistent, reliable environments. We use Hydra to test every change to the HoloPort, and we also use it to distribute updates to HoloPorts. Every so often, the HoloPort wakes up its auto-update service, which pings our Hydra server to see if there’s anything new. If there is, it downloads it and patches the system without a reboot. When it’s all working correctly, it’s quite magical.

Our Hydra server recently suffered a hardware failure, which meant that we couldn’t deploy internal or public updates and HoloPorts couldn’t auto-update. After we got a new server provisioned, we had some issues reproducing the old server’s setup and getting the new server online.

Frustrating as it was, it was a great opportunity to discover gaps in the resilience of the things that we, the Holo organisation, do to support the Holochain project and deliver the Holo host network. We were already aware of the risk of a ‘bus factor’ for this mission-critical machine: after our organisational reset, a couple of our NixOS developers moved to other projects. Our newest team members, both of whom are Nix experts, knew they had an incomplete picture of how the Hydra server setup worked. This hardware failure forced them and the rest of the Holo Host team to build that knowledge quickly, record it, and share it.

The beauty of Nix is that you can recreate entire operating systems from a collection of configuration files. So once we understood and debugged the setup process, it was easy to deploy our Hydra server on a new machine.
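As a rough illustration of what that looks like in practice, here is a minimal sketch using the standard NixOS Hydra module (the option names are real; the concrete values are hypothetical, not our actual configuration). Rebuilding any machine from the same file reproduces the same service:

```nix
# Hypothetical sketch: a declarative Hydra host.
# Option names come from the standard NixOS Hydra module;
# the values are made up for illustration.
{ config, pkgs, ... }:
{
  services.hydra = {
    enable = true;
    hydraURL = "https://hydra.example.org";     # hypothetical URL
    notificationSender = "hydra@example.org";   # hypothetical address
  };
  # Rebuilding this machine (or a replacement) from this file:
  #   nixos-rebuild switch
}
```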

HoloPort software update with admin dashboard and test HoloFuel getting closer to release

Our dev teams are still swarming on removing blockers to deploying the test version of HoloFuel to HoloPorts. We’ve seen some impressive performance gains—in fact, I was going to share that we had successful tests with 400 nodes, but as I was writing, I got news that we’re now up to 500 nodes! These come from two fixes in Holochain:

  • Make zome calls async—long-running zome function calls (including validation functions) were blocking other calls, which made the conductor unresponsive at times. This fix allows the conductor to process more than one call at a time.
  • Check myself first if I’m an authority—this makes a node check its own DHT shard for links when it knows it’s an authority for the link base. If it is, it doesn’t bother hitting the network. This was a big boost for HoloFuel, in which agents regularly poll their own agent ID for links to incoming transactions.
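The second fix can be sketched in plain Rust. This is a self-contained simulation, not Holochain’s actual internals; the types and names are hypothetical:

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical sketch of the "check myself first" optimisation: a node
// that knows it is an authority for a base address answers link queries
// from its own shard instead of asking the network.
struct Node {
    held_bases: HashSet<String>,               // bases this node is an authority for
    local_shard: HashMap<String, Vec<String>>, // base -> links held locally
    network_queries: usize,                    // how often we had to hit the network
}

impl Node {
    fn get_links(&mut self, base: &str) -> Vec<String> {
        if self.held_bases.contains(base) {
            // Authority for this base: answer locally, no network round trip.
            self.local_shard.get(base).cloned().unwrap_or_default()
        } else {
            // Not an authority: fall back to a (simulated) network query.
            self.network_queries += 1;
            Vec::new()
        }
    }
}

fn main() {
    let mut node = Node {
        held_bases: HashSet::from(["my_agent_id".to_string()]),
        local_shard: HashMap::from([("my_agent_id".to_string(), vec!["tx_1".to_string()])]),
        network_queries: 0,
    };
    // Polling our own agent ID (as HoloFuel does) never touches the network.
    assert_eq!(node.get_links("my_agent_id"), vec!["tx_1".to_string()]);
    assert_eq!(node.network_queries, 0);
    // A base we don't hold forces a network query.
    node.get_links("someone_else");
    assert_eq!(node.network_queries, 1);
}
```

Since HoloFuel agents mostly query their own agent ID, which they always hold, this turns the most common query into a purely local read.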

There have also been some performance fixes in the HoloFuel DNA and front end themselves, primarily around caching DHT/source chain data and reducing traffic between UI and DNA instances. Along with an upcoming bugfix in Holochain’s handling of link retrieval, we’re getting ready for another round of field testing as soon as possible, likely in the next few days.

Download and try the H-Wiki hApp

The H-Wiki hApp from our friends at dev shop EYSS, introduced in a previous Dev Pulse, is now ready for testing. Their team has put in a lot of hard work, and even at this early stage it’s attractive and easy to use. You can read all about why it matters and sign up to watch a demo in this announcement (also available in Spanish). This is an early release, so you’ll need to be comfortable with the command line and have the Holochain development environment installed.

How to get started

  1. Go to the eyss/h-wiki-back GitHub repo, download the code, and follow the instructions in the readme to get the sim2h server, conductor, and DNA instance running.
  2. Go to the eyss/h-wiki-front GitHub repo and do the same.

A little tour of the codebase

H-Wiki is a fairly advanced hApp, so let’s take a look at what’s going on under the hood. If you’re developing your own hApp, there are some good practices to copy.

‘Mixin’ packages. The wiki zome uses a pattern we’ve been calling ‘mixins’—third-party libraries that implement useful patterns you can use in your own zomes. As you can see, with the Rust HDK these mixins are brought into the zome using Rust’s own Cargo package manager:

holochain_anchors = { git = "" }
holochain_roles = { git = "" }

If you look at the code, you can see how the wiki mixes these two libraries’ entry types into its own type definitions. It’s quite simple; the mixin just exposes an entry definition function that the wiki zome calls:

fn role_entry_def() -> ValidatingEntryType {
    // ...
}

fn anchor_def() -> ValidatingEntryType {
    // ...
}

Later, when an entry wants to connect to an anchor from the anchors mixin, it just gives a link type definition with the mixin’s entry type as the base:

pub fn page_def() -> ValidatingEntryType {
    entry!(
        name: "wikiPage",
        description: "this is an entry representing some profile info for an agent",
        // ...
        links: [
            to!(
                // ...
                link_type: "anchor->page",
                // ...
            )
        ]
    )
}
Anchor pattern. Let’s take a look at one of those mixins, holochain_anchors. It’s our standard implementation of the Anchor pattern, which makes DHT data easy to discover. Linking all entries of a certain type to an anchor recreates some of the functionality you’re used to in traditional databases, such as tables and views. Here’s one example: each wiki page has its own anchor that acts as its consistent unique identity across content updates (remember, each update to an entry gives it a new ID, so it’s often good to create something more stable to ‘anchor’ it to).

let anchor_address = holochain_anchors::anchor("wiki_pages".to_string(), title.clone())?;

Behind the scenes, the anchors mixin automatically links the new page anchor to a base “wiki_pages” anchor so that you can get a list of all page anchors. Then the current version of the page is linked to its anchor.

hdk::link_entries(&anchor_address, &address, "anchor->page", "")?;
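To make that two-level link structure concrete, here is a self-contained sketch in plain Rust (no HDK; the addresses and link-type names are hypothetical) of the links the anchors mixin and the wiki zome create:

```rust
use std::collections::HashMap;

// Plain-Rust sketch of the anchor pattern's link structure: a root
// "wiki_pages" anchor links to one anchor per page, and each page anchor
// links to the current revision of that page.
#[derive(Default)]
struct Dht {
    // (base address, link type) -> target addresses
    links: HashMap<(String, String), Vec<String>>,
}

impl Dht {
    fn link(&mut self, base: &str, link_type: &str, target: &str) {
        self.links
            .entry((base.to_string(), link_type.to_string()))
            .or_default()
            .push(target.to_string());
    }

    fn get_links(&self, base: &str, link_type: &str) -> Vec<String> {
        self.links
            .get(&(base.to_string(), link_type.to_string()))
            .cloned()
            .unwrap_or_default()
    }
}

fn main() {
    let mut dht = Dht::default();
    // The mixin links each new page anchor under the root anchor...
    dht.link("wiki_pages", "anchor->anchor", "anchor:Home");
    // ...and the zome links the page's current revision to its anchor.
    dht.link("anchor:Home", "anchor->page", "page_rev_1");

    // Listing all pages = following links from the root anchor.
    assert_eq!(dht.get_links("wiki_pages", "anchor->anchor"), vec!["anchor:Home"]);
    // Resolving a page = following links from its stable anchor.
    assert_eq!(dht.get_links("anchor:Home", "anchor->page"), vec!["page_rev_1"]);
}
```

When a page is updated, only the `anchor->page` link needs to be repointed; everything that references the anchor keeps working.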

Progenitor pattern and role-based CRUD validation. In order to determine who gets to create wiki pages, this DNA uses the Progenitor pattern. This pattern creates an ‘admin user’ on the DHT who can then delegate their privileges to others.

When a page create, update, or delete is validated, the DNA checks that its author is allowed to do that action by calling a validate_agent_can_edit helper function:

validation: |_validation_data: hdk::EntryValidationData<Page>| {
    match _validation_data {
        hdk::EntryValidationData::Create { validation_data, .. } => validate_agent_can_edit(validation_data),
        hdk::EntryValidationData::Modify { validation_data, new_entry, old_entry, .. } => {
            if old_entry.title == new_entry.title {
                validate_agent_can_edit(validation_data)
            } else {
                Err("Cannot update a page's title".to_string())
            }
        },
        hdk::EntryValidationData::Delete { validation_data, .. } => validate_agent_can_edit(validation_data)
    }
}

That function uses EYSS’ holochain_roles mixin to determine whether the author is either an editor or admin:

pub fn validate_agent_can_edit(validation_data: hdk::ValidationData) -> Result<(), String> {
    let editor = holochain_roles::validation::validate_required_role(/* ... editor role ... */);
    let admin = holochain_roles::validation::validate_required_role(/* ... admin role ... */);

    match (editor, admin) {
        (Err(_), Err(_)) => Err(String::from("Only admins and editors can edit content")),
        _ => Ok(()),
    }
}
How does the roles mixin work? Isn’t everyone equal on the DHT? At the low level, yes, but at the application level you can write validation rules that give some people special privileges. Here’s an example.

In the DNA JSON file you can include ‘properties’—arbitrary key/value pairs that you can use in your zome code to do anything you like. With the Progenitor pattern, you add a ‘progenitor’ property with the public key of a certain privileged agent. Your zome code then checks this property whenever it needs to know if an agent is allowed to publish an entry to the DHT. Here’s how the roles mixin finds the progenitor’s key:

let progenitor_json = hdk::property("progenitor")?;
let progenitor: Result<Address, _> = serde_json::from_str(&progenitor_json.to_string());
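For illustration, the relevant slice of the DNA JSON file might look like this (the value shown is a made-up placeholder, not a real agent key):

```json
{
  "properties": {
    "progenitor": "HcA_made_up_placeholder_agent_key"
  }
}
```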

At the start, the progenitor has one built-in role: the admin role, which can never be taken away from them. This gives them the power to create role assignment entries for others. Those entries’ validation function checks whether the author has the admin role. If they don’t, validation fails and the DHT rejects the role assignment. In that way, only the progenitor and the agents they set up as admins can create and assign roles to other agents.

The magic happens in the validate_required_role function, which checks whether the author of an entry had the specified role at the time the entry was committed. (Note: we check the role at commit time rather than at validation time, because a validation function can be run anytime, even years after the entry was committed. An entry should always be either valid or invalid, no matter who validates it or when they validate it.)
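Here’s a tiny self-contained sketch (plain Rust, hypothetical names, not the mixin’s real code) of why pinning the check to commit time keeps validation deterministic:

```rust
// Commit-time role check: an entry is valid iff its author held the role
// when the entry was committed, regardless of when validation runs.
struct RoleAssignment {
    agent: String,
    role: String,
    granted_at: u64,         // timestamp the role was granted
    revoked_at: Option<u64>, // None = still active
}

fn had_role_at(assignments: &[RoleAssignment], agent: &str, role: &str, at: u64) -> bool {
    assignments.iter().any(|a| {
        a.agent == agent
            && a.role == role
            && a.granted_at <= at
            && a.revoked_at.map_or(true, |revoked| at < revoked)
    })
}

fn main() {
    // Alice was an editor from t=10 until t=50.
    let assignments = vec![RoleAssignment {
        agent: "alice".into(),
        role: "editor".into(),
        granted_at: 10,
        revoked_at: Some(50),
    }];
    // An entry she committed at t=30 validates the same way forever...
    assert!(had_role_at(&assignments, "alice", "editor", 30));
    // ...while one committed at t=60 is invalid, even if validated years later.
    assert!(!had_role_at(&assignments, "alice", "editor", 60));
}
```

Because the answer depends only on data already on the DHT plus the entry’s own timestamp, every validator reaches the same verdict no matter when it runs.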

New Holochain hApp developers’ blog!

Three hApp developers have joined forces to create the Holochain Open-Dev blog. Guillem Córdoba, Hedayat Abedijoo, and Tatsuya Sato are sharing their wisdom as they build hApps and reflect on their experiences educating other hApp developers. Their articles are focused on showing you how to understand Holochain and translate that knowledge into well-architected applications. I’m very excited about this blog, because I know that these three are solid developers and educators—they’re disciplined, knowledgeable, and also some of the most helpful people you could meet!

The authors also invite you to get involved—this blog is meant to be a collaborative work that records and shares the community’s agreements on best practices. Contributing your valuable learnings will help make it more comprehensive!

Development status


  • Holochain Core: 0.0.47-alpha1 (blessed) | Changelog
  • Holonix: 0.0.73 (blessed)
  • Tryorama: 0.3.4 (blessed)
  • Holoscape: 0.0.9-alpha (contains Holochain Core 0.0.47-alpha1) | Download

Blessed (available via holochain.love):

  • Holonix: 0.0.73
  • Holochain Core: 0.0.47-alpha1