QoS

Long Computation and Large Storage

Long Computation

All non-cron computation nodes time out after 30 seconds. To make a node a long computation node, you have to request it through qos by setting the long property to true. This gives the node up to 30 minutes of computation. For example:

"nodes": [{
  "#": "node1",
  "implementation": {
    "javascript": "server/node1_impl.js",
    "qos": {
      "long": true
    }
  }
}]

📘

Limits for regular nodes

  • 30 seconds of execution (default)
  • 30 minutes of execution with the long flag

📘

Limits for cron nodes

Cron nodes are by default long computation nodes.

โ—๏ธ

Premium functionality

It is intended that long computation nodes will only be available to premium users.

Large Storage

A node can request to read/write to the filesystem by specifying a large-storage attribute in the qos section of a node's implementation.

It takes an array whose elements can be in either of two formats:

  • A string of the form bucket:permission, where bucket is the name of the storage and permission is either ro (read-only) or rw (read-write). Omitting the permission is the same as specifying read-only.
  • A dictionary of the form {"bucket": "name", "write": true/false}. Omitting the write parameter defaults to false.

For example:
"nodes": [{
  "#": "node1",
  "implementation": {
    "javascript": "server/node1_impl.js",
    "when": {
      "interval": 1000
    },
    "qos":{
      "large-storage": [
        "bucket0:ro",
        { 
          "bucket": "bucket1",
          "write": true 
        },
        { 
          "bucket": "bucket2"
        },
        "attachments"
      ]
    }
  }
}]
  • Attachments: the data will automatically populate the attachments field in the relevant JMAP representation of an email

  • General large storage buckets: to access the data stored on the filesystem, fetch the path of the relevant directory from the _LARGE_STORAGE_<bucket-name> environment variable in a node; e.g. for "bucket2" it will be under _LARGE_STORAGE_bucket2
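
As a minimal sketch, here is how a node implementation might read these paths, assuming a Node.js runtime where process.env is available; the exported function signature and the file names are illustrative assumptions, not part of the documented API:

// server/node1_impl.js — illustrative sketch only
const fs = require("fs");
const path = require("path");

module.exports = function () {
  // The runtime exposes each requested bucket's directory through an
  // environment variable named _LARGE_STORAGE_<bucket-name>.
  const roDir = process.env._LARGE_STORAGE_bucket0; // requested as "bucket0:ro"
  const rwDir = process.env._LARGE_STORAGE_bucket1; // requested with "write": true

  // Read a file from the read-only bucket (file name is hypothetical).
  const input = fs.readFileSync(path.join(roDir, "input.txt"), "utf8");

  // Write a derived file to the bucket requested with write permissions.
  fs.writeFileSync(path.join(rwDir, "output.txt"), input.toUpperCase());
};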

🚧

Write permissions

Note that only one node can request a large-storage bucket with write permissions; however, any number of nodes can request the same bucket as read-only. This is to avoid concurrency issues.
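
For illustration, a DAG fragment that follows this rule might look like the following; the node names and the results bucket are hypothetical:

"nodes": [{
  "#": "writer",
  "implementation": {
    "javascript": "server/writer_impl.js",
    "qos": {
      "large-storage": ["results:rw"]
    }
  }
}, {
  "#": "reader",
  "implementation": {
    "javascript": "server/reader_impl.js",
    "qos": {
      "large-storage": ["results:ro"]
    }
  }
}]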

🚧

Naming of large-storage buckets

Large-storage bucket names should not clash with the store and export buckets defined in the DAG.

📘

Attachments

"attachments" is the default name for the email attachments storage. If you rename it in the port definition you should use the same name to get access to it. You can only request ro permissions to attachments.

📘

SDK

When using the SDK, large storage is available in the folder sdk_tmp/large-storage/<account-email>/sift. You can inspect this folder to debug large-storage files written from your node.

โ—๏ธ

Premium functionality

It is intended that large storage access for nodes will only be available to premium users.