Branch to Delta Lake

This page provides you with instructions on how to extract data from Branch and load it into Delta Lake. (If this manual process sounds onerous, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)

What is Branch?

Branch Metrics lets businesses generate deep links they can use to track conversions and user engagement across web and mobile channels. It provides a business analytics dashboard that surfaces user behavior data.

What is Delta Lake?

Delta Lake is an open source storage layer that sits on top of existing data lake file storage, such as Amazon S3, Azure Data Lake Storage, or HDFS. It stores data in versioned Apache Parquet files and maintains a transaction log of commits, which together provide capabilities like ACID transactions, data versioning, and audit history.

Getting data out of Branch

Branch exposes data for events such as installs, opens, clicks, and web session starts through webhooks that POST to user-defined HTTP callbacks. You can add a webhook through the Branch dashboard.
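
If you're handling those callbacks yourself, you need an HTTP endpoint that accepts Branch's POSTs and lands each payload somewhere durable. Here's a minimal sketch using Flask; the route path and landing directory are placeholders, not anything Branch prescribes — it only needs a reachable URL.

# A minimal sketch of a webhook receiver for Branch event payloads.
# The route path and landing directory are placeholders.
import json
import os
from datetime import datetime, timezone

from flask import Flask, request

app = Flask(__name__)
os.makedirs("landing", exist_ok=True)

@app.route("/branch/webhook", methods=["POST"])
def branch_webhook():
    event = request.get_json(force=True)
    # Land each payload as a timestamped JSON file for later batch loading.
    ts = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S%f")
    with open(os.path.join("landing", f"branch_event_{ts}.json"), "w") as f:
        json.dump(event, f)
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)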

Sample Branch data

Branch exchanges data in JSON format. Here’s an example of the payload Branch delivers for a click event:

POST
User-agent: Branch Metrics API
Content-Type: application/json
{
    click_id: 'a unique identifier',
    event: 'click',
    event_timestamp: 'link click timestamp',
    os: 'iOS' | 'Android',
    os_version: 'the OS version',
    metadata: {
        ip: 'click IP',
        userAgent: 'click UA',
        browser: 'browser',
        browser_version: 'browser version',
        brand: 'phone brand',
        model: 'phone model',
        os: 'browser OS',
        os_version: 'OS version'
    },
    query: { any query parameters appended to the link },
    link_data: { link data dictionary - see below }
}

// link data dictionary example
{
    branch_id: 'unique identifier for unique link',
    date_ms: 'link creation date with millisecond',
    date_sec: 'link creation date with second',
    date: 'link creation date',
    domain: 'domain label',
    data: {
        +url: 'the Branch link',
        ... other deep link data
    },
    campaign: 'campaign label',
    feature: 'feature label',
    channel: 'channel label',
    tags: ['tags array'],
    stage: 'stage label'
}

Preparing Branch data

If you don’t already have a data structure in which to store the data you retrieve, you’ll have to create a schema for your data tables. Then, for each field in the payload, you’ll need to map it to a predefined datatype (INTEGER, DATETIME, etc.) and build a table that can receive those values. Branch's documentation should tell you what fields each event provides, along with their corresponding datatypes.
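
As a sketch of what that mapping might look like, here's an explicit PySpark schema for the click payload above. The field names follow the sample; the datatypes are assumptions, so confirm them against Branch's webhook documentation before relying on them.

# A sketch of a schema for the click payload above. Field names follow the
# sample; the datatypes are assumptions -- check Branch's docs, e.g. whether
# event_timestamp arrives as epoch milliseconds or an ISO 8601 string.
from pyspark.sql.types import (ArrayType, MapType, StringType, StructField,
                               StructType)

click_schema = StructType([
    StructField("click_id", StringType()),
    StructField("event", StringType()),
    StructField("event_timestamp", StringType()),  # cast once format is confirmed
    StructField("os", StringType()),
    StructField("os_version", StringType()),
    StructField("metadata", StructType([
        StructField("ip", StringType()),
        StructField("userAgent", StringType()),
        StructField("browser", StringType()),
        StructField("browser_version", StringType()),
        StructField("brand", StringType()),
        StructField("model", StringType()),
        StructField("os", StringType()),
        StructField("os_version", StringType()),
    ])),
    StructField("query", MapType(StringType(), StringType())),
    StructField("link_data", StructType([
        StructField("branch_id", StringType()),
        StructField("campaign", StringType()),
        StructField("feature", StringType()),
        StructField("channel", StringType()),
        StructField("stage", StringType()),
        StructField("tags", ArrayType(StringType())),
    ])),
])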

Complicating things is the fact that the records retrieved from the source may not always be "flat" – some of the objects may actually be lists. This means you’ll likely have to create additional tables to capture the unpredictable cardinality in each record.
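
For instance, the tags array in link_data can't live in a flat clicks table; one common approach is a child table with one row per tag, keyed back to the parent record. A minimal sketch, reusing click_schema from above (the landing path and column choices are illustrative):

# Flatten nested objects into columns and split list-valued fields into a
# child table. The landing path is a placeholder for wherever the webhook
# receiver writes its JSON files.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
clicks = spark.read.schema(click_schema).json("landing/")

# Parent table: scalar fields, with nested metadata promoted to columns.
clicks_flat = clicks.select(
    "click_id",
    "event",
    "event_timestamp",
    F.col("metadata.browser").alias("browser"),
    F.col("metadata.os").alias("device_os"),
    F.col("link_data.campaign").alias("campaign"),
)

# Child table: one row per tag, joinable back to the parent on click_id.
click_tags = clicks.select("click_id", F.explode("link_data.tags").alias("tag"))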

Loading data into Delta Lake on Databricks

To create a Delta table, you can use existing Apache Spark SQL code and change the format from parquet, csv, or json to delta. Once you have a Delta table, you can write data into it using Apache Spark's Structured Streaming API. The Delta Lake transaction log guarantees exactly-once processing, even when there are other streams or batch queries running concurrently against the table. By default, streams run in append mode, which adds new records to the table. Databricks provides quickstart documentation that explains the whole process.
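
Here's a minimal sketch of both paths, reusing the DataFrames from the sketches above. It assumes a Databricks notebook (where spark is preconfigured); the table paths are placeholders.

# Batch: the same write you'd issue for Parquet, with the format set to delta.
clicks_flat.write.format("delta").mode("append").save("/delta/branch_clicks")

# Streaming: append records as they land. The checkpoint location is what
# lets the Delta transaction log guarantee exactly-once processing.
(spark.readStream.schema(click_schema).json("landing/")
    .writeStream
    .format("delta")
    .option("checkpointLocation", "/delta/_checkpoints/branch_clicks_raw")
    .outputMode("append")
    .start("/delta/branch_clicks_raw"))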

Keeping Branch data up to date

Once you’ve set up the webhooks you want and have begun collecting data, you can relax – as long as everything continues to work correctly. You’ll have to keep an eye out for any changes to Branch’s webhooks implementation.

Other data warehouse options

Delta Lake on Databricks is great, but sometimes you need to optimize for different things when you're choosing a data warehouse. Some folks choose to go with Amazon Redshift, Google BigQuery, PostgreSQL, or Snowflake, which are relational databases and warehouses that use similar SQL syntax, or Panoply, which works with Redshift instances. Others choose a data lake, like Amazon S3. If you're interested in seeing the relevant steps for loading data into one of these platforms, check out To Redshift, To BigQuery, To Postgres, To Snowflake, To Panoply, and To S3.

Easier and faster alternatives

If all this sounds a bit overwhelming, don’t be alarmed. If you have all the skills necessary to go through this process, chances are building and maintaining a script like this isn’t a very high-leverage use of your time.

Thankfully, products like Stitch were built to move data from Branch to Delta Lake automatically. With just a few clicks, Stitch starts extracting your Branch data, structuring it in a way that's optimized for analysis, and inserting that data into your Delta Lake data warehouse.