[Website] Add "Data wants to be free" #586

Open
lidavidm wants to merge 14 commits into main

Conversation

@lidavidm (Member) commented Feb 4, 2025

No description provided.

@lidavidm (Member Author) commented Feb 4, 2025

I did not finish annotating the binary blobs; I'll finish that if we're OK with the scheme I've chosen so far.

@kou (Member) left a comment:

+1

Instead of putting lengths of values everywhere, Arrow groups values of the same column (and hence same type) together, so it just needs the length of the buffer. Strings do still require a length per value, but the overhead isn’t added where it isn’t otherwise needed. And nullability is instead stored in a bitmap, which is omitted if there aren’t any NULL values in the first place, saving space. Because of that, adding more rows of data doesn’t increase the overhead; instead, the more data you have, the less you pay!
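
To make that layout concrete, here is a minimal, dependency-free Rust sketch (the values, buffers, and names below are made up for illustration, not taken from the post) of how a nullable int64 column and a string column are represented with a validity bitmap, a value buffer, and an offsets buffer:

```rust
// Hand-rolled buffers mimicking Arrow's columnar layout for
// an int64 column [1, NULL, 3] and a string column ["ab", "c"].
fn main() {
    // Int64 column: one validity bitmap plus one contiguous value buffer.
    // Bit i is 1 if row i is non-null (bits are numbered LSB-first).
    let validity: [u8; 1] = [0b0000_0101]; // rows 0 and 2 valid, row 1 NULL
    let values: [i64; 3] = [1, 0, 3];      // the NULL row still has a slot

    // String column: one shared data buffer plus an offsets buffer, so
    // there is a single offset per value instead of a length prefix in
    // front of every field.
    let offsets: [i32; 3] = [0, 2, 3]; // "ab" = bytes 0..2, "c" = bytes 2..3
    let data = b"abc";

    for i in 0..2 {
        let s = &data[offsets[i] as usize..offsets[i + 1] as usize];
        println!("string {}: {}", i, std::str::from_utf8(s).unwrap());
    }
    println!("int64 row 1 is NULL: {}", (validity[0] >> 1) & 1 == 0);
    println!("int64 value buffer: {:?}", values);
}
```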

Even the header isn’t actually the disadvantage it looks like. The header contains the schema, which makes the data stream self-describing. With PostgreSQL, you need to get the schema from somewhere else. So we aren’t making an apples-to-apples comparison in the first place: PostgreSQL still has to transfer the schema, it’s just not part of the “binary format” that we’re looking at here.
Meanwhile, there’s actually a more insidious problem with PostgreSQL we’ve overlooked so far: alignment. Remember that 2-byte field count at the start of every row? Well, that means all the 4-byte integers after it are now unaligned…so you can’t use them without copying them (or doing a very slow unaligned load). Arrow, on the other hand, strategically adds some padding (overhead) to align the data, and lets you use little-endian or big-endian byte order depending on your platform. And Arrow doesn’t apply expensive encodings to the data that require further parsing; there’s just optional compression that can be enabled if it suits your data[^2]. So **you can use Arrow data as-is without having to parse every value**.
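
As a rough illustration of what that padding amounts to (the helper function below is hypothetical; the 8-byte alignment requirement and the 64-byte recommendation come from the Arrow IPC format):

```rust
/// Round a buffer length up to the next multiple of `alignment`.
/// Arrow IPC buffers must start on 8-byte boundaries (64 bytes is the
/// recommended alignment), so a buffer's on-the-wire length is padded.
fn padded_len(len: usize, alignment: usize) -> usize {
    (len + alignment - 1) / alignment * alignment
}

fn main() {
    // A 3-element int32 buffer holds 12 bytes of data but is written as
    // 16 bytes so that the next buffer stays aligned.
    assert_eq!(padded_len(12, 8), 16);
    assert_eq!(padded_len(16, 8), 16); // already aligned: no padding added
    println!(
        "padding for 12 bytes at 8-byte alignment: {} bytes",
        padded_len(12, 8) - 12
    );
}
```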
Member:

Hmm, it's been a long time since unaligned loads were "very slow". You might get some overhead from time to time, especially if a cache line or a page is straddled, but I'm not sure the difference is really important.

Also, the alignment really only comes into play if the file is being memory-mapped.

Member:

Though we have found issues in the past with Rust, since Rust gets very upset with unaligned data buffers... maybe mention that?

Member:

Rust vs unaligned data is more of a "source-level" issue AFAIU. I suppose you just have to be careful and use something like https://doc.rust-lang.org/std/ptr/fn.read_unaligned.html (as we also do in C++, actually).
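
For instance, a minimal Rust sketch (made-up bytes, not code from the post) of the two options: an unsafe `read_unaligned`, versus a safe byte copy that the compiler typically lowers to the same kind of load:

```rust
// A buffer shaped like a PostgreSQL binary row fragment: a 2-byte field
// count followed by a big-endian 4-byte integer, so the integer may not
// be 4-byte aligned.
fn main() {
    let buf: [u8; 6] = [0x00, 0x03, 0x00, 0x00, 0x00, 0x2a];

    // Option 1: raw pointer + read_unaligned. Dereferencing `p` directly
    // would be undefined behavior if it happens to be misaligned.
    let p = buf[2..].as_ptr() as *const i32;
    let raw = unsafe { std::ptr::read_unaligned(p) };
    let value = i32::from_be(raw); // PostgreSQL binary values are big-endian
    assert_eq!(value, 42);

    // Option 2: the "memcpy trick" in safe Rust: copy the bytes out and
    // convert; optimizers turn this into a plain (possibly unaligned) load.
    let value2 = i32::from_be_bytes(buf[2..6].try_into().unwrap());
    assert_eq!(value2, 42);
}
```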

Member Author:

Is that true for all architectures, and isn't it still undefined behavior to have a pointer to an unaligned value? (In which case either you have to rewrite your code to memcpy explicitly, or parse the values...)

Member:

Hmm, that's a good question. From what I understand:

  1. At least x86 and ARM allow unaligned accesses; SPARC doesn't (but who cares nowadays? :-)); not sure about RISC-V and POWER
  2. On the source language side, it's true that dedicated constructs may be required (such as the memcpy trick, which is typically optimized by the compiler into a regular or unaligned load instruction)

In any case, I think we can relax the wording in that sentence a bit to explain the inconvenience more clearly and without overblowing the issue:

Well, that means all the 4-byte integers after it are now unaligned, which then require care to handle properly (dedicated constructs can be required at the source language level - such as C++ or Rust - and performance may suffer depending on the CPU)

Member Author:

I adjusted the wording here.


# PostgreSQL vs Arrow: Data Serialization

Let’s compare the [PostgreSQL binary
Member:

Is the PostgreSQL binary format different from the data serialization format used in the PostgreSQL wire protocol? If it is, I'm not sure the comparison between it and Arrow is fair. Either way, I think a couple of sentences to draw the user in would be good.

When you connect to a PostgreSQL database and execute a query, PostgreSQL sends the result back to you using the PostgreSQL wire protocol. The data in your query result is encoded within this protocol using the PostgreSQL binary format.

(Note: The above could be false but it's the basic thing I was after)

Member:

Also, for where we mention "PostgreSQL wire protocol" lower down, I think we should add a link to each instance, such as to https://www.postgresql.org/docs/current/protocol.html.

Member:

In this context, I'm pretty sure the PostgreSQL binary format is the PostgreSQL wire protocol.

000003c8: 00 01 00 00 41 52 52 4f ....ARRO
000003d0: 57 31                   W1

Arrow looks quite…intimidating…at first glance. There’s a giant header, and lots of things that don’t seem related to our dataset at all, plus mysterious padding that seems to exist solely to take up space. But the important thing is that **the overhead is fixed**. Whether you’re transferring one row or a billion, the overhead doesn’t change. And unlike PostgreSQL, **no per-value parsing is required**.
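
To sketch what "fixed overhead" means in practice (this example assumes the Rust `arrow` crate and its IPC `FileWriter`; the schema, names, and row counts are made up for illustration), writing the same schema with different row counts shows the schema and footer cost staying constant while only the data buffers grow:

```rust
use std::sync::Arc;

use arrow::array::{ArrayRef, Int64Array, StringArray};
use arrow::datatypes::{DataType, Field, Schema};
use arrow::ipc::writer::FileWriter;
use arrow::record_batch::RecordBatch;

// Serialize n rows of (id: int64, name: utf8) to an in-memory Arrow IPC
// file and return how many bytes it took.
fn ipc_size(n: usize) -> usize {
    let schema = Arc::new(Schema::new(vec![
        Field::new("id", DataType::Int64, false),
        Field::new("name", DataType::Utf8, false),
    ]));
    let ids = Int64Array::from((0..n as i64).collect::<Vec<_>>());
    let names = StringArray::from(vec!["somename"; n]);
    let batch = RecordBatch::try_new(
        schema.clone(),
        vec![Arc::new(ids) as ArrayRef, Arc::new(names) as ArrayRef],
    )
    .unwrap();

    let mut buf = Vec::new();
    {
        let mut writer = FileWriter::try_new(&mut buf, &schema).unwrap();
        writer.write(&batch).unwrap();
        writer.finish().unwrap();
    }
    buf.len()
}

fn main() {
    // The header/footer bytes are the same for every run; only the value
    // and offset buffers scale with the row count.
    for n in [1, 1_000, 100_000] {
        println!("{:>7} rows -> {} bytes", n, ipc_size(n));
    }
}
```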
Member:

I wonder if we couldn't come up with a graphical demonstration of the overhead. As I'm reading this, I find I want a chart or table or something to compare statements like the one you've made here about overhead.

Member Author:

A summary chart makes sense.

Member:

I started to tinker around with a graphical comparison of overhead by comparing PostgreSQL binary files to Arrow IPC files for various row counts and schemas, and the result was not very interesting. For the schema in this post (bigint, text, bigint), plotting file size against number of rows/records on a log10 scale:

[Plot: file size vs. number of records (log10 scale) for PostgreSQL binary and Arrow IPC files]

By 100 records, Arrow and Postgres are close and by 1000 records the overhead increases at a constant factor. Feel free to resolve this conversation unless you think of anything.

@amoeba (Member) left a comment:

👍. Left more than a few minor edits. Let me know if you'd like me to chip in on doing the work of any of the changes.

@lidavidm (Member Author) commented:

I've been distracted with other work, but I'll try to integrate the feedback and replace the missing diagrams in the next week or so. Thanks everyone!

@lidavidm (Member Author) commented:

I've mostly addressed things here; I do plan to take another pass and shorten the prose (and probably de-emphasize size further vs other features)

@amoeba (Member) left a comment:

This looks great.

layout: post
title: "Data wants to be free: fast data exchange with Apache Arrow"
description: ""
date: "2025-02-04 00:00:00"
Member:

We should change the date here (and in the filename) to whatever day this actually posts.

Member Author:

I set the date to the 28th

@ianmcook (Member) commented:

Should we make the title be in title case?

Data Wants to Be Free: Fast Data Exchange with Apache Arrow

@lidavidm (Member Author) commented:

Fixed
