JData 101

Please believe me, JData is here to help! It is not a new format; it is very simple and easy to adopt. Read below to see why.

Why do we need JData?

Whether or not you believe we are living in the era of big data, we all have to deal with data and files from time to time, and we all know there are numerous file formats that require specific software to open them. In most cases, you can't just open a file in a text editor and make sense of it, because most files are binary and not directly human-readable.

This is bad for scientists -- the people who generate a lot of complicated data and process them to understand what is going on in our world -- because they have to deal with an increasing variety of data files from increasingly complicated experiments. It is also bad for software developers -- the people who write software to read, process and display certain types of data and help you understand them -- because they have to constantly add libraries or parsers to deal with newly emerged formats and, if those formats are evolving, their future revisions. This is a lot of work and overhead to keep up with, and costly too.

The result is that software becomes bulky, with a large number of dependencies, and increasingly difficult to maintain and improve. On the other hand, the numerous language-, application- and device-specific data formats, sometimes storing precious experimental data that cost a fortune to collect, are difficult to share, reuse and understand, because you need many dependent software tools just to open them. Even worse, many existing data formats use rigid binary structures that are nearly impossible to extend (for example, by adding or removing a field) without breaking backward compatibility! Once a data format is revised, most data stored in the old format inevitably become outdated, because no developer is interested in maintaining a parser for the old format.

People have noticed this problem and have come up with a number of solutions, but each solution has met its own challenges. For example:

  • Extensible Markup Language (XML): a general-purpose data format designed to be easily extensible, human-readable and versatile, but it suffers from verbosity and parsing difficulties
  • JavaScript Object Notation (JSON): a light-weight, simple yet highly extensible, human-readable format for storing complex hierarchical data (probably the most widely supported data format today), but it suffers from 1) a lack of strongly-typed data support, and 2) large file sizes and parsing overhead compared to binary formats
  • Binary JSON-like formats such as BSON, MessagePack, CBOR: these formats inherit the simplicity and versatility of JSON and provide better efficiency and smaller file sizes, but nearly all of them sacrifice human-readability
  • Hierarchical Data Format (HDF5): a sophisticated, highly versatile, extensible and high-performance binary format targeted at storing large amounts of scientific data, but to support HDF5 files you must program against its APIs, which have a learning curve. The files are also not human-readable, and you must have the libraries pre-installed to read them. The overhead of storing lightweight datasets can also be significant.

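The JSON trade-off listed above -- readable text at the cost of size and parsing overhead -- is easy to see in practice. The following is a minimal Python sketch (the numbers and array size are illustrative, not from the original text) comparing a JSON-encoded array of doubles with the packed binary layout a format like BSON or MessagePack would use:

```python
import json
import struct

# One thousand double-precision values stored two ways
values = [i * 0.1 for i in range(1000)]

# JSON: human-readable text, but every number is spelled out in decimal digits
json_bytes = json.dumps(values).encode("utf-8")

# Packed binary, as BSON/MessagePack-style formats store typed arrays:
# 8 bytes per IEEE 754 double, but no longer readable in a text editor
packed_bytes = struct.pack("<%dd" % len(values), *values)

print(len(json_bytes), len(packed_bytes))
```

The packed form occupies exactly 8000 bytes here, while the JSON text is noticeably larger -- the price paid for being able to open the file in any text editor.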
On top of this, most popular programming languages used to build data-processing software have developed their own complex data types to efficiently process complicated inputs, but there is currently no common interface that lets software written in one language exchange such complex data types with software written in another. For example, the data types that data scientists and researchers often use include:

Data type           | Python                      | MATLAB                    | Perl               | JavaScript
--------------------|-----------------------------|---------------------------|--------------------|-------------------------------
flexible array      | list()                      | cell()                    | array              | array
associative array   | dict()                      | struct()/containers.Map() | %associative array | object, Map()
packed typed array  | numpy.NDArray               | matrices                  | PDL::matrix        | TypedArray, numjs.NdArray
table               | pandas.DataFrame            | table                     | Data::Table        | -
linked list         | not built-in                | -                         | -                  | -
graph               | -                           | graph()/digraph()         | -                  | -
blob                | bytes/bytearray             | uint8/int8/char vector    | File::BLOB         | Blob()
complex             | complex/numpy.dtype=complex | complex                   | Math::Complex      | numjs.ndarray(dtype='complex')
sparse array        | scipy.sparse                | sparse                    | -                  | Map()
special matrices    | scipy.sparse                | sparse                    | -                  | -
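The interoperability gap the table illustrates shows up as soon as you try to exchange one of these native types through plain JSON. A small Python sketch (the sample values are illustrative): two entries from the table above, a complex number and a binary blob, both fail to serialize without a language-specific convention:

```python
import json

# Two of the native types from the table: neither survives a round trip
# through plain JSON without an agreed-upon encoding convention
samples = {"complex value": 3 + 4j, "binary blob": b"\x00\xff\x10"}

serializable = {}
for name, value in samples.items():
    try:
        json.dumps(value)
        serializable[name] = True
    except TypeError:
        # json.dumps raises TypeError for types with no JSON equivalent
        serializable[name] = False

print(serializable)
```

Every language ends up inventing its own ad-hoc workaround for cases like these, which is exactly the missing common interface described above.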

To solve this problem, we need a data representation and/or storage format that is

  1. highly extensible - you can add/remove/merge data without breaking the parser
  2. highly versatile - to encapsulate complex hierarchical data and allow easy split, merge, and integration
  3. extremely simple - so that anyone without training can easily understand and adopt it, and
  4. ideally, human-readable - human-readability is the ultimate criterion for long-lived, future-proof data, because the data can be easily understood without special tools
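Point 1 above -- adding or removing data without breaking the parser -- is something JSON-style formats get essentially for free, because a reader only looks at the keys it knows about. A minimal Python sketch with a hypothetical record (the field names here are made up for illustration):

```python
import json

# A hypothetical record saved by version 1 of some tool
v1_record = '{"name": "scan01", "shape": [256, 256]}'

# Version 2 adds a field; a version-1 reader can still parse the file
# and simply ignores keys it does not recognize
v2_record = '{"name": "scan01", "shape": [256, 256], "units": "mm"}'

def read_v1(text):
    """A version-1 reader: only extracts the keys it knows about."""
    data = json.loads(text)
    return data["name"], data["shape"]

print(read_v1(v1_record))
print(read_v1(v2_record))  # same result -- the extra key is harmless
```

Contrast this with a rigid binary layout, where inserting a new field shifts every byte offset after it and breaks all existing readers.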

What exactly is JData?

How do you use it?