this post was submitted on 30 Mar 2026
13 points (100.0% liked)

I need to scan very large JSONL files efficiently and am considering a parallel grep-style approach over line-delimited text.

Would love to hear how you would design it.

top 14 comments
[–] ExperimentalGuy@programming.dev 1 points 15 hours ago* (last edited 15 hours ago)

Could you use an already parallelized solution like ripgrep? I think someone else also mentioned putting it in a database, that shouldn't be too bad either.

[–] Jayjader@jlai.lu 2 points 1 day ago* (last edited 1 day ago)
  1. chunk_size := file_size / cpu_cores. Compile regex.

  2. spawn cpu_cores workers:
    2.a. worker #n starts at n * chunk_size bytes. If n > 0, skip bytes until a newline is encountered.
    2.b. worker starts feeding bytes from the file/chunk into the regex. When a match is found, write it to the output (stdout or a file, whichever performs better). When a newline is encountered, reset the regex state machine.
    2.c. after having read chunk_size bytes, continue until the next newline so that the whole file is covered by the parallel search.

Optionally, keep track of byte offsets and attach them to the matches when outputting, to make it easier to de-duplicate later and/or navigate to a given match in the file.

To avoid problems, have each worker output to a separate file, and only combine these output files when the workers are all finished.

As others have said, it's going to be hard to get more speedup than this, and you will ultimately be limited by your storage's read speed and throughput if the whole file cannot fit into memory.
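
A minimal Python sketch of that worker scheme, assuming a made-up path, pattern, and output naming; the boundary handling uses seek(start - 1) + readline() so a line beginning exactly on a chunk boundary isn't dropped:

```python
import os
import re
from multiprocessing import Pool

FILE = "data.jsonl"                               # placeholder path
PATTERN = re.compile(rb'"level":\s*"error"')      # placeholder pattern

def scan_chunk(args):
    worker_id, start, end = args
    out_path = f"matches.{worker_id}.out"         # one output file per worker, combined later
    with open(FILE, "rb") as f, open(out_path, "wb") as out:
        if start > 0:
            # Skip the partial line at the front; the worker owning the previous
            # chunk reads past its end to finish it (steps 2.a / 2.c above).
            f.seek(start - 1)
            f.readline()
        while f.tell() < end:                     # a line belongs to us if it *starts* in our chunk
            line = f.readline()
            if not line:
                break
            if PATTERN.search(line):
                out.write(line)
    return out_path

def parallel_scan(workers=os.cpu_count()):
    size = os.path.getsize(FILE)
    chunk = size // workers                       # step 1: chunk_size := file_size / cpu_cores
    bounds = [(i, i * chunk, size if i == workers - 1 else (i + 1) * chunk)
              for i in range(workers)]
    with Pool(workers) as pool:
        return pool.map(scan_chunk, bounds)       # combine the per-worker files afterwards

if __name__ == "__main__":
    print(parallel_scan())
```

Whether per-worker files beat stdout, and whether this beats plain grep/ripgrep at all, depends on the storage, so it's worth benchmarking; processes rather than threads are used here because CPython's GIL would otherwise serialize the regex work.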

[–] mvirts@lemmy.world 2 points 1 day ago

If you're writing a program, definitely use multiple threads or processes that each scan a chunk of the file, which basically means: seek to the start of the chunk, then read lines into the scan code until you hit the end of the chunk. For JSONL, each chunk will need an alignment step so that records aren't broken mid-line.

For command-line trickery, maybe the file could be chunked up by running multiple dd instances with different offsets, each piped into grep. This has many synchronization issues, and all the outputs should be captured separately and then combined afterwards. I can't think of a good way to align this method to line boundaries, but maybe you can put some fancy regular-expression magic into the grep step to ignore malformed JSON at the beginning and end of each chunk and overlap the chunks?

Grep is already fast, so maybe test the simple approach first and see how long it takes.

Or read the JSONL into a real database like Postgres.
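
If the database route is appealing, a rough loader sketch with psycopg2 (connection string, table name, file path, and batch size are all placeholders) could look like this; Postgres casts the raw JSON text to jsonb on insert:

```python
import psycopg2
from psycopg2.extras import execute_values

conn = psycopg2.connect("dbname=logs user=postgres")   # placeholder connection
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS events (doc jsonb)")

BATCH_SIZE = 10_000
batch = []
with open("data.jsonl", "r", encoding="utf-8") as f:    # placeholder path
    for line in f:
        line = line.strip()
        if line:
            batch.append((line,))                       # raw JSON text, cast to jsonb by Postgres
        if len(batch) >= BATCH_SIZE:
            execute_values(cur, "INSERT INTO events (doc) VALUES %s", batch)
            batch.clear()
if batch:
    execute_values(cur, "INSERT INTO events (doc) VALUES %s", batch)
conn.commit()
conn.close()
```

For hundreds of GB you'd likely want COPY instead of INSERTs and a GIN index on the jsonb column once loading is done, but the shape is the same.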

[–] Bazell@lemmy.zip 7 points 2 days ago* (last edited 2 days ago) (1 children)

Splitting the file into equal parts and analyzing each part in its own thread is basically the only efficient way I can think of to use modern CPU architectures for your task, since I doubt the data stored in your files can be processed quickly by the GPU (I assume you have text data).

[–] bleistift2@sopuli.xyz 4 points 2 days ago (2 children)

Can a file really be split efficiently? And is reading from multiple files on the same disk really faster than scanning a single file from top to bottom?

[–] entwine@programming.dev 6 points 2 days ago (1 children)

You don't actually need to "split" anything; you just read from different offsets per thread. Mmap might be the most efficient way to do this (or at least the easiest).

Whether or not that's going to run into hardware bottlenecks is a separate issue from designing a parallel algorithm. I don't know what the OP is trying to accomplish, but if their hardware is known (e.g. this is an internal tool meant to run in a data center), they'll need to read up on their hardware and virtualization architecture to squeeze out the most I/O performance.

But if parsing is actually the bottleneck, there's a lot you can do to optimize it in software. Simdjson would be a good place to start.
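
To make the offsets-per-worker idea concrete, here's a small Python sketch (path and pattern are made up); each worker maps the file read-only and regex-scans only its window, with the window edges snapped to line boundaries:

```python
import mmap
import os
import re
from concurrent.futures import ProcessPoolExecutor

PATH = "data.jsonl"                           # placeholder path
PATTERN = re.compile(rb'"level":\s*"error"')  # placeholder pattern

def align(mm, pos):
    # Map a byte offset to the start of the next full line at or after it.
    if pos == 0:
        return 0
    nl = mm.find(b"\n", pos - 1)
    return len(mm) if nl == -1 else nl + 1

def scan_window(bounds):
    start, end = bounds
    with open(PATH, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        a, b = align(mm, start), align(mm, end)
        # For grep-style output you would expand each match to its enclosing line.
        return [m.group(0) for m in PATTERN.finditer(mm, a, b)]

def parallel_mmap_scan(workers=os.cpu_count()):
    size = os.path.getsize(PATH)
    step = max(1, size // workers)
    bounds = [(i * step, size if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with ProcessPoolExecutor(workers) as pool:
        return [m for part in pool.map(scan_window, bounds) for m in part]
```

Processes are used instead of threads because CPython's GIL would serialize the regex work; each process gets its own mapping, but the OS shares the underlying pages through the page cache.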

[–] FizzyOrange@programming.dev 2 points 15 hours ago

I think mmap is unlikely to be the best option seeing as you'd be doing large sequential reads.

[–] Bazell@lemmy.zip 1 points 2 days ago* (last edited 2 days ago)

If the task is just to read the data quickly without processing it (doing calculations, sorting, transformations, etc.), then yes, reading line by line is the fastest way. But the OP mentioned some processing of the data, which may require additional time and computing power, so it will be more efficient to load the file into RAM while splitting it into chunks, give each thread a chunk to process, and then combine the results.

In fact, my first comment suggested that you can read the file line by line, and once enough lines have been read into RAM, thread 1 can start processing them while thread 0 keeps reading new lines from the drive. Once another chunk is ready, thread 2 can start processing it, and so on.

In conclusion, it all depends on what exactly you need to do with the data. Simply transferring it from the drive to RAM has to be done by reading line by line, but the processing of the data can be split among CPU cores to maximize the speed of the computations.
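
A sketch of that pipeline in Python, using a reader that yields batches of lines to a process pool (the batch size, file path, field name, and filter are invented for illustration):

```python
import json
import os
from multiprocessing import Pool

def read_batches(path, batch_size=50_000):
    # Reader: stream lines off the disk and hand them out in RAM-sized chunks.
    batch = []
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            batch.append(line)
            if len(batch) >= batch_size:
                yield batch
                batch = []
    if batch:
        yield batch

def process_batch(lines):
    # Placeholder work: parse each record and count the ones we care about.
    hits = 0
    for line in lines:
        record = json.loads(line)
        if record.get("status") == "error":   # made-up field
            hits += 1
    return hits

if __name__ == "__main__":
    with Pool(os.cpu_count()) as pool:
        # Batches are dispatched as they are read, so reading and processing overlap.
        total = sum(pool.imap_unordered(process_batch, read_batches("data.jsonl")))
    print(total)
```

The reader keeps pulling lines ahead while workers chew on earlier batches, which is the overlap described above; shipping whole batches between processes has pickling overhead, so the batch size is worth tuning.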

[–] eager_eagle@lemmy.world 4 points 2 days ago (1 children)
  1. How many grep-like ops per file?
  2. Is it interactive or run by another process?
  3. Do you know which files ahead of time?
  4. Do you have any control over that file creation?
  5. Is the JSONL append only? Is the grep running while the file is modified?
  6. How large is very large? 100s of MB? Few GB? 100s of GB? Whether or not it fits in memory could change the approach.
  7. You're using files, plural, would parallelizing at the file level (e.g. one thread per file) be enough?
  8. How many files and how often is that executed?
[–] dhruv3006@lemmy.world 3 points 2 days ago

100s of GBs, yes.

[–] vfscanf@discuss.tchncs.de 4 points 2 days ago (1 children)

The question is, what will be your limiting factor: CPU or disk I/O? Parallel processing doesn't do much good if the workers have to wait on the disk to deliver more data. I'd start with an async architecture, where the program can do its processing while it is waiting on more data.

[–] pelya@lemmy.world 4 points 2 days ago (1 children)

One additional trick is to compress your files before writing them to disk, using some kind of fast, lightweight compression like parallel gzip (the pigz command) or lzop. When parsing them, you will have smaller disk reads but higher CPU usage, which can give a speed advantage if you have a server-class CPU with lots of cache.
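
For what it's worth, consuming the compressed files from Python is roughly a one-line change, since pigz writes standard gzip streams (path and processing are placeholders):

```python
import gzip
import json

# gzip.open decompresses on the fly: smaller disk reads, more CPU per line.
with gzip.open("data.jsonl.gz", "rt", encoding="utf-8") as f:   # placeholder path
    for line in f:
        record = json.loads(line)
        # ... process record ...
```

Decompressing a single gzip stream is essentially serial, so this combines best with splitting the data across several compressed files and parallelizing per file.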

[–] towerful@programming.dev 2 points 13 hours ago

Yeh, JSON will compress well.