Hmm. Good intent, perhaps poor implementation?
One of the unfortunate truths about UART / TTL-based communications is that error rates are higher than with protocols like Ethernet or WiFi, which have error detection built in. We have to handle errors on our own.
This style of writing code (and storing data) is therefore extremely dangerous. You must assume that any character has a chance of failure or corruption.
- Failure -- Clock-rate errors, desync, and so forth. UARTs do NOT transmit the clock, and if your clock drifts too far, you will straight up lose data.
- Errors -- UARTs live in a noisy environment. Enough of a voltage disturbance along any critical path can turn a 0 into a 1, or vice versa.
A good rule of thumb for our field is to assume that ~1 in a million bits gets flipped or dropped. That is: 125 kB of data is a million bits, so on average something in that stream gets hit -- a bit flips, or a whole byte goes missing. What if that missing byte is a \n? What if it's the "," between two fields? Well, the data on that line is screwed.
CSV still has almost the right format / pattern. What you need is a checksum. Serious protocols use CRC16 or CRC32 and calculate the probabilities of undetected errors, but a simple 16-bit checksum is good enough for beginner or hobbyist use. Very serious / data-storage code goes as far as error-correcting codes (Reed-Solomon and friends) that not only detect errors but also correct simple ones.
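For reference, a minimal sketch of that kind of simple 16-bit checksum in C -- just a byte-sum truncated to 16 bits, which is much weaker than a CRC but catches most single-byte corruption on a hobby project:

```c
#include <stdint.h>
#include <stddef.h>

/* Simple 16-bit checksum: sum of all payload bytes, truncated to 16 bits.
 * Not a CRC -- it misses some multi-bit error patterns -- but it will flag
 * the vast majority of flipped or dropped characters on a serial link. */
uint16_t checksum16(const char *data, size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++) {
        sum += (uint8_t)data[i];
    }
    return (uint16_t)(sum & 0xFFFFu);
}
```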
In any case, a simple ", checksum" at the end of each CSV line would let you detect when corruption happens, and allow the downstream code to automatically throw out CSV lines that fail the checksum test.
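A sketch of how that could look on both ends, assuming the checksum16() routine above; send_csv_line(), uart_write_line(), and csv_line_is_valid() are hypothetical names for illustration, not anything from your code:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

uint16_t checksum16(const char *data, size_t len);   /* from the sketch above */
void uart_write_line(const char *line);              /* hypothetical: your existing serial-output routine */

/* Sender side: append ",<checksum-in-hex>" before the line goes out the UART. */
void send_csv_line(const char *fields)               /* e.g. "12.50,3.30,987" */
{
    char line[128];
    uint16_t sum = checksum16(fields, strlen(fields));
    snprintf(line, sizeof(line), "%s,%04X\n", fields, sum);
    uart_write_line(line);
}

/* Receiver side: recompute the checksum over everything before the last comma
 * and throw the line away if it doesn't match what was received. */
int csv_line_is_valid(const char *line)
{
    const char *last_comma = strrchr(line, ',');
    if (last_comma == NULL)
        return 0;                                     /* no checksum field at all */
    uint16_t received = (uint16_t)strtoul(last_comma + 1, NULL, 16);
    uint16_t computed = checksum16(line, (size_t)(last_comma - line));
    return received == computed;
}
```

The nice part is that the rest of the pipeline doesn't change: a line like "12.50,3.30,987,0123" is still plain CSV, you just strip the last field after it passes the check and skip the line if it doesn't.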