Overwriting log file doesn't work #1015

Open
EtienneBasilik opened this issue Aug 8, 2024 · 3 comments

@EtienneBasilik

I am using LittleFS 2.8.1 to implement log files. The files are opened as follows:
open(path, O_CREAT | O_WRONLY, S_IWUSR);

The idea is when a write operation returns LFS_ERR_NOSPC because the disk is full, I seek to the beginning of the file as follows:
lfs_file_seek(&lfs, &file, 0, LFS_SEEK_SET);

The problem is that the next write to that file also returns LFS_ERR_NOSPC. It seems as if LittleFS is allocating a new block to perform the overwrite (even though there are none left).
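
Roughly, the sequence looks like this (sketched with the native littlefs calls; the path and record size are just placeholders):

lfs_file_t file;
int err = lfs_file_open(&lfs, &file, "log.bin", LFS_O_CREAT | LFS_O_WRONLY);
// ... error handling omitted ...

char record[64] = {0};  // one log entry (dummy payload)
lfs_ssize_t written = lfs_file_write(&lfs, &file, record, sizeof(record));
if (written == LFS_ERR_NOSPC) {
    // disk is full: rewind and try to overwrite from the beginning
    lfs_file_seek(&lfs, &file, 0, LFS_SEEK_SET);
    written = lfs_file_write(&lfs, &file, record, sizeof(record));
    // this second write also fails with LFS_ERR_NOSPC
}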

Am I doing something wrong? Or is the whole idea of rewinding files to their beginning when the memory is full not achievable with LittleFS?
I thought about an alternative where I would perform the seek when the memory is almost full (e.g. 98%), but that wouldn't work either, because from that point on the memory would always be >= 98% full and I would seek continuously...

Thanks,
Étienne

@bmcdonnell-fb

Or is the whole idea of rewinding files to their [overwriting a file from its] beginning when the memory is full not achievable with LittleFS?

I'd imagine that to be the case with any file system that implements wear-levelling.

@wdfk-prog

littlefs does not always dedicate one or more whole blocks to each file, and may leave some data in each block. The method you're describing relies on overwriting in place, but on some devices there is no way to overwrite arbitrary locations; data can only be erased a block at a time or written in small units.

@geky (Member)

geky commented Sep 20, 2024

This is a common problem for copy-on-write (COW) file systems. If metadata updates require block allocations, filling up the filesystem completely can get the filesystem "stuck" and unable to write new updates, even if that update would free up blocks.

As far as I know there isn't really a good solution to this. A COW filesystem is "healthy" if it has a decent amount of free space to allocate copies into.

Looking at other COW filesystems, they generally solve this by keeping a number of blocks reserved for "critical" operations, such as deleting files, syncing writes, etc. (#901 (comment)):

  • ZFS => reserves ~3.2% [ref]
  • btrfs => reserves ~512MiB [ref]

512 MiB is probably a bit too much for littlefs, but we can do something similar and error with LFS_ERR_NOSPC early, when only some user-configurable number of blocks remain free. Implementing this has just been low priority.


As for temporary solutions, the best thing is to just avoid filling up the filesystem completely. You can find the current usage with lfs_fs_size, though it currently requires a full filesystem traversal, which can be expensive.
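
For example, a rough check along these lines could be used before appending (the reserve size and helper name are just illustrative, not littlefs API; lfs_fs_size returns the number of blocks currently in use):

#include <stdbool.h>
#include "lfs.h"

// Stop writing while there is still a small margin of free blocks.
// RESERVE_BLOCKS is an arbitrary, application-chosen value.
#define RESERVE_BLOCKS 8

static bool log_has_room(lfs_t *lfs, const struct lfs_config *cfg) {
    lfs_ssize_t used = lfs_fs_size(lfs);  // allocated blocks (full traversal)
    if (used < 0) {
        return false;                     // treat errors as "no room"
    }
    return (lfs_size_t)used + RESERVE_BLOCKS <= cfg->block_count;
}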

It should also be noted that littlefs currently has problematic performance issues with writes to the beginning of files (#27).

A more efficient littlefs-specific scheme would be to have two files that you alternate writing to, with each file being limited to roughly ~44% (7/16) of the disk.
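
Very roughly, something like this (file names, the size cap, and error handling are placeholders, not exact recommendations):

#include "lfs.h"

// Sketch of the two-file scheme: cap each log at ~7/16 of the disk,
// and switch to the other file (truncating it) when the cap is reached.
#define LOG_LIMIT(cfg) \
    ((lfs_soff_t)((cfg)->block_count / 16 * 7) * (lfs_soff_t)(cfg)->block_size)

static int log_append(lfs_t *lfs, const struct lfs_config *cfg,
                      lfs_file_t *file, int *active,
                      const void *buf, lfs_size_t size) {
    static const char *paths[2] = {"log0.bin", "log1.bin"};

    lfs_soff_t cur = lfs_file_size(lfs, file);
    if (cur < 0) {
        return (int)cur;
    }

    if (cur + (lfs_soff_t)size > LOG_LIMIT(cfg)) {
        // current log is full: switch to the other file, discarding its old contents
        lfs_file_close(lfs, file);
        *active ^= 1;
        int err = lfs_file_open(lfs, file, paths[*active],
                                LFS_O_CREAT | LFS_O_WRONLY | LFS_O_TRUNC);
        if (err) {
            return err;
        }
    }

    lfs_ssize_t written = lfs_file_write(lfs, file, buf, size);
    return (written < 0) ? (int)written : 0;
}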

It's not great, but this is the current state of things.
