Catfs is a caching filesystem written in Rust.

Overview

Catfs allows you to have cached access to another (possibly remote) filesystem. The caching semantics are read-ahead and write-through (see Current Status). Currently it provides only a data cache; all metadata operations hit the source filesystem.

Catfs is ALPHA software. Don't use this if you value your data.

Installation

  • On Linux, install via pre-built binaries. You may also need to install fuse-utils first.

  • Or build from source which requires Cargo.

Usage

Catfs requires extended attributes (xattr) to be enabled on the filesystem where files are cached. Typically this means you need to have the user_xattr mount option turned on.
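Whether the cache filesystem accepts user xattrs can be checked up front. Here is a minimal sketch (not part of catfs) that uses Python's stdlib os.setxattr on a temporary file; the attribute name user.catfs.probe is made up for illustration:

```shell
# Probe whether the filesystem holding $TMPDIR accepts user.* xattrs.
# "user.catfs.probe" is a made-up attribute name for this illustration.
probe_file=$(mktemp)
if python3 -c "import os, sys; os.setxattr(sys.argv[1], 'user.catfs.probe', b'1')" "$probe_file" 2>/dev/null; then
  xattr_status=supported
else
  xattr_status=unsupported
fi
rm -f "$probe_file"
echo "user xattrs: $xattr_status"
```

Run it against a file on the filesystem you intend to use for <to>; if it reports unsupported, remount with user_xattr enabled.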

Catfs will expose files in <from> under <mountpoint>, and cache them to <to> as they are accessed. You can use --free to control how much free space is kept on <to>'s filesystem.
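A minimal invocation might look like the following sketch; the paths are hypothetical, and the exact argument format for --free should be checked against catfs -h:

```shell
# Hypothetical example: expose /mnt/remote at /mnt/cached, caching
# accessed files to /var/cache/catfs while keeping ~10GB free there.
catfs --free 10G /mnt/remote /var/cache/catfs /mnt/cached
```

This is an invocation sketch only; running it requires the catfs binary and a FUSE-capable environment.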

To mount catfs on startup, add this to /etc/fstab:

Benchmark

Compare using catfs to cache sshfs vs sshfs alone. The topology is laptop - 802.11n - router - 1Gbps wired - desktop. The laptop has an SSD whereas the desktop has spinning rust.

Compare running catfs over two local directories on the same filesystem vs direct access. This is not a realistic use case, but it should give you an idea of the worst-case slowdown.

Writes are twice as slow, as expected, since we are writing twice the amount of data (once to the cache and once through to the source).

To run the benchmark, do:

The docker container will need to be able to ssh to [email protected]. Typically I arrange that by mounting the ssh socket from the host.
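One common way to arrange that (an illustrative sketch, not necessarily the exact command used here; the image name is hypothetical) is to bind-mount the host's ssh-agent socket into the container:

```shell
# Sketch: forward the host's ssh-agent socket into the container so it
# can ssh to [email protected]; "catfs-bench" is a hypothetical image name.
docker run --rm \
  -v "$SSH_AUTH_SOCK:/ssh-agent" \
  -e SSH_AUTH_SOCK=/ssh-agent \
  catfs-bench
```

Running this requires a docker daemon and a benchmark image, so it is shown as a sketch only.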

License

Copyright (C) 2017 Ka-Hing Cheung

Licensed under the Apache License, Version 2.0

Current Status

Catfs is ALPHA software. Don't use this if you value your data.

The entire file is cached when it is opened for read, even if nothing is actually read.

Data is written through to the source and also cached on each write. For non-sequential writes, catfs detects the ENOTSUP emitted by filesystems like goofys and falls back to flushing the entire file on close(). Note that in the latter case even changing one byte will cause the entire file to be re-written.

References

  • Catfs is designed to work with goofys
  • FS-Cache provides caching for some in-kernel filesystems but doesn't support FUSE filesystems.
  • Other similar FUSE caching filesystems (no idea about their completeness):
    • CacheFS - written in Python; not to be confused with the in-kernel FS-Cache above
    • fuse-cache
    • gocachefs
    • mcachefs
    • pcachefs
Issues

Collection of the latest Issues

hayk-skydio

I am using goofys + catfs and observing that if I read the first N bytes of a file, it seems to cache only those bytes and not the whole file as stated in the README.

Mount command:

Example of reading the first N bytes (the same behavior happens when reading from a Python script):

The size of the large file here is 158M:

However the size of the cached file after the read is 128K:

I have tested this with multiple files and reads from 10 bytes up to 100M, resulting in the same behavior of only caching the read part. If instead the last N bytes are read, the entire file is cached.

This seems to contradict the README, which states "Entire file is cached if it's open for read, even if nothing is actually read."

My desired behavior would be to have a flag that toggles these two behaviors, and appropriate documentation. As it currently stands, one of our use cases is failing because it depended on caching the entire file on touching it.

Finally, please comment if this should be opened as a goofys issue instead.

System:

  • goofys version 0.24.0-45b8d78375af1b24604439d2e60c567654bcdf88
  • catfs 0.9.0 (also tried 0.8.0 and 0.7.0)
  • Linux 4.15.0-142-generic #146~16.04.1-Ubuntu SMP Tue Apr 13 09:27:15 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
ole-tange

phone is an sshfs mount.

I did a grep -r foo . and got:

$ catfs phone phone.cache phone.cached
thread '<unnamed>' panicked at 'index out of bounds: the len is 0 but the index is 0', /home/tange/.cargo/registry/src/github.com-1ecc6299db9ec823/catfs-0.8.0/src/catfs/error.rs:56:30
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread '<unnamed>' panicked at 'index out of bounds: the len is 0 but the index is 0', /home/tange/.cargo/registry/src/github.com-1ecc6299db9ec823/catfs-0.8.0/src/catfs/error.rs:56:30
stack backtrace:
   0:     0x564b76983fe0 - std::backtrace_rs::backtrace::libunwind::trace::h577ea05e9ca4629a
                               at /rustc/18bf6b4f01a6feaf7259ba7cdae58031af1b7b39/library/std/src/../../backtrace/src/backtrace/libunwind.rs:96
   1:     0x564b76983fe0 - std::backtrace_rs::backtrace::trace_unsynchronized::h50b9b72b84c7dd56
                               at /rustc/18bf6b4f01a6feaf7259ba7cdae58031af1b7b39/library/std/src/../../backtrace/src/backtrace/mod.rs:66
   2:     0x564b76983fe0 - std::sys_common::backtrace::_print_fmt::h6541cf9823837fac
                               at /rustc/18bf6b4f01a6feaf7259ba7cdae58031af1b7b39/library/std/src/sys_common/backtrace.rs:79
   3:     0x564b76983fe0 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::hf64fbff071026df5
                               at /rustc/18bf6b4f01a6feaf7259ba7cdae58031af1b7b39/library/std/src/sys_common/backtrace.rs:58
   4:     0x564b769a052c - core::fmt::write::h9ddafa4860d8adff
                               at /rustc/18bf6b4f01a6feaf7259ba7cdae58031af1b7b39/library/core/src/fmt/mod.rs:1082
   5:     0x564b76981057 - std::io::Write::write_fmt::h1d2ee292d2b65481
                               at /rustc/18bf6b4f01a6feaf7259ba7cdae58031af1b7b39/library/std/src/io/mod.rs:1514
   6:     0x564b76986440 - std::sys_common::backtrace::_print::ha25f9ff5080d886d
                               at /rustc/18bf6b4f01a6feaf7259ba7cdae58031af1b7b39/library/std/src/sys_common/backtrace.rs:61
   7:     0x564b76986440 - std::sys_common::backtrace::print::h213e8aa8dc5405c0
                               at /rustc/18bf6b4f01a6feaf7259ba7cdae58031af1b7b39/library/std/src/sys_common/backtrace.rs:48
   8:     0x564b76986440 - std::panicking::default_hook::{{closure}}::h6482fae49ef9d963
                               at /rustc/18bf6b4f01a6feaf7259ba7cdae58031af1b7b39/library/std/src/panicking.rs:200
   9:     0x564b7698618c - std::panicking::default_hook::he30ad7589e0970f9
                               at /rustc/18bf6b4f01a6feaf7259ba7cdae58031af1b7b39/library/std/src/panicking.rs:219
  10:     0x564b76986aa3 - std::panicking::rust_panic_with_hook::haa1ed36ada4ffb03
                               at /rustc/18bf6b4f01a6feaf7259ba7cdae58031af1b7b39/library/std/src/panicking.rs:569
  11:     0x564b76986679 - std::panicking::begin_panic_handler::{{closure}}::h7001af1bb21aeaeb
                               at /rustc/18bf6b4f01a6feaf7259ba7cdae58031af1b7b39/library/std/src/panicking.rs:476
  12:     0x564b7698446c - std::sys_common::backtrace::__rust_end_short_backtrace::h39910f557f5f2367
                               at /rustc/18bf6b4f01a6feaf7259ba7cdae58031af1b7b39/library/std/src/sys_common/backtrace.rs:153
  13:     0x564b76986639 - rust_begin_unwind
                               at /rustc/18bf6b4f01a6feaf7259ba7cdae58031af1b7b39/library/std/src/panicking.rs:475
  14:     0x564b7699f851 - core::panicking::panic_fmt::h4e2659771ebc78eb
                               at /rustc/18bf6b4f01a6feaf7259ba7cdae58031af1b7b39/library/core/src/panicking.rs:85
  15:     0x564b7699f812 - core::panicking::panic_bounds_check::h2e8c50d2fb4877c0
                               at /rustc/18bf6b4f01a6feaf7259ba7cdae58031af1b7b39/library/core/src/panicking.rs:62
  16:     0x564b768678cb - catfs::catfs::error::RError<E>::from::h1d8d9839a102ee7f
  17:     0x564b76861330 - <catfs::catfs::file::Handle as core::ops::drop::Drop>::drop::h7da3221858c64a46
  18:     0x564b7685ad6c - core::ptr::drop_in_place::h3e2659181536cf18
  19:     0x564b7685af2c - core::ptr::drop_in_place::hb43e9bcb923eae34
  20:     0x564b7685ac7d - <F as threadpool::FnBox>::call_box::h0ea576581ced0b63
  21:     0x564b768a36db - std::sys_common::backtrace::__rust_begin_short_backtrace::h0b88fcd08d2363c9
  22:     0x564b768a3bda - core::ops::function::FnOnce::call_once{{vtable.shim}}::hebe05edcb5c35f61
  23:     0x564b7698942a - <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once::h670c50864ac2cb92
                               at /rustc/18bf6b4f01a6feaf7259ba7cdae58031af1b7b39/library/alloc/src/boxed.rs:1042
  24:     0x564b7698942a - <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once::h2511952749086d81
                               at /rustc/18bf6b4f01a6feaf7259ba7cdae58031af1b7b39/library/alloc/src/boxed.rs:1042
  25:     0x564b7698942a - std::sys::unix::thread::Thread::new::thread_start::h5ad4ddffe24373a8
                               at /rustc/18bf6b4f01a6feaf7259ba7cdae58031af1b7b39/library/std/src/sys/unix/thread.rs:87
  26:     0x7f2c74d8b609 - start_thread
  27:     0x7f2c74c97293 - clone
  28:                0x0 - <unknown>
thread panicked while panicking. aborting.
Illegal instruction (core dumped)
HenrikBengtsson

From experimenting with the devel version of catfs (commit cbd7ab7), I noticed that it uses multi-threading. Looking at htop, it appears that catfs is using all(?) CPU cores by default. Is that correct?

Also, is there a way, e.g. a command-line option, to limit the number of cores that a specific catfs instance will use?

(I know zero Rust, otherwise I'd try to figure this out myself from the source code.)

hayk-skydio

When using with goofys, can we point catfs to an NFS volume, so that the cache can be shared between multiple users on multiple machines? Are there thread safety issues with multiple concurrent writes? Requesting a note in the README regarding this.

aacebedo

I tried catfs with SeaDrive because it could be a great way to selectively sync part of my drive. Unfortunately it doesn't seem to work. When I click on one of the folders, it just keeps displaying them again in a kind of loop. I don't know if it is related to catfs or SeaDrive, though.

jamsinclair

I've compiled my own catfs binary from the latest master branch.

When mounting with the following:

I get the following error:

I'm likely either using this software wrong or have some permission issues 😅. Let me know if there's something I need to fix on my end or if you need me to provide additional information 🤓

Edit: I am trying to allow other users, or if possible, users of a certain group also access the cached volume.

Thanks for both goofys and catfs; these are really cool tools and I appreciate your work on them 😊

Carlos-ZRM

Hello everybody.

I need your help, please. I have been trying to install catfs on an AWS Graviton instance but ran into the error below. I wonder if you could help me with this.

   Compiling catfs v0.8.0
error[E0308]: mismatched types
  --> /home/carlos.zrm/.cargo/registry/src/github.com-1ecc6299db9ec823/catfs-0.8.0/src/catfs/rlibc.rs:92:26
   |
92 |                 d_name: [0i8; 256], // FIXME: don't hardcode 256
   |                          ^^^ expected `u8`, found `i8`
   |
help: change the type of the numeric literal from `i8` to `u8`
   |
92 |                 d_name: [0u8; 256], // FIXME: don't hardcode 256
   |                          ^^^

error: aborting due to previous error

For more information about this error, try `rustc --explain E0308`.

error: failed to compile `catfs v0.8.0`, intermediate artifacts can be found at /tmp/cargo-installtpQEXa

Caused by: could not compile catfs

Versions

  • Rust version
rustc 1.54.0 (a178d0322 2021-07-26)
  • Linux version
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.2 LTS"
NAME="Ubuntu"
VERSION="20.04.2 LTS (Focal Fossa)"
  • CPU (AWS m6g.medium instance)
Architecture:                    aarch64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
CPU(s):                          1
On-line CPU(s) list:             0
Thread(s) per core:              1
Core(s) per socket:              1
Socket(s):                       1
NUMA node(s):                    1
Vendor ID:                       ARM
Model:                           1
Model name:                      Neoverse-N1
Stepping:                        r3p1

terabyte84

Please add support for caching random reads, not only entire files. This would be great for reading small parts of huge files (e.g. chia plots).

GeorgiaM-honestly

Hello,

I love catfs. I'm using it to provide caching for a goofys s3 "mount". It makes a huge difference!

I do, however, have some nagging issues which happen on a regular basis. I hope I have provided enough information, and I would like to help resolve this if it isn't specific to my local setup.

System:

  • Gentoo linux
  • uname -a: Linux deleted 5.12.13-gentoo-x86_64 #1 SMP Thu Jun 24 12:24:13 DELETED 2021 x86_64 Intel(R) Core(TM) i7-7820X CPU @ 3.60GHz GenuineIntel GNU/Linux
  • 16 cores
  • 32GB RAM
  • Filesystem which the cache is using: 100GB, EXT4-fs (sde1): mounted filesystem with ordered data mode. Opts: errors=remount-ro,user_xattr. Quota mode: disabled.

catfs issues:

  • When viewing the data of a directory full of .tif files, the first one requested is always 8 bytes, which appears to be the bare minimum of data for 'file' to report that it is a tif image.

  • Indeed, the first 8 bytes of the real source file are the same; however, the file is far larger than 8 bytes.

  • When viewing the data of a directory full of .tif files, first request or not, screenfuls of errors are printed by catfs. Examples:

2021-06-30 16:53:22 ERROR - "2005_Hurricane_Katrina/aug31JpegTiles_GCS_NAD83/aug31C0902830w295700n.tif" is not a valid cache file, deleting
2021-06-30 16:53:23 ERROR - "2005_Hurricane_Katrina/aug31JpegTiles_GCS_NAD83/aug31C0902830w295830n.tif" is not a valid cache file, deleting
2021-06-30 16:53:23 ERROR - "2005_Hurricane_Katrina/aug31JpegTiles_GCS_NAD83/aug31C0902830w300000n.tif" is not a valid cache file, deleting
2021-06-30 16:53:24 ERROR - "2005_Hurricane_Katrina/aug31JpegTiles_GCS_NAD83/aug31C0902830w300130n.tif" is not a valid cache file, deleting
2021-06-30 16:53:24 ERROR - "2005_Hurricane_Katrina/aug31JpegTiles_GCS_NAD83/aug31C0903000w295530n.tif" is not a valid cache file, deleting
2021-06-30 16:53:25 ERROR - "2005_Hurricane_Katrina/aug31JpegTiles_GCS_NAD83/aug31C0903000w295700n.tif" is not a valid cache file, deleting
2021-06-30 16:53:25 ERROR - "2005_Hurricane_Katrina/aug31JpegTiles_GCS_NAD83/aug31C0903000w295830n.tif" is not a valid cache file, deleting
2021-06-30 16:53:26 ERROR - "2005_Hurricane_Katrina/aug31JpegTiles_GCS_NAD83/aug31C0903000w300000n.tif" is not a valid cache file, deleting

  • During this time, which can be several minutes, the application trying to use the data hangs and usually needs to be force-killed.

  • While catfs is "deleting" files, the total size of the cache directory root grows.

  • After these few minutes, everything works, except for that first invalid 8-byte file.

  • Often, but not always, after this delay, files that were fetched before are fetched again from scratch (increased inbound network traffic is observed).

  • Gwenview STDERR on the 8-byte file:

gwenview.libtiff: Error JPEGLib "Not a JPEG file: starts with 0xd5 0x7e"
org.kde.kdegraphics.gwenview.lib: Could not generate thumbnail for file "#404"
gwenview.libtiff: Error JPEGLib "Not a JPEG file: starts with 0xd5 0x7e"
org.kde.kdegraphics.gwenview.lib: Could not generate thumbnail for file "#404"

  • Sometimes after all the other images have been downloaded and thumbnails created, I can double-click on that first file that had been 8 bytes, and it's now downloaded, and will display without error. Other times, it stays 8 bytes.

Reproduction:

  • Mount an aws s3 public bucket with goofys: goofys -o allow_other -o ro noaa-eri-pds /media/s3-noaa-eri-pds-LOCAL

  • Start catfs appropriately: catfs -o ro --free 1G /media/s3-noaa-eri-pds-LOCAL /home/myusername/.goofys-cache /media/s3-noaa-eri-pds-LOCAL-catfs

  • Explore the images with a tool such as gwenview that will show thumbnails

  • Optional: Start gwenview on the command line, outputting STDOUT and STDERR to their own files and then watch these files: gwenview 1> /tmp/gwenview-STDOUT.txt 2> /tmp/gwenview-STDERR.txt & tail -f /tmp/gwenview-STD*

ror3d

I use (or am trying to; it's not entirely set up yet) catfs together with mergerfs. My directory structure is:

  • /mnt/disk1-3 merged into /mnt/uncached_storage
  • /mnt/cache is an ssd
  • /mnt/storage is /mnt/uncached_storage with /mnt/cache as the cache.

When setting things up for another application I mistakenly created a link to a directory inside that same directory. For example:

When I then try to ls /mnt/storage/Documents, it hangs completely.

gaul

Now that #45 has merged, CI should include it. Note that this may be difficult since Homebrew is removing support for FUSE.

Cebtenzzre

I have a brand-new, read-only catfs filesystem. I am using it to accelerate a python script that reads many small files from NFS. After the script reads 1,000-20,000 files, the read system call fairly reliably returns -ECANCELED. Python throws an exception and the script terminates. catfs does not log anything when this occurs.

ECANCELED is not mentioned as a possible error in the read(2) or read(3p) man pages. Does catfs actually intend to return this error? If so, what does it mean, and what are programs expected to do about it?

A quick grep shows this line of code as a potential source of the error: https://github.com/kahing/catfs/blob/daa2b85798fa8ca38306242d51cbc39ed122e271/src/catfs/file.rs#L499

rickysarraf

Consider this scenario:

  • The backing device (a remote share accessible over sshfs) is not persistent
  • But the local cache is persistent

Under circumstances where the backing device is lost (network interruption, network change, roaming profiles, etc.), the data should still be served transparently from the cache.

hacktek (enhancement)

For small files that's fine, but for large, multi-gigabyte files it's inefficient to download the entire thing on access, because certain operations might only require a quick open/close of the file to read the first few megabytes (for instance, reading media metadata using mediainfo or ffmpeg).

grayfallstown

I am currently designing a filesystem-agnostic automated tiered storage solution in Rust, which I plan to release under the same MIT/Apache-2.0 dual license that Rust itself uses.

I could save quite a lot of time if I were able to reuse some catfs code, but to do so catfs would need to be under the same dual license.

Do you mind dual-licensing catfs as Apache-2.0/MIT? I would highly appreciate it.

Jieiku

catfs was actually one of the very first ones I tried!

I am running ubuntu 18.04.1 and here is how I installed catfs:

Here is how I tried it out, using an sshfs mount /media/remote with catfs:

I then launched the Kodi media player (which is configured to read from /media/video). Kodi just gave me a spinning progress indicator while trying to play a file from the catfs /media/video mount, and by monitoring bandwidth I could see that the file was transferring, so it seemed to be transferring the entire file before playback. I did not have this issue with mcachefs or pcachefs.

Currently I am using mcachefs because, of the ones I tried, it performed the fastest.

Is there anything else I should try? Maybe I am just missing a small option or setting to get this party going :)

Nutomic (needinfo)

I can reproduce this crash most times (but not always) with the following steps:

  1. create a large media file (in my case, a 1.5GB mp4)
  2. run cat file.mp4
  3. press Control+C (multiple times) to cancel the cat command

Of course it doesn't make much sense to do this; I was just testing stuff. But if the error occurs here, it might also occur in other cases.

SSHFS version 2.8
FUSE library version: 2.9.7
fusermount version: 2.9.7
using FUSE kernel interface version 7.19
catfs 0.8.0
cargo 0.26.0
Ubuntu 18.04
Linux 4.15.0-23-generic

ba1dr
./catfs /mnt/gsdata /tmp/catfsdata /mnt/data

It hangs in the foreground, but works (the mounted folder exists).

Version: downloaded as 0.8.0, reported 0.7.0 (see #12).

I am caching a folder from a mounted goofys S3 folder.

How do I run it in the background?

ba1dr
wget https://github.com/kahing/catfs/releases/download/v0.8.0/catfs
chmod a+x ./catfs
./catfs -h

catfs 0.7.0 Cache Anything FileSystem ...

ls -la catfs

-rwxrwxr-x 1 ubuntu ubuntu 6453456 Jan 9 06:49 catfs

This seems to be just a typo, as the 0.7.0 download has a different size.

kahing

The "simple" implementation would be to log all the dirty files and copy them back in the background, while making sure they are never evicted. Ideally, also provide a way to let the user know how much dirty data is yet to be replicated.

Versions

Find the latest versions by id

v0.9.0 - Jan 23, 2022

  • Add compatibility with 32-bit systems, #14
  • Add compatibility with macOS, #10
  • Daemonize by default
  • Pass through FUSE options, #6
  • Plug leaks in copy_splice
  • Support mounting via fstab
  • Update many dependencies

Built with rustc 1.58.1

Thanks @jonringer and @yjh0502 for opening pull requests to improve catfs!


v0.8.0 - Jan 09, 2018

Fixed the cargo package, which pulled in a new rust-fuse with an updated API. Built with rustc 1.23.0.


v0.7.0 - Jan 09, 2018

Continuation of 0.6.0: quashed more deadlocks and implemented statfs. Updated code to build with rustc 1.22.1.


v0.6.0 - Sep 18, 2017

Fixed a deadlock caused by many parallel cold reads.


v0.5.0 - Sep 09, 2017

All FUSE operations now run in a thread pool, i.e. operations on unrelated files won't block each other. Fixed a readahead race. Handle ENOSPC by running the evicter even if --free is not used.


v0.4.0 - Aug 30, 2017

Fixed a compatibility issue with goofys. Pass through ftruncate calls instead of converting them to posix_fallocate when the file is extended.


v0.3.0 - Aug 27, 2017

Faster read-through and write-through, and a couple of small fixes.


v0.2.0 - Aug 17, 2017

Eviction support, configurable via --free. Better cache invalidation. Rename support. Many misc fixes.


v0.1 - Jul 21, 2017

First ever release of catfs.


Information - Updated May 19, 2022

Stars: 560
Forks: 48
Issues: 29

Repositories & Extras

  • Rust bindings for libinjection
  • Rust bindings for the C++ API of PyTorch (libtorch)
  • Almost-complete Rust bindings for leveldb
  • Rust FUSE - Filesystem in Userspace
  • Cross-platform filesystem notification library for Rust
  • A tool to analyze file system usage, written in Rust (coloured output according to the LS_COLORS environment variable)
  • FUSE (Filesystem in Userspace) for Rust (FUSE-Rust)
  • Rust library to manipulate file system access control lists (ACL) on macOS, Linux, and FreeBSD
  • A file abstraction system for Rust (Chicon - a simple, uniform and universal API for interacting with any filesystem)
  • Raw Filesystem API for Rust (RFSAPI - requests are made by setting a GET request's X-Raw-Filesystem-API header to 1)