maciejhirsz/logos

Logos

Create ridiculously fast Lexers.

Logos has two goals:

  • To make it easy to create a Lexer, so you can focus on more complex problems.
  • To make the generated Lexer faster than anything you'd write by hand.

To achieve those, Logos:

  • Combines all token definitions into a single deterministic state machine.
  • Optimizes branches into lookup tables or jump tables.
  • Prevents backtracking inside token definitions.
  • Unwinds loops, and batches reads to minimize bounds checking.
  • Does all of that heavy lifting at compile time.

Example

use logos::Logos;

#[derive(Logos, Debug, PartialEq)]
enum Token {
    // Tokens can be literal strings, of any length.
    #[token("fast")]
    Fast,

    #[token(".")]
    Period,

    // Or regular expressions.
    #[regex("[a-zA-Z]+")]
    Text,

    // Logos requires one token variant to handle errors;
    // it can be named anything you wish.
    #[error]
    // We can also use this variant to define whitespace,
    // or any other matches we wish to skip.
    #[regex(r"[ \t\n\f]+", logos::skip)]
    Error,
}

fn main() {
    let mut lex = Token::lexer("Create ridiculously fast Lexers.");

    assert_eq!(lex.next(), Some(Token::Text));
    assert_eq!(lex.span(), 0..6);
    assert_eq!(lex.slice(), "Create");

    assert_eq!(lex.next(), Some(Token::Text));
    assert_eq!(lex.span(), 7..19);
    assert_eq!(lex.slice(), "ridiculously");

    assert_eq!(lex.next(), Some(Token::Fast));
    assert_eq!(lex.span(), 20..24);
    assert_eq!(lex.slice(), "fast");

    assert_eq!(lex.next(), Some(Token::Text));
    assert_eq!(lex.span(), 25..31);
    assert_eq!(lex.slice(), "Lexers");

    assert_eq!(lex.next(), Some(Token::Period));
    assert_eq!(lex.span(), 31..32);
    assert_eq!(lex.slice(), ".");

    assert_eq!(lex.next(), None);
}
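
Since the derived Lexer implements Iterator, tokens can also be consumed with ordinary iterator adaptors. A minimal sketch using the spanned method, reusing the Token enum above:

fn print_tokens(source: &str) {
    // `spanned()` consumes the lexer and yields `(Token, Span)` pairs.
    for (token, span) in Token::lexer(source).spanned() {
        println!("{:?} at {:?}", token, span);
    }
}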

Callbacks

Logos can also call arbitrary functions whenever a pattern is matched, which can be used to put data into a variant:

use logos::{Logos, Lexer};

// Note: callbacks can return `Option` or `Result`
fn kilo(lex: &mut Lexer<Token>) -> Option<u64> {
    let slice = lex.slice();
    let n: u64 = slice[..slice.len() - 1].parse().ok()?; // skip 'k'
    Some(n * 1_000)
}

fn mega(lex: &mut Lexer<Token>) -> Option<u64> {
    let slice = lex.slice();
    let n: u64 = slice[..slice.len() - 1].parse().ok()?; // skip 'M'
    Some(n * 1_000_000)
}

#[derive(Logos, Debug, PartialEq)]
enum Token {
    #[error]
    #[regex(r"[ \t\n\f]+", logos::skip)]
    Error,

    // Callbacks can use closure syntax, or refer
    // to a function defined elsewhere.
    //
    // Each pattern can have its own callback.
    #[regex("[0-9]+", |lex| lex.slice().parse())]
    #[regex("[0-9]+k", kilo)]
    #[regex("[0-9]+M", mega)]
    Number(u64),
}

fn main() {
    let mut lex = Token::lexer("5 42k 75M");

    assert_eq!(lex.next(), Some(Token::Number(5)));
    assert_eq!(lex.slice(), "5");

    assert_eq!(lex.next(), Some(Token::Number(42_000)));
    assert_eq!(lex.slice(), "42k");

    assert_eq!(lex.next(), Some(Token::Number(75_000_000)));
    assert_eq!(lex.slice(), "75M");

    assert_eq!(lex.next(), None);
}

Logos can handle callbacks with the following return types:

Return type      Produces
()               Token::Unit
bool             Token::Unit or <Token as Logos>::ERROR
Result<(), _>    Token::Unit or <Token as Logos>::ERROR
T                Token::Value(T)
Option<T>        Token::Value(T) or <Token as Logos>::ERROR
Result<T, _>     Token::Value(T) or <Token as Logos>::ERROR
Skip             skips matched input
Filter<T>        Token::Value(T) or skips matched input
FilterResult<T>  Token::Value(T), <Token as Logos>::ERROR, or skips matched input

Callbacks can also be used to perform more specialized lexing in places where regular expressions are too limiting. For specifics, look at Lexer::remainder and Lexer::bump.
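
For example, here is a hedged sketch of lexing non-nesting C-style block comments by hand (the BlockComment variant and its callback are illustrative, not part of the Logos API):

use logos::Logos;

#[derive(Logos, Debug, PartialEq)]
enum Token {
    #[error]
    #[regex(r"[ \t\n\f]+", logos::skip)]
    Error,

    // On `/*`, search the rest of the input for the closing `*/` and
    // advance the lexer past it with `bump`. Returning `None` (no
    // closing `*/` found) produces an `Error` token.
    #[token("/*", |lex| {
        let len = lex.remainder().find("*/")?;
        lex.bump(len + 2); // also consume the `*/` itself
        Some(())
    })]
    BlockComment,
}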

Token disambiguation

The rule of thumb is:

  • Longer beats shorter.
  • Specific beats generic.

If any two definitions could match the same input, like fast and [a-zA-Z]+ in the example above, it's the longer and more specific definition of Token::Fast that will be the result.

This is done by comparing a numeric priority attached to each definition. Every consecutive, non-repeating single byte adds 2 to the priority, while every range or regex class adds 1. Loops and optional blocks are ignored, while alternations count the shortest alternative:

  • [a-zA-Z]+ has a priority of 1 (lowest possible), because at minimum it can match a single byte to a class.
  • foobar has a priority of 12.
  • (foo|hello)(bar)? has a priority of 6, foo being its shortest possible match.

If two definitions compute to the same priority and can match the same input, Logos will fail to compile, point out the problematic definitions, and ask you to specify a manual priority for either of them.

For example: [abc]+ and [cde]+ can both match sequences of c, and both have a priority of 1. Changing the first definition to #[regex("[abc]+", priority = 2)] will allow the tokens to be disambiguated again; in this case, all sequences of c will match [abc]+.
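
A minimal sketch of that fix (the variant names are illustrative):

use logos::Logos;

#[derive(Logos, Debug, PartialEq)]
enum Token {
    #[error]
    Error,

    // Both patterns can match sequences of `c`; the explicit priority
    // breaks the tie in favor of `Abc`.
    #[regex("[abc]+", priority = 2)]
    Abc,

    #[regex("[cde]+")]
    Cde,
}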

How fast?

Ridiculously fast!

test identifiers                       ... bench:         647 ns/iter (+/- 27) = 1204 MB/s
test keywords_operators_and_punctators ... bench:       2,054 ns/iter (+/- 78) = 1037 MB/s
test strings                           ... bench:         553 ns/iter (+/- 34) = 1575 MB/s

Acknowledgements

  • Pedrors for the Logos logo.

Thank you

Logos is very much a labor of love. If you find it useful, consider getting me some coffee. ☕

License

This code is distributed under the terms of both the MIT license and the Apache License (Version 2.0), choose whatever works for you.

See LICENSE-APACHE and LICENSE-MIT for details.

Issues

Collection of the latest Issues

lunabunn

The above returns a single Error when fed the input "aba" unless Bar is commented out, in which case a Foo token is returned as expected.

Some observations:

  • Both * and + have this issue
  • I was not able to reproduce the issue with a "suffix" (the trailing a in the example above) that does not overlap with the "body" (the [ab]* in the example above)
  • If the regex for Bar is changed to a+, the lexer will continue to emit an Error, but if it is changed to [abcd]+, the lexer will still emit a Foo
HackerWithoutACause

I'm trying to implement a Python-like parser that uses indentation for scoping, by matching a regex against \n[ \t]* and then passing it to a special callback:

However, doing so would require the callback to return multiple tokens in some cases. How would I go about doing this?

stepancheg

Consider this example:

Parse errors are converted to Token::Error, which contains no information. There's no way (to my knowledge) to propagate a parse_int or parse_string_literal error to the caller.

farisshajahan

The program below detects EvenNumber and OddNumber from CLI arguments. It works correctly, as seen in the run below.

However, if I remove the EvenNumber variant of the enum, I expect it to detect OddNumbers as before and return Error for EvenNumbers. It works as expected for small numbers, but for larger numbers I see Error for all the tokens.

pczarn

At every position I need to match the longest token out of a given set. I have a parser that provides a set of acceptable tokens, so if an unexpected token is longer than an expected one, I want to ignore it and consume the shorter substring expected by the parser.

Further, I need to get a list of ALL tokens that match at the longest span, regardless of their priority, not just one of highest priority.

Do you think I should modify Logos myself to provide these features? Any ideas on how to proceed?

evbo

I see the documentation covers how to use priority perfectly well, which is why I'm confused about how I could still be getting this wrong.

I have a number regex (built from subregexes):

and then I also have this regex for a "range", which is just two comma separated numbers:

Parsing the following fails if I have that Range defined while also parsing the Row defined below: "1,2,3"

It seems to think "1,2" is wrong, since it's expecting a Number. If I delete Range, this issue goes away entirely.

The parser code is simple Lalrpop straight from their hello world example:

Now, it turns out I only ever use Range when the text is preceded by a colon :, so I was able to avoid this issue by simply redefining Range as:

But what's interesting is that the only explanation I can think of for why this works is that Range was colliding with Number. So why did setting the priority on either Number or Range to 1 or 100 not make any difference? Is it faster/better to uniquify regexes the way I did in this case?

Jeremy-Stafford

Currently, logos does not allow nesting Skip and Filter in Option or Result.

This makes the following pattern very difficult (impossible?) to express:

  • Match a "start" pattern.
  • Compute the number of characters to skip, raising an error if this is not possible.
  • Match an "end" pattern.
  • Skip the whole token.

This pattern appears when lexing C-style block comments (/* ... */). For languages which forbid nesting (including C itself), this can be done using a regex (read: an unreadable mess of stars and slashes). For languages which allow nesting, a regex cannot express it at all.

The current callback-result system is implemented as a trait, so it would be impossible to allow Option<Skip> (that would introduce overlapping implementations). This feature would therefore require a type defined as

kaylynn234 (bug)

More specifically, the derive macro panics when there is an invalid alternation present within a subpattern's repetition, and that subpattern is later repeated.

This is the minimum I was able to cause a panic with:

Notably, removing the repetition in Subpattern (like below) appears to cause compilation to hang forever:

I was able to avoid a panic by removing the alternation:

I've tested with both rustc 1.57.0 (f1edd0429 2021-11-29) (latest stable at time of writing) and rustc 1.58.0-nightly (c9c4b5d72 2021-11-17) (ol' reliable) and the issue appears to remain. I'm unsure if it's a compiler bug, but given the above I assume it's Logos-related. Both rustc versions are of the x86_64-pc-windows-gnu variety.

dtel

Started using logos and getting the following error for the simple scenario below, based on the docs. Let me know what I'm missing.

Using rust 1.56.1, 2021 edition.

use logos::{Lexer, Logos};

#[derive(Debug, Copy, Clone, PartialEq, Logos)]
pub enum Token {
    #[regex(r#""[^"]*""#, string)]
    String,

    #[error]
    Error,
}

fn string<'a>(lex: &'a mut Lexer<Token>) -> Option<&'a str> {
    let slice = lex.slice();
    Some(&slice[1..slice.len() - 1])
}

the trait bound `std::option::Option<&str>: logos::internal::CallbackResult<'_, (), lexer::Token>` is not satisfied
the following implementations were found:
  <std::option::Option<P> as logos::internal::CallbackResult<'s, P, T>>

jeertmans

Hello,

First, I really like your project, and I'd like to thank you for that! Second, I am quite new to this project, so I hope this question is not a duplicate (if so, I will delete it).

I was wondering if it is possible to use Logos (and its Lexer) on "lazy" file readers? My goal is to skim through a file only once and copy the bits of content that match specific patterns, discarding the rest. The file would then, hopefully, only be read block by block (the size of each block depending on how many characters Logos must read before encountering the next Token).

Thanks in advance

dhcn

.cargo/registry/src/github.com-1ecc6299db9ec823/logos-derive-0.7.7/src/lib.rs:55:20
   |
55 | extras.insert(util::ident(&ext), |_| panic!("Only one #[extras] attribute can be declared."));
   |        ^^^^^^ ----------------- ----------------------------------------------------------- supplied 2 arguments
   |        |
   |        expected 1 argument
   |
note: associated function defined here

.cargo/registry/src/github.com-1ecc6299db9ec823/logos-derive-0.7.7/src/lib.rs:89:23
   |
89 | error.insert(variant, |_| panic!("Only one #[error] variant can be declared."));
   |       ^^^^^^ ------- -------------------------------------------------------- supplied 2 arguments
   |       |
   |       expected 1 argument
   |
note: associated function defined here

.cargo/registry/src/github.com-1ecc6299db9ec823/logos-derive-0.7.7/src/lib.rs:93:21
   |
93 | end.insert(variant, |_| panic!("Only one #[end] variant can be declared."));
   |     ^^^^^^ ------- ------------------------------------------------------ supplied 2 arguments
   |     |
   |     expected 1 argument
   |
note: associated function defined here

arnaudgolfouse

Hello! Running clippy on the code triggered clippy::derive_ord_xor_partial_ord in logos-derive/src/graph/range.rs:

Indeed, Ord's documentation explicitly states that Ord and PartialOrd must agree, which is not the case here... Is this intentional, perhaps done for a particular optimization?

Kixiron

I'm currently using ropey in my LSP, where logos is the backing lexer, and I'd like to be able to use a Rope natively within my generated lexer. The Source trait looks perfect, except there's no way I can find to use arbitrary implementors of it with logos::Lexer.

jawadcode

Firstly, here is the output of cargo tree:

And here is code that demonstrates the issue:

As you can see, the same regex and the same test string are being used, but the first assert passes and the second panics. I'm not sure if I'm missing something or if this is a bug.

rogpeppe (handbook)

I might have missed it, but I've been looking for proper reference docs for what can be put in the derive macro directives and all I've found is https://docs.rs/logos/0.12.0/logos/derive.Logos.html which doesn't have any docs at all, and the "by example" docs at the top level which don't provide enough detail.

As an example of a few questions I have that don't seem to be answered by the existing docs:

  • what flavour of regexps are supported? (PCRE? POSIX? backtracking? capture groups? etc)
  • how does the "skip" thing work - in the examples, a skip rule is attached to an Error variant, but does it have to be that way? Could it be attached to any variant without the behaviour changing?
  • what do the #[end], #[extras] and #[logos] directives do?
  • if there are several directives attached to a single enum variant, is it possible for the callback to tell which one was selected?

As the least formal part of the Logos API, the derive macros could really use some more comprehensive documentation. Perhaps it exists already - if so, it would be good to link to it from a prominent place :)

weijunji (bug)

Why does this fail to match 0b0000_1111?

It works with #[regex(r"0b[01_]*", bin_int)].

rdearman

I'm expecting the regular expression for string to return the values inside the parentheses (), similar to Perl's $1. So for a string of:

I expect

to return

without the : at the end. But it returns the entire string.

Krantz-XRF

The current logos-derive uses derive procedural macros and attribute annotations on an enum as a front-end. I know this design guarantees that the enum declaration is presented as-is, but I personally find regular expressions written as string literals hard to read and reason about:

  • No whitespace for visual grouping
  • Unbalanced line width (long for regex attributes and short for token names)
  • Error reports (if any) cannot span over part of a string literal

I would prefer a DSL processed by a function-like procedural macro, like the following (the syntax could be revised, of course):

Here $Digit and $Letter should resolve to the corresponding Unicode character properties or general categories. Entries without pub serve as sub-patterns, while entries with pub are exposed as enum variants of the token type.

I am currently working on this, with the new front-end targeting Mir. However, I don't know whether or not you would want to merge it. If you decide not to, would you consider extracting the back-end code-generation logic in logos-derive into a non-procedural-macro crate (so that it becomes a publicly available API for reuse)?

michalmuskala (bug)

Given the following lexer:

As expected, the regex doesn't match, since we require at least one digit after e. However, the lexer matches the string even though it seems to me that it shouldn't.

Versions

Find the latest versions by id

v0.12.1 - Jun 08, 2022

This is a long overdue patch release that includes a number of fixes from the community:

  • Subpattern definitions now accept nesting other subpatterns within them (#237 by @Jezza)
  • Callbacks can now use a new FilterResult type where emitting an error or skipping would be necessary (#236 by @Jeremy-Stafford)
  • Chunk is now implemented for all array sizes via const generics, which fixes issues with long regex patterns (#221 by @icewind1991)

v0.12.0 - Feb 01, 2021

  • Added case insensitive flags via ignore(case) and ignore(ascii_case) (by @gymore-io #198).
  • Removed the lookup! macro.
  • Updated dependency in logos-derive that had a security alert on it.

v0.11.4 - Apr 27, 2020

  • Expanded the maximum amount of bytes Logos can read in a single chunk from 16 to 32. This fixes compile errors for longer literal string matches (by @Luro02, #141, #142).

v0.11.3 - Apr 26, 2020

  • Fixed a bug that caused the derive macro to panic when attempting to merge regex definitions with overlapping asymmetric loops (#132).
  • Fixed a bug where a lexing error at first byte of a token could slice into a char boundary (#138).

v0.11.2 - Apr 25, 2020

  • Adds the ability to write #[logos(subpattern name = r"regex")] on the token enum, which is then used as a "subroutine" in regex rule definitions with the syntax (?&name) (by @CAD97, #131). Example:
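
A minimal sketch of the subpattern syntax (the xdigit subpattern and the variants here are illustrative):

use logos::Logos;

#[derive(Logos, Debug, PartialEq)]
#[logos(subpattern xdigit = r"[0-9a-fA-F]")]
enum Token {
    #[error]
    Error,

    // `(?&xdigit)` expands to the subpattern defined above.
    #[regex(r"0x(?&xdigit)+")]
    HexNumber,
}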

  • Fixed an issue where compilation would fail when a looping group began with a loop: (f*oo)*

v0.11.1 - Apr 23, 2020

Manual token disambiguation

Previously, when two definitions could match the same input and were assigned the same computed priority, Logos would make an arbitrary choice about which token to produce. This behavior could yield unexpected results, so it is now considered a compile error and reported as such.

Example

Consider two regexes, one matching [abc]+ while the other matches [cde]+. Both of those would have a computed priority of 1, and both could match any sequence of c.

Logos will now return a compile error with hints for a solution in this case:

Setting priority = 2 on either token will override the computed priority, allowing Logos to properly disambiguate the tokens.

Generic type parameters

Deriving Logos on an enum with type parameters like so:

Will now produce the following errors:

It's now possible to define concrete types for the generic type parameters:

This will derive the Logos trait for Token<&str, u64>. All reference types (like &str here) will automatically use the lifetime of the source.
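
A hedged sketch of how this looks (the variants are illustrative; #[logos(type ...)] supplies the concrete types):

use logos::Logos;

#[derive(Logos, Debug, PartialEq)]
#[logos(type S = &str, type N = u64)]
enum Token<S, N> {
    #[error]
    Error,

    // `S` is substituted with `&str`, borrowing from the source.
    #[regex("[a-z]+")]
    Word(S),

    // `N` is substituted with `u64`.
    #[regex("[0-9]+", |lex| lex.slice().parse())]
    Number(N),
}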

Other changes:

  • It's now possible to define callbacks using callback = ... syntax inside #[regex(...)] and #[token(...)] attributes. This allows for callback and priority to be placed arbitrarily within the attribute. All of these are now legal and equivalent:
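
A hedged sketch of the accepted spellings (the tokens and callbacks here are illustrative; different tokens are used so the example compiles, but each attribute form spells the same kind of definition):

use logos::Logos;

#[derive(Logos, Debug, PartialEq)]
enum Token {
    #[error]
    Error,

    // Positional callback with a named priority:
    #[token("fn", |_| (), priority = 3)]
    Fn,

    // Named callback:
    #[token("let", callback = |_| ())]
    Let,

    // Named fields may appear in either order:
    #[token("mut", priority = 3, callback = |_| ())]
    Mut,
}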

  • Priority is now computed from an intermediate representation of the regex, before it is parsed into the state machine graph.

v0.11.0 - Apr 18, 2020

Logos logo

New face, overhauled API

Logos has a new cute logo :nerd_face:.

There are a number of breaking changes in this release, aimed at reducing the API surface and being more idiomatic, while adding some long-awaited features, like the ability to put slices of the source or arbitrary values returned from callbacks directly into a token.

Most of the changes related to the #[derive] macro will trigger a compile error message with an explanation to aid migration from 0.10. Those will be removed in the future.

API changes

  • :hammer: [breaking] LOGOS NO LONGER HANDLES WHITESPACE BY DEFAULT. The #[trivia] attribute has been removed. Whitespace handling is easily added by defining #[regex(r"[ \n\t\f]+", logos::skip)] on any token enum variant. Putting it alongside #[error] is recommended.
  • :hammer: [breaking] Lexer no longer has the advance method, or publicly visible token field. Instead Lexer now implements the Iterator trait and has a next method that returns an Option of a token.
  • :hammer: [breaking] #[end] attribute was removed since it became obsolete.
  • :hammer: [breaking] #[regex = "..."] and #[token = "..."] definitions are no longer valid syntax and will error when used in this way. Those need to be transformed to #[regex("...")] or #[token("...")] respectively.
  • :hammer: [breaking] Callbacks are now defined as a second parameter to either #[regex] or #[token]. Those can be either paths to functions defined elsewhere (#[token("...", my_callback)]), or inlined directly into the attribute using closure syntax (#[token("...", |lex| { ... })]).
  • :hammer: [breaking] The Extras trait has been removed. You can still associate a custom struct with the Lexer using #[logos(extras = MyExtras)] for any type that implements Default, and manipulate it using callbacks (see the sketch after this list).
  • :hammer: [breaking] #[callback] attribute was removed since it was just polluting attribute namespace while offering no advantages over attaching callbacks to #[regex] or #[token].
  • :hammer: [breaking] Lexer::range has been renamed to span. span and slice methods continue to return information for the most recently returned token.
  • :sparkles: [new] Lexer has a new spanned method that takes ownership of the Lexer and returns an iterator over (Token, Span) tuples.
  • :sparkles: [new] Callbacks can return arbitrary values (or Options/Results of values) that can be put into token enum variants. Currently only a single value in a tuple-like variant is supported (Token::Variant(T)).
  • :sparkles: [new] Logos will automatically populate Token::Variant(&str) with a matching str slice if no callback is provided.
  • :sparkles: [new] Callback return values can be used to skip matches using the Skip type. It's also possible to dynamically skip matches using the Filter type.
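
A hedged sketch of the extras mechanism mentioned above (the LineCount struct and the variants are illustrative):

use logos::{Logos, Skip};

// Any type implementing Default can be attached to the Lexer as extras.
#[derive(Default)]
struct LineCount {
    line: usize,
}

#[derive(Logos, Debug, PartialEq)]
#[logos(extras = LineCount)]
enum Token {
    #[error]
    // Count newlines in the extras while skipping them as whitespace.
    #[regex(r"\n", |lex| { lex.extras.line += 1; Skip })]
    #[regex(r"[ \t\f]+", logos::skip)]
    Error,

    #[regex("[a-z]+")]
    Word,
}

After lexing, lex.extras.line holds the number of newlines consumed so far.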

Internals

  • :rocket: Generated code is now more likely to produce optimized jump tables for complex branches.
  • :rocket: Logos can now stack multiple boolean lookup tables into a single table, reducing the memory cost of using the tables by a factor of 8, and improving cache locality.
  • :rocket: For narrow range boolean branches Logos can now produce a lookup table packed into a single u64.

v0.10.0 - Apr 02, 2020

Brand new #[derive]

The derive macro at heart of Logos has been rewritten virtually from scratch between 0.10.0-rc2 and now. The state machine is now built from a graph that permits arbitrary jumps between nodes, instead of a tree that needs to build up permutations of every possible path leading to a token match. This has fixed a whole number of old outstanding issues (#87, #81, #80, #79, #78, #70).

The new codebase is nearly 1k LOC shorter, compiles faster, outputs smaller code, is more cache friendly, and is already proving itself easier to debug and optimize. This release also gets rid of a number of hacks that were previously introduced to manage token disambiguation and loops, and which were a huge source of bugs. All nodes in the new state machine are indexed, and loops are described as circular state jumps. Jumps between states are realized as tail-recursive calls in the generated code, which in most cases should have the performance profile of a goto in C.

No more CPU melting

A case that was breaking the old Logos was a simple \w+ regex. \w is a particularly devious character class, since it expands to cover codepoints for every Unicode alphabet, which were then further expanded into the corresponding byte sequences.

The old Logos produced a staggering 228k lines of Rust code just for that single pattern, which then usually failed to compile after rustc consumed all available memory. That's because a single \w produced a tree with 303 leaf nodes, which then had to be duplicated for every leaf to handle the loop, in the end producing a monstrous tree with 91809 leaf nodes (not counting any branches leading to those).

By contrast, in the new version a single \w produces a graph with 278 nodes total, while \w+ produces a graph with 279 nodes. That is to say, we went from n ** 2 to n + 1, which is a universe of difference.

Token disambiguation

This release also improves the previously flaky token disambiguation, which is now properly defined and documented. It also leaves us with an option to provide different strategies in the future.

Future

I think the next minor/major release will do a clean-up of the API surface. Since Logos currently pollutes the attribute namespace quite a lot, using pretty generic labels like token or callback, it might be wise to wrap most if not all of them into #[logos(...)]. This should help to make the crate more future-proof, and play nicer with other custom derives that you might want to put on your token enums.

I'm very excited to have this release out, and I have a whiteboard full of ideas for what can be tweaked to improve performance, before even touching SIMD (which is on the horizon somewhere).

API changes:

  • Derive macro now allows discriminant values to be set, as long as they are greater than 0, and smaller than the total number of variants in the enum.
  • Removed NulTermStr support. In turn, a lot of effort has been put to increase the performance using regular &str sources, which more than makes up for it.
  • Logos no longer considers nul byte (0x00) to terminate input.
  • It's now possible to define binary, non-Unicode token definitions (both as plain literals and regex). Doing so will force the lexer to use non-Unicode sources such as &[u8], and will fail to compile with &str.
  • It's now possible to change the default trivia (whitespace), including the option to completely disable it (more documentation will follow).
  • Removed the Split trait.
  • Removed the Lexicon type alias. Logos::lexicon has been replaced by Logos::lex, which then implements all logic internally.

0.9.7 - Dec 15, 2018

  • Fixes a stack overflow introduced in 0.9.4 (#58).

0.9.6 - Dec 15, 2018

  • Fixed a bug in code generation where a valid token definition would sometimes produce an error variant instead of its token variant (#59).

0.9.5 - Dec 11, 2018

  • Fixed a bug found out at #55.

0.9.4 - Dec 11, 2018

  • Added a fixed-width multi-byte reading option that fixes some corner cases where backtracking would otherwise be needed. For example, having "." and "..." token definitions matching against ".." should produce two single-dot tokens.
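
A sketch of that corner case using the current attribute syntax (this changelog entry predates the 0.11 API):

use logos::Logos;

#[derive(Logos, Debug, PartialEq)]
enum Token {
    #[error]
    Error,

    #[token(".")]
    Dot,

    #[token("...")]
    Ellipsis,
}

fn main() {
    // ".." is not a token of its own, so it lexes as two single dots
    // rather than requiring backtracking.
    let mut lex = Token::lexer("..");

    assert_eq!(lex.next(), Some(Token::Dot));
    assert_eq!(lex.next(), Some(Token::Dot));
    assert_eq!(lex.next(), None);
}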

0.9.3 - Dec 10, 2018

  • Fixed a bug detected in the tests added in #49. The conditions for creating prefix-intersecting branches have been loosened, so that adding new token definitions should not break things, while still preventing infinite recursion if a branch is followed by a Repeat fork.

0.9.2 - Dec 10, 2018

  • Fixes to the more complex case in #40 (test case added in #47).

0.9.1 - Dec 09, 2018

  • Properly fixes issue #40.

0.9 - Dec 09, 2018

  • Logos is now using Rust 2018, thus requires rustc 1.31 or newer.
  • Fixed issues around branches in nested repeats: #40 #42.
  • Updated toolshed dependency to 0.8.

Information - Updated Aug 01, 2022

Stars: 1.5K
Forks: 68
Issues: 73
