Rewriting the limits IO to use a tokenizer rather than regexes. (#136)
Summary: These regexes are getting out of hand; I think it would be simpler to just use a tokenizer, especially since we have many common "building blocks" like parsing vectors. This new version creates a Tokenizer class that simply feeds back tokens, and you can build your parser on top of it. The advantages that I see are (see the sketch below):

1. Much simpler to extend, rather than trying to reverse-engineer regexes that match any given number format.
2. More useful error messages that tell you the exact spot where parsing failed.

Reviewed By: jeongseok-meta

Differential Revision: D66404695
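To make the design concrete, here is a minimal sketch of the tokenizer-on-top-of-a-parser idea the summary describes. It is not momentum's actual API: `Tokenizer`, `Token`, `parseVector`, and every signature here are hypothetical illustrations of the two claimed advantages, composable parsing building blocks and error messages that carry an exact source position.

```cpp
// Hypothetical sketch, not the real momentum Tokenizer.
#include <cctype>
#include <cstddef>
#include <stdexcept>
#include <string>
#include <vector>

struct Token {
  std::string text;
  size_t line = 1;
  size_t column = 1;
};

class Tokenizer {
 public:
  explicit Tokenizer(std::string input) : input_(std::move(input)) {}

  // Returns the next whitespace-delimited token, or an empty token at end of input.
  // Each token records where it started so errors can point at the exact spot.
  Token next() {
    skipWhitespace();
    Token tok{"", line_, column_};
    while (pos_ < input_.size() &&
           !std::isspace(static_cast<unsigned char>(input_[pos_]))) {
      tok.text += input_[pos_];
      advance();
    }
    return tok;
  }

  // Builds an error message from the token's recorded position.
  [[noreturn]] void fail(const Token& tok, const std::string& expected) const {
    throw std::runtime_error(
        "Parse error at line " + std::to_string(tok.line) + ", column " +
        std::to_string(tok.column) + ": expected " + expected + ", got '" +
        tok.text + "'");
  }

 private:
  void skipWhitespace() {
    while (pos_ < input_.size() &&
           std::isspace(static_cast<unsigned char>(input_[pos_]))) {
      advance();
    }
  }

  // Advances one character while tracking line/column for diagnostics.
  void advance() {
    if (input_[pos_] == '\n') {
      ++line_;
      column_ = 1;
    } else {
      ++column_;
    }
    ++pos_;
  }

  std::string input_;
  size_t pos_ = 0;
  size_t line_ = 1;
  size_t column_ = 1;
};

// Example "building block" layered on the tokenizer: parse n doubles in a row.
std::vector<double> parseVector(Tokenizer& tokenizer, size_t n) {
  std::vector<double> result;
  result.reserve(n);
  for (size_t i = 0; i < n; ++i) {
    Token tok = tokenizer.next();
    try {
      size_t consumed = 0;
      double value = std::stod(tok.text, &consumed);
      if (consumed != tok.text.size()) {
        tokenizer.fail(tok, "a number");
      }
      result.push_back(value);
    } catch (const std::invalid_argument&) {
      tokenizer.fail(tok, "a number");
    }
  }
  return result;
}
```

Because every token carries the line and column where it began, any parser built on `next()` gets precise diagnostics for free, and helpers like `parseVector` can be reused wherever the format repeats, rather than duplicating that logic inside ever-growing regexes.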