Parser Combinators Beat Regexes
96 comments
· April 9, 2025 · yen223
egonschiele
Hey! I love parser combinators and wish they were more mainstream. That's why I wrote Tarsec, which is Parsec for TypeScript: https://github.com/egonSchiele/tarsec
I have a short and a long intro, both illustrated: https://github.com/egonSchiele/tarsec/blob/main/tutorials/5-... and https://github.com/egonSchiele/tarsec/blob/main/tutorials/th...
It also has a neat debug mode, and a way to benchmark performance. I've loved using it, hopefully someone reading will use it too! Use this as a chance to learn something cool (parser combinators)!
furyofantares
These are great.
I accidentally invented parser combinators for a project once. I was quite fond of the approach but it ended up being pretty nearly unusable for anyone else on the team. This was problematic because the project was such that we were always adding new things to be parsed. The whole reason I took the approach was that I wanted people to be able to freely add parsers as new requirements came in.
It being my first shot at it, it was for sure not as usable as it could be. But looking at your stuff, which is delightful, I am filled with doubt that my goal was even possible. Your stuff looks about as clean as possible, and yet seems likely to be almost as impenetrable to someone not versed in parsing.
That said, I am gonna look for a chance to use this in appropriate side projects.
miningape
Yeah parsers can be tricky "culturally" - as you've found there are many devs who haven't written a recursive descent parser and therefore they lack any meaningful context to understand what's happening under the hood.
It's a similar scenario to if you made your own regex and none of the devs on your team knows what it is. It just takes too long to teach them that on the job since there are very few quality resources you can point them towards - so you're stuck teaching everything.
On the other hand, if you have a team full of experienced PL developers doing this could significantly improve their productivity.
egonschiele
The idea of functions returning functions makes a lot of people's heads spin! I've struggled to get people using this library too, but I use it in personal work and it is an absolute joy. One of my favorite things I've made.
yen223
Love the intro articles you wrote!
mrkeen
> You could almost build an entire parser combinator library from scratch from memory, because none of the individual components are complicated.
I do this every now and then. Usually in the days leading up to advent of code.
"A parser is a function from an input to either failure or a (remaining input, result) pair" is all you need to remember to build the rest.
idahoduncan
I do this from time to time as well, although I tend to get hung up on error handling. I would say that it's a simple enough exercise if you don't care too much about reporting parse errors in a meaningful way, although I'd be happy for someone to point out an implementation of error handling that fits into one's head.
debugnik
Here's the most basic error handling to implement, which I learned from studying FParsec:
1. Keep a collection of errors in the stream. It represents errors at the current position.
2. Clear all errors when input is consumed.
3. When a token-like parser fails without consuming input, push an "expected '<literal>'" error.
4. When a labeled parser fails without consuming input, restore the error set as it was before running it, and push an "expected <label>" error instead.
This will naturally accumulate errors from choice and repetition combinators, as long as you don't have needless backtracking before the choice points.
So this already gives you reasonable errors like "foo:17:4: expected one of expression, assignment or 'end'". And it's easy to extend to nested errors, backtracking information or recoverable nodes.
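A stripped-down sketch of rules 3 and 4 in Haskell, folding the per-stream error set of rules 1 and 2 into the return value (hypothetical types, not FParsec's actual internals):
import Data.List (stripPrefix)
data Result a
  = Ok a String        -- parsed value and remaining input
  | Err Bool [String]  -- consumed input? expected items at the current position
newtype Parser a = Parser { run :: String -> Result a }
-- Rule 3: a token-like parser that fails without consuming reports "expected '<literal>'".
literal :: String -> Parser String
literal lit = Parser $ \s -> case stripPrefix lit s of
  Just rest -> Ok lit rest
  Nothing   -> Err False ["'" ++ lit ++ "'"]
-- Rule 4: a label replaces whatever was expected so far, if nothing was consumed.
label :: String -> Parser a -> Parser a
label name (Parser p) = Parser $ \s -> case p s of
  Err False _ -> Err False [name]
  other       -> other
-- Choice accumulates the expected sets of both branches, which is where "expected one of ..." comes from.
orElse :: Parser a -> Parser a -> Parser a
orElse (Parser p) (Parser q) = Parser $ \s -> case p s of
  Err False es -> case q s of
    Err False es' -> Err False (es ++ es')
    other         -> other
  other -> other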
Tabular-Iceberg
I’ve seen it in practice. I decided to use a Java parser combinator library for one problem. As soon as I was off the project they ripped out my code and replaced it with regexes.
That what was to be parsed wasn’t regular apparently did not matter.
zahlman
> break it down into the much simpler problem of reading tokens, then write simple "combinators" (and, or, one_or_many, etc etc) to combine these simple parsers into larger more complex parsers.
The popular `pyparsing` library for Python (https://pypi.org/project/pyparsing/) expresses these concepts, just written more in traditional OO style rather than functional style (i.e. there are classes like OneOrMore that you instantiate, rather than binding a single-token-matcher argument to a lambda or whatever).
fooker
Parser combinators are already mainstream. All/most real languages are implemented with hand written recursive descent parsers that closely resemble parser combinators, or at the very least borrow a bunch of concepts.
It's the name that's unlikely to become mainstream. That's squarely upon FP folks giving sci-fi names to common sense concepts.
antonvs
Presumably you're not objecting to the word "parser".
What name would you use for "combinator"? It's a pretty simple word, based on the root "combine".
The common sense concept here is functions whose results depend only on their arguments, which makes them well-suited for being used in "combination" with other combinators, as in `f . g` or `f(g(x))`. That's called function composition, which can be defined as the act of combining functions.
It's a term that has existed in logic for over a century, from before computers existed.
It seems to me that no amount of renaming of terms will fix people's lack of curiosity or lack of desire to learn or understand. That's on them.
fooker
Every piece of software in existence calls functions and composes functions, there's no reason to attach 'combinator' to everything.
I do not doubt the correctness of the term, just the point of naming it. The fact that people write parsers all the time using these concepts without using the name proves that the name was not a great one.
> It's a term that has existed in logic for over a century, from before computers existed.
Yes, that's a great reason to not overload mathematical terms for random things. Monads and monoids are a great example of when this goes a bit wrong.
>It seems to me that no amount of renaming of terms will fix people's lack of curiosity or lack of desire to learn or understand.
Yes, I agree. Most people are just writing programs and solving problems, and moving on with their lives. It's weird to insist that they call something by a specific name if it's a simple idea that thousands of people have developed independently.
foldr
As a sibling points out, recursive descent parsers have been mainstream for a long time. Higher order parser combinators are only moderately useful in an imperative language with regular looping constructs. (For example, the code to parse a list of Xs separated by Ys is a pretty straightforward loop.) A lot of parser combinator libraries also make the mistake of encouraging "short circuit" error handling, which is rarely what you want for a production-grade parser.
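For reference, the combinator that replaces that loop is itself tiny in Applicative-style libraries; a sketch of the usual definition:
import Control.Applicative (Alternative, many)
-- "One or more p, separated by sep" -- the standard Parsec-style shape.
sepBy1 :: Alternative f => f a -> f sep -> f [a]
sepBy1 p sep = (:) <$> p <*> many (sep *> p)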
bazoom42
> will never be mainstream because it had the bad fortune of coming from the functional programming world
Nah, mainstream developers don't care where an idea comes from as long as it solves the problem and helps them hit the deadline. E.g. LINQ in C# has been broadly adopted.
Regexes are disliked by many developers, so parser combinators should be an easy sell if you can show it solves realistic problems with simpler code.
skybrian
Never is a long time. Other ideas from functional language have become mainstream.
yen223
I would be very happy to be proven wrong here.
But I view parser combinators in the same lens as lenses, another powerful approach to solving a specific problem that is unlikely to receive mainstream recognition because of all the Haskell.
wk_end
FWIW lenses are lovely in Typescript with monocle-ts, and extremely useful when - speaking of functional ideas hitting the mainstream! - writing Redux reducers to provide updates for immutable data.
wruza
It's unlikely because people don't get why jump through so many hoops when you can
new = copy(old)
new.foo.bar.baz = new_baz
return new
It won't come out of Haskell because it's uniquely a solution to a specific Haskell problem. Mutating a fresh copy through a "lens" when you have `=` is a level of geekage that cannot become mainstream.
BoiledCabbage
> But I view parser combinators in the same lens as lenses
Hmm, anything else you have in that list I should be looking into? As I also agree they are two of the most interesting things I got from learning Haskell. (Along with type driven reasoning, monoids and functors).
austin-cheney
There are numerous posts, comments, articles and so forth about when to use regex versus using a parser. The general rule is this:
If you need to do a search from a string, such as finding needle(s) in a haystack, regex is probably a better fit than a parser. If you need anything more intelligent than a list of search results, you probably want a full formal parser.
Most languages allow a form of nested regex that allows for increased search precision. This occurs when a method that applies a regex hands its matching string result to another function, which can then apply a further regex; this is why regex is probably enough when the business need is simple. There is a tremendously higher cost to using a full parser, considering the lexer and tokenizer plus rules, but it's so much more intelligent that it's not even comparable.
giraffe_lady
The key thing for me in making this decision is of course predicting the future.
Parsers are more work up front but in my experience much easier to debug, maintain and extend. So if I predict something will be around for a while I'll want to do a parser and if it's just a one-off I usually go with regex.
Of course this predictably leads to me having a collection of parsers that I've never needed to touch again and a handful of monstrously complex regex grammars that I'll rewrite into a parser "the next time" I need to modify them.
I still think it's the right thing to base the decision on; I just need to keep improving my prophesying.
kleiba
Of course you could also pull out the good old Chomsky hierarchy and make an argument against regexes based on whatever the nature of your data is.
But the thing is: the beauty of regexes lies in their compactness (which, in turn, can make them quite difficult to debug). So, of course, if you want to optimize for some other measure, you'd use an alternative approach (e.g. a parser). And there are a number of worthwhile measures, such as the already-mentioned debuggability, appropriateness for the problem at hand in terms of complexity, processing speed, ease of using the match result, the ability to get multiple alternative matches, support for regexes in your language of choice, etc.
But simply stating "X beats regexes" without saying in what respect leaves something to be desired.
kazinator
> It uses the pcre-heavy Haskell library which in turn calls out to the system-wide pcre C library for actually compiling and running the regex.
That's not true regular expressions, but a backtracker with eclectic features.
The regex used in the regex solution is just:
mul\((\d+),(\d+)\)
not requiring PCRE. If Haskell happens to provide bindings to the POSIX regcomp/regexec stuff, that would be something to try instead. \d has to be replaced with [0-9], needless to say.
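There do appear to be such bindings (the regex-posix package); a sketch, assuming its =~ operator with the all-matches [[String]] instance:
import Text.Regex.Posix ((=~))
-- Sum of a*b over every mul(a,b) in the input, using POSIX ERE only.
sumMuls :: String -> Int
sumMuls input =
  sum [ read a * read b
      | [_, a, b] <- (input =~ "mul\\(([0-9]+),([0-9]+)\\)" :: [[String]]) ]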
arn3n
“In other languages, it would be considered overkill to write a full parser when a simple regex can do the same thing. In Haskell, writing a parser is no big deal. We just do it and move on with our lives.”
I see a long code file filled with comments and long paragraph-level explanations. I think I’d rather just learn and use regex.
layer8
Whenever I write a regex, I end up with comments roughly ten times longer than the regex. That being said, regular expressions are often the right tool for the job (i.e. parsing a regular language, as opposed to a context-free language or whatever), just the syntax becomes unreadable rather quickly. I’m sure you could build a nicer regular-expression syntax in Haskell.
CBLT
I love the verbose flag[0] to regex, so I can write comments inline.
f1shy
Yes. Regexes tend to become write-only rather fast. One solution is commenting, but it's still complex. What I like to do now (in C) is define parts of it. Just a crude example to get the idea:
// STRing: matches anything inside quotes (single or double)
#define STR "[\"'](.*)[\"']"
// NUMber: matches decimal or hexadecimal numbers
#define NUM "([[:digit:]]x?[[:xdigit:]]*)"
regcomp(&reg_exp, STR NUM, REG_EXTENDED | REG_ICASE);
So at the end I compose the RE with the various parts, which are documented separately.
zokier
> just the syntax becomes unreadable rather quickly. I’m sure you could build a nicer regular-expression syntax in Haskell.
Of course regular expressions are really more of a category of expressions, and the traditional Kleene star notation is only one of many options; regular expressions do not somehow inherently need to use that specific syntax.
Pomsky and VerbalExpressions are just some examples of alternative syntaxes for regex. Apparently there is even a port of VerbalExpressions for Haskell:
https://github.com/VerbalExpressions/HaskellVerbalExpression...
qrobit
I looked at the VerbalExpressionJS[1] example and it looks like combining parsers to me. If you need to make regexes more verbose, you might as well use a parser combinator library when one is available. The benefits of regexes over parser combinators, other than compactness, aren't obvious to me.
[1]: <https://github.com/VerbalExpressions/JSVerbalExpressions/tre...>
bazoom42
Sounds like you think the comments and explanations are the problem? You can write regexes with comments and parsers without. Regexes are not generally known to be self-explanatory, except for trivial cases like \d+
OneDeuxTriSeiGo
I mean that's the nature of article code no? You are writing for a generic audience and want to be very explicit as to what your code is doing to teach the audience.
For your average Haskell programmer you could drop all of those comments, since the code isn't really doing anything that couldn't be determined by just reading it.
codebje
The main advantage of recursive descent parsing is readability. Forget the machinery of the parser, which is (a) trivial enough that AI will generate it correctly for you with next to no prompting, and (b) widely available in libraries anyway. The win is writing a parser that reads like the grammar it implements, which means changes to your grammar are easy to apply to your parser.
Regexes are effectively write-only. If you change the grammar parsed by a regex, you're probably going to discard the regex and make a new one.
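For example, with base's Text.ParserCombinators.ReadP, the grammar rule and the parser for it line up almost one-to-one (a small sketch):
import Data.Char (isDigit)
import Text.ParserCombinators.ReadP
-- number ::= digit+
number :: ReadP Int
number = read <$> munch1 isDigit
-- list ::= '[' number (',' number)* ']'
list :: ReadP [Int]
list = between (char '[') (char ']') (number `sepBy1` char ',')
-- e.g. readP_to_S (list <* eof) "[1,2,3]" yields [1,2,3] with no leftover input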
o11c
Note that most implementations of both parser combinators and regexes can fail very badly (exponential time). Never use either on untrusted input, unless you can prove your implementation lets you stay linear.
internet_points
This is one of the reasons I've been afraid to use parser combinators too heavily. With regular (finite-state) languages I know their time usage, with parser combinators I have no clue, and there are so many different libraries and implementations and some are monadic and some are applicative and few mention worst-case. There are benchmarks https://gitlab.com/FinnBender/haskell-parsing-benchmarks#byt... but what kinds of "dangerous operations" do I have to watch out for when writing with parser combinators?
(I guess part of the problem is just that regular languages have been studied since Kleene had a full head of hair, while parser combinators were more or less unknown until the 80's.)
wyager
The evaluation model of most of these parser combinator libraries is simple enough that you can just think directly about the call tree. It's difficult to get "surprised" like you can with PCREs. E.g. (x+x+)+y as a regex may blow up, but "many1 (many1 x >> many1 x) >> y" will just evaluate forwards a single time, greedily, and fail, at least with any haskell parser combinator library I can think of.
giraffe_lady
PEGs don't have this problem right? Linear time over length of input? Though I guess most peg libraries probably aren't "pure" pegs to get around its limitations and may reintroduce this issue?
o11c
No, PEG is prone to exactly this in its default form. Parser combinator usually means something PEG-ish anyway.
If you memoize (packrat), it improves to polynomial time (not sure what, but it's not linear; that's false advertising and a fraudulent claim), but you're still stuck with the vulnerability to bugs in the grammar.
A better idea is to create a parser-combinator-style API on top of an LL or LR backend, for guaranteed linear time and only enough stack space for the deepest part of the AST.
yen223
PEGs don't have this problem only because they place restrictions on the grammar.
In practice, this isn't a problem, but it does require you write grammar rules in a specific way (e.g. no left-recursion)
vrighter
Only if a packrat parser is implemented, which requires a lot of memory.
thaumasiotes
Only PCREs are exponential time, in service of a feature you basically never need. Regexes are always linear time.
They can take exponential space, though, so I'm not sure why knowing you'll be able to process the data in linear time is supposed to keep you safe.
moregrist
It’s not only PCRE. Any dialect with back references will require a backtracking engine that can go exponential with the wrong expression and input.
This includes most flavors of regexp you find in the wild: Python’s re module, JavaScript regular expressions, Ruby’s regular expressions, Perl, PCRE, and even basic and extended REs used in grep.
Russ Cox has written some very accessible posts on the linear (in input) properties of NFAs and the equivalence of Thompson regular expressions [0]. There’s also quite a bit of literature on the Glushkov construction of regular expressions (and its NFA equivalence) [1] that’s worth reading if you find the topic interesting.
Both Go and Rust have non-backtracking regular expression libraries, and you can find solid non-backtracking C and C++ libraries for regular expressions (eg: libfsm and Hyperscan).
[0]: https://swtch.com/~rsc/regexp/
[1]: My favorite is _Flexible Pattern Matching in Strings: Practical On-Line Search Algorithms for Texts and Biological Sequences_ by Gonzalo Navarro
pjscott
This depends on the implementation of the regex engine; most are potentially superlinear in time, since that’s the easiest way of doing it, and quite fast until suddenly it isn’t. I always check the algorithm being used before I use regular expressions in production. I was surprised how many use a recursive descent strategy!
(Also, BTW, you can deal with the exponential space issue by using an NFA instead of a DFA – it’s slower that way, but the memory space required is reliably linearly proportional to the size of the regex.)
dullcrisp
I don’t believe space > time is possible on a Turing machine?
pjscott
You’re right, of course, but there was a minor miscommunication: the exponential space is exponentially proportional to the size of the regular expression, and the linear time is linearly proportional to the length of the string being searched through.
masklinn
> Only PCREs are exponential time, in service of a feature you basically never need. Regexes are always linear time.
Any re dialect which supports backtracking necessarily has a non-linear worst case, and while a select few have very high resilience against exponential backtracking (e.g. never managed to make postgres fall over) most can be made to fail with a pattern a few characters long.
FA-based engines are getting more popular, but they’re far from universal.
pklausler
Parser combinators work well for big messy problems as well as regexes; I used a custom library of combinators to write the Fortran parser for LLVM. They’re great for error recovery and for disambiguating tricky situations, of which Fortran has more than its share; I ended up parsing cooked characters without a tokenization phase.
One hard part is getting good performance when the same production is being tried as part of multiple higher level alternatives and backtracking is taking place. You can get into trouble for bad input cases, like expressions with thousands of operands being used in a context that isn’t clear until you get to characters after the expression. But there’s ways to solve that problem, so I recommend combinators to you if you need to write a parser.
pklausler
(adding to the above)
Another downside of parser combinators, at least the ones that I wrote as a basis for Fortran parsing, is that if you implement them as C++ template classes, the type names get astonishingly long. This leads to long compilation times and makes debuggers very hard to use. Both of these concerns only affect the developer, of course, and seemed to me to be acceptable so long as the parser is fast and robust.
I'd use parser combinators again if I ever need to write a big parser, although I might not use overloaded operators (">>", "/") and confusing function names ("many" vs. "some") inherited from Haskell's Parsec library.
maxbond
I've been working a lot with parser combinators lately, they are a really nice compromise between using grammar languages (I'd put regexes in this category, as well as parser generators) and hand-rolled imperative parsers. The parser combinator library ends up being a DSL for describing grammars in your preferred language, but the boundary is fluid, and you can move back and forth between writing grammars and writing code in your preferred language.
The bold implementer could integrate regexes into their parser combinator library, and you could move between all three.
henning
You can also use simple string search functions for this Advent of Code problem.
I used this approach for implementing orthography rules in a stenography application (rules like `artistic + ly = artistically`), and in the benchmarks I did, a few lines of searching for character indexes were an order of magnitude faster than regexes. Each of my functions is about 3-10 ish lines of simple code compared to a 1-line regex.
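The rules end up as plain suffix checks and concatenation; a hypothetical sketch in that spirit (not the actual stenography code):
import Data.List (isSuffixOf)
-- Hypothetical rule in the spirit of `artistic + ly = artistically`.
attachLy :: String -> String
attachLy word
  | "ic" `isSuffixOf` word = word ++ "ally"   -- artistic -> artistically
  | otherwise              = word ++ "ly"     -- quick -> quickly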
You do have to handle cases like the character not being found, but I've had zero issues with the code and, in terms of what actually executes, it is vastly simpler. This is why the saying goes that if you make something twice as fast, you may have done something clever, but if you make it 10x faster, you stopped doing something dumb.
kqr
Basic string search and indexing operations (especially when there is language/library support for cheap spans) are definitely underappreciated.
I wouldn't reach for them as a first step though, because they'd take more automated tests to give me confidence of correctness, and most problems I end up solving are not CPU bound but programmer time bound.
Rendello
A few days ago I mentioned my first parser [1], which used regex in the tokenization step, based on studying the parser video by Destroy All Software. I'm glad I learned it that way first, since it takes a lot of the pain out of lexing. I've now built many parsers, including accidentally re-inventing parser combinators when working in Erlang. Two days ago I built another parser that uses regex heavily for tokenization/some parsing steps. I never parse programming languages, so my programs' needs are quite different from the parsers talked about online!
DadBase
Parser combinators are great until you need to parse something real, like CSV with embedded newlines and Excel quotes. That’s when you reach for the reliable trio: awk, duct tape, and prayer.
iamevn
I don't follow why parser combinators would be a bad tool for CSV. It seems like one would specify a CSV parser as (pardon the pseudocode):
separator = ','
quote = '"'
quoted_quote = '""'
newline = '\n'
plain_field = sequence(char_except(either(separator, quote, newline)))
quoted_field = quote + sequence(either(char_except(quote), quoted_quote)) + quote
field = either(quoted_field, plain_field)
row = sequence_with_separator(field, separator)
csv = sequence_with_separator(row, newline)
Seems fairly natural to me, although I'll readily admit I haven't had to write a CSV parser before so I'm surely glossing over some detail.
kqr
I think GP was sarcastic. We have these great technologies available but people end up using duct tape and hope anyway.
DadBase
Exactly. At some point every parser combinator turns into a three-line awk script that runs perfectly as long as the moon is waning and the file isn’t saved from Excel for Mac.
DadBase
Ah, you've clearly never had to parse a CSV exported from a municipal parking database in 2004. Quoted fields inside quoted fields, carriage returns mid-name, and a column that just says "ERROR" every 37th row. Your pseudocode would flee the scene.
masklinn
Seems like exactly the situation where you’d want parsers, because they can do any form of quoting \ escaping you want and have no reason to care about newlines any more than they do any other character.
DadBase
Parsers can handle it, sure, but then you blink and you're ten layers deep trying to explain why a single unmatched quote ate the rest of the file. Sometimes a little awk and a strong coffee gets you further.
maxbond
Use what's working for you but one of the strengths of parser combinators is managing abstraction so that you can reason locally rather than having to manage things "ten layers deep." That's more of a problem in imperative approaches where you are manually managing a complex state machine, and can indeed have very deep `if` hierarchies.
Parser combinators are good at letting you express the grammar as a relationship among functions so that the compiler handles the nitty-gritty and error-prone bits. (Regexes also abstract away these details, of course.)
kreig
It makes sense: combinators can parse a superset of the formal languages that regexes can, according to the Chomsky hierarchy [0].
Parser combinators are one of those ideas that is really powerful in practice, but will never be mainstream because it had the bad fortune of coming from the functional programming world. It's a shame too, because parser combinators are what made parsers click for me.
Take the hard problem of parsing a language, break it down into the much simpler problem of reading tokens, then write simple "combinators" (and, or, one_or_many, etc etc) to combine these simple parsers into larger more complex parsers.
You could almost build an entire parser combinator library from scratch from memory, because none of the individual components are complicated.
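As a sketch of what those combinators look like (hypothetical code, not any particular library):
-- A parser: input -> failure, or a (remaining input, result) pair.
type Parser a = String -> Maybe (String, a)
-- "or": try p, fall back to q.
orElse :: Parser a -> Parser a -> Parser a
orElse p q input = maybe (q input) Just (p input)
-- "and": run p, then q on whatever input is left.
andThen :: Parser a -> Parser b -> Parser (a, b)
andThen p q input = do
  (rest, x)  <- p input
  (rest', y) <- q rest
  Just (rest', (x, y))
-- "one_or_many": apply p as many times as possible (at least once).
oneOrMany :: Parser a -> Parser [a]
oneOrMany p input = do
  (rest, x) <- p input
  case oneOrMany p rest of
    Just (rest', xs) -> Just (rest', x : xs)
    Nothing          -> Just (rest, [x])
Each of these is a few lines, and bigger parsers are just these glued together.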