
RE#: High performance derivative-based regular expression matching (2024)

kazinator

> The match semantics supported in RE# is leftmost-longest (POSIX) rather than leftmost-greedy (a.k.a., backtracking or PCRE) semantics. It is unclear how to support extended Boolean operators in backtracking in the first place and what their intended semantics would be – this is primarily related to that | is non-commutative in the backtracking semantics and therefore some key distributivity laws such as (R|S)·T ≡ R·T | S·T no longer preserve match semantics.

Non-commutative A|B in regex is broken garbage. Bravo for calling it out!

The issue is that backtracking "greedy match" regex engines, when they deal with the disjunction, simply evaluate the cases left to right and stop on the first match: A|B|C|D is interpreted as "try regex A; if that matches, then stop, else try B ...". So if A matches, it's as if B, C and D don't exist.

Say we have the regex "c.r|carp.t", the input "carpet-odor", and are doing a prefix match. Greedy semantics will try "c.r", which matches "car", and stop there, declaring a three-character match. Longest-match semantics matches all branches simultaneously, picking the longest match. (This is closely related to the "maximal munch" principle in tokenizing.) That semantics will see that the "carp.t" branch can match more characters after the "c.r" branch no longer matches, and report the six-character match "carpet".
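A minimal sketch of the difference, using Python's re (a backtracking, leftmost-greedy engine); longest_alt_match is an illustrative helper, not a real API, and it only emulates longest-match at a top-level alternation:

```python
import re

# Backtracking semantics: branches are tried left to right, and the
# first branch that matches wins, so "c.r" stops the search at "car".
m = re.match(r"c.r|carp.t", "carpet-odor")
print(m.group())  # 'car'

# Leftmost-longest (POSIX-style) semantics for a top-level alternation
# can be emulated by matching every branch and keeping the longest.
def longest_alt_match(branches, text):
    best = None
    for pat in branches:
        m = re.match(pat, text)
        if m and (best is None or m.end() > best.end()):
            best = m
    return best

print(longest_alt_match([r"c.r", r"carp.t"], "carpet-odor").group())  # 'carpet'
```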

Longest match semantics jibes with a set-theoretical interpretation of regex, and that's why the | operator commutes. R1|R2 means the union of the strings matched by R1 and R2, and so R1|R2 is the same as R2|R1.

o11c

Well, technically ... if your dialect supports capturing groups, there's a non-commutativity anyway.

Assuming input is "ab",

  /(a)b|a(b)/ produces \1=a, \2=<missing>
  /a(b)|(a)b/ produces \1=b, \2=<missing>
Probably the easiest way to test this yourself is with GNU sed.
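The same behavior can be reproduced with Python's re module (also a backtracking engine):

```python
import re

# Swapping the branches of the alternation changes which group captures,
# even though both patterns match exactly the same set of strings.
m1 = re.match(r"(a)b|a(b)", "ab")
m2 = re.match(r"a(b)|(a)b", "ab")
print(m1.groups())  # ('a', None)
print(m2.groups())  # ('b', None)
```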

cvoss

What you say is not merely technically untrue; it's just plain untrue. It's a choice of the language designer whether capturing groups break commutativity.

Sed makes one choice. I'd guess that GP would call this broken garbage too, and I'd agree. Regular expressions have all these nice theoretical properties like closure under all the boolean operations and linear-time matching, but these nice properties get trashed by features that don't mesh or aren't fully thought through.

In this case (thinking about capturing groups and commutativity), one property of regular expressions is that for each one there is a machine that can do the linear-time matching -- a DFA. Even if the regular expression contains not-mutually-exclusive alternations, when it gets compiled to a DFA, the matching procedure is deterministic by construction. I can imagine a way to integrate capturing start and end actions into the transition edges of the DFA. The right thing to do is to perform capturing on all the matching alternands, not just the first. You lose the ability to number the capturing groups left to right, but instead you should lay them out in a tree that follows the concatenation/alternation structure of the expression.
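A rough illustration of that proposal (capture_all_branches is a hypothetical helper, not any engine's API; a real implementation would record captures in one automaton pass rather than matching each branch separately):

```python
import re

# "Capture on all matching alternands": match each branch independently
# and report the captures of every branch that succeeds, instead of
# privileging the leftmost branch.
def capture_all_branches(branches, text):
    results = {}
    for pat in branches:
        m = re.fullmatch(pat, text)
        if m:
            results[pat] = m.groups()
    return results

print(capture_all_branches([r"(a)b", r"a(b)"], "ab"))
# {'(a)b': ('a',), 'a(b)': ('b',)}
```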

burntsushi

I find your certainty here quite odd. You claim to know what the "right thing" is, but there is no implementation of it and it gives up an incredibly useful feature of capturing that general purpose regex engines all utilize.


HelloNurse

This is a broken regexp, with deliberate ambiguity: nondeterministically choosing the groups according to one of several matching alternatives is an implementation-defined ambiguity "resolution" that should not happen.

Just write /(a)b/ or /a(b)/ or /ab/ or /(ab)/ or /(a)(b)/ which mean five slightly different things.
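A quick check with Python's re shows how the five variants differ in their captures while matching the same input:

```python
import re

# Same language ("ab"), five different capture structures.
for pat in [r"(a)b", r"a(b)", r"ab", r"(ab)", r"(a)(b)"]:
    print(pat, re.fullmatch(pat, "ab").groups())
# (a)b ('a',)
# a(b) ('b',)
# ab ()
# (ab) ('ab',)
# (a)(b) ('a', 'b')
```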


o11c

It's called a minimal example. You can easily get nontrivial real-world versions, such as "All vowels or all uppercase" or "Three letters, at least two of which are A's".

It is not reasonable to expect the user to manually disambiguate every regex.
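For instance, with the "all vowels or all uppercase" case in Python's re (adding IGNORECASE so that both branches can match the same input), branch order alone decides which named group ends up filled:

```python
import re

# "AEIOU" is both all-vowels and all-uppercase under IGNORECASE, so both
# branches match; the group that captures depends purely on branch order.
m1 = re.fullmatch(r"(?P<vowels>[aeiou]+)|(?P<upper>[A-Z]+)", "AEIOU", re.IGNORECASE)
m2 = re.fullmatch(r"(?P<upper>[A-Z]+)|(?P<vowels>[aeiou]+)", "AEIOU", re.IGNORECASE)
print(m1.lastgroup)  # 'vowels'
print(m2.lastgroup)  # 'upper'
```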

jonstewart

With Perl/PCRE alternation semantics, I always think of the branch order in terms of preference, and therefore as a feature (perhaps of dubious worth).

It is possible to support these semantics with an automata-based engine (see RE2; and pity junyer isn’t here to read this article, he loved derivatives), but I can’t say I recommend it. The benefit, of course, is then you can peg your test suite to PCRE.

kazinator

> The first industrial implementation of derivatives for standard regexes in an imperative language (C#) materialized a decade later [Saarikivi et al. 2019]

Nope; I did it in TXR in early 2010:

  b839b5a212fdd77c5dc95b684d7e6790292bb3dc    Wed Jan 13 12:24:00 2010 -0800    Impelement derivative-based regular expressions.

def-lkb

https://sourceforge.net/projects/libre/ dates back to 2001. (One could object it is not imperative enough, whatever that means :))

burntsushi

The claim here wasn't just "first implementation of derivatives." It was a far more precise "first industrial implementation of derivatives for standard regexes in an imperative language."

burntsushi

What is TXR? Was this implementation really "industrial"? Did it have the caching present in RE# to avoid worst case exponential compile times? Did it support Unicode? Did it have prefilters? What kind of match semantics did it support?

"industrial" in this context to me means something like, "suitable for production usage in a broad number of scenarios."

IDK if RE# lives up to that, but their benchmark results are impressive. This paper is a year old. In which production systems is RE# used?

gjm11

It isn't clear to me what exactly OP means by "industrial" but it seems possible that they might not consider it to apply to TXR.

kazinator

I implemented it as a committed feature in a programming language designed to be used for solving problems in the real world, rather than accompanying academic research into the topic.

No different from what was done in C#.

high_na_euv

What's TXR?

omgtehlion

Source repository (https://github.com/ieviev/resharp) seems to be deleted. Does anyone have a link to the actual code?

Edit: answering myself, this seems to be (at least partially) merged into the .NET runtime itself https://github.com/dotnet/runtime/pull/102655

