
“Dynamic programming” is not referring to “computer programming”

npsomaratna

Sri Lankan here. I used to compete in the IOI (International Olympiad in Informatics), back in the '90s.

I aced the competition in Sri Lanka, but I failed to win a medal the first time I competed internationally. I solved several problems with recursion, but these failed to complete within the strict time limits allowed.

One of the contestants from another country told me: "You needed dynamic programming to solve problems 1 and 5." I then spent the next year trying to figure out what dynamic programming was. The folks over here weren't familiar with the term. In fact, I was often asked "did you mean dynamic memory allocation?"

After a while, I managed to find a book that talked about dynamic programming, and I was like "Oh, it's recursion + storing results" and "Ah, you can also do this iteratively in certain circumstances"

Bottom line, armed with this new knowledge, I ended up winning a gold medal at IOI 2001. Fun times.

aljgz

At university, we had a team for the ACM ICPC. We helped our professor organize a local competition to encourage more people to practice, and of course we participated in it. One question would give you a few numbers no greater than 13, and you would output the number of ways to put N queens on a chessboard of each of those sizes. Not a difficult problem, but our first implementation needed 2 minutes, while the time limit was 1 minute. We could have optimized it, but instead we decided to run it offline for all possible inputs and write a new program that was just a switch-case. This was a perfectly legal move. We could see the professor's face while he judged our submission: his huge surprise when the code finished in milliseconds, his look of admiration at us, his curiosity about the solution, and finally his reaction when he checked out our code. Priceless.
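
A minimal sketch of the trick in Python, assuming the task was simply to read N and print the count; the hard-coded values are the well-known N-queens solution counts (OEIS A000170):

  # Sketch of the submitted program: every answer precomputed offline by the
  # slow solver, so the program itself is just a lookup. The values are the
  # known N-queens solution counts (OEIS A000170).
  ANSWERS = {
      1: 1, 2: 0, 3: 0, 4: 2, 5: 10, 6: 4, 7: 40,
      8: 92, 9: 352, 10: 724, 11: 2680, 12: 14200, 13: 73712,
  }

  n = int(input())
  print(ANSWERS[n])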

ivan_gammel

Funnily enough, in commercial software engineering your solution wouldn't raise eyebrows; it would be routine and expected. An efficient algorithm, on the other hand, might.

TheJoeMan

Had a similar problem in a high school comp. sci. class with a maze-following algorithm where you got points off for backtracking. Turns out your code could pre-fetch the entire maze before making a move, which I think goes against the spirit of the exercise.

nicknash

I wonder if you can clear up a memory of mine from IOI 2001. The first day's results were delayed, and I seem to remember this was because a contestant encoded the search in one of the problems into a compile-time C++ template metaprogram. On the second day there was then also a compile-time limit for solutions, from what I remember. Do you remember any more details of this?

hnfong

I think I know the culprit. The story I was told is that the person precomputed some of the results and submitted a huge source file (tens of megabytes) of hard-coded data. This probably led to a subsequent amendment of the rules regarding source file sizes.

I'll report back once I find the persons involved.

npsomaratna

Wasn't this for one of the questions on the second day, where you had to submit a series of outputs that were trivial to generate once you cracked the problem?

I remember getting an "aha" moment, writing a program, and then submitting (scored 100% too!). Then, I met a guy who also cracked the problem and realized that he just needed to paste the test data into a spreadsheet, do some basic sorting there, and then paste the output into a text file; no coding necessary.

I felt pretty stupid afterwards.

npsomaratna

No, sorry. I vaguely remember compile time limits, but they were high enough (30 seconds, I think?) that I didn't bother worrying about them (at least, that's my memory).

xandrius

That sounds very smart :D

amelius

But aren't the programs given access to the problem data _after_ the program has been compiled?

mort96

One thing I always wondered is: what makes dynamic programming more "dynamic" than regular programming? Why is recursion "static" but it becomes "dynamic" if you store results?

stonemetal12

Marketing. Richard Bellman was trying to get funding and needed his work to sound high-tech and fancy.

According to Wikipedia: https://en.wikipedia.org/wiki/Dynamic_programming#History_of...

The word dynamic was chosen by Bellman to capture the time-varying aspect of the problems, and because it sounded impressive. The word programming referred to the use of the method to find an optimal program, in the sense of a military schedule for training or logistics. This usage is the same as that in the phrases linear programming and mathematical programming, a synonym for mathematical optimization.

Also, Harold J. Kushner stated in a speech that, "On the other hand, when I asked [Bellman] the same question, he replied that he was trying to upstage Dantzig's linear programming by adding dynamic."

hnfong

When I learned dynamic programming, I was told it was in contrast to simple memoization: memoization is a cache of computation, while dynamic programming involves a "choice" at each step.

For example, take the Fibonacci sequence, F(n) = F(n-1) + F(n-2): naively you just fill in a (one-dimensional) table with the results [1, 1, 2, 3, 5, 8, 13...], and it feels static because there's nothing to choose or decide.

In contrast, the longest common subsequence, for example, looks like this:

  lcs[m, n] = 0                              if m = 0 or n = 0
  lcs[m, n] = lcs[m-1, n-1] + 1              if X[m] = Y[n]
  lcs[m, n] = max(lcs[m-1, n], lcs[m, n-1])  if X[m] != Y[n]

At every step there's a "choice" to be made depending on the situation. Functions like "max()" are quite typical for dynamic programming. At least, this is how I interpret the "dynamic" part.
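
For concreteness, here's a minimal bottom-up sketch of that recurrence in Python (the function name and the example strings are just for illustration):

  # Fill the table in increasing i and j, so every entry the recurrence
  # needs has already been computed.
  def lcs_length(X, Y):
      m, n = len(X), len(Y)
      lcs = [[0] * (n + 1) for _ in range(m + 1)]  # lcs[i][j] = LCS of X[:i], Y[:j]
      for i in range(1, m + 1):
          for j in range(1, n + 1):
              if X[i - 1] == Y[j - 1]:
                  lcs[i][j] = lcs[i - 1][j - 1] + 1
              else:
                  lcs[i][j] = max(lcs[i - 1][j], lcs[i][j - 1])
      return lcs[m][n]

  print(lcs_length("dynamic", "programming"))  # -> 3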

eloisant

I also discovered the term "dynamic programming" 20 years into my career, when doing leetcode training for a job interview.

I still have no idea why it's called "dynamic".

lxgr

> I still have no idea why it's called "dynamic".

Neither did I – until I read TFA :)

bazoom42

According to the article, the term “dynamic” was chosen because it has positive connotations.

bee_rider

If I ever invent anything I’m calling it “dynamic entropy,” that is clearly the coolest name.

moffkalast

Because fuck descriptive names right?

sidewndr46

As others have mentioned the term was chosen to be distinctive and attractive.

snthpy

I was part of the hosting team at IOI 1997 in Cape Town, had conversations with the participants much like the ones you describe, and also learned about dynamic programming there. Your description of it as "recursion + storing results, which you can sometimes do iteratively" is the best summary. A good example is the iterative Fibonacci algorithm.
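
As a concrete (if trivial) sketch of that summary in Python:

  # fib(n) = fib(n-1) + fib(n-2): the recursion done iteratively,
  # storing earlier results in a table instead of recomputing them.
  def fib(n):
      table = [0, 1]
      for i in range(2, n + 1):
          table.append(table[i - 1] + table[i - 2])
      return table[n]

  print(fib(10))  # -> 55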

Another fun anecdote from that time is that we had to take special precautions with the scoring program, which connected to the participants' machines over the serial port. The previous year one participant had hacked the serial port interface to manipulate the results. :-)

BobbyTables2

The term always made me feel inadequate, as I assumed it meant much more and I just didn't understand how.

In the Fibonacci example, it seems more like common sense to avoid recomputing expensive things. Instead we should give derogatory names to the inefficient implementations, not an opaque name to the only sane implementation.

npsomaratna

Nice! I guess you must know Bruce Merry. Once, when practicing, I kept banging my head against a problem. On the spur of the moment, I decided to fire off an email to Bruce (given his track record, I figured "if anyone can figure this out, he can"). I was shocked (and very pleased) to get a reply a few days later outlining his thought process and how he went about solving the problem.

I used to know the ZA team leaders as well (after competing, I was a team leader for several more years).

radicalbyte

I had to laugh hard the first time I heard about it. It's a really obvious technique if you've ever done low-level programming on limited machines, as it's very close to something we used to do all the time: calculate result tables and build the lookups into our code.

Trades memory for CPU when it's worth it. Dynamic programming takes the same idea but does it at runtime.

bo1024

To be fair, DP generally also involves recursion in some way. You could describe it as a lookup table where, to fill in later entries of the table, you need to use the earlier entries.

krackers

Has there been an "arms race" of sorts with programming contests, as knowledge percolates and harder categories of problems need to be chosen? These days dynamic programming is basically table stakes for ICPC-type problems.

paldepind2

Yes, absolutely. I did programming competitions back in high school (around 10 years ago), and the common folklore was that back in the day knowing dynamic programming could win you a medal, whereas today it was just basic expected knowledge.

bubblethink

There's a lot of variety in DP. Knowing about DP doesn't help much with solving DP problems. I'm sure you've seen all the fun problems on Codeforces or the LeetCode hards. The one quote I remember from Erik Demaine's lecture on DP is where he comments, "How do we solve this with DP? With difficulty."

npsomaratna

That's my impression as well.

I think it's because access to knowledge has become a lot easier, so contestants know more; plus, the competitions themselves have become more organized.

For example, last I checked, the IOI had a syllabus outlining the various algorithms contestants needed to know. During my time, it was more of a wild west.

kindkang2024

That's how Competition Makes All Great Again. Looks like it's by God's design. ^_^

kindkang2024

Loved your wonderful story—here’s my plain one.

I came to programming after I graduated, and I never really understood Dynamic Programming until I watched one of Stanford’s algorithm courses (kudos to MOOCs).

But honestly, I’ve never done anything truly useful with it—except that it gave me a new perspective on ordinary things. For example, I found that there may be great wisdom in the Taiji symbol (image here: https://commons.wikimedia.org/wiki/File:Yin_yang.svg). We can divide the current problem and conquer it at the lower levels (like one of the dots in yin and yang). And of course, those lower levels still contain their own divisions and computations—until there are none left.

At the same time, we can also start from the base and build upward (bottom-up), moving toward higher-level computations—where possible divisions await, until eventually one big unified answer emerges.

Pretty useless for its actual usage here, isn’t it? ^^

matsemann

Dynamic programming is for me one of the things that really scratched an itch and made me start liking programming. The invariant/induction of composing smaller things into a bigger solution is so elegant.

I do feel dynamic programming influences how I solve problems in my work: how, if you compose the problem correctly, it can never reach a faulty state, in a sense. I'm not sure I can explain what I mean, but for instance I recently refactored an old application at work that had lots of bugs, was hard to grasp, and had loads of error-checking code / asserts. However, by composing it a bit differently and modeling it properly, I could remove swaths of code / defensive programming, because with the help of the compiler I can now guarantee the base cases hold, etc.

Edit: a somewhat contrived example might be an Order. If you have one big model tracking the lifetime of an order, some fields will have to be null until they have a value, for instance a field sent_at_time. Then you have a function that generates an email to the customer, which uses that field. How do you handle that field possibly being null? Do you say "I know that at this point in time this field is always set, since the email is generated after sending" and tell the compiler to ignore the possibly null value? Or do you handle the possible null with some error handling / fallback value, which in practice is dead and cluttered code and will just confuse a future developer seeing it with questions like "how can this happen"?

With the lessons from dynamic programming in mind, I think both are bad solutions. You want to be able to go from state n to n+1 with safety (order purchased, order sent, email sent). So model it in a way that the email-sending code can only ever get an order in the correct previous state as input. Then if the assumption were to break in the future (maybe some orders are digital and the email is sent without the order having been physically shipped), that breaking assumption will immediately be obvious in how the states/models are composed, and not at runtime when either the not-null assert fails or the emails start containing the fallback value.

lock1

Maybe "making illegal states unrepresentable" is the term you're looking for?

https://fsharpforfunandprofit.com/posts/designing-with-types...
https://inside.java/2024/06/03/dop-v1-1-illegal-states/

IMO it doesn't have much relation to dynamic programming, though.

zekica

I had the same experience: none of my mentors knew what "dynamic programming" was, and our country-level competition (which had problems inspired by the IOI) required dynamic programming for 2 out of 5 problems. And of course I failed the first time (2004). Then I learned what it was about and aced it the next time (2005).

tgma

Programming terminology is in the same vein as "Linear Programming..."

As per Wikipedia, that origin story is somewhat disputed:

"According to Russell and Norvig, the above story "cannot be strictly true, because his first paper using the term (Bellman, 1952) appeared before Wilson became Secretary of Defense in 1953." Also, Harold J. Kushner stated in a speech that, "On the other hand, when I asked [Bellman] the same question, he replied that he was trying to upstage Dantzig's linear programming by adding dynamic. Perhaps both motivations were true."

mr_toad

Linear programming gets confused with programming, but also with linear algebra and linear modelling.

nextaccountic

Programming in this sense just means optimization.

random3

I can't find the source, but according to Bellman he needed a name that wouldn't sound like something the Army would scrap, effectively protecting his research. He had a few options, and "dynamic" sounded "cool" enough, in his opinion, for the Army.

That said, the "programming" part is pretty accurate, but IMO the author misses the point about what "computer programming" means, after all.

andreygrehov

If anyone's interested, I published a short, ad-free YouTube series on Dynamic Programming about 4-5 years ago [1]. I haven't had a chance to continue it, but many people have found the course eye-opening.

[1] https://www.youtube.com/@andreygrehov

manoji

DP (dynamic programming) is such a beautiful technique. It took me a while to grasp it. There are many applications in real-world systems as well, e.g. DuckDB's join order optimizer is built on https://15721.courses.cs.cmu.edu/spring2020/papers/20-optimi... .

https://github.com/duckdb/duckdb/blob/61f07c3221e01674d8fe40...

throwawayffffas

Dynamic programming is a subfield of mathematical programming, the field that studies optimization problems.

It's called programming because programming means scheduling. And most of the problems the field initially concerned itself with were logistics scheduling problems.

Computer programming is called programming because it involves scheduling instructions.

aldousd666

I learned what 'dynamic programming' was from studying the BLAST algorithm for DNA sequence search. BLAST is a heuristic approximation of the exact dynamic programming algorithm created by Smith & Waterman. Before that, I would have misunderstood the term too.

cousin_it

My favorite dynamic programming problem is computing the degree of relatedness between two people, given a set of people and a partial graph of parenthood among them (with the assumption that any relatedness not implied by the graph is zero). It seems very confusing at first, but then you realize that if A is not a descendant of B, rel(A,B) = (rel(A,father(B)) + rel(A,mother(B)))/2, which allows you to compute all relatedness values from the top to the bottom of the graph as fast as possible.
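
A memoized (top-down) Python sketch of that recurrence; the two-person family and the dict names are made up purely for illustration, and unknown parents contribute zero:

  from functools import lru_cache

  # Toy family, for illustration only: just "child" has known parents.
  father = {"child": "dad"}
  mother = {"child": "mom"}

  def is_descendant(a, b):
      # True if a equals b or descends from b through known parents.
      if a is None:
          return False
      if a == b:
          return True
      return is_descendant(father.get(a), b) or is_descendant(mother.get(a), b)

  @lru_cache(maxsize=None)
  def rel(a, b):
      if a == b:
          return 1.0
      if is_descendant(a, b):
          a, b = b, a  # expand the parents of whichever side is not an ancestor
      r = 0.0
      if father.get(b) is not None:
          r += rel(a, father[b]) / 2
      if mother.get(b) is not None:
          r += rel(a, mother[b]) / 2
      return r

  print(rel("child", "dad"))  # -> 0.5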

cjfd

Just call it caching. The whole concept is a bit too simple to deserve its own term.

gsf_emergency_2

It seems dynamic programming (some predetermined sequence of steps) as envisioned by Bellman is strictly less dynamic than ordinary programming today, hence the confusion?

Elevating "memoization+recursion" to "dynamic programming" seems like a development that Bellman didn't anticipate

https://cs.stackexchange.com/questions/99513/dynamic-program...

There is another mid-century term that is oddly specific for something oddly general (or is it the other way around?): "Operations Research".

Guess Bellman should have called it "Reactive Optimization", then we would have the totally appropriate "Recursive Memoization" or "Memoized Recursion"

LPisGood

You should think of the “programming” in dynamic programming the same way you think of in linear programming, integer programming, and constraint programming.

Indeed even in layman’s terms, thinking of it as in television programming is more accurate than thinking it is related to computer programming (as is mentioned in TFA)

mort96

I think of "programming" in "dynamic programming" the exact same way I think of it in "linear programming", "integer programming" and "constraint programming": it's probably some kind of software development that some computer scientists came up with once and that I don't need to think about, because my normal programming has worked out pretty well so far

(Except, well, I guess I understand what "dynamic programming" is more than I understand what the other forms of programming you mention is; "dynamic programming" is solving certain classes of recursive problems by using arrays, sometimes nested arrays, to avoid re-computation, and somehow that's supposed to be more "dynamic" than not doing that)

TeMPOraL

> linear programming, integer programming, and constraint programming

Can't think of a universe in which you'd learn about these things before learning "computer programming".

tgma

There was a universe before digital computers were popular or a thing at all.

Computing predates computers.

mr_toad

I know two people who are experts in linear programming who have never written a single line of code.

opnac

In the UK at A-level (age 16-18) you may still be taught linear and dynamic programming before ever touching a line of code! (Indeed, that was the same for me!)

agumonkey

I always find funny how many meanings "programming" has.

thaumasiotes

Television programming isn't a separate meaning from computer programming. Programming means creating a program. The program, whether you're talking about a computer, a television channel, or a live staged performance, is a list of what's going to happen in what order.

Certhas

The sequence of steps is the result of dynamic programming; not every sequence of steps is a dynamic programme. And (what I would argue are) the core results are fairly mathematical: memoization and recursion don't feature, but partial differential equations do. Check out the Hamilton-Jacobi-Bellman equation to get a flavour.
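
For reference, the deterministic, finite-horizon HJB equation looks roughly like this, with value function V, dynamics dx/dt = f(x,u), running cost C and terminal cost D:

  \partial_t V(x,t) + \min_u \{ \nabla_x V(x,t) \cdot f(x,u) + C(x,u) \} = 0,
  \qquad V(x,T) = D(x)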

thaumasiotes

> Elevating "memoization+recursion" to "dynamic programming" seems like a development that Bellman didn't anticipate

Well, it also isn't a development that happened. You can (always) implement dynamic programming as recursion with memoization. But an algorithm that is recursive and memoized isn't necessarily an example of dynamic programming.

The point of dynamic programming is that you set up the recursion so that recursive calls do hit the cache, not that there is a cache and if you're lucky they might hit it.

zelphirkalt

Usually, though, when using recursion to solve an algorithmic problem, people memoize results they know will be needed again later. So in most cases that should automatically amount to the same thing in practice.

throwawayffffas

For all the people looking for different terminology, the word you are looking for is optimization.

Dynamic optimization makes just as much sense. Same as Mathematical optimization does for the wider field of study.

ygritte

> Try thinking of some combination that will possibly give [the word, dynamic] a pejorative meaning. It’s impossible*.

> *This Haskell fan’s contender is “dynamic typing” :P

Nothing is impossible!

zoky

It’s certainly possible to use it as a euphemism, if not an outright pejorative. “Dynamic truthfulness”, “dynamic sense of morality”, etc.

eru

There's also dynamic scoping, as opposed to lexical scoping, a.k.a. static scoping.

You can find defenders of dynamic typing, but dynamic scope is now widely seen as a mistake. Or at least dynamic scope by default; it has specialised uses, and in Haskell the Reader monad is basically isomorphic to dynamic scoping and no one complains about it.

ygritte

> dynamic scoping

Right you are. Even more horrible. The Tcl language still has it!

jrapdx3

Well, Tcl is kind of a mixture of scope rules. In some respects, e.g., within a proc, it's mainly a lexical environment, but of course dynamic scoping is introduced by commands like upvar/uplevel. FWIW Tcl programmers don't concern themselves very much with sorting out dynamic vs. lexical. In any case, Tcl programmers are careful to maximize static scoping. No doubt that's necessary to create reliable larger programs many of which have been written in Tcl.

geocar

Dynamic scope/binding is super useful, but I do not agree that Tcl does it, or does it well, because it's not just shadowing the closure environment (ha), or changing globals in a dynamic-wind.

Perl is the only non-Lisp I'm aware of that does dynamic binding well enough to get the idea across, but too many things just don't work:

    local *int = sub { return 69 }; print int("42");
is a particularly annoying one: You just can't do it. Other names might have better chances, e.g.

    package foo;
    sub int { 42 }

    package main;
    sub a { local *foo::int = sub { return 69 }; &b(@_); }
    sub b { goto \&foo::int }

    print b("x"), a("x"), b("x"), a("x");
but this is a little unsatisfying.

umanwizard

So does Emacs Lisp, by default, although you can set it to lexical scope per file (and it's recommended to always do so).

eru

I think Emacs Lisp also does dynamic scoping. Or at least it used to, about ten years ago. Not sure if they've fixed it.

rat87

Isn't dynamic scoping a bit similar to dependency injection? Maybe there's a form of it that could be useful. Like explicit dynamic scopes?

eru

Well, isn't dependency injection just a more cumbersome way to say 'function arguments'? Dynamic scoping is exactly the same, it's basically equivalent to extra implicit function arguments that are passed through everywhere.

And yes, dynamic scopes can be useful in specific cases. What's wrong is having your variables scoped dynamically by default.

smokel

Also, the term "dynamic programming" has slightly different meanings in the context of leetcode problems and in the context of reinforcement learning.

In the former it's for optimizing a relatively small deterministic system; in the latter, for optimizing a stochastic process.

Pretty confusing.
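
For what it's worth, the stochastic flavour is captured by the Bellman optimality equation (value V, reward R, discount factor gamma, transition probabilities P); the leetcode flavour is essentially the same recursion with deterministic transitions:

  V(s) = \max_a \Big[ R(s,a) + \gamma \sum_{s'} P(s' \mid s, a) \, V(s') \Big]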

jeltz

The idea is way older than both leetcode and reinforcement learning and is used everywhere (for example, when planning SQL queries). If reinforcement learning invented a new way to use the word, then that is entirely their fault, because leetcode is true to the original meaning.

smokel

Both meanings derive from Bellman. "Reinforcement learning" did not invent a new way, and the field of reinforcement learning predates leetcode by a large margin.

jeltz

So what is the confusion then? Leetcode got it from real computer science and engineering and retained the original meaning in both steps without any changes. Every bigger piece of software you use probably uses dynamic programming somewhere. Or do you think that e.g. PostgreSQL is leetcode?

There is nothing derived here. It is exactly the same meaning.