Can you complete the Oregon Trail if you wait at a river for 14272 years?
96 comments
January 13, 2025
quuxplusone
albrot
Hi, I'm the guy who discovered the quirk in the first place. You can survive pretty much indefinitely at the river, with or without food. You could cross the river at any point. I just thought it would be a laugh to see if you could get to a five-digit year. Then, upon resumption of the journey, the party very rapidly deteriorates and you can only survive about 5 or 6 days before they're all dead, even if you gather food immediately and wait to restore health. So the unmodded achievement was "I lived for 15,000 years in The Oregon Trail" and then I asked moralrecordings for help in reverse-engineering the game so I could get the satisfaction of a successful arrival.
Just a bit of fun.
edit: And the answer to "Why THAT river?" is simply that it's the last river in the game, and when I was hoping to complete a run without any modding, I thought it might be possible to play normally, get to the final river, wait 15,000 years, and then try to limp my decrepit deathwagon to the finish line before we all expired. This proved impossible, sadly.
Hilift
Could be the terrain and geology. About 15,000 years ago, after the last glacial maximum subsided, the largest flood in history carved out that part of Oregon. Maybe there is a similar timetable where the Columbia is silted up.
From Wikipedia: "The wagons were stopped at The Dalles, Oregon, by the lack of a road around Mount Hood. The wagons had to be disassembled and floated down the treacherous Columbia River and the animals herded over the rough Lolo trail to get by Mt. Hood."
https://en.wikipedia.org/wiki/Oregon_Trail#Great_Migration_o...
metadat
How did the wagons avoid sinking or taking on water through the wood plank edges? Constant bailing while on the water?
oxidant
I think you take the wagons apart and put them on a ~boat~ edit: was it a raft? In this context, "float them down" doesn't refer to the wagons floating by their own buoyancy, but rather to their position atop the water.
jandrese
The mental image conjured up by this scenario is amusing. Your impossibly patient party waits almost 15,000 years to cross a river in a state of suspended animation. Then they finally cross the river and instantly wither away to dust, because they haven't had a good meal in 15 millennia.
Something that was very common with BASIC interpreters but still baffles me: they were running on machines with extremely limited memory and fairly limited CPU time, yet for some reason decided not to make integer types available to programmers. Every number you stored was a massive floating point thing that ate memory like crazy and took forever for the wimpy 8-bit CPU with no FPU to work on. It's like they were going out of their way to make BASIC as slow as possible. It probably would have been faster and more memory-efficient if all numbers were BCD strings.
glxxyz
BBC BASIC from Acorn in 1982 supported integers and reals. From page 65 of the user guide [https://www.stardot.org.uk/forums/download/file.php?id=91666]
Three main types of variables are supported in this version of BASIC: they are integer, real and string.

                  integer         real          string
    example       346             9.847         "HELLO"
    typical       A%              A             A$
    names         SIZE%           SIZE          SIZE$
    maximum size  2,147,483,647   1.7×10^38     255 characters
    accuracy      1 digit         9 sig figs    —
    stored in     32 bits         40 bits       ASCII values

A%, A, and A$ are 3 different variables of different types.
CalRobert
And to add insult to injury, you write "peperony and chease" on their tombstone.
Edit:
Poor Andy :-(
https://tvtropes.org/pmwiki/pmwiki.php/Trivia/TheOregonTrail
em3rgent0rdr
> "for some reason decided not to make integer types available to programmers...It's like they were going out of their way to make BASIC as slow as possible."
BASIC was well-intentioned: it aimed to make programming easy enough that ordinary people in non-technical fields — students, people who weren't "programmers" — could grasp it. To make it that easy, you'd better not scare off adopters with concepts like int vs. float, maximum number sizes, overflow, etc. The ordinary person's concept of a number fits in what computers call a float. You make a good point, though, that BCD strings might have done the trick better as a one-size-fits-all number format that might have been faster.
BASIC also wasn't intended for computationally intense things like serious number crunching, which back in the day usually was done in assembly anyway. The latency to perform arithmetic on a few floats (which is what your typical basic program deals with) is still basically instantaneous from the user's perspective even on a 1 MHz 8-bit CPU.
mywittyname
> but for some reason decided not to make integer types available to programmers.
Can you expand upon this? All of the research I've done suggests that not only was it possible to use integer math in BASIC for the Apple II, there are versions of BASIC that only support integers.
Salgat
https://en.wikipedia.org/wiki/Dartmouth_BASIC
"All operations were done in floating point. On the GE-225 and GE-235, this produced a precision of about 30 bits (roughly ten digits) with a base-2 exponent range of -256 to +255.[49]"
jazzyjackson
Good find. Dartmouth was the original BASIC, built for their mainframe timesharing system; Apple and the other micro variants came later.
Speaking of, John G. Kemeny's book "Man and the Computer" is a fantastic read, introducing what computers are, how time sharing works, and the thinking behind the design of BASIC.
jandrese
BASIC doesn't have type declarations, so most BASIC interpreters just used floating point everywhere to be as beginner-friendly as possible.
The last thing they wanted was someone making their very first app and it behaves like:
Please enter your name: John Doe
Please enter how much money you make every day: 80.95
Congratulations John Doe you made $400 this week!
int_19h
Classic BASIC does have typing, it just shoves it into the variable name. E.g. X$ is a string, X% is a 16-bit signed integer, and X# is a double-precision floating point number.
This started with $ for strings in Dartmouth BASIC (when it introduced strings; the first edition didn't have them), and then other BASIC implementations gradually added new suffixes. I'm not sure when % and # showed up specifically, but they were already there in Altair BASIC, and thence spread to its descendants, so the convention was well-established by the 1980s.
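The suffix convention above can be sketched in a few lines of Python (illustrative only — the suffix-to-type mapping follows the Microsoft BASIC family as described, but the helper function and its type labels are made up):

```python
def basic_type(name: str) -> str:
    """Map a classic-BASIC variable name to a type via its suffix character."""
    suffixes = {
        "$": "string",                  # X$ -> string
        "%": "16-bit signed integer",   # X% -> integer
        "#": "double-precision float",  # X# -> double
        "!": "single-precision float",  # X! -> single
    }
    # No suffix: the default numeric type (a float in most Microsoft BASICs).
    return suffixes.get(name[-1], "default numeric (float)")

print(basic_type("X$"))   # string
print(basic_type("X%"))   # 16-bit signed integer
print(basic_type("N"))    # default numeric (float)
```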
Breza
IIRC Python had similar reasoning for making division produce a float by default in version 3; in versions 1 and 2, dividing two integers gave you an integer. R always did it that way, and Julia still assumes integer division, which occasionally trips me up when switching languages.
KerrAvon
Wozniak's original BASIC for the Apple II only supported integers; when Apple decided they needed floating point and Woz refused to spend time on it, they decided to license it from Microsoft, producing Applesoft BASIC. Applesoft was slower than Woz's BASIC, because it performed all arithmetic in floating point.
Brian_K_White
MS BASIC on TRS-80 model 100
default, a normal variable like N=10, is a signed float that requires 8 bytes
optional, add ! suffix, N!=10, is a signed float that requires 4 bytes
optional, add % suffix, N%=10, is a signed int that requires 2 bytes
And that's all the numbers. There are strings, which use one byte per byte, but you have to call a function to convert a single byte of a string to its numerical value.
An unsigned 8-bit int would be very welcome on that and any similar platform. But the best you can get is a signed 16-bit int, and you have to double the length of your variable name all through the source to even get that. Annoying.
leni536
Maybe it was a consideration of code size. If you already choose to support floats then you might as well only support floats and save a bunch of space by not supporting other arithmetic types.
nopakos
I remember having integer variables in Amstrad CPC (Locomotive) Basic. Something with the % symbol. edit: ChatGPT says that BBC BASIC and TRS-80 Microsoft BASIC also supported integer variables with % declaration.
canucker2016
The Wikipedia page for Microsoft BASIC (of which Applesoft BASIC is a variant), https://en.wikipedia.org/wiki/Microsoft_BASIC, mentions that integer variables were stored as 2 bytes (signed 16-bit) but all calculations were still done in floating point (plus you needed to store the % character to denote an integer variable).
So the main benefit was for saving space with an array of integers.
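To put numbers on that saving — a back-of-the-envelope sketch in Python (the 5-byte float size is the 8-bit Microsoft BASIC MBF format; the 1,000-element array is a made-up example):

```python
# Rough memory cost of a 1,000-element array in 8-bit Microsoft BASIC:
# floats use the 5-byte MBF format, integers (the % suffix) use 2 bytes.
N = 1000
float_bytes = N * 5   # array of default floats
int_bytes = N * 2     # array of % integers

print(float_bytes, int_bytes, float_bytes - int_bytes)  # 5000 2000 3000
```

Three kilobytes back on a 48K machine is nothing to sneeze at.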
sedatk
Yes, Locomotive BASIC also supported DEFINT command, so, all variables in a given range would be treated as integers without "%" suffix.
csours
You board the Generation Ship Oregon Trail with some trepidation. If the scientists are correct you will be in suspended animation for the next 14272 years. You already feel colder somehow. To the West you see a robotic barkeep.
LeifCarrotson
Sometimes I hate working with code where the developer was either a BASIC developer or a mathematician: variable names limited to two characters (like "H" for health and "PF" for pounds of food remaining) work when manipulating an equation, and are a lot better than 0x005E, but the code isn't nearly self-documenting. On the other hand, the variable name could be "MessageMappingValuePublisherHealthStateConfigurationFactory". Naming things is one of the hard problems in computer science, and I'm glad we're past the point where the number of significant characters was restricted to 2 for performance reasons.
Unrelated: my monitor and my eyeballs hate the moiré patterns developed by the article's background image at 100% zoom — there's a painful flicker effect. Reader mode ruins the syntax highlighting and code formatting. Fortunately, zooming in or out mostly fixes it.
harrison_clarke
have you seen arthur whitney's code style?
https://www.jsoftware.com/ioj/iojATW.htm
i tried this style for a minute. there are some benefits, and i'll probably continue going for code density in some ways, but way less extreme
there's a tradeoff between how quickly you can ramp up on a project, and how efficiently you can think/communicate once you're loaded up.
(and, in the case of arthur whitney's style, probably some human diversity of skills/abilities. related: i've thought for a while that if i started getting peripheral blindness, i'd probably shorten my variable names; i've heard some blind people describe reading a book like they're reading through a straw)
parpfish
over the years i've had to translate a lot of code from academics/researchers into prod systems, and variable/function naming is one of their worst habits.
just because the function you're implementing used single-character variables to render an equation in latex, doesn't mean you have to do it that way in the code.
a particular peeve was when they'd make variables for indexed values named `x_i` instead of just having an array `x` and accessing the ith element as `x[i]`
Breza
At least I've never seen UTF-8 math symbols in the wild. Julia, Python, and other languages will let you use the pi symbol for 3.14... instead of just calling it pi.
JoshTriplett
I've seen that. Some Haskell libraries use Unicode for custom operators. Makes the code even harder to understand.
bluedino
40x25 text screens and line-by-line editors encourage short variable names as well
sumtechguy
Also, with some of that older stuff, the compiler only let you have 8 chars for a variable name.
bluedino
Applesoft BASIC only uses the first two characters (!) to distinguish one variable name from another. WAGON and WATER would be the same.
(page 7)
https://mirrors.apple2.org.za/Apple%20II%20Documentation%20P...
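The two-significant-character rule is easy to simulate. A hypothetical Python sketch (the truncation behavior matches the manual's description; the helper name is invented):

```python
def applesoft_key(name: str) -> str:
    """Reduce an Applesoft variable name to the key the interpreter
    actually uses: the first two characters, plus any type suffix."""
    suffix = name[-1] if name[-1] in "$%" else ""
    letters = name.rstrip("$%")
    return letters[:2].upper() + suffix

# WAGON and WATER collapse to the same variable:
print(applesoft_key("WAGON") == applesoft_key("WATER"))  # True
print(applesoft_key("WAGON"))                            # WA
print(applesoft_key("SIZE%"))                            # SI%
```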
hombre_fatal
On the other hand, sometimes less descriptive but globally unique names add clarity because you know what they mean across the program, kinda like inventing your own jargon.
Maybe "PF" is bad in one function but if it's the canonical name across the program, it's not so bad.
xg15
and then there are the people who name their variables Dennis...
nadermx
"The game dicks you at the last possible moment by expecting the year to be sensible"
Great read on how to actually hack. It takes you through the walls he hits, and how hitting each wall "opens up a new vector of attack".
egypturnash
> Several days later, I tried writing a scrappy decompiler for the Applesoft BASIC bytecode. From past experience I was worried this would be real complex, but in the mother of all lucky breaks the "bytecode" is the original program text with certain keywords replaced with 1-byte tokens. After nicking the list of tokens from the Apple II ROM disassembly I had a half-decent decompiler after a few goes.
Applesoft has a BASIC decompiler built in, it's called "break the program and type LIST". Maybe Oregon Trail did something to obscure this? I know there were ways to make that stop working.
sumtechguy
If I remember correctly, Applesoft also had a few single byte codes that would decode to other keywords, like PRINT and ?. But I could be remembering badly.
bongodongobob
Depends on the version. The original was BASIC, but the one with graphics and sound (which I think was more popular?) was assembly.
egypturnash
Wikipedia implies this version was mostly BASIC, the hunting minigame was in assembly.
canucker2016
Yes, a few minutes spent reading about Applesoft BASIC or Microsoft BASIC would've reduced the cringe factor in reading a neophyte trying to mentally grapple with old technology.
"bytecode" and "virtual machine", no, no, no. That's not the path to enlightenment...
In this case, print debugging is your best bet.
bluedino
> So 1985 Oregon Trail is written in Applesoft BASIC
This surprised me for some reason. I guess it's been 30-some years, but I remember my adventures in Apple II BASIC not running that quickly — maybe Oregon Trail's graphics were simpler than I remember.
I guess I just assumed any "commercial" Apple II games were written in assembly, but perhaps the action scenes had machine code mixed in with the BASIC code.
Suppafly
There are so many different versions of Oregon Trail, you might have played the old version first but substituted the graphics and game play you remember with a later version you also played. Not to mention that imagination fills in a lot of the details when you're playing those games, usually as a child.
Scuds
There are two versions of Ultima 1: the original is BASIC with assembly, and there is a remake in pure assembly. You can definitely tell the improvements the asm version brings, with the overworld scrolling faster and the first-person dungeons redrawing very quickly.
So I'm guessing the game logic of MECC Oregon was in BASIC, with some assembly routines to redraw the screen. BTW, the original Oregon Trail was also 100% BASIC and a PITA to read. You're really getting to the edges of what Applesoft BASIC is practically capable of with games like Akalabeth and Oregon.
vidarh
That reminds me of finding out Sid Meier's Pirates! on the C64 was a mix of BASIC and assembly. You could LIST a lot of it, but the code was full of SYS calls to various assembly helpers, which I remember was incredibly frustrating as I did not yet have any idea how assembly worked so it felt so close but so far to being able to modify it.
egypturnash
Wikipedia tells me that the 1985 version's hunting minigame is in assembly; it does not explicitly say that the rest is in Basic but it definitely implies this.
bluGill
Oregon Trail was conceptually simple, and so well-crafted BASIC would be plenty fast. Most other games were more complex and probably needed assembly. Though it was common to call inline assembly (as binary code) in that era as well.
classichasclass
Not uncommon, at least on the A2 and C64, to have a BASIC scaffold acting like a script that runs various machine language subroutines and/or the main game loop.
ianbicking
I also thought it was interesting that it was actually several BASIC programs with data passed back and forth by stuffing it in specific memory locations.
itslennysfault
I find it amusing that the bug in the final screen is essentially the Y2K bug.
dekhn
In the dungeon-crawling classic Wizardry, there was a cheat: if you used your bishop to try to 'I'dentify the object in inventory slot 9 (there were only 8 slots) over and over, you'd get a 100,000,000 XP bonus. I believe it was an unintentional bug.
__alexander
> Specialist knowledge is for cowards
What a strange and thought provoking statement.
suprfnk
“A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.”
― Robert A. Heinlein
8bitsrule
Great quote ... but: Warning from experience: do not try using that last part in a job application.
losvedir
> If you're into retro computing, you probably know about Oregon Trail
Damn, that made me feel really old.
jedberg
Yesterday I was at a gaming parlor in SF and they had "Oregon Trail the card game". I sent my brother a picture. I couldn't get my kids to even understand why it was special.
jazzyjackson
Lol. I played Oregon Trail as part of curriculum in elementary school in the mid 90s, I guess by then it was already a retro throwback.
EDIT: I played the 1985 version, I didn't know there was a text adventure.
rbanffy
> If you want to modify anything else in the 16-bit address space, you first need to write a 16-bit pointer to the Zero Page containing the location you want, then use an instruction which references that pointer.
This is so completely wrong I question the person's ability to understand what's happening in the emulator.
Also, that LDA instruction reads the 2-byte pointer from the memory location, adds Y, and loads the accumulator from the resulting memory position. IIRC, the address + Y can't cross a page - Y is added to the least significant byte without a carry to the MSB.
> and the program is stored as some sort of bytecode
We call it "tokenized". Most BASIC interpreters did that to save space and to speed up parsing the code (it makes zero sense to store the bytes of P, R, I, N, and T when you can store a single token for "PRINT").
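Tokenization of that sort can be sketched in a few lines of Python (the token byte values here are illustrative, not any real interpreter's table, and a real tokenizer also has to skip string literals and REM comments):

```python
# Toy BASIC tokenizer: keywords become single high-bit bytes,
# everything else stays as plain ASCII text.
TOKENS = {"PRINT": 0xBA, "GOTO": 0xAB, "IF": 0xAD, "THEN": 0xC4}

def tokenize(line: str) -> bytes:
    out = bytearray()
    i = 0
    while i < len(line):
        for kw, tok in TOKENS.items():
            if line.startswith(kw, i):
                out.append(tok)       # replace keyword with its token
                i += len(kw)
                break
        else:
            out.append(ord(line[i]))  # plain character passes through
            i += 1
    return bytes(out)

def detokenize(data: bytes) -> str:
    # "Decompiling" is just the reverse lookup — which is why LIST works.
    rev = {tok: kw for kw, tok in TOKENS.items()}
    return "".join(rev.get(b, chr(b)) for b in data)

src = 'PRINT "HELLO": GOTO 10'
toks = tokenize(src)
print(len(src), "->", len(toks), "bytes")   # 22 -> 15 bytes
print(detokenize(toks) == src)              # True
```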
toast0
> This is so completely wrong I question the person's ability to understand what's happening in the emulator.
I'd argue that it's not completely wrong in the context of a BASIC program; other addressing modes exist, but I don't think the BASIC interpreter will use self-modifying code to make LDA absolute work.
> IIRC, the address + Y can't cross a page - Y is added to the least significant byte without a carry to the MSB.
If wikipedia is accurate, the address + Y can cross a page boundary, but it will take an extra cycle --- the processor will read the address + Y % 256 first, and then read address + Y on the next cycle (on a 65C02 the discarded read address will be different). But if you JMP ($12FF), it will read the address from 12FF and 1200 on an 6502 and a 12FF and 1300 on a 65C02 --- that's probably what you're thinking of.
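Both behaviors can be modeled in a few lines of Python — a sketch of the NMOS 6502 semantics as described above, assuming the Wikipedia account is right (the function names are invented):

```python
def lda_indirect_y(mem, zp, y):
    """(zp),Y: read a 16-bit pointer from zero page, then add Y."""
    lo = mem[zp]
    hi = mem[(zp + 1) & 0xFF]          # the pointer itself wraps within page 0
    base = (hi << 8) | lo
    addr = (base + y) & 0xFFFF         # Y *can* carry into the high byte...
    page_cross = (base & 0xFF00) != (addr & 0xFF00)  # ...at a one-cycle cost
    return addr, page_cross

def jmp_indirect_6502(mem, ptr):
    """NMOS JMP (indirect) bug: the high byte is fetched without
    carrying into the pointer's own page."""
    lo = mem[ptr]
    hi = mem[(ptr & 0xFF00) | ((ptr + 1) & 0xFF)]
    return (hi << 8) | lo

mem = {0x12: 0xF0, 0x13: 0x20}          # zero-page pointer -> $20F0
print(lda_indirect_y(mem, 0x12, 0x20))  # (0x2110, True): crossed a page

mem2 = {0x12FF: 0x34, 0x1200: 0x12, 0x1300: 0x99}
print(hex(jmp_indirect_6502(mem2, 0x12FF)))  # 0x1234, not 0x9934
```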
rbanffy
Thanks. It’s been 40+ years since I last programmed in 6502 assembly. I am rusty.
anonymousiam
Yeah. I got as far as the zero-page description and stopped reading. The 6502 was the first microprocessor I learned (on the KIM-1) in 1979. All that zero-page addressing offers is a faster way to access memory: when using zero-page addressing modes, you only need one octet for the address instead of two. On a 1MHz CPU with no cache, that saves you a cycle — a whole microsecond — on every access, because you didn't need to fetch the high-order address octet from program memory!
On the 6502, you can absolutely access all 64K of memory space with an LDA instruction.
The other weird thing about the 6502 is "page one", which is always the stack, and is limited in size to 256 bytes. The 256 byte limit can put a damper on your plans for doing recursive code, or even just placing lots of data on the stack.
I've done lots of embedded over the years, and the only other processor I've developed on that has something similar to the 6502 "zero page" memory was the Intel 8051, with its "direct" and "indirect" memory access modes for the first 128 bytes of volatile memory (data, idata, bdata, xdata, pdata). What a PITA that can be!
bluGill
> On the 6502, you can absolutely access all 64K of memory space with an LDA instruction.
There are two LDA instructions (maybe more; I too am about 40 years rusty). One loads from page 0 only, and thus saves time by only needing to read one byte of address; the other reads two bytes of address and can read from all 64K. In later years you had various bank-switching schemes to handle more than 64K, but the CPU knew nothing about how that worked, so I'll ignore them. Of course your assembler probably just called both LDA and used other clues to select which, but they were different CPU instructions.
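The two encodings in question, with their documented 6502 opcodes and timings (there are actually eight LDA addressing modes in total; the table layout here is just for illustration):

```python
# The plain zero-page and absolute forms of LDA on the 6502:
# zero page is one byte shorter and one cycle faster per access.
LDA = {
    "zero page": {"opcode": 0xA5, "bytes": 2, "cycles": 3},  # LDA $12
    "absolute":  {"opcode": 0xAD, "bytes": 3, "cycles": 4},  # LDA $1234
}

for mode, info in LDA.items():
    print(f"LDA {mode}: opcode ${info['opcode']:02X}, "
          f"{info['bytes']} bytes, {info['cycles']} cycles")
```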
Can someone fill in the missing link in my understanding here? It seems like the post never gets around to explaining why waiting for 14272 years should make the river passable. (Nor why this river in particular, as opposed to any other obstacle.)
The post alludes to a quirk that causes people not to get sicker while waiting; but it says they still get hungry, right? So you can't wait 14272 years there for any purpose unless you have 14272 years' worth of food, right?
IIUC, the blogger goes on to patch the game so that you don't get hungry either. But if patching the game is fair play, then what's the point of mentioning the original no-worsening-sickness quirk?
It kinda feels like asking "Can you win Oregon Trail by having Gandalf's eagles fly you to Willamette?" and then patching the game so the answer is "yes." Like, what's the reason I should care about that particular initial question? care so badly that I'd accept cheating as an interesting answer?