
How to Debounce a Contact (2014)


76 comments · January 5, 2025

nickcw

> But some environments are notoriously noisy. Many years ago I put a system using several Z80s and a PDP-11 in a steel mill. A motor the size of a house drawing thousands of amps drove the production line. It reversed direction every few seconds. The noise generated by that changeover coupled everywhere, and destroyed everything electronic unless carefully protected. We optocoupled all cabling simply to keep the smoke inside the ICs, where it belongs. All digital inputs still looked like hash and needed an astonishing amount of debounce and signal conditioning.

:-)

I had a similar experience with an elevator motor and a terminal. The electronics worked absolutely fine, but when someone operated the elevator it occasionally produced phantom keypresses on the capacitive keypad.

This was perhaps understandable, but what really confused the users was that these phantom keypresses sometimes hit the debug buttons, which weren't even fitted on the production keypad, and stuck the device in debug mode!

We learnt not to include the debug code in the production firmware, and beefed up the transient suppression hardware and the debouncing firmware to fix it.

HeyLaughingBoy

My favorite noise story is from just a couple years ago. Our controller would run fine for hours or days and then reset for no apparent reason. Looking at the debug output, I could tell that it wasn't a watchdog or other internal reset (e.g., system panic) and there had been no user input. The debug log basically said that someone pushed the reset button, which clearly wasn't happening.

The EE and I were standing around the machine and he happened to be in front of the UI when it reset and I mentioned that I heard a soft click just before he said that it reset, but we had no hardware in the region where I thought the noise came from.

Finally, we put two and two together and realized that the system included a propane heater with an automatic controller and the noise I heard was probably the propane igniter. The high voltage from the igniter was wirelessly coupling into one of the I/O lines going to the controller board. The reason that the problem had suddenly started happening after months of trouble-free development was that the customer had rerouted some of the wiring when they were in the machine fixing something else and moved it closer to the heater.

In 30 years of doing this, I can count on one hand the number of times I've had to deal with noise that was coupling in through the air!

throwup238

I’ve worked with electron microscopes and in silicon fabs and it’s super fun being on the teams hunting for sources of noise during construction and bringup. In fabs there are multiple teams because it’s so specialized, the HVAC team being the most interesting one because they’ve got tons of mechanical and electronic sources all over the buildings. They were also the long tail for problems with yield (which was expected and budgeted for). I think the EM startup I worked for failed in part due to not taking the issue seriously enough.

I can’t tell any specific stories but poorly shielded USB ports were the bane of our existence in the 2000s. Every motherboard would come with them and the second a random floor worker would plug something in it’d take down a section of the fab or all of the microscopes even if it were on the other side of the building. For some god forsaken reason all the SBC manufacturers used by tons of the bespoke equipment were also adding USB ports everywhere. We ended up glueing all of them shut over the course of the several months it took to discover each machine as floor workers circumvented the ban on USB devices (they had deadlines to meet so short of gluing them shut we couldn’t really enforce the ban).

ikiris

Only time I’ve ever run into this was:

- an am radio station nearby coupling into a pbx

- when some genius thought it would be a good idea to run Ethernet down the elevator shaft, right next to the lines feeding its truck-sized motor.

fsckboy

>Many years ago I put a system using several Z80s and a PDP-11

Many years ago I wired up my own design for an 8080 system, but I was a self-taught beginner and not very good at stuff like a capacitive RC debounce circuit, so I couldn't get my single-step-the-CPU button to work.

I was reading the spec sheet for the processor and I realized I could kluge it with signals. There was something like a "memory wait" pin, and another one called something like "halt", but one fired on the leading edge of the clock and the other on the trailing edge, so I was able to use an SPDT push button and a flip-flop to single-step: halt/memory-wait on the first bounce of the press, and restart only on the first bounce of the release.

bambax

> Years ago a pal and I installed a system for the Secret Service that had thousands of very expensive switches on panels in a control room. We battled with a unique set of bounce challenges because the uniformed officers were too lazy to stand up and press a button. They tossed rulers at the panels from across the room. Different impacts created quite an array of bouncing.

It's impossible to guess how users will use a system until they can be observed in the wild.

dpkirchner

This probably induced a lot of Object Thrown Operation (OTO) once word spread to everyone -- not just the lazy -- that it was possible to activate the buttons from afar.

khafra

> One vendor told me reliability simply isn't important as users will subconsciously hit the button again and again till the channel changes.

Orthogonally to the point of this excellent article, I found it striking how this was probably true, once--and then TVs got smart enough that it took seconds to change channels, instead of milliseconds. And then it was no longer possible for input failures to be corrected subconsciously.

myself248

Lightswitches are like this for me now. Activating the switch still produces an audible and subtly-tactile click, but then some awful software has to think about it for a moment and close a relay, and then a whole power supply in an LED module somewhere has to start up.

It's slower enough, compared to incandescent, to juuuuust make me flinch back and almost hit the switch again, but nope, it worked the first time after all.

I don't have a term for the annoyance of that flinch, but I should.

HPsquared

It used to be fun and rewarding to flip through channels on analogue equipment. No buffering, no delay, just press, flash to the next channel.

1970-01-01

What's truly been lost is the speed.

20 years ago I could flip through all (40ish) analogue CATV channels *in under 20 seconds* and could tell you what show was on each channel.

Yes, it only took around 500ms to filter and decide whether each station was showing a commercial, news, sports, nature, or something else worth watching.

To this day, with all the CDNs and YouTube evolutions, we still have not come close to receiving video variety anywhere near that speed.

myself248

Seriously. Analog stuff was wild. You could have telephones in adjacent rooms, call one from the other, and your voice would come out the telephone (having traveled electrically all the way across town and back) before the sound came down the hall. Analog cellphones were like that too -- ludicrously low latency.

Being able to interrupt each other without the delay-dance of "no, you go ahead" *pause* was huge. Digital cellular networks just enshittified that one day in about 2002 and apparently most folks just didn't care? I curse it every time I have to talk on the godforsaken phone.

kderbe

Puffer channel changes are near-instant. https://puffer.stanford.edu/

marcosdumay

Well, that one was lost for a really reasonable increase in video quality, reception reliability, and number of channels.

GuB-42

I have thought about that for a while and I wonder if it has to do with how memory becomes bigger more than it becomes faster. For example, compared to 30 years ago, PCs have about 1000x more RAM, but it is only about 100x faster, with about 10x less latency. It is a general trend for all sorts of devices and types of storage.

It means that, for instance, storing an entire frame of video is nothing today, but in the analog era it was hard: you simply didn't have enough storage to afford high latency. Now you can comfortably buffer several frames of video, which is nice since more data means better compression, better error correction, etc... at the cost of more latency. Had memory been expensive and speed plentiful, a more direct pathway would have been cheaper, and latency naturally lower.

robinsonb5

And yet if manufacturers cared enough about UX it wouldn't take much for input failures to be subconsciously correctable again. All you need is some kind of immediate feedback - an arrow appearing on-screen for each button press, for instance (or a beep - but I'd be the first to turn that off, so for the love of all that is holy, don't make it always-on!).

What's crucial, though, is that mistakes or overshoots can be (a) detected (for example, if three presses were detected, show three arrows) and (b) corrected without having to wait for the channel change to complete.

somat

Nobody cares enough to actually do it, but what would it take to have near-instantaneous channel changes again? Prestarting a second stream in the background? And realistically the linear array of channels is also dead, so it really does not matter. So I guess the modern equivalent is having a snappy UI.

A horrible idea, as if our current TV features were not already bad enough: the modern equivalent to quick channel changes would be a learning model that guesses what you want to see next, has that stream prestarted, then ties the "next channel" button to activating that stream. The actual reason this is a bad idea (above and beyond the idea that we want learning models in our TVs) is that the manufacturers would very quickly figure out that instead of the agent working for their customers, they could sell preferential weights to the highest bidder.

Closing thought... oh shit, I just reinvented YouTube Shorts (or perhaps TikTok, but I have managed to avoid that platform so far) - an interface I despise with a passion.


jakewins

There was some article from early Instagram times about this the other week - an innovation there was that they started the upload as soon as the picture was taken, so by the time the user had filled out the caption and hit "submit", the "upload" was instantaneous.

theamk

A workaround for IP-based TVs may be some sort of splash/loading screen that shows a recent-ish screenshot of the channel very quickly. It'd still take a long time for the picture to start moving, but at least the user will see something and could switch away quickly if they don't care about the content at all.

Of course this will be non-trivial on the server side - constantly decode each channel's stream, take a snapshot every few seconds, re-encode to JPEG, serve to clients... And since channels are dead, no one is going to do this.

myself248

It could simply be the most recent I-frame from the other stream in question. That would require neither decoding nor encoding on the server's part, merely buffering, and I suspect transport-stream libraries have very optimized functions for finding I-frames.

Furthermore, once a user starts flipping channels, since most flipping is just prev/next, you could start proactively sending them the frames for the adjacent channels of where they are, and reduce the show-delay to nearly nothing at all. When they calm down and haven't flipped for a while, stop this to save bandwidth.

gosub100

I think it's irrelevant because TV is dead. But I do remember, with rose-tinted glasses, the days of analog cable when changing channels was done in hardware and didn't require 1.5s for the HEVC stream to buffer.

dekhn

I've been given a lot of suggestions for debouncing switches over the years. I'm just doing hobby stuff, either I have an endstop switch for some CNC axis, or more recently, some simple press buttons to drive a decade counter or whatever. My goal for one project was just to have a bit counter that I could step up, down, reset, or set to an initial value, with no ICs (and no software debounce).

I got lots of different suggestions, none of which worked, until I found one that did: 1) the switch is pulled high or low as needed, 2) the switch has a capacitor to ground, 3) the switch signal goes through a Schmitt trigger.

I designed this into its own PCB, which I had manufactured, soldered the SMD and through-hole parts and ICs to, and treat as its own standalone signal source. Once I did that, literally every obscure problem I was having disappeared and the downstream counter worked perfectly.

When you look at the various waveforms (I added a bunch of test points to the PCB to make this easy), my PCB produces perfect square waves. I found it interesting how many suggested hardware solutions I had to try (a simple RC filter did not work) and how many "experts" I had to ignore before I found a simple solution.

the__alchemist

I've been using the perhaps-too-simple:

  - Button triggers interrupt
  - Interrupt starts a timer
  - Next time the interrupt fires, take no action if the timer is running (or use a state variable of some sort)
Of note, this won't work well if the bounce interval is close to the expected actuation speed, or if the timeout interval isn't near this region.
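
A minimal sketch of this in Arduino-style C (the pin number, the 20 ms window, and the millis()-based lockout are illustrative assumptions, not exact code):

  volatile uint8_t press_pending = 0;
  volatile uint32_t last_press_ms = 0;
  const uint32_t DEBOUNCE_MS = 20;   // assumed lockout window

  void buttonIsr(void) {
      uint32_t now = millis();                    // coarse "timer": time since the last accepted edge
      if (now - last_press_ms >= DEBOUNCE_MS) {   // lockout expired: accept this edge
          press_pending = 1;
          last_press_ms = now;                    // restart the lockout
      }
      // else: lockout still running, take no action
  }

  void setup(void) {
      pinMode(2, INPUT_PULLUP);                   // assumed button on pin 2, active low
      attachInterrupt(digitalPinToInterrupt(2), buttonIsr, FALLING);
  }

  void loop(void) {
      if (press_pending) {
          press_pending = 0;
          // handle one debounced press here
      }
  }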

knodi123

> this won't work well if the bounce interval is close to the expected actuation speed

lol, or you could do what my TV does- say "to hell with user experience" and use an interrupt timer anyway.

If I hold down my volume-up button (the physical one on the TV!), I get a quick and fluid increase. But if I hammer the button 4 times per second, the volume still only goes up 1 tick per second.

nomel

Loosely related, my HiSense TV has a wifi remote that apparently sends separate key up and down events to the TV. If the wifi happens to go down while you're holding the volume up button, it never sees the "key up" so will just hold whatever button indefinitely, including the volume button, which is how I discovered it.

nomel

This is functionally identical to the capacitive approach. Pressing the button charges the cap whose voltage decays when released (starts the timer). If the button is pressed before it decays below the "release" threshold (timer expires), the cap is recharged (timer restarted).

ghusbands

This is an interesting take on debouncing, but I found the choice of TV remotes as an example a bit confusing. From my understanding, the issues with remote controls aren’t typically caused by bouncing in the mechanical sense but rather by the design of the IR communication. Most remotes transmit commands multiple times per second (e.g., 9–30 times) intentionally, and the receiver handles these signals based on timing choices.

If there are problems like double channel jumps or missed commands, it’s more about how the receiver interprets the repeated signals rather than a classic switch debounce issue. There’s definitely a similarity in handling timing and filtering inputs, but it seems like the core issue with remotes is different, as they must already handle repeated commands by design.

HPsquared

Then there's the "hold to repeat" mechanic, where if you hold it long enough it'll add virtual presses for you.

geerlingguy

This is one of the best treatises on debounce, I've read it a number of times and probably will again.

One of the best things I've done to help with really bad debounce is spend time testing a number of buttons to find the designs that have, at the hardware/contact level, much less bounce. Some buttons wind up with tens of ms of bounce, and it's hard to correct for it and meet expectations all in software.

theamk

Just don't implement an SR debouncer, OK? And don't use MC140* series chips; those don't work with the 3.3V used by modern micros. And when he says:

> Never run the cap directly to the input on a microprocessor, or to pretty much any I/O device. Few of these have any input hysteresis.

that's not true today. Most small MCUs made in 2005 or later (such as the AVR and STM8 series) have input hysteresis, so feel free to connect the cap directly to the input.

And when he says:

> don't tie undebounced switches, even if Schmitt Triggered, to interrupt inputs on the CPU

that's also not correct for most modern CPUs: they no longer have a dedicated interrupt line, and interrupts share hardware (including the synchronizer) with GPIO. So feel free to tie an undebounced switch to interrupt lines.

AdamH12113

What's wrong with the SR latch debouncer?

theamk

It needs an SPDT switch, and that rules out most buttons.

And if you do end up choosing an SPDT switch, then there are much simpler designs which have the switch toggle between Vcc and GND, like Don Lancaster's debouncer [0]. That design is especially useful if you have many switches, as you can wire all the VCCs and GNDs in parallel and use 8-channel buffers to debounce multiple switches at once.

The SR latch schematic only makes sense if you are working with TTL logic (popular in the 1970s/1980s), which did not have a symmetric drive output pattern, and there is absolutely no reason to use it in the 2000s.

[0] https://modwiggler.com/forum/viewtopic.php?p=275228&sid=52c0...

StayTrue

Agree. Helped me (a software guy) when I needed it. Automatic upvote.

theamk

Analysis is nice, although the graph style is very much 2005. The conclusion is that as long as you don't get a crappy switch, a 10 ms debounce interval should be sufficient.

I would not pay much attention to the rest of the text.

The hardware debouncer advice is pretty stale - most of the modern small MCUs have no problem with intermediate levels, nor with high-frequency glitches. Schmitt triggers are pretty common, so feel free to ignore the advice and connect the cap to the MCU input directly. Or skip the cap and do everything in firmware; the MCU will be fine, even with interrupts.

(Also, I don't get why the text makes firmware debouncing sound hard? There are some very simple and reliable examples, including the last one in the text, which only takes a few lines of code.)

michaelt

> Also, I don't get why the text makes firmware debouncer sound hard?

The article links to Microchip's PIC12F629 which is presumably the type of chip the author was working with at the time.

This would usually have been programmed in assembly language. Your program could be no longer than 1024 instructions, and you only had 64 bytes of RAM available.

No floating point support, and if you want to multiply or divide integers? You'll need to do it in software, using up some of your precious 1024 instructions. You could get a C compiler for the chips, but it cost a week's wages - and between the chip's incredibly clunky support for indirect addressing and the fact there were only 64 bytes of RAM, languages that needed a stack came at a high price in size and performance too.

And while we PC programmers can just get the time as a 64-bit count of milliseconds and not have to worry about rollovers or whether the time changed while you were in the process of reading it - when you only have an 8-bit microcontroller that was an unimaginable luxury. You'd get an 8-bit clock and a 16-bit clock, and if you needed more than that you'd use interrupt handlers.

It's still a neat chip, though - and the entire instruction set could be defined on a single sheet of paper, so although it was assembly language programming it was a lot easier than x86 assembly programming.

theamk

You've read the article, right? None of the code the author gives needs a "64-bit count of milliseconds" or floating-point logic.

The last example (that I've mentioned in my comment) needs a single byte of RAM for state, and updating it involves one logic shift, one "or", and two/three compare + jumps. Easy to do even in assembly with 64 bytes of RAM.

michaelt

Do you mean this code, from the article?

  uint8_t DebouncePin(uint8_t pin) {
      static uint8_t debounced_state = LOW;
      static uint8_t candidate_state = 0;
      candidate_state = candidate_state << 1 | digitalRead(pin);
      if (candidate_state == 0xff)
          debounced_state = HIGH;
      else if (candidate_state == 0x00)
          debounced_state = LOW;
      return debounced_state;
  }
That doesn't work if you've got more than one pin, as every pin's value is being appended to the same candidate_state variable.

The fact the author's correspondent, the author, and you all overlooked that bug might help you understand why some people find it takes a few attempts to get firmware debouncing right :)
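
An obvious fix is to keep one history byte per pin - something like this (just a sketch; the array size and the index argument are illustrative, not from the article):

  #define NUM_PINS 4                        // assumed number of buttons
  static uint8_t debounced_state[NUM_PINS];
  static uint8_t candidate_state[NUM_PINS];

  uint8_t DebouncePinIndexed(uint8_t idx, uint8_t pin) {
      // separate 8-sample shift-register history for each pin
      candidate_state[idx] = candidate_state[idx] << 1 | digitalRead(pin);
      if (candidate_state[idx] == 0xff)
          debounced_state[idx] = HIGH;
      else if (candidate_state[idx] == 0x00)
          debounced_state[idx] = LOW;
      return debounced_state[idx];
  }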

XorNot

That chip has a 200ns instruction cycle though. Whatever program you're running is so small that you can just do things linearly: i.e. once the input goes high you just keep checking if it's high in your main loop by counting clock rollovers. You don't need interrupts, because you know exactly the minimum and maximum number of instructions you'll run before you get back to your conditional.

EDIT: in fact, with a 16-bit timer a rollover happens about every 13 milliseconds (65,536 counts × 200 ns ≈ 13.1 ms), which is a pretty good debounce interval.

michaelt

Sure! I'm not saying debouncing in software was impossible.

But a person working on such resource-constrained chips might have felt software debouncing was somewhat difficult, because the resource constraints made everything difficult.

Cumpiler69

>No floating point support, and if you want to multiply or divide integers? You'll need to do it in software, using up some of your precious 1024 instructions.

Very much not true, as almost nobody ever used floating point in commercial embedded applications. What you use is fractional fixed-point integer math. I used to work in automotive EV motor control, and even though the MCUs/DSPs we used have had floating-point hardware for a long time now, we still never used it, for safety and code-portability reasons. All math was fractional integer. Maybe today's ECUs have started using floating point, but that was definitely not the case in the past, and every embedded dev worth his salt should be comfortable doing DSP math without floating point.

https://en.wikipedia.org/wiki/Fixed-point_arithmetic

https://en.wikipedia.org/wiki/Q_(number_format)
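
For the curious, a minimal Q15 multiply looks something like this (just an illustrative sketch, not actual production motor-control code):

  #include <stdint.h>

  typedef int16_t q15_t;   // value = raw / 2^15, range roughly [-1, 1)

  static inline q15_t q15_mul(q15_t a, q15_t b) {
      int32_t product = (int32_t)a * (int32_t)b;   // full-width intermediate
      return (q15_t)(product >> 15);               // rescale back to Q15
  }

  // e.g. 0.5 * 0.25 = 0.125:  q15_mul(16384, 8192) == 4096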

kragen

Plenty of embedded microcontrollers in the 70s and later not only used floating point but used BASIC interpreters where math was floating point by default. Not all commercial embedded applications are avionics and ECUs. A lot of them are more like TV remote controls, fish finders, vending machines, inkjet printers, etc.

I agree that fixed point is great and that floating point has portability problems and adds subtle correctness concerns.

A lot of early (60s and 70s) embedded control was done with programmable calculators, incidentally, because the 8048 didn't ship until 01977: https://www.eejournal.com/article/a-history-of-early-microco... and so for a while using something like an HP9825 seemed like a reasonable idea for some applications. Which of course meant all your math was decimal floating point.

hamandcheese

Are you saying that it isn't true that there was not floating point support? That there actually was, but nobody used it? I don't see how that changes the thrust of the parent comment in any significant way, but I feel like I may be misunderstanding.

persnickety

What's with the advice about interrupts and undebounced signals?

What does it mean that a flip-flop gets confused? What kind of undesired operation could that cause?

Because, quite honestly, if connecting directly means occasional transient failures, then having less hardware is a tempting tradeoff on small PCBs.

kevin_thibedeau

My test whenever I get handed someone else's code with a debounce routine is to hammer the buttons with rapid presses, gradually slowing down. That shows whether the filter is too aggressive and misses legitimate presses. I also see strange behavior when they're implemented wrong, like extra presses that didn't happen, or getting stuck thinking the button is still held when it isn't.

SOLAR_FIELDS

What kind of line of work gives you the ability to discuss debounce routines as an everyday enough occurrence to speak with authority on the matter, if you don’t mind me asking?

HeyLaughingBoy

Pretty much anything that involves direct conversations with hardware.

I build medical devices.

cushychicken

Be sure to read Jack’s mega treatise on low power hardware/software design if you haven’t yet.

https://www.ganssle.com/reports/ultra-low-power-design.html

One of the best practical EE essays I’ve ever read, and a masterwork on designing battery powered devices.