
Negotiating PoE+ Power in the Pre‑Boot Environment

minetest2048

A related problem is single-board computers that rely on USB-PD for power. USB-PD sources require the sink to complete power delivery negotiation within about 5 seconds, or they will cut power or do funny things. Because USB-PD negotiation is handled in Linux, by the time Linux boots it's too late: the supply cuts power and the board gets stuck in a boot loop: https://www.spinics.net/lists/linux-usb/msg239175.html

The way they're trying to solve it is very similar to this article, by doing the USB-PD negotiation during the U-Boot bootloader stage (roughly sketched after these links):

- https://gitlab.collabora.com/hardware-enablement/rockchip-35...

- https://lore.kernel.org/u-boot/20241015152719.88678-1-sebast...
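
Roughly, the idea looks like this (a hypothetical sketch, not the actual patch code; the fusb302_* helpers are made up for illustration, see the links above for the real driver):

    /* Hypothetical sketch of early-boot USB-PD sink negotiation. The
     * fusb302_* hooks are invented for illustration; the real U-Boot
     * driver talks to the PD PHY over I2C. */
    #include <stdbool.h>
    #include <stdint.h>

    extern int fusb302_get_source_caps(uint32_t *pdos, int max);   /* Source_Capabilities */
    extern int fusb302_send_request(int obj_pos, uint32_t op_ma);  /* Request message */

    bool early_pd_negotiate(uint32_t want_mv, uint32_t want_ma)
    {
        uint32_t pdos[7];
        int n = fusb302_get_source_caps(pdos, 7);

        for (int i = 0; i < n; i++) {
            /* Fixed-supply PDO: bits 19:10 = voltage in 50 mV units,
             * bits 9:0 = max current in 10 mA units. */
            uint32_t mv = ((pdos[i] >> 10) & 0x3ff) * 50;
            uint32_t ma = (pdos[i] & 0x3ff) * 10;
            if (mv == want_mv && ma >= want_ma)
                return fusb302_send_request(i + 1, want_ma) == 0;  /* 1-indexed */
        }
        return false;  /* no match: stay at the 5 V default */
    }

The point is that this runs seconds into boot, well inside the window before the source gives up.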

rickdeckard

Interesting, thanks for sharing. I missed that evolution of "thin" USB-C controllers which delegate the PD handshake elsewhere.

I don't know yet how I feel about the fact that a driver in the OS is supposed to take this role and tell the power supply how much power to deliver. Not necessarily a novel security concern, but a potential nightmare from a plain B2C customer-service perspective (e.g. a faulty driver causing the system to shut down during boot, frying the motherboard, ...)

gorkish

It's not, per se, a driver doing the PD negotiation in software; it's more that the USB chipset isn't initialized and configured for PD negotiation (or anything else, for that matter) until the CPU twiddles its PCI configuration space.

I would have imagined that USB controller chipsets would offer some nonvolatile means to set the PD configuration (like jumpers or an EEPROM) precisely because of this issue. It's surprising to me that such a feature isn't common.

jdndnxnd

Having persistent state between components is a nightmare in the embedded world.

Nevertheless, I'm surprised USB controller initialization is done by the OS instead of the motherboard chipset.

mrheosuper

It would be the role of the embedded controller rather than the SoC to handle PD negotiation. But on an SBC, an EC may not be available.

hypercube33

It kinda blows my mind that the Ethernet or USB PHY doesn't have this stored in some tiny NVRAM and handle all of the negotiation. What if I have a battery to charge while the device is off, such as a laptop? How does Android deal with this when it's not booted? Does the BIOS handle this stuff?

wolrah

> What if I have a battery to charge while the device is off, such as a laptop? How does Android deal with this when it's not booted? Does the BIOS handle this stuff?

In my experience, if the device doesn't have enough power to actually boot it will simply slow charge at the default USB rate.

This can be problematic with devices that immediately try to boot when powered on.

RulerOf

> This can be problematic with devices that immediately try to boot when powered on.

I had an iPad 3 stuck in a low-battery reboot loop like this for hours once upon a time. I eventually got the idea to force it into DFU mode and was finally able to let it charge long enough to complete its boot process.

kevin_thibedeau

There are standalone PD controllers that can be configured with the desired power profile(s) in flash.
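
ST's STUSB4500 is one example: you store the sink PDOs in its NVM once and it negotiates autonomously, no host CPU involved. Conceptually the stored profile is just a small table (illustrative struct only, not the real NVM layout of any part):

    /* Illustrative only. A standalone sink controller stores a few
     * desired PDOs and picks the best match from the source's
     * advertisement, before any OS or firmware runs. */
    #include <stdint.h>

    struct sink_pdo {
        uint16_t voltage_mv;  /* requested fixed voltage */
        uint16_t current_ma;  /* operating current at that voltage */
    };

    static const struct sink_pdo nvm_profile[] = {
        { 5000,  3000 },  /* fallback: 5 V / 3 A */
        { 9000,  3000 },  /* 9 V / 3 A */
        { 15000, 2000 },  /* preferred: 15 V / 2 A */
    };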

rjsw

Reading the thread, the behaviour seems to depend on the power supply. I have powered a Pinebook Pro via USB-C with a PinePower PSU; the OS didn't even have a FUSB302 driver then (I'm currently adding one).

Other boards don't do USB-PD at all and just rely on you using a PSU with a USB-C connector that defaults to 5V, e.g. RPi and Orange Pi 5 (RK3588).

rickdeckard

For 5V output (as used on the Pinebook) you don't need to negotiate anything over USB-PD; that's the default provided by a USB-C PSU to ensure legacy USB compatibility. Support for higher currents at 5V can then be "unlocked" with resistors on the CC lines (much as a USB-A charger signals on its data lines).

Everything beyond 5V requires a handshake between device and PSU, which ensures that the connected device can actually handle the higher power output.
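
Sink-side, reading that resistor advertisement is just an ADC measurement on the CC pin; a rough sketch (thresholds approximate, the exact vRd bands are in the Type-C spec):

    /* A USB-C sink classifying the source's advertised 5 V current from
     * the CC pin voltage (set by the source's Rp pull-up against the
     * sink's 5.1k Rd). Thresholds are approximate. */
    #include <stdint.h>

    typedef enum {
        CC_DEFAULT,  /* default USB power (500/900 mA) */
        CC_1500MA,   /* 5 V @ 1.5 A */
        CC_3000MA,   /* 5 V @ 3.0 A */
    } cc_advert;

    cc_advert classify_cc(uint32_t cc_mv)
    {
        if (cc_mv > 1230) return CC_3000MA;  /* ~1.31-2.04 V band */
        if (cc_mv > 660)  return CC_1500MA;  /* ~0.70-1.16 V band */
        return CC_DEFAULT;                   /* ~0.25-0.61 V band */
    }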

nfriedly

It's arguably not "negotiation", but if the connector is USB-C on both ends, then even 5V requires a couple of resistors to determine which side is the source and which side is the sink.

It's pretty common for really cheap electronics to skip these resistors, and then they can only be powered with a USB-A to USB-C cable, not C-to-C. (Because USB-A ports are always a source and never a sink.) Adafruit even makes a $4.50 adapter to fix the issue.

But you're right that everything higher than 5V & 3A gets significantly more complex.

varjag

Apple solves this by doing all PD negotiation in hardware.

nyrikki

Unless you are very price-sensitive, using USB Power Delivery ICs is the norm now for most devices, but PD is different from PoE. PD is just loose-tolerance resistors on USB.

p12tic

Incorrect. https://en.wikipedia.org/wiki/USB_hardware#USB_Power_Deliver... is a good start about the subject: "PD-aware devices implement a flexible power management scheme by interfacing with the power source through a bidirectional data channel and requesting a certain level of electrical power <...>".

floating-io

No, it's not. You can do very basic selection with resistors, but you can't get above 5V (or more than a couple of amps IIRC) without using the actual PD communication protocol.

throw0101d

> PoE Standards Overview (IEEE 802.3)

For the record, 802.3bt was released in 2022:

* https://en.wikipedia.org/wiki/Power_over_Ethernet

It allows for up to 71W at the far end of the connection.

londons_explore

The UART standards didn't specify bit rates, which allowed the same standard to scale all the way from 300 bps in the 1960s up to 10+ Mbps in the '90s.

Why can't PoE standards do the same?

Simply don't set voltage or current limits in the standard, and instead let endpoint devices advertise what they're capable of.

Aurornis

> Simply don't set voltage or current limits in the standard,

There are thermal and safety limits to how much current and voltage you can send down standard cabling. The top PoE standards are basically at those limits.

> and instead let endpoint devices advertise what they're capable of.

There are LLDP provisions to negotiate power in 0.1W increments.

The standards are still very useful for having a known target to hit. It's much easier to say a device is compatible with one of the standards than to have to check the voltage and current limits for everything.
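
Concretely, the 802.3at "Power via MDI" TLV carries the PD's requested power and the PSE's allocated power as 16-bit fields in 0.1 W units. A sketch of building one, with field values illustrative and the layout from memory of the spec (double-check before use):

    /* Sketch of the 802.3at "Power via MDI" LLDP TLV: organizationally
     * specific TLV (type 127), IEEE 802.3 OUI 00-12-0F, subtype 2.
     * Power fields are 0.1 W units, so a 23 W request encodes as 230. */
    #include <stdint.h>
    #include <string.h>

    size_t build_power_via_mdi_tlv(uint8_t *buf, uint16_t req_dw, uint16_t alloc_dw)
    {
        const uint16_t len = 12;                      /* OUI through allocated power */
        buf[0] = (uint8_t)((127 << 1) | (len >> 8));  /* 7-bit type, 9-bit length */
        buf[1] = (uint8_t)(len & 0xff);
        memcpy(&buf[2], "\x00\x12\x0f", 3);           /* IEEE 802.3 OUI */
        buf[5]  = 0x02;                               /* subtype: Power via MDI */
        buf[6]  = 0x0f;                               /* MDI power support (example bits) */
        buf[7]  = 1;                                  /* PSE power pair (example) */
        buf[8]  = 4;                                  /* power class 4 (example) */
        buf[9]  = 0x51;                               /* type/source/priority (example) */
        buf[10] = req_dw >> 8;   buf[11] = req_dw & 0xff;    /* PD requested */
        buf[12] = alloc_dw >> 8; buf[13] = alloc_dw & 0xff;  /* PSE allocated */
        return 14;
    }

A PD asking for the article's 23 W would send req_dw = 230 and wait for the PSE to echo back an allocation.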

esseph

That would require them to know the standard of cable they're connected with

Unless you like home and warehouse fires

Or you could add per-port fuses, but that sounds incredibly expensive.

brirec

The standard is, well, a standard, and that's why PoE is safe in the first place. Adding per-port fuses won't stop a bad cable from burning, because the fuse would have to be sized for the rating of the PoE switch, not the cable.

This is why you don’t want “fake” Cat6 etc. cable. I’ve seen copper-clad aluminum sold as cat6 cable before, but that shit will break 100% of the time and a broken cable will absolutely catch fire from a standard 802.at switch.

RF_Savage

Proper PoE sources have active per-port current monitoring and will disable PoE power in case of an overcurrent event.

wmf

Does POE+++++ measure the cable? If not, there's nothing in the protocol stopping you from overloading the cable.

userbinator

The standards basically specify the minimum power the source is supposed to be able to supply, and the maximum power the other end can sink.

yencabulator

802.3bt changes how the wires are used physically. Power can now be negotiated to be delivered over previously data-only lines.

mrheosuper

Because power delivery depends on a lot of other things. The most important one I can think of is the cable: an Ethernet cable is a dumb one, with no way to tell its capability. USB-C solved this problem with the E-marker chip, which basically transforms the dumb cable into a smart one.

Even so, the PD protocol limits how much power can be transferred.

varjag

It was finalized in 2018, and by 2020 there were commercial offerings from major vendors. I know this as we developed an 802.3bt product in 2018.

userbinator

> running Intel Atom processors [...] these were full-fledged x86 computers that required more power than what the standard PoE (802.3af) could deliver

Those must've been the server Atoms or the later models that aren't actually all that low-power, as the ones I'm familiar with are well under 10W.

bigfatkitten

You might have a 10W TDP CPU, but the rest of the system requires power too.

SoftTalker

What I don't understand, though, is that these were for "digital signage systems" according to TFA. You're not running Windows 10 Pro on an Atom board and a large illuminated digital sign with 25W of PoE. Maybe the signs were smaller than I'm thinking? Like tablet-sized? But if you're running power for a display anyway, why not just power the whole system from that?

p_l

Lots of digital signage with small touchscreen LCDs that people interact with.

transpute

Wouldn't each display have dedicated PoE?

oakwhiz

Wouldn't it be funny to probe peripherals to decide whether extra power is needed, then request it all inside UEFI?

BobbyTables2

Blade servers basically do this before they even turn on.

amelius

What if the demand is variable?

pawanjswal

Solving PoE+ power negotiation before the OS boots is next-level. This is a clever workaround.

tremon

I would have thought it the other way around: performing PoE+ negotiation in the network hardware is first-level; delegating it to the OS is next-level for me.

mrheosuper

Interesting. If it were me, I would try to boot the OS at a lower CPU clock and maybe get away with it. That approach would be less ideal than the author's, though.

ranger207

I know USB PD has trigger chips that will request the power levels that require active negotiation for you; are there not equivalents for PoE?

willis936

>the switch is configured to require LLDP for Data Link Layer Classification for devices requiring more than 15.4W

This really feels like a switch-configuration problem. A compliant PoE PD circuit indicates its power class in hardware and shouldn't need to bootstrap power delivery. If the PD is compliant and the components are selected correctly, then the PSE is either non-compliant or configured incorrectly.
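
For reference, that hardware classification works by the PD drawing a class signature current while the PSE applies roughly 15-20 V during detection. Approximate figures from memory of 802.3af/at (verify against the standard):

    /* PoE physical-layer classification, approximate figures. */
    struct poe_class { int sig_ma_min, sig_ma_max; float pd_max_w; };

    static const struct poe_class classes[] = {
        [0] = {  0,  4, 12.95f },  /* class 0: default */
        [1] = {  9, 12,  3.84f },
        [2] = { 17, 20,  6.49f },
        [3] = { 26, 30, 12.95f },
        [4] = { 36, 44, 25.5f  },  /* 802.3at: needs 2-event class or LLDP */
    };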

theandrewbailey

> I dug deeper and came across the concept of UEFI applications.

TIL.
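
For anyone else to whom this is new: a UEFI application is just a PE binary the firmware loads and runs with boot services still available (including network access, which is presumably how the article's tool sends LLDP frames). A minimal EDK II-style example:

    /* Minimal UEFI application, EDK II style. Firmware hands the entry
     * point an image handle and the system table; protocols (disk,
     * network, console) hang off the boot services in that table. */
    #include <Uefi.h>
    #include <Library/UefiLib.h>

    EFI_STATUS
    EFIAPI
    UefiMain (IN EFI_HANDLE ImageHandle, IN EFI_SYSTEM_TABLE *SystemTable)
    {
      Print (L"Hello from the pre-boot environment.\r\n");
      return EFI_SUCCESS;
    }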

yencabulator

My rule of thumb: UEFI is like a cleaned-up MS-DOS.

protocolture

This is awesome. I probably would have used passive PoE, which is the de facto workaround everyone seems to use. Good to see someone actually tackle the issue instead of working around it.

xyst

> Back in 2015, I was working on a project to build PoE-powered embedded x86 computers and digital signage systems.

> Our device required about 23W when fully operational, which pushed us into 802.3at (PoE+) territory

The problem the author solved is quite interesting. But I can't help but think how wasteful it is to load up a full copy of Windows just to serve dumb advertisements.

The attack surface of a full copy of Windows 10 is an attacker's wet dream.

Hope most of these installations are put out to pasture and replaced with very low-power solutions.

wildzzz

It all depends on the environment. If Windows is already running on every PC in the building, it may make more sense to have these signs run Windows too. You can put copies of the security software you're already paying for on them and allow approved AD users to log in and manage the signs. Windows is a great attack vector, but there's less risk if you're in control of it versus some vendor solution that you can't audit as easily. The fact that you don't have a user going to malicious websites or plugging in malware-laden flash drives probably reduces the risk too. If you already have 1000 Windows machines on the enterprise network, what's a few more?

stackskipton

Having worked with these systems before, most of them are appliances, meaning no, we did not AD-join them or install our security software.

Windows was running because Linux was too hard for vendors.
