Copper is Faster than Fiber (2017) [pdf]
47 comments
· July 1, 2025 · MadVikingGod
Palomides
High-speed links all have forward error correction now (even PCIe); nothing in my small rack full of 40GbE devices connected with DACs reports any link-level errors
p_l
DACs don't cause problems, but twisted pair at 10Gig is a PITA due to power and thermals
somanyphotons
What allows DACs to avoid the power/thermal issues that twisted pair has?
(My naive view is that they're both 'just copper'?)
kijiki
DACs are usually twin-ax, which is just 2 coax cables bundled. The shielding matters a lot, compared to unshielded twisted pairs.
Faster parallel DACs require more pairs of coax, and thus are thicker and more expensive.
Hilift
Storage over copper used to be suboptimal, but not necessarily due to the cable. QUIC over UDP is much closer to wire speed. So 10 Gb copper and 10 Gb fiber are probably the same, but 40+ Gb fiber is quite common now.
bhaney
> I'm surprised that fiber is only 0.4 ns/m worse than their direct copper cables
Especially since physics imposes up to a ~1.67 ns/m penalty on fiber. The best-case signal delay in copper is ~3.3 ns/m (essentially the inverse of the vacuum speed of light), while it's ~5 ns/m in optical fiber.
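For reference, a rough per-metre delay calculation (the refractive index and velocity factors below are typical assumed values, not numbers from the post):

```python
# Back-of-the-envelope propagation delay per metre in different media.
C = 299_792_458  # speed of light in vacuum, m/s

def delay_ns_per_m(velocity_fraction):
    # delay per metre in ns when the signal moves at velocity_fraction * c
    return 1e9 / (C * velocity_fraction)

print(f"vacuum/free space       : {delay_ns_per_m(1.00):.2f} ns/m")
print(f"optical fiber (n ~ 1.47): {delay_ns_per_m(1 / 1.47):.2f} ns/m")
print(f"good coax (VF ~ 0.85)   : {delay_ns_per_m(0.85):.2f} ns/m")
print(f"twisted pair (VF ~ 0.66): {delay_ns_per_m(0.66):.2f} ns/m")
```

That works out to roughly 3.3, 4.9, 3.9 and 5.1 ns/m respectively, which is where the ~1.67 ns/m best-case gap comes from.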
laurencerowe
> So the findings here do make sense. For sub-5 m cables, directly connecting two machines is going to be faster than having some PHY in between that has to resignal. I'm surprised that fiber is only 0.4 ns/m worse than their direct copper cables; that is pretty incredible.
Surely resignaling should be the fixed cost they calculate at about 1 ns? Why does it also incur a 0.4 ns/m cost?
cenamus
Light takes ~3.3 ns per metre in vacuum, so maybe it's the lowered speed through the fibre?
The speed of an electrical signal in a wire should be pretty close to c (at least the wavefront)
myself248
The velocity factor of most cables is between 0.6 and 0.8, i.e. the signal travels at 60-80% of the speed of light in vacuum. It depends on the dielectric material and cable construction.
This is why point-to-point microwave links took over the HFT market -- they're covering miles with free space, not fiber.
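A rough sketch of why free space wins over fiber at that scale (the ~1,200 km route length is an assumed Chicago-New York figure, not from the comment):

```python
# One-way delay over a long route: free-space microwave vs silica fiber.
C = 299_792_458  # m/s

route_m = 1_200e3            # assumed, roughly Chicago-New York
free_space_s = route_m / C
fiber_s = route_m / (C / 1.47)   # refractive index ~1.47 for silica fiber

print(f"free-space microwave: {free_space_s * 1e3:.2f} ms one-way")
print(f"fiber               : {fiber_s * 1e3:.2f} ms one-way")
print(f"difference          : {(fiber_s - free_space_s) * 1e3:.2f} ms")
```

That is nearly 2 ms one-way before you even account for fiber routes being longer than the great circle.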
laurencerowe
I misremembered the speed of electrical signal propagation from high school physics. It's around two thirds the speed of light in a vacuum, not one third. The speed of light in an optical fibre is also around two thirds of the speed in a vacuum.
It seems there is quite a wide range for different types of cables, so some will be faster and others slower than optical fibre. https://en.wikipedia.org/wiki/Velocity_factor
But the resignalling must surely be unrelated?
b3orn
It's the speed of light, but not the same speed as in air or vacuum. The same applies in optical fibers. Both are around two thirds of the speed of light in vacuum.
tcdent
PHYs are going away and fiber is going straight to the chip now, so while the article is correct, in the near future this will not be the case.
sophacles
The chip has a PHY built into it on-die, you mean. This affects timing for getting the signal from memory to the PHY, but not necessarily the switching times of transistors in the PHY, nor the timing of turning the light on and off.
jerf
"Has lower latency than" fiber. Which is not so shocking. And, yes, technically a valid use of the word "faster" but I think I'm far from the only one who assumed they were going to make a bandwidth claim rather than a latency claim.
kragen
I assumed they were going to make a bandwidth claim and was prepared to reject it as nonsense.
jcelerier
I wonder where the idea of "fast" being about throughput comes from. For me it has always, always only ever meant latency.
nine_k
Latency to the first byte is one thing, latency to the last byte, quite another. A slow-starting high-throughput connection will bring you the entire payload faster than an instantaneously starting but low-throughput connection. The larger the payload, the more pronounced is the difference.
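A quick illustration with made-up link parameters (not from the thread):

```python
# Time-to-last-byte = first-byte latency + time to push the payload through.
def time_to_last_byte(latency_s, throughput_bps, payload_bytes):
    return latency_s + payload_bytes * 8 / throughput_bps

payload = 100 * 1024 * 1024  # 100 MiB, an arbitrary example payload

print(f"50 ms latency, 10 Gb/s : {time_to_last_byte(0.050, 10e9, payload):.2f} s")
print(f"1 ms latency, 100 Mb/s : {time_to_last_byte(0.001, 100e6, payload):.2f} s")
```

The high-latency fat pipe delivers the whole payload in about 0.13 s, the low-latency thin pipe in about 8.4 s.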
mouse_
ehh... latency is an objective term that, for me at least, has always meant something like "how quickly can you turn on a light bulb at the other end of this system"
switchbak
A 9600 baud serial connection between two machines in the 90's would have low latency, but few would have called it fast.
Maybe it's all about sufficient bandwidth - now that it's ubiquitous, latency tends to be the dominant concern?
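Quick arithmetic with an assumed 1 MiB file and 8N1 framing:

```python
# Why nobody called 9600 baud "fast": transfer time for a modest file.
baud = 9600
bits_per_byte = 10            # 8 data bits plus start/stop framing (8N1)
file_bytes = 1 * 1024 * 1024  # assumed 1 MiB file

transfer_s = file_bytes * bits_per_byte / baud
print(f"1 MiB over 9600 baud: {transfer_s / 60:.0f} minutes")
```

About 18 minutes, regardless of how low the per-character latency was.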
p_j_w
Presumably from end users who care about how much time it takes to receive or send some amount of data.
wat10000
Until pretty recently, throughput dominated the actual human-relevant latency of time-until-action-completes on most connections for most tasks. "Fast" means that your downloads complete quickly, or web pages load quickly, or your e-mail client gets all of your new mail quickly. In the dialup age, just about everything took multiple seconds if not minutes, so the ~200ish ms of latency imposed by the modem didn't really matter. Broadband brought both much greater throughput and much lower latency, and then web pages bloated and you were still waiting for data to finish downloading.
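Rough arithmetic for the dial-up case (the page size is an assumed value; the ~200 ms latency figure is from the comment above):

```python
# How little the modem's latency mattered next to its throughput.
modem_bps = 56_000        # nominal 56k modem
latency_s = 0.200         # ~200 ms modem latency mentioned above
page_bytes = 100 * 1024   # assumed size of a modest late-90s page with images

transfer_s = page_bytes * 8 / modem_bps
total_s = latency_s + transfer_s
print(f"transfer: {transfer_s:.1f} s, latency share of total: {latency_s / total_s:.1%}")
```

Roughly 15 seconds of transfer time, with latency contributing only a percent or so of the wait.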
throw0101d
This coming from Arista is unsurprising because their original niche was low latency, and the first industry where they made inroads against the 'incumbents' was finance:
> The low-latency of Arista switches has made them prevalent in high-frequency trading environments, such as the Chicago Board Options Exchange[50] (largest U.S. options exchange) and RBC Capital Markets.[51] As of October 2009, one third of its customers were big Wall Street firms.[52]
* https://en.wikipedia.org/wiki/Arista_Networks
They've since expanded into more areas, and are said to be fairly popular with hyper-scalers. Often recommended in forums like /r/networking (support is well-regarded).
One of the co-founders is Andy Bechtolsheim, also a co-founder of Sun, who wrote Brin and Page one of the earliest cheques to fund Google.
nimos
This isn't really surprising. Fiber isn't better because of signal propagation speed, it's all about signal integrity.
exabrial
IIRC, the passive copper SFP Direct Attach cables are basically just a fancy "crossover cable" (for those old enough to remember those days). Essentially there is no medium conversion.
zokier
What are applications where 5ns latency improvement is significant?
thanhhaimai
High Frequency Trading is one.
Loughla
Anything else? Because that's the only one I can think of.
smj-edison
I'd expect HPC would be another, since a lot of algorithms that run on those clusters are bottlenecked by latency or throughput in communication.
empaone
Any high-utilization workload with a chatty protocol dominated by small IOs, such as:
* distributed filesystems (MooseFS, Ceph, Gluster) used for hyperconverged infrastructure
* SANs hosting VMs with busy OLTP databases
* OLTP replication
* CXL memory expansion, where remote memory needs to be as close to inter-NUMA-node latency as possible
citizenpaul
It's been long known that Direct Attach Copper cables (DACs) are faster for short runs. It makes sense since there does not need to be an analog-to-digital conversion.
vlovich123
Faster only because the distances involved are short enough that the PHY layer adds significant overhead. But if you somehow could wave a magic wand and make optical computing work, then fiber would be faster (& generate less heat).
throw0101d
> Faster only because the distances involved are short enough that the PHY layer adds significant overhead.
This specifically mentions the 7130 model, which is a specialized bit of kit, and which Arista advertises for (amongst other things):
> Arista's 7130 applications simplify and transform network infrastructure, and are targeted for use cases including ultra-low latency exchange trading, accurate and lossless network visibility, and providing vendor or broker based shared services. They enable a complete lifecycle of packet replication, multiplexing, filtering, timestamping, aggregation and capture.
* https://www.arista.com/en/products/7130-applications
It is advertised as a "Layer 1" device and has a user-programmable FPGA. Some pre-built applications are: "MetaWatch: Market data & packet capture, Regulatory compliance (MiFID II - RTS 25)", "MetaMux: Market data fan-out and data aggregation for order entry at nanosecond levels", "MultiAccess: Supporting Colo deployments with multiple concurrent exchange connection", "ExchangeApp: Increase exchange fairness, Maintain trade order based on edge timestamps".
Latency matters (and may even be regulated) in some of these use cases.
zokier
The PHY contributes only a 1 ns difference, but the results also show a 400 ps/m advantage for copper, which I can only assume comes from the difference in EM propagation speed in the medium.
myself248
No. Look at the graph -- the offset when extrapolated back to zero length is the PHY's contribution.
The differing slope of the lines is due to the velocity factor of the cable: signals travel slower than the vacuum speed of light in any medium, and by different amounts in copper versus fiber, so the lines diverge the longer you make them.
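One way to read that: fit latency = offset + slope * length, where the intercept is the fixed PHY/serialization cost and the slope is the per-metre propagation delay. A sketch with invented data points:

```python
# Separating fixed cost from per-metre cost with a straight-line fit.
import numpy as np

# Invented measurements: latency_ns = offset + slope * length_m
lengths_m  = np.array([1.0, 2.0, 3.0, 5.0])
latency_ns = np.array([6.3, 11.2, 16.1, 25.9])

slope, offset = np.polyfit(lengths_m, latency_ns, 1)
print(f"fixed cost (PHY etc.) ~ {offset:.1f} ns, propagation ~ {slope:.2f} ns/m")
```

With these made-up numbers the fit recovers a ~1.4 ns fixed cost and ~4.9 ns/m of propagation delay, which is how the article's 1 ns and 0.4 ns/m figures would fall out of the graph.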
MadVikingGod
It's true, but if you go look at their product catalog you will see none of their direct attach cables are longer than 5 m, and the high-bandwidth ones are 2 m. So, again, it's true, but also limiting in other ways.
So the findings here do make sense. For sub-5 m cables, directly connecting two machines is going to be faster than having some PHY in between that has to resignal. I'm surprised that fiber is only 0.4 ns/m worse than their direct copper cables; that is pretty incredible.
What I would actually like to see is how this performs in a more real-world situation. Like, does this increase line error rates, causing the transport or application to resend at a higher rate, which would erase all the savings from lower latency? Also, if they are really signaling these in the multi-GHz range, are these passive cables acting like antennas, and is a cabinet full of them just killing itself with crosstalk?