a connector design for computers

=suggestion =design =consumer =explanation

 

 

CPUs provide PCIe directly. This goes to controllers, which convert between PCIe and USB/ethernet/etc.

It's obviously more efficient to use direct PCIe connections than to do extra conversions; this is a trend in datacenters now. But PCIe uses a very high frequency, meaning only short cables can be used, because signal attenuation in cables increases with frequency.

Ethernet is the opposite of PCIe: by using a lower data rate per wire and (for 10G ethernet) 16 voltage levels, 100m cables can be used. But those 16 voltage levels, plus the high sent/received power ratio of long cables at frequencies higher than 1G ethernet's, also mean that 10G ethernet<->PCIe cards cost >$100, and they used to cost much more. 1G ethernet uses 5 voltage levels and a lower frequency, making controllers ~10% as expensive. (10G ethernet also requires more expensive cables than 1G ethernet.)
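As a rough sanity check on those numbers, here's the per-pair arithmetic for the two PHYs (standard figures, to the best of my knowledge; 10GBASE-T's 128-DSQ coding carries about 3.125 data bits per PAM-16 symbol):

```python
pairs = 4

# 1000BASE-T: 125 Msymbols/s per pair, PAM-5 carrying 2 data bits per symbol
rate_1g = pairs * 125e6 * 2

# 10GBASE-T: 800 Msymbols/s per pair, PAM-16 (128-DSQ) carrying ~3.125 data bits per symbol
rate_10g = pairs * 800e6 * 3.125

print(f"1G ethernet:  {rate_1g/1e9:.1f} Gb/s at a 125 MHz symbol rate")
print(f"10G ethernet: {rate_10g/1e9:.1f} Gb/s at an 800 MHz symbol rate")
# The 6.4x higher symbol rate plus 16 levels instead of 5 is what drives up
# both the attenuation (sent/received power ratio) and the DSP cost of the PHY.
```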

In fact, it's often cheaper to use optical fiber for 10G ethernet than copper cables. PCIe over fiber is obviously cheaper than ethernet over fiber, so that's now being implemented. Of course, fiber is still less physically flexible, but if you need high data rates over copper, these days it's better to focus on <100m distances.

Thunderbolt sort of provides the worst of both worlds: it can be high frequency PCIe, so it has short range, and it can also be DisplayPort, so it requires an expensive controller to do conversions. The cables are also far more expensive than ethernet cables. Thunderbolt connectors have 20 pins, but only 8 carry high-speed data; it's much more efficient to carry DC power over the data lines, like "power over ethernet" does.

 

 

All that being said, how would I design a protocol/connector/cable for computer networking?

I'd start with PCIe.

To improve range, 1 PCIe 3.0 lane (with 2-level pulse amplitude modulation at 8 GHz) is split into 2 lanes with 4-level modulation at 2 GHz. 130-bit symbols (assuming a 128b/130b line code) are buffered and alternated between 2 balanced lines. To allow longer cables at lower data rates, the controller should start at maybe 500 MHz and negotiate up. (Lower frequencies would probably use PCIe 2.0 or 1.0 mode, which PCIe 3.0 is backwards compatible with. Those use 8b/10b encoding, which reduces circulating current, and I think they have longer timeouts for DLLP packets.)

(A "lane" is 2 balanced lines, 1 for transmitting and 1 for receiving. Each balanced line is a pair of conductors, with the signal being the difference between them; this cancels out most interference.)

For each balanced line, I'd use 4x 27 AWG wires in a star-quad configuration. This is almost as effective as shielding, it's cheaper than braided shielding, and it's less fragile than mylar shielding. The picture is supposed to show a twisted square EVA foam core, but the exact details don't matter. (Foam is good for cables because the lower dielectric constant vs solid insulation can greatly reduce capacitance between wires.)
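To put a rough number on the foam point: for a fixed geometry, capacitance between the wires scales with the effective dielectric constant, and propagation velocity scales with 1/sqrt(ε). The values below are typical figures for foamed vs. solid polyethylene, not measurements of any particular cable:

```python
# Rough effect of foam insulation, holding the cable geometry fixed.
er_solid_pe = 2.3    # typical solid polyethylene
er_foam_pe = 1.5     # typical foamed polyethylene (depends on the foaming ratio)

cap_ratio = er_foam_pe / er_solid_pe                  # C scales with dielectric constant
velocity_ratio = (er_solid_pe / er_foam_pe) ** 0.5    # v = c / sqrt(er)

print(f"capacitance: ~{(1 - cap_ratio) * 100:.0f}% lower with foam")
print(f"propagation velocity: ~{(velocity_ratio - 1) * 100:.0f}% higher with foam")
```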

But since this uses the same number of pairs as ethernet, existing ethernet cable could be used with an adapter.

Connectors have 8 pins, and can be rotated 180° which just inverts each balanced line. Full-duplex operation; all cables are crossover. (See picture.)

 

[picture: connector]
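Here's one hypothetical 8-pin layout (not necessarily the one in the picture) with the rotation property described above: pins are numbered 1-8 across the connector, a 180° rotation maps pin i to pin 9-i, and each balanced line uses a pin and its mirror image, so rotation only swaps the + and - conductors of each line:

```python
# Hypothetical pinout: balanced line k uses pins (k, 9-k) as its (+, -) conductors.
LINES = {"A": (1, 8), "B": (2, 7), "C": (3, 6), "D": (4, 5)}

def rotate_180(pin: int) -> int:
    """Pin that lands on this position after rotating the 8-pin connector 180 degrees."""
    return 9 - pin

# Check: rotation maps each line onto itself with polarity inverted.
for name, (pos, neg) in LINES.items():
    assert (rotate_180(pos), rotate_180(neg)) == (neg, pos)
    print(f"line {name}: + and - swap under rotation (the receiver just flips polarity)")
```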

 

Like with ethernet, each end of each balanced line has a transformer, which has a center-tap on the cable side. (At 2 GHz, you'd want air-core transformers, maybe fractal transformers.) Those 4 center-taps are used to transmit power: up to 4 amps (depending on the device) at ±24V. Each input/output crossover pair has the same DC polarity. This is similar to power over ethernet.
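Assuming "4 amps at ±24V" means up to 4 A flowing between the +24V and -24V center-taps (48V total), a back-of-the-envelope power budget looks like this. The 27 AWG resistance, the cable length, and the number of conductors sharing each DC polarity are my assumptions based on the star-quad description above:

```python
# Power delivery estimate for the center-tap scheme (an interpretation, not a spec).
volts = 48.0            # +24 V to -24 V between center-taps
amps = 4.0              # stated maximum current
power = volts * amps    # ~192 W available at the source
# For comparison, the highest PoE standard (802.3bt) delivers roughly 90 W from the source.

# DC resistance estimate for a 30 m cable (assumed values):
r_27awg = 0.17          # ohms per meter for 27 AWG copper
length = 30.0           # meters
parallel = 8            # assume 2 balanced lines x 4 star-quad wires share each polarity
r_loop = 2 * length * r_27awg / parallel   # out and back
drop = amps * r_loop
loss = amps ** 2 * r_loop

print(f"source power: {power:.0f} W")
print(f"30 m cable: ~{drop:.1f} V drop, ~{loss:.0f} W lost in the copper at full current")
```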

For security reasons, one of the ports on computers should do power transfer only. (That's also obviously cheaper, but I guess it would be a violation of Apple's minimalism.)

An IOMMU is necessary for any safe external PCIe connection. The CPU usage of PCIe IOMMU modes is relatively high, partly because the link is fast and partly because some code does map/unmap calls for every frame; Thunderbolt skipped the IOMMU because of that, allowing DMA attacks.

Direct networking between computers could be done by carrying UDP datagrams in PCIe packets. For indirect networking, IPv6 packets carried in PCIe could be sent directly to a router.
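As a concrete (but entirely hypothetical) sketch of what "UDP in PCIe" could mean: build an ordinary UDP/IPv6 datagram, then hand it to the link wrapped in something like a memory write aimed at a mailbox address on the peer. The framing below is a toy stand-in, not the real PCIe TLP format, and the mailbox address is made up:

```python
import socket
import struct

def checksum16(data: bytes) -> int:
    """Internet one's-complement checksum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    s = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)
    return (~s & 0xFFFF) or 0xFFFF   # UDP sends an all-zero checksum as 0xFFFF

def udp_ipv6(src: str, dst: str, sport: int, dport: int, data: bytes) -> bytes:
    """Build an IPv6 packet carrying a UDP datagram."""
    src_ip = socket.inet_pton(socket.AF_INET6, src)
    dst_ip = socket.inet_pton(socket.AF_INET6, dst)
    udp_len = 8 + len(data)
    pseudo = src_ip + dst_ip + struct.pack("!I3xB", udp_len, 17)   # next header 17 = UDP
    udp_nocsum = struct.pack("!HHHH", sport, dport, udp_len, 0) + data
    udp = struct.pack("!HHHH", sport, dport, udp_len, checksum16(pseudo + udp_nocsum)) + data
    ip6 = struct.pack("!IHBB", 6 << 28, udp_len, 17, 64) + src_ip + dst_ip  # hop limit 64
    return ip6 + udp

def as_memory_write(mailbox_addr: int, packet: bytes) -> bytes:
    """Toy framing: 64-bit destination address + length, then the packet bytes."""
    return struct.pack("!QI", mailbox_addr, len(packet)) + packet

frame = as_memory_write(0xFED0_0000, udp_ipv6("fe80::1", "fe80::2", 9000, 9000, b"hello"))
print(f"{len(frame)} bytes on the link")
```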

 

 

For many devices, the data rate and power of this connection is overkill. Some devices could use a 1-lane cable with a half-connector on the device end; the computer end would have a plug with both a male and female full-size connector. (See picture. The female connector would only have 4 pins.) A 2nd such half-connector cable could be plugged into the female connector.

Using half-size connectors would obviously require the controller to buffer PCIe packets and switch them between lines depending on the address. This requires actually parsing the PCIe packets, which would probably increase controller costs.
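A minimal sketch of that switching step, with the PCIe packet reduced to just an address and a payload (real TLP parsing is considerably more involved): the controller holds one address window per downstream connector, buffers each packet, and forwards it to whichever window contains its target address, much like a PCIe switch's downstream bridge windows. The window values here are made up:

```python
from collections import deque

# One address window (base, limit) per downstream half-connector.
WINDOWS = {
    0: (0x8000_0000, 0x8FFF_FFFF),   # device on the first half-connector
    1: (0x9000_0000, 0x9FFF_FFFF),   # device daisy-chained on the second
}

buffers = {port: deque() for port in WINDOWS}

def route(address: int, packet: bytes) -> int:
    """Buffer a memory-request packet on the port whose window contains its address."""
    for port, (base, limit) in WINDOWS.items():
        if base <= address <= limit:
            buffers[port].append(packet)
            return port
    raise ValueError(f"unroutable address {address:#x}")   # would be an unsupported request

print(route(0x8000_1000, b"write to device 0"))   # -> 0
print(route(0x9234_5678, b"write to device 1"))   # -> 1
```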

 

 

Thanks to whitequark for some feedback that's been incorporated here.

 

This post describes a signal-handling design for this connector.

 

