Man they really glossed over the part where it switched over to twisted pair from coax. It’s like saying pancakes used to be waffles and then not bothering to explain.
Hardware
A place for quality hardware news, reviews, and intelligent discussion.
Yeah. Very significant thing, coax ethernet was different enough to really be its own thing given how the networks were built.
I tend to split Ethernet into hub-era collision fest ethernet (coax and early twisted pair) and the current twisted pair connected via switches era.
Was always fun when someone would remove the terminator from the end of the coax and the network would go all screwy. For some reason our coax run ended in the demo room and they would randomly remove equipment and just disconnect the coax. Network is down again!!
pancakes used to be waffles
I cook both. The differences elude me. One's got more flour and a bit of vanilla flavor? Or is it just a shape thing? Have I been making waffle shaped pancake crêpes all this time?
I also hear microprocessors are still going strong after 50 years.
I still remember early LAN parties, where you couldn't leave early because you couldn't just unplug your T-connector from the coax without messing up the line termination!
That was a fun era.
That was a very visible change, but in the grand scheme of things it was a small one: it still carried almost the same electrical signal and the same Manchester code. The real big change came with gigabit Ethernet; it's been pulse-amplitude modulated since.
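Fun detail: Manchester coding is simple enough to sketch in a few lines. Here's a toy encoder (my own illustration, using what I believe is the IEEE 802.3 convention of 0 = high-then-low, 1 = low-then-high):

```python
def manchester_encode(bits):
    """Toy Manchester encoder: each data bit becomes two half-bit
    line levels, so the symbol rate is twice the data rate -- part
    of why 10 Mb/s Ethernet needed roughly 20 MHz of cable bandwidth."""
    # IEEE 802.3 convention (as I understand it):
    # 0 -> high then low, 1 -> low then high
    table = {0: (1, 0), 1: (0, 1)}
    out = []
    for b in bits:
        out.extend(table[b])
    return out

print(manchester_encode([1, 0, 1, 1]))  # [0, 1, 1, 0, 0, 1, 0, 1]
```

The guaranteed mid-bit transition is what makes clock recovery easy, but that 2x symbol rate is exactly the overhead that PAM-5 in gigabit Ethernet avoids.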
Don't forget the thick net cable.
And I hated 10BASE2 networking: so fiddly to work with, and it could be unreliable if a terminator was removed by a prankster or if there was faulty network hardware somewhere. BaseT was the best of both worlds, reliable and cheap.
I still remember a trick to keep them straight: hub, switch, and router. A hub is like an intersection with only a stop sign: good for low traffic, but a high collision risk with heavy traffic. A switch is like an intersection with a traffic light, better for high traffic. And a router is like an intersection with a police car present to direct traffic.
Switches are better for low traffic too though because they don't flood and they regenerate the signal. These days I don't know that you could detect a latency difference between a hub and a switch.
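The "don't flood" part is just MAC learning, which is easy to sketch. Here's a toy model (my own simplification; real switches also age out entries, handle broadcast frames, and so on):

```python
class LearningSwitch:
    """Toy MAC-learning switch: floods unknown destinations like a hub,
    but remembers which port each source MAC was seen on, so later
    frames go out exactly one port."""

    def __init__(self, n_ports):
        self.table = {}        # MAC address -> port number
        self.n_ports = n_ports

    def forward(self, src_mac, dst_mac, in_port):
        self.table[src_mac] = in_port          # learn the sender's port
        if dst_mac in self.table:
            return [self.table[dst_mac]]       # unicast out one port
        # unknown destination: flood everywhere except the ingress port
        return [p for p in range(self.n_ports) if p != in_port]

sw = LearningSwitch(4)
print(sw.forward("aa", "bb", 0))  # "bb" unknown -> flood: [1, 2, 3]
print(sw.forward("bb", "aa", 2))  # "aa" was learned on port 0 -> [0]
```

A hub, by contrast, is just the flood branch with no table at all.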
In my networking class in high school, we were always told that a hub is just a dumb switch.
I’m still amazed that CSMA/CD works at all. We got hit with the Nimda virus in the early 2000s and our network utilization was over 50% and I’m like yup we’re going down. Once Ethernet gets bad it blows up pretty quick
It does and it doesn't. Most wired ethernet isn't on shared media, so it doesn't need/use csma/cd... But wireless is based on ethernet and uses csma/cd (wifi) presumably
I wonder if we were still using hubs back then. We bought a bunch of Cabletron switches around that time
Wireless uses csma/CA, IIRC. It avoids collisions rather than detecting and responding.
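Right, and the "responding" part of classic CSMA/CD is truncated binary exponential backoff. A rough sketch of the slot-picking rule (the exponent cap of 10 is from the classic 802.3 scheme; the rest is my own simplification):

```python
import random

def backoff_slots(collision_count, max_exp=10):
    """Truncated binary exponential backoff, as in classic Ethernet:
    after the n-th collision on a frame, wait a uniformly random number
    of slot times in [0, 2**min(n, 10) - 1] before retrying."""
    k = min(collision_count, max_exp)
    return random.randrange(2 ** k)

# The waiting window doubles with each collision, which is why a
# congested segment degrades gracefully at first, then falls apart.
print([backoff_slots(n) for n in range(1, 6)])
```

That doubling window also explains the "once it gets bad it blows up quick" feeling: near saturation, almost every transmission collides and everyone ends up waiting.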
If it works, why change it?
That's why it's so bizarre that people support losing the ability to plug in headphones on their smartphones because the 3.5mm jack is "old".
.....so?
Please stop using the term "old"! You won't talk them out of removing it that way. All you do is induce FOMO.
It's not old, it's proven … proven to be sturdy, robust, long-serving technology, and just plain reliable.
Nobody removed it because it's old
Nobody removed it because it's old, but OP is just saying that people defend the removal by saying "it's old", when that's neither a valid reason nor the actual reason.
With how stable my new WiFi 6 router is for me, personally I don't feel the need for ethernet anymore even for gaming.
So I have no issue if I buy a laptop that can be thinner without RJ45
Will I appreciate it if they still manage to fit it? Sure, a little bit? But it's definitely not a make or break decision for me.
Same case with headphone jacks: I love the benefits of wireless enough to ignore the benefits wired brings, so my purchase decisions don't consider a headphone jack. Sure, if a phone I have has one, it's a nice little extra, but I'll probably not use it.
Why change it? To improve it. That’s why there have been dozens of changes and improvements to it.
Ethernet is too old.
Let's change the plug so it has no clip and can fall out easily.
I know it's nothing new, but one of the big changes for me is PoE.
I mean path of exile is good but I don’t know if it’s that good
I was wondering about this. We keep increasing network connection speeds while using the same cable standard… what is the max speed we can put through an Ethernet connection? My mind goes to the USB-A standard: for USB 3 they added more pins in the same connector shape.
The cable standard has not remained the same. Cat5 and Cat7 are quite different despite using the same connector (sort of) and the same number of conductors. Also, Cat8 can push 40Gbit/s, so there's still headroom to look forward to.
We don't use the same cable. Cat 8 is the newest standard; it can do up to 40Gbit, though only over short runs (around 30m or so).
what is the max speed we can put through an Ethernet connection?
Copper-based Ethernet and DSL share quite a few similarities. So why can't you just keep easily increasing DSL bandwidth? Because to send more information you either have to send it at a faster clock rate, use fancier encoding (e.g. instead of just on/off you could also carry information using light polarisation or colour, if we use light of course) or literally increase the bandwidth and use more frequencies.
A larger bandwidth means using more frequencies, and higher frequencies drop off more quickly with distance. You also get crosstalk, interference, and other effects that raise the noise floor and make it harder to tell a genuine signal from background noise.
Frequencies don't propagate at the same speed either, which also poses limits on the clock speed, because over a small enough time scale the signals blur out. You're also capped by hardware limitations.
Encoding more information not only requires more processing (see above), but you need to be able to hear what's being said. Same principle as talking in a quiet room, vs. yelling and having to repeat yourself a lot very slowly in a crowded area (fall back to slower speeds, retransmission of data, increasing redundancy).
You can add more pins and wires as you say, but that means more copper and more shielding, and it can make the cables harder to work with.
I would estimate 40GBASE-T is probably a reasonable limit for copper-based Ethernet. If you switch to fibre, that limit keeps going up and up, for now. Multiplexing different wavelengths of light over the same fibre gives Tb/s of capacity with today's technology (though I'm doubtful that will be needed in consumer-grade equipment for a long time), and even single wavelengths are doing around 400Gb/s.
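To put some numbers on the bandwidth-vs-noise tradeoff above: the Shannon-Hartley theorem gives the hard ceiling. A quick back-of-envelope calculation (the 400 MHz and 30 dB figures here are illustrative guesses on my part, not spec values):

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    """Shannon-Hartley limit: C = B * log2(1 + S/N)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Rough 10GBASE-T-ish numbers, purely illustrative:
# ~400 MHz of usable bandwidth per pair, ~30 dB SNR, 4 pairs.
per_pair = shannon_capacity_bps(400e6, 30)
print(f"{4 * per_pair / 1e9:.1f} Gb/s theoretical ceiling")  # ~16 Gb/s
```

Every dB of SNR you lose to attenuation and crosstalk shaves that ceiling down, which is why the higher-speed copper standards keep shortening the allowed cable length instead.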
If you do it right, it will live on!
At least for home use it's really not. The de facto standard is still 1G Ethernet from 1999.
10GBASE-T has existed for almost two decades (since 2006) and is still expensive, and even the "affordable" NBASE-T 2.5G stuff (2016) is only really cheap for the cards themselves; most routers/gateways have no 2.5G port or only a single one, and 2.5G switches are overpriced, unmanaged, and still a premium niche.
In contrast, Wifi6 APs that could do ~1.8Gb/s to clients have been around for a while now, and with Wifi7 you can reach ludicrous wireless speeds of 5Gb/s+ to clients. But I'm doubtful switches or even 5/10G cards will get much cheaper because of this. It seems manufacturers don't want to address the market of people with cabled infrastructure; instead everything is supposed to be wireless, with a wireless mesh backbone, now.
Lets them find a place to economize on mainboards: just hook up a bit of copper wire and use the saved money on other, better things. :)
I hate it. Switch Ethernet cables to USB-C style cables already.
USB-C has much more limited range than Cat5e/6a Ethernet cables so that wouldn't make sense at all. Even network runs in small buildings would basically be impossible.
I'm dumb. Can someone explain why Cat5 cables are thin and flexible and capable of great speeds over long distances, but HDMI is a big thick cable and limited to short distances?
Cat5(e) cables have 8 pins/wires and can generally do up to 1-10Gb data transfers. HDMI has 19 pins/wires and as of now can do up to 48Gb of data for displays.
How about that internal combustion engine?
1gbps is still perfectly fine for most domestic applications and office work, and it is still 1:1 with most domestic fiber optical internet access available today at a reasonable cost.
You "can" find a use case for a 2.5 or 10 gig connection, like downloading a 140gb game on Steam faster, but good luck finding an ISP offering that at a reasonable price.
Most streaming services will never stream high bitrate content because of the networking cost, and AV1 will save even more bandwidth in the future. So we will be fine in that regard.
Professionals who work with 4k raw video probably use Thunderbolt 4 storage solutions.
The only use case where 10gig makes sense imho is in larger distributed teams working collaboratively in video projects, where you need a 10Gig NAS, a 10gig switch, and 10gig NIC on every single workstation.
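For anyone wondering what the game-download difference actually looks like, here's a quick back-of-envelope (the ~94% efficiency factor is an assumed allowance for protocol overhead, not a measurement):

```python
def download_time_s(size_gb, link_gbps, efficiency=0.94):
    """Rough transfer time for a file of size_gb gigabytes over a link
    of link_gbps gigabits per second; efficiency is an assumed fudge
    factor for TCP/IP and framing overhead."""
    bits = size_gb * 8e9               # bytes -> bits (decimal GB)
    return bits / (link_gbps * 1e9 * efficiency)

for gbps in (1, 2.5, 10):
    mins = download_time_s(140, gbps) / 60
    print(f"140 GB at {gbps} Gb/s: ~{mins:.0f} min")
```

So a 140 GB game drops from roughly 20 minutes to a couple of minutes, assuming the ISP and the server on the other end can actually sustain the rate, which is usually the real bottleneck.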