DIYbanter

DIYbanter (https://www.diybanter.com/)
-   UK diy (https://www.diybanter.com/uk-diy/)
-   -   Do powerline adapters work during a power cut? (https://www.diybanter.com/uk-diy/649641-do-powerline-adapters-work-during-power-cut.html)

Another Dave May 21st 20 06:16 PM

Do powerline adapters work during a power cut?
 
My electricity is being switched off next Thursday for 8 hours. I can't
think of any reason why my powerline adapters shouldn't continue to
work, but will they? I've got some battery-powered Raspberry Pis in a
part of the house where the WiFi doesn't work.

Another Dave
--
Change nospam to techie

[email protected] May 21st 20 06:18 PM

Do powerline adapters work during a power cut?
 
On 21/05/2020 18:16, Another Dave wrote:
My electricity is being switched off next Thursday for 8 hours. I can't
think of any reason why my powerline adapters shouldn't continue to
work, but will they? I've got some battery-powered Raspberry Pis in a
part of the house where the WiFi doesn't work.

Another Dave

Errrrr, how are they going to be powered during a power cut?

Another Dave May 21st 20 06:20 PM

Do powerline adapters work during a power cut?
 
On 21/05/20 18:18, wrote:
On 21/05/2020 18:16, Another Dave wrote:
My electricity is being switched off next Thursday for 8 hours. I
can't think of any reason why my powerline adapters shouldn't continue
to work, but will they? I've got some battery-powered Raspberry Pis in
a part of the house where the WiFi doesn't work.

Another Dave

Errrrr, how are they going to be powered during a power cut?

Yes! I realised that just after I pressed the send button. I blame the
hot weather.

Another Dave

--
Change nospam to techie

charles May 21st 20 06:25 PM

Do powerline adapters work during a power cut?
 
In article ,
Another Dave wrote:
My electricity is being switched off next Thursday for 8 hours. I can't
think of any reason why my powerline adapters shouldn't continue to
work, but will they? I've got some battery-powered Raspberry Pis in a
part of the house where the WiFi doesn't work.


Another Dave


They take power from the mains. If that's missing they can't work.

--
from KT24 in Surrey, England
"I'd rather die of exhaustion than die of boredom" Thomas Carlyle

NY[_2_] May 21st 20 06:35 PM

Do powerline adapters work during a power cut?
 
"Another Dave" wrote in message
...
My electricity is being switched off next Thursday for 8 hours. I can't
think of any reason why my powerline adapters shouldn't continue to work,
but will they? I've got some battery-powered Raspberry Pis in a part of
the house where the WiFi doesn't work.


The devices need mains to power them so they will act as comms devices - ie
to convert between Ethernet and the modulation that is applied to the mains
voltage on the house wiring, and then convert back to Ethernet in the other
device.

I wonder if they would extract power from the Ethernet if both powerlink
devices were plugged into computers that provided PoE (power over Ethernet).

I think you may be out of luck.

I presume you've got a way of powering your router by battery, to access the
internet, and to hand out IP addresses to new devices (via DHCP).


Thinking *really* laterally - and I'm not for a minute suggesting that you
try this - in theory you could connect one powerlink device and your router
to a 12V-to-mains converter (powered from a car battery) and then run an
extension cable to another place where a powerlink device is plugged in. But
I'd bet the 12V-to-mains converter will output such a horrible approximation
of a sine wave that the noise it generates would swamp the powerlink signal.
DON'T TRY IT - it's only a thought experiment.

I did once earn a few brownie points when the small business where I was
working suffered a power cut (JCB through high voltage cable somewhere in
the area) and the company couldn't even make/receive phone calls because
their phone system was entirely by VOIP. They couldn't even look up their
list of customers and their work diary to see which customers we (the PC
repair engineers) were booked to visit, or the customers' phone numbers to
explain why we might be late. Luckily I had a 12V-to-mains
converter in my car (I bought it so I could charge my laptop while I was
driving etc), so they were able to rig it up to the company van (engine
ticking over so battery didn't go flat) and power a VOIP-to-POTS interface
so all incoming phone calls came through to a backup hard-wired phone, and
they managed to boot up their diary server and a router, and access it from
a laptop. It kept them going until the mains came back. I didn't see how
well they coped because I had to go out to see one of the customers whose
details we could remember because we had a paper copy of the job sheet, and
by the time I got back, normal service had been resumed. The buggers still
made me redundant a few months later :-(


The Natural Philosopher[_2_] May 21st 20 06:38 PM

Do powerline adapters work during a power cut?
 
On 21/05/2020 18:16, Another Dave wrote:
My electricity is being switched off next Thursday for 8 hours. I can't
think of any reason why my powerline adapters shouldn't continue to
work, but will they?

No. They need power.


--
If you tell a lie big enough and keep repeating it, people will
eventually come to believe it. The lie can be maintained only for such
time as the State can shield the people from the political, economic
and/or military consequences of the lie. It thus becomes vitally
important for the State to use all of its powers to repress dissent, for
the truth is the mortal enemy of the lie, and thus by extension, the
truth is the greatest enemy of the State.

Joseph Goebbels




Jim GM4DHJ ... May 21st 20 07:11 PM

Do powerline adapters work during a power cut?
 
On 21/05/2020 18:16, Another Dave wrote:
My electricity is being switched off next Thursday for 8 hours. I can't
think of any reason why my powerline adapters shouldn't continue to
work, but will they? I've got some battery-powered Raspberry Pis in a
part of the house where the WiFi doesn't work.

Another Dave

they won't work....good...work of the devil anyway ....

John Rumm May 21st 20 07:44 PM

Do powerline adapters work during a power cut?
 
On 21/05/2020 18:16, Another Dave wrote:
My electricity is being switched off next Thursday for 8 hours. I can't
think of any reason why my powerline adapters shouldn't continue to
work, but will they? I've got some battery-powered Raspberry Pis in a
part of the house where the WiFi doesn't work.


Think about where the electronics in the adaptor are powered from, and
you will have your answer! :-)


--
Cheers,

John.

/=================================================================\
| Internode Ltd - http://www.internode.co.uk |
|-----------------------------------------------------------------|
| John Rumm - john(at)internode(dot)co(dot)uk |
\=================================================================/

ARW May 21st 20 08:12 PM

Do powerline adapters work during a power cut?
 
On 21/05/2020 18:20, Another Dave wrote:
On 21/05/20 18:18, wrote:
On 21/05/2020 18:16, Another Dave wrote:
My electricity is being switched off next Thursday for 8 hours. I
can't think of any reason why my powerline adapters shouldn't
continue to work, but will they? I've got some battery-powered
Raspberry Pis in a part of the house where the WiFi doesn't work.

Another Dave

Errrrr, how are they going to be powered during a power cut?

Yes! I realised that just after I pressed the send button. I blame the
hot weather.


When a local village had its planned electricity shut down for the day
I installed the wiring for a 3 phase genny for the local pub.

As I had swapped the power from mains to the genny before 8am, someone
asked how we would know when the power was turned off.

My answer was: just wait for all the alarm bell boxes that have dead
batteries in the control panel to go off.






--
Adam

Theo[_3_] May 21st 20 09:52 PM

Do powerline adapters work during a power cut?
 
NY wrote:
Thinking *really* laterally - and I'm not for a minute suggesting that you
try this - in theory you could connect one powerlink device and your router
to a 12V-to-mains converter (powered from a car battery) and then run an
extension cable to another place where a powerlink device is plugged in. But
I'd bet the 12V-to-main converter will output such a horrible approximation
of a sine wave that the noise it generates would swamp the powerlink signal.
DON'T TRY IT - it's only a thought experiment.


If you think about it, they already need an enormous notch filter at 50Hz,
and they need to compensate for all kinds of electrical noise that's on the
mains from whatever electrical gear happens to be connected (SMPSUs
especially). They're already adaptive enough to scan for good frequencies
they can use to transmit, and I suspect the harmonics from the sine wave
converter are relatively predictable. The bit rate may be well down, but I
would expect them to still work.

Theo

Brian Gaff \(Sofa\) May 22nd 20 08:58 AM

Do powerline adapters work during a power cut?
 
Don't be stupid, how do you think they get their power?
If you have a back up generator that can feed the house sockets, then
maybe, but I'd not expect great reliability myself.
Brian

--
----- --
This newsgroup posting comes to you directly from...
The Sofa of Brian Gaff...

Blind user, so no pictures please
Note this Signature is meaningless.!
"Another Dave" wrote in message
...
My electricity is being switched off next Thursday for 8 hours. I can't
think of any reason why my powerline adapters shouldn't continue to work,
but will they? I've got some battery-powered Raspberry Pis in a part of
the house where the WiFi doesn't work.

Another Dave
--
Change nospam to techie




Brian Gaff \(Sofa\) May 22nd 20 09:01 AM

Do powerline adapters work during a power cut?
 
Also, how will the wifi work, and indeed the router, if you are connected to
the internet that way?
You could set up a mobile as a hot spot and do it over mobile data, but
then you would need to reset all your passwords and log in to stuff as well.
If the weather is good go to the coast instead!
Brian

--
----- --
This newsgroup posting comes to you directly from...
The Sofa of Brian Gaff...

Blind user, so no pictures please
Note this Signature is meaningless.!
"Another Dave" wrote in message
...
On 21/05/20 18:18,
wrote:
On 21/05/2020 18:16, Another Dave wrote:
My electricity is being switched off next Thursday for 8 hours. I can't
think of any reason why my powerline adapters shouldn't continue to
work, but will they? I've got some battery-powered Raspberry Pis in a
part of the house where the WiFi doesn't work.

Another Dave

Errrrr, how are they going to be powered during a power cut?

Yes! I realised that just after I pressed the send button. I blame the hot
weather.

Another Dave

--
Change nospam to techie




Brian Gaff \(Sofa\) May 22nd 20 09:11 AM

Do powerline adapters work during a power cut?
 
As everyone knows, powerline adaptors cause no end of problems over the short
wave bands. Sure, they have notches for the ham radio bands, but not for the
international short wave bands, producing a ticking and screeching noise
from about 3 MHz all the way up to 28 MHz.
Most seem to be wired not to use the mains to carry the signal at all, but
use some form of brute force method to put out a broadband modulated
carrier, and I remember one could actually use them wirelessly over short
distances due to the radiation from the house wiring. The thing is, house
mains wiring is nothing like an RF feeder and as such leaks the signal over
wide areas. How on earth these were allowed to be used anywhere in the world
I have no idea. I know that some aircraft communications stations in places
like the Irish Republic (Shanwick) and the Azores have found these devices
being used locally to be a problem, as they use frequencies in the short wave
bands to talk to long haul aircraft: not all are using space-based
communication, and they are too far out for VHF.
Bah Humbug.

Brian

--
----- --
This newsgroup posting comes to you directly from...
The Sofa of Brian Gaff...

Blind user, so no pictures please
Note this Signature is meaningless.!
"NY" wrote in message
...
"Another Dave" wrote in message
...
My electricity is being switched off next Thursday for 8 hours. I can't
think of any reason why my powerline adapters shouldn't continue to work,
but will they? I've got some battery-powered Raspberry Pis in a part of
the house where the WiFi doesn't work.


The devices need mains to power them so they will act as comms devices -
ie to convert between Ethernet and the modulation that is applied to the
mains voltage on the house wiring, and the convert back to Ethernet in the
other device.

I wonder if they would extract power from the Ethernet if both powerlink
devices were plugged into computers that provided PoE (power over
Ethernet).

I think you may be out of luck.

I presume you've got a way of powering your router by battery, to access
the internet, and to hand out IP addresses to new devices (via DHCP).


Thinking *really* laterally - and I'm not for a minute suggesting that you
try this - in theory you could connect one powerlink device and your
router to a 12V-to-mains converter (powered from a car battery) and then
run an extension cable to another place where a powerlink device is
plugged in. But I'd bet the 12V-to-main converter will output such a
horrible approximation of a sine wave that the noise it generates would
swamp the powerlink signal. DON'T TRY IT - it's only a thought experiment.

I did once earn a few brownie points when the small business where I was
working suffered a power cut (JCB through high voltage cable somewhere in
the area) and the company couldn't even make/receive phone calls because
their phone system was entirely by VOIP. They couldn't even look up their
list of customers and their work diary to see which customers they were
booked for us (PC repair engineers) to visit, or the customers' phone
numbers to explain why we may be late. Luckily I had a 12V-to-mains
converter in my car (I bought it so I could charge my laptop while I was
driving etc), so they were able to rig it up to the company van (engine
ticking over so battery didn't go flat) and power a VOIP-to-POTS interface
so all incoming phone calls came through to a backup hard-wired phone, and
they managed to boot up their diary server and a router, and access it
from a laptop. It kept them going until the mains came back. I didn't see
how well they coped because I had to go out to see one of the customers
whose details we could remember because we had a paper copy of the job
sheet, and by the time I got back, normal service had been resumed. The
buggers still made me redundant a few months later :-(




Roger Hayter[_2_] May 22nd 20 09:58 AM

Do powerline adapters work during a power cut?
 
Theo wrote:

NY wrote:
Thinking *really* laterally - and I'm not for a minute suggesting that you
try this - in theory you could connect one powerlink device and your router
to a 12V-to-mains converter (powered from a car battery) and then run an
extension cable to another place where a powerlink device is plugged in. But
I'd bet the 12V-to-main converter will output such a horrible approximation
of a sine wave that the noise it generates would swamp the powerlink signal.
DON'T TRY IT - it's only a thought experiment.


If you think about it, they already need an enormous notch filter at 50Hz,
and they need to compensate for all kinds of electrical noise that's on the
mains from whatever electrical gear happens to be connected (SMPSUs
especially). They're already adaptive enough to scan for good frequencies
they can use to transmit, and I suspect the harmonics from the sine wave
converter are relatively predictable. The bit rate may be well down, but I
would expect them to still work.

Theo

If one is determined to continue to produce wideband RF interference
over several miles one could do this. But the simpler solution would be
to run a bit of Cat5e instead of the mains extension cable and get a much
better data connection without the inverter and powerline adaptors.


--

Roger Hayter

NY[_2_] May 22nd 20 10:17 AM

Do powerline adapters work during a power cut?
 
"Roger Hayter" wrote in message
...
If one is determined to continue to produce wideband RF interference
over several miles one could do this. But the simpler solution would
run a bit of Cat5e instead of the mains extension cable and get a much
better data connection without the inverter and powerline adaptors.


I agree that Cat 5 is always the best solution: the one that will "just
work" without any intermittent loss of connection, failure to reconnect
after a power cut (*), sudden drop in speed when a microwave is turned on, or
bizarre interactions between wifi and bluetooth (**).

But it almost always involves trying to route the cable between one room and
another, buried under the edge of the carpet, fed under the metal carpet
strips in doorways, or else drilling through walls or ceilings to feed the
cable and plug through. I did initially think of laying Cat 5 in the loft,
to feed a wifi access point for the part of the house where wireless devices
would be used, but it would have meant drilling through the ceiling into the
loft, crawling along in very confined spaces near the eaves, and finding a
way of hiding a cable going vertically from floor to ceiling. I quickly
dismissed powerline because the house has two separate "fuse boxes"
(although on the same meter) and the signal strength gets very much worse
when you cross from one ring main to the other; it was pretty dire even a
few sockets away on the same ring main. Simple wifi (even 2.4 GHz) from the
router was woefully inadequate, so we had to invest in several mesh
devices - which work beautifully most of the time, apart from the problem
that when the power goes off the devices don't reconnect once it comes back.






(*) We have a mesh network to get broadband from one part of the house to
cover a "wing" at right angles. Getting the devices to connect after a power
cut (as happened in the middle of last night) is a problem, because if the
power to all the nodes is restored simultaneously, the child nodes don't
connect to the parent; instead the children need to be turned off and then
back on in sequence after the central parent node has started.

(**) On my old phone, I couldn't listen to streamed radio programmes (eg via
BBC Sounds and its previous iPlayer equivalent) via bluetooth headphones
because the phone didn't like bluetooth and 2.4 GHz wifi transferring data
at the same time. The solution was to use wired earphones when streaming
over wifi, or else transfer the file so it was held locally on the phone and
then listen using bluetooth headphones.


John Rumm May 22nd 20 10:33 AM

Do powerline adapters work during a power cut?
 
On 22/05/2020 10:17, NY wrote:

(*) We have a mesh network to get broadband from one part of the house
to cover a "wing" at right angles. Getting the devices to connect after
a power cut (as happened in the middle of last night) is a problem,
because if the power to all the nodes is restored simultaneously, the
child nodes don't connect to the parent; instead the children need to be
turned off and then back on in sequence after the central parent node
has started.


Sounds like a small UPS holding up the parent and main router etc might
solve that for the majority of power outages.


--
Cheers,

John.

/=================================================================\
| Internode Ltd - http://www.internode.co.uk |
|-----------------------------------------------------------------|
| John Rumm - john(at)internode(dot)co(dot)uk |
\=================================================================/

Theo[_3_] May 22nd 20 11:06 AM

Do powerline adapters work during a power cut?
 
NY wrote:
"Roger Hayter" wrote in message
...
If one is determined to continue to produce wideband RF interference
over several miles one could do this. But the simpler solution would
run a bit of Cat5e instead of the mains extension cable and get a much
better data connection without the inverter and powerline adaptors.


I agree that Cat 5 is always the best solution: the one that will "just
work" without any intermittent loss of connection, failure to reconnect
after power cut (*), sudden drop in speed when a microwave is turned on, or
bizarre interactions between wifi and bluetooth (**).


If you don't have cat 5, but have decent aerial cable (ie CT100, not the
horrible brown stuff) you can use MoCA to run ethernet over coax. I have
some bonded MoCA adapters that got approaching gigabit over the TV coax.
You simply put a MoCA adapter between the TV access point on the wall and
the TV and tap off ethernet.

I never got around to measuring them in a controlled test, but internet
connection was 200Mbps and it handled that without breaking sweat, over a
fairly sprawling house.

In theory you're supposed to have a MoCA-capable splitter (one that handles
satellite frequencies is fine) and a blocking filter to prevent the signal
leaking upstream, but all the points came off an existing aerial booster
and I just ran the MoCA across ports of the booster without touching that
setup at all.

I used it to run a WAP and office at the other end of the house from the
FTTH incomer, and it worked very well.

Theo
(I have the parts for sale if anyone wants them)

NY[_2_] May 22nd 20 12:08 PM

Do powerline adapters work during a power cut?
 
"Jethro_uk" wrote in message
...
On Fri, 22 May 2020 11:06:24 +0100, Theo wrote:

NY wrote:
[quoted text muted]


If you don't have cat 5, but have decent aerial cable (ie CT100, not the
horrible brown stuff) you can use MoCA to run ethernet over coax.


Like it started out, you mean ?

50-ohm terminators to avoid reflections (from memory)


What was the reason that thin and thick coax for Ethernet had an impedance
of 50 ohms, whereas coax for TV is 75 ohm?

I remember the joys of T pieces and terminators for thin Ethernet, and the
problems that could be caused to everyone else on the LAN if an extra T
piece had to be inserted for an additional PC, breaking the continuity of
the LAN by un-terminating it temporarily, or even if a T piece was
disconnected from a PC's network card even though the continuity of the LAN
was not disturbed (*). Structured cabling, with a separate cable back to a
central hub or switch, uses a lot more cable, but it's a lot more resilient
to connecting/disconnecting devices to/from the LAN.


(*) TCP seemed to be a lot more tolerant of a T-piece on an intact LAN being
disconnected from a network card, than OSI (ICL's OSLAN) was - the latter
almost always threw its toys out of the pram, and maybe even caused a UNIX
server to core-dump.


Fredxx[_3_] May 22nd 20 12:30 PM

Do powerline adapters work during a power cut?
 
On 22/05/2020 12:08:10, NY wrote:
"Jethro_uk" wrote in message
...
On Fri, 22 May 2020 11:06:24 +0100, Theo wrote:

NY wrote:
[quoted text muted]

If you don't have cat 5, but have decent aerial cable (ie CT100, not the
horrible brown stuff) you can use MoCA to run ethernet over coax.


Like it started out, you mean ?

50-ohm terminators to avoid reflections (from memory)


What was the reason that thin and thick coax for Ethernet had an
impedance of 50 ohms, whereas coax for TV is 75 ohm?


75 ohm is lower loss and generally matches antenna impedances. 50 ohms
can carry more power before dielectric breakdown and is used for higher
strength signals.


http://www.techplayon.com/characteri...-kept-50-ohms/

Though that is a rather simplistic view.

Dave Liquorice[_2_] May 22nd 20 01:30 PM

Do powerline adapters work during a power cut?
 
On Fri, 22 May 2020 10:17:45 +0100, NY wrote:

... and finding a way of hiding a cable going vertically from floor to
ceiling.


Most doors into rooms are close to a corner and open against the
adjacent wall. The corner behind the door is a good place to "hide" a
cable. Cover it with a strip of lining paper and paint, or a strip of the
wallpaper used, and it'll be very difficult to spot.

Another place, in a recent house, is inside the boxing in of any
internal soil stacks.

Getting the devices to connect after a power cut (as happened in the
middle of last night) is a problem, because if the power to all the
nodes is restored simultaneously, the child nodes don't connect to the
parent; instead the children need to be turned off and then back on in
sequence after the central parent node has started.


APC Smart Switch(*1). Multiple outlets that can be programmed to
switch on in a specific order. Would have to run mains from it to
each child though.

Small boxes(*2) with an adjustable delay-on relay at each child might
be simpler.

(*1) Other similar devices are available from other makers.
(*2) A deep surface mount single box might be big enough. A dual box
certainly would be, with a 13A socket for the wallwart in one half and a
blank plate on the other.

--
Cheers
Dave.




The Natural Philosopher[_2_] May 22nd 20 03:20 PM

Do powerline adapters work during a power cut?
 
On 22/05/2020 12:08, NY wrote:
What was the reason that thin and thick coax for Ethernet had an
impedance of 50 ohms, whereas coax for TV is 75 ohm?

75 ohms is what a typical dipole presents, so that was easy to adapt to.
50 ohms is what a quality cable comes in at, so it was easy to use that
for lab gear.


--
"Corbyn talks about equality, justice, opportunity, health care, peace,
community, compassion, investment, security, housing...."
"What kind of person is not interested in those things?"

"Jeremy Corbyn?"


Brian Reay[_6_] May 22nd 20 04:25 PM

Do powerline adapters work during a power cut?
 
On 22/05/2020 15:20, The Natural Philosopher wrote:
On 22/05/2020 12:08, NY wrote:
What was the reason that thin and thick coax for Ethernet had an
impedance of 50 ohms, whereas coax for TV is 75 ohm?

75 ohms is what a typical dipole presents, so that was easy to adapt to.
50 ohms is what a quality cable comes in at, so it was easy to use that
for lab gear.





75 ohm coax is better in terms of attenuation characteristics but not so
good in terms of power handling.

While it isn't used, 30 ohm coax would be better for power handling but
offers poor attenuation performance.

50 ohm is a compromise between power handling and attenuation.

If I remember the history, 75 ohm was the early choice (from dipoles -
it is near to the 72 ohm feed point impedance of an ideal dipole in free
space) but, in about 1930, someone worked out the power
handling/attenuation trade off and 50 ohm became a new standard, or at
least an alternative.
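
A rough back-of-envelope illustration of that compromise, as a small Python
sketch (the 77 ohm minimum-attenuation and 30 ohm maximum-power figures are
the commonly quoted ones for air-dielectric coax, not measured values):

# Classic coax impedance trade-off, using the commonly quoted figures:
#   ~77 ohms : minimum attenuation
#   ~30 ohms : maximum power handling before breakdown
z_min_loss = 77.0   # ohms (commonly quoted figure, assumed here)
z_max_power = 30.0  # ohms (commonly quoted figure, assumed here)

arithmetic_mean = (z_min_loss + z_max_power) / 2
geometric_mean = (z_min_loss * z_max_power) ** 0.5

print(f"arithmetic mean: {arithmetic_mean:.1f} ohms")  # ~53.5
print(f"geometric mean:  {geometric_mean:.1f} ohms")   # ~48.1
# Either way you land in the neighbourhood of 50 ohms, which is how
# the 50 ohm compromise is usually explained.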


--

https://www.unitedway.org/our-impact...an-trafficking

Theo[_3_] May 22nd 20 04:31 PM

Do powerline adapters work during a power cut?
 
Jethro_uk wrote:
On Fri, 22 May 2020 11:06:24 +0100, Theo wrote:

NY wrote:
[quoted text muted]


If you don't have cat 5, but have decent aerial cable (ie CT100, not the
horrible brown stuff) you can use MoCA to run ethernet over coax.


Like it started out, you mean ?

50-ohm terminators to avoid reflections (from memory)


No. That was baseband, this uses different frequency channels in an
adaptive way - it's basically the same as powerline (and can use the same
chips) but with a much, much better medium. It can coexist happily with TV
on the same coax - wouldn't like to try that with 10BASE2...

Theo

The Natural Philosopher[_2_] May 22nd 20 09:46 PM

Do powerline adapters work during a power cut?
 
On 22/05/2020 17:50, Tim Streater wrote:
On 22 May 2020 at 15:20:23 BST, The Natural Philosopher
wrote:

On 22/05/2020 12:08, NY wrote:
What was the reason that thin and thick coax for Ethernet had an
impedance of 50 ohms, whereas coax for TV is 75 ohm?

75 ohms is what a typical dipole presents, so that was easy to adapt to.
50ohms is what a quality cable comes in at, so it was easy to use that
for lab gear.


It also stopped people trying to be smart arses and using TV cable instead of
the official spec yellow cable. The point here is that the DC resistance is
also important, since it is a factor in collision detection on the shared
medium.

I don't think so.



--
Gun Control: The law that ensures that only criminals have guns.

The Natural Philosopher[_2_] May 23rd 20 07:45 AM

Do powerline adapters work during a power cut?
 
On 22/05/2020 23:14, Tim Streater wrote:
On 22 May 2020 at 21:46:10 BST, The Natural Philosopher
wrote:

On 22/05/2020 17:50, Tim Streater wrote:
On 22 May 2020 at 15:20:23 BST, The Natural Philosopher
wrote:

On 22/05/2020 12:08, NY wrote:
What was the reason that thin and thick coax for Ethernet had an
impedance of 50 ohms, whereas coax for TV is 75 ohm?
75 ohms is what a typical dipole presents, so that was easy to adapt to.
50ohms is what a quality cable comes in at, so it was easy to use that
for lab gear.

It also stopped people trying to be smart arses and using TV cable instead of
the official spec yellow cable. The point here is that the DC resistance is
also important, since it is a factor in collision detection on the shared
medium.

I don't think so.


AIUI, the bit waveforms on the cable were designed so that no bit pattern
would change the average DC voltage level (can't remember which encoding has
that property - perhaps many do). But, if a collision was happening, this no
longer held true, so collision detection was done by checking the DC voltage
on the wire. The signal might have to propagate 500 metres, so the DC
resistance shouldn't be too high else the collision detection would no longer
work.

We saw this at SLAC, with Thinnet, the second iteration of coax which was then
superseded by twisted pair. It was still 50 ohm, but was thinner so the length
allowance was less. Some smart arse physicists didn't really understand how
Ethernet worked, so they decided to lengthen one segment, the one in their
building, using their own 50 ohm cable - the stuff they used in their
experiment electronics - quite a long length of it. Result: collision rate
went UP and therefore overall data rate went DOWN. They got a bollocking for
that.

They also decided they didn't like that the Thinnet cable had T connectors in
it which connected directly to the rear of the computer. They thought it would
be "tidier" to add a length of cable between the T connector and the one on
the computer. For impedance reasons I no longer understand, this caused the
cable to present as 25 ohms either at the computer or the T, can't remember
now. Thus, many signal reflections which further degraded their segment.

Cue them calling the Operations Group, which investigated and junked their
add-ons (and bollocked them). Cue amusement, later, elsewhere in the Computer
Centre.

Resistance plays no part in this: the length limits are based on
propagation delay (which was slightly greater on the thinner cable), as
that is what collision detection is all about.



I have no idea where you got the info in your head, but this is what
really is supposed to happen.

https://en.wikipedia.org/wiki/Carrie...sion_detection

--
"It is dangerous to be right in matters on which the established
authorities are wrong."

Voltaire, The Age of Louis XIV

The Natural Philosopher[_2_] May 23rd 20 12:11 PM

Do powerline adapters work during a power cut?
 
On 23/05/2020 11:01, Tim Streater wrote:
On 23 May 2020 at 07:45:54 BST, The Natural Philosopher
wrote:

Resistance plays no part in this: the length limits are based on
propagation delay which was slightly greater on the thinner cable as
that is what collision detection is all about,

I have no idea where you got the info in your head, but this is what
really is supposed to happen.

https://en.wikipedia.org/wiki/Carrie...sion_detection


That article doesn't actually say very much; it's bare-bones, but it does say
"On a shared, electrical bus such as 10BASE5 or 10BASE2, collisions can be
detected by comparing transmitted data with received data or by recognizing a
higher than normal signal amplitude on the bus." Of these two detection
methods, the former would be done by the transmitter, but the latter would be
the method used by listening stations.

Meanwhile I dug out this long article. In 1983 a colleague attended a DECUS
meeting, and taped a talk given by Rich Seifert on "Engineering Tradeoffs in
Ethernet Configurations or How to Violate the Ethernet Spec and Hopefully Get
Away with It". This is a transcription of the talk, which I got a copy of and
include below. If you scroll down it 180 lines or so you'll see the bit about
DC resistance.

To me this is interesting as a historical note about how we did LANs in the
early 80s, and the state of electronics nearly 40 years ago. It's best read
using a fixed-width font.


I'm sorry. I read it, it makes no sense and the resistance bit is still
********. It does explain where you got it from, though.


Article follows
===============

The following is an approximately literal transcription of Rich Seifert's
Fall 83 Decus talk on rationale behind the IEEE 802 Ethernet spec. The
headings are based on the slides. The title of the talk was Engineering
Tradeoffs in Ethernet Configurations or How to Violate the Ethernet Spec
and Hopefully Get Away with It.

by G.R.BOWER

I. INTRODUCTION

The assumption is that you know something about the spec. Now we'll see
how you can change some of the parameters.

When you design one of these there are lots of options, in fact, too
many. You have to choose how long, how fast, how many, how close
together, how far apart, what performance, how many users, how many
buildings, how long to ship the product, how much does it cost? You have
to make trade-offs among network length, speed, performance, cost,
satisfying your boss, shipping the product on time. The important thing
is to standardize all the critical parameters so everyone agrees to the
groundrules so that you can have compatibility between products.
Remember the intent is to design a network that is open, where you
publish the specs and you let everyone connect to the network and you
tell them exactly how to do it. You tell them how to design a
transceiver. You want everyone to make products. It's not that we are
encouraging competition but we want the market to be bigger for all of
us. To do that you must worst case the specs so that no matter how they
are applied the system still works. You don't want to give someone
enough coax cable to hang themselves. It's like squeezing a sausage, you
can pull in the specs one place but they pop out somewhere else. You are
trading off among interacting parameters. You can't make independent
decisions about the minimum and maximum packet size and the maximum cable
length. Or the cable length and the propagation delay. You have to deal
with all these things in one giant sausage.

II. DATALINK LAYER PARAMETERS.

There are decisions that have to be made in both layers of the ethernet;
datalink and physical. There are also decisions to be made at higher
layers as well but ethernet does not address those. Datalink layer
decisions will affect your network performance. The decisions as to how
many bits in a crc, how many type fields, how big are the addresses,
what's the minimum sized packet, what's the maximum sized packet, how
many stations allowed on the network - those are datalink layer
decisions. The datalink parameters you have to play with; the slot time
(the maximum roundtrip time on the cable, how long to wait to know
there's no collision). The longer the slot time the longer the cable can
be. But if slot time is longer you're vulnerable to collision longer and
that reduces performance. The interframe gap time (how long must
stations be quiet after sending a packet) - You want that short to reduce
idle time on the network but if it's too short it's a burden on
controllers in receive mode because they have to receive packets back to
back to back with no time in between with no time to recover and post
their buffers and unload them through their DMA engine and get status
recorded and get ready for the next packet. The frame length - I'd like
a one byte minimum frame so I can send one byte without padding but if I
do that I can't have a very long network and detect collisions. For crc
I'd like guaranteed error detection, a very robust algorithm, 32 bits or
better. But 32 bits takes up more bits than 16 bit crc and it's much
more complicated to implement, it takes more chips. In VLSI that's not a
big problem but the first products were not made in VLSI. The backoff
limits - how long do I keep backing off before I give up? If I back off
for ever, service time goes through the roof. Currently backing off the
maximum amount of time for 16 times would be 300 to 400 milliseconds.
That's not bad but it limits the number of stations on the network
believe it or not. There's a tradeoff between how far you let stations
backoff and the maximum number of stations you put on the network. This
session isn't going to discuss those datalink layer tradeoffs in any more
detail than this.
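
As an aside, the 300 to 400 millisecond figure above is easy to reproduce
with a rough sketch (Python; this assumes the usual truncated binary
exponential backoff with the exponent capped at 10, and treats the talk's
~46.4 microsecond round trip as the slot time):

# Worst-case total wait for 16 attempts of truncated binary
# exponential backoff, exponent capped at 10.
slot_time = 46.4e-6  # seconds (assumed slot time, from the talk's figure)

worst_case = 0.0
for attempt in range(1, 17):
    k = min(attempt, 10)        # backoff exponent is capped at 10
    max_slots = 2 ** k - 1      # worst possible draw from 0 .. 2^k - 1
    worst_case += max_slots * slot_time

print(f"worst-case total backoff: ~{worst_case * 1000:.0f} ms")  # ~380 ms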

III. PHYSICAL LINK LAYER PARAMETERS.

We are going to look at the physical layer. The physical layer decisions
won't affect network performance so much except to the extent that
physical layer decisions affect the datalink layer decisions. The
datalink minimum packet size is a function of the physical length of the
network. What you do in the physical layer (cable lengths, number of
transceivers, speed of the network) is going to affect how you configure
networks. We wanted to make the system easy to configure, so you can't
hang yourself. What do you have to tradeoff in the physical layer? For
starters, the speed. Remember we are designing a new network. I can
make it one megabit and that makes it easy to design the product but the
product life won't be very long. I can make it a 100 megabit system and
be sure it will last through my lifetime but I don't want to have to
design that product.

I'd like the coax length to be long but cables have attenuation so I
don't want them too long. I can run 10 megabits per second through a
barbed wire fence for 10 miles but how much are you willing to pay for
the decoder? I want transceiver cables to be long but I don't want
expensive interfaces at both ends. I want lots of transceivers on the
network but I don't want them to be too expensive or have noise margins
go down and lose data because there are lots of transceivers and lumped
loads. I want the total network length to be as long as possible but you
get less performance and more delay. You have to decide where you want
to be - somewhere between a computer bus and a wide area network. That's
what a local area network is. It's longer and slower than a computer bus
but shorter and faster than a wide area network. There's a lot of area
in there for squeezing sausages.

IV. SPEED.

Let's look at how you decide how fast to make the network. I want it as
fast as possible. I want to push the technology as far as possible and
still ship on time. I want enough bandwidth to support lots of stations.
If I have a megabit of bandwidth and want to hookup a thousand stations
(and maybe you can electrically) that gives an aggregate bandwidth of
only a kilobit per station on average. Now that may be ok for people
used to 1200 baud terminals but its not what I expect when I want to do
file transfers or run graphic applications. If I have 10 megabits and
1000 stations now I have an average aggregate bandwidth of 10 kilobits
per station and that's probably ok averaged over time but it also says
when I want that big bandwidth to move a file or fill a screen with bits,
it happens very quickly. Unfortunately I also have to design
transceivers and VLSI coder/decoders and cables and it makes that job
easier if it's a slower product. I can design simpler devices if I don't
worry about fancy filtering, I don't worry about cutoff frequencies of
transistors, or cable attenuation at slow speeds. That's why you can run
one megabit DMR cables long distances - because they are a tenth the
speed of an ethernet. Those are the tradeoffs and I've always got cost
to consider. You don't want to pay a lot.
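
The per-station arithmetic above is trivial, but worth writing down (a
quick Python sketch of the figures quoted in the previous paragraph):

# Average share of the cable with 1000 stations attached.
stations = 1000
for rate_bps in (1e6, 10e6):
    print(f"{rate_bps / 1e6:.0f} Mbit/s shared by {stations} stations "
          f"-> {rate_bps / stations / 1e3:.0f} kbit/s each on average")
# 1 Mbit/s  -> ~1 kbit/s each (1200-baud-terminal territory)
# 10 Mbit/s -> ~10 kbit/s each on average, with the full pipe
#              available in bursts for file transfers.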

The 10 megabit per second data rate is implementable. We've done it, in
both VLSI and MSI. We have to worry about both. We couldn't have said
here's the spec - you can have it in two and a half years when we build
the chips. You wouldn't want to hear that and neither would my boss. I
can do 70 megabits too, I did the CI, but you don't want to pay CI prices
for a local area network. It is pushing the technology to do 10 megabits
in VLSI but I can do it. The encoder/decoder is typically a bipolar chip
(some manufacturers are looking at doing it in CMOS) but it's not a
problem to do it bipolar. However the protocol chip (the ethernet chip)
you don't want to do in bipolar unless you want a die the size of a bread
box. It's not easy to do 10 megahertz in NMOS, CMOS or any of the dense
technologies. That is a real consideration in getting ethernet chips to
work at speed. It's pushing the IC technology for this type of device.
10 megabits supports traffic for a larger number of stations than most
people realize. A 10 megabit pipe is a very fat pipe. We have a network
up in our Spitbrook Road software development facility with 75 computers
hooked up to a single ethernet. About 50 VAXs and 25 PDP-11s running RSX.
The average utilization of that ethernet is under 5%. That's software
development, file transfer, remote terminals and more mail than you could
possibly imagine.

Many people said "Why did you do 10 megabits? I don't need it. Give me
one megabit and make it cheaper." There is some effort in the IEEE 802
committee on the product we affectionately call Cheapernet. The question
was, "Do you want to leave it at 10 megabits and give up some performance
or make it slower and maintain performance?" Both being ways to make it
cheaper.

As we try to violate the ethernet spec and still be ethernet, this is
the one thing you can't violate and still be ethernet - you can not have
coexisting stations on an ethernet running at different speeds. The
problem is the implementation of the encoder/decoders. If I'm listening
at 10 megabits and you are sending at one megabit that's not going to
work very well. Also there are design optimizations you might call them
or design characteristics that literally limit it to 10 megabits. Not up
to 10 megabits but exactly 10 megabits. There are filters in the
transceivers, low pass as well as high pass. There are delay lines or
phase locked loops in the decoder that have fairly narrow capture ranges.
They are looking for a 10 megabit signal within .01%. If you get much
outside of that you can't guarantee that your decoder is going to synch up
in time. You've only got 64 bits to do it in. You can make a baseband
CSMA/CD network that runs at another speed but it's not ethernet.

V. COAX CABLE SEGMENT LENGTH.

-ATTENUATION

Let's look at coax cable length. The maximum coax cable length according
to ethernet spec is 500 meters. That's between terminators. That can be
any number of shorter pieces of cable connected with end connectors and
barrels. Why 500 meters? There are a number of characteristics of the
cable that are going to limit the length. Number one is cable
attenuation. You lose signal voltage and current as you transmit down
the cable. It gets weaker and weaker. At 10 megahertz the attenuation
at 500 meters is 8.5 db. That means you get about a third of your
signal. If you transmit 2 volts at one end you get six or seven hundred
millivolts at the other. I can design a transceiver that can tolerate
that sort of dynamic range. I don't want to have to tolerate a whole lot
more.
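
As a quick check on those numbers, a small sketch that just converts the
quoted 8.5 dB into a voltage ratio (Python):

# 8.5 dB of attenuation at 10 MHz over 500 m, applied to a 2 V level.
loss_db = 8.5
v_tx = 2.0  # volts at the transmitting end

ratio = 10 ** (-loss_db / 20)   # voltage ratio corresponding to 8.5 dB
v_rx = v_tx * ratio

print(f"voltage ratio: {ratio:.2f}")           # ~0.38, about a third
print(f"received level: {v_rx * 1000:.0f} mV") # ~750 mV, the same ballpark
                                               # as the figure quoted above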

-DC RESISTANCE

I'm limited, believe it or not, by the DC resistance of the cable. This
is separate from the attenuation which is for high frequency and is based
on the skin effect losses in the center conductor primarily. It's not
dielectric loss. The DC resistance is important because that's how I do
my collision detection. I'm looking for DC voltage to do collision
detection and if there is a lot of resistance in the cable I get less of
my voltage and I'm not guaranteed to detect collisions.

-PROPAGATION DELAY

I'm limited by the propagation delay, in other words by the speed of
light. I've got some guys in research working on the warp drive and
we'll have faster than light cables as soon as we get negative delay
lines. The restriction that ethernet can't exceed 2.8 kilometers is
really the restriction that the propagation delay can't exceed 46.4
microseconds(1). If you had faster cables you could have them longer.
The ethernet cables are pretty fast as cables go. Typical plastic cables
are 66% propagation velocity. Ethernet cables are 77-80% because they are
foam. You can get a little better but over 80% is a pretty nifty cable.
It's faster than an optical fiber.
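
To see how the 2.8 kilometer and 46.4 microsecond figures hang together,
here is a rough sketch (Python; the 0.77 velocity factor is the talk's own
number, and the left-over delay is simply attributed to repeaters,
transceiver cables and electronics rather than calculated):

# Round-trip delay budget for the maximal 2.8 km end-to-end path.
c = 3.0e8             # m/s, speed of light in vacuum
vf = 0.77             # velocity factor of the foam coax (from the talk)
end_to_end_m = 2800.0
budget_rt = 46.4e-6   # seconds, maximum allowed round trip

cable_rt = 2 * end_to_end_m / (vf * c)
print(f"cable alone, round trip: {cable_rt * 1e6:.1f} us")      # ~24 us
print(f"left for repeaters and electronics: "
      f"{(budget_rt - cable_rt) * 1e6:.1f} us")                 # ~22 us

# At 10 Mbit/s the 46.4 us round trip is ~464 bit times, which is why the
# minimum frame cannot be a single byte: the sender must still be
# transmitting when a collision from the far end gets back to it.
print(f"round trip in bit times: {budget_rt * 10e6:.0f}")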

-TIMING DISTORTION

Finally, timing distortions. Cables introduce timing distortions in the
signals. It's what's called intersymbol interference in
telecommunications. But because that's an often misused term we just
call it timing distortion. When you are decoding the signal you want to
see where the signals cross through zero. That's how I recover the clock
in Manchester decoding and it's how I get my data. If those zero
crossings shift too far I can't properly decode the data and I get all
errors.

-STRETCHING THE COAX

How can I stretch the cable? The 500 meter limit is based primarily on
attenuation. The other factors, timing distortion, propagation delay,
the DC resistance and the cable attenuation all will limit it at some
point. But the one that limits it for my purposes is the attenuation.
The signal gets weaker beyond 500 meters. What happens if you exceed 500
meters? You start to lose signal but unfortunately I don't lose noise.
The more cable I have, in fact, the more noise I'm going to pick up.
It's a big antenna. The signal to noise ratio will start to decrease as
I get over 500 meters and I'm going to have increased error rates. The
ethernet was designed for a one in ten to the ninth (a billion) bit error
rate in a 14 db signal to noise ratio. That's pretty good and a 14db
signal to noise ratio is pretty low, you normally have much much better
than that. So under light noise environments where you are not in a
factory or near a broadcast radio station (eg in an office) your signal
to noise ratio is going to be a lot better than that. If you don't have
100 transceivers on the network your signal ratio will be a lot better
than that. If your cable is one piece rather than many your signal to
noise ratio will be a lot better than that. Worst case if you did
everything bad, if you made it out of lots of little pieces and you put
all the transceivers on and you had 500 meters and a high noise
environment you still have a 14db signal to noise ratio and you can still
run it over 500 meters. But if everything isn't all that bad you can go
a little longer and not impair the system. But you have to know what you
are doing! Now it can't be configured by dummies anymore. You have to
understand the tradeoffs.
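
A rough feel for how quickly stretching eats the margin (this just scales
the quoted 8.5 dB per 500 meters linearly, which is only an approximation
for a skin-effect loss):

# Extra attenuation from extra coax, scaling 8.5 dB / 500 m linearly.
loss_per_m = 8.5 / 500.0   # ~0.017 dB per meter at 10 MHz (approximation)

for extra_m in (50, 100, 200):
    print(f"+{extra_m:3d} m -> ~{extra_m * loss_per_m:.1f} dB extra loss")
# +50 m -> ~0.9 dB, +100 m -> ~1.7 dB, +200 m -> ~3.4 dB, every one of
# which comes straight out of whatever margin you have over the
# worst-case 14 dB signal to noise design point.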

What else happens if I keep going? Suppose I can tolerate the signal to
noise ratio. Then the DC resistance starts to hit me. I start to lose
collision detect margin. Maybe I can't guarantee collision detection any
more when the resistance increases. The loop resistance of a 500 meter
coax is about 4 ohms plus and my limit is 5 ohms after which I can't
guarantee collision detection - unless you don't have 100 transceivers on
the network. Then you can go a little farther. This is the sausage. If
there isn't as much meat in the sausage I can stuff more in the casing.
As I keep going I get more and more timing distortion. The ethernet coax
introduces about plus or minus 7 nanoseconds of timing distortion. I've
got 25 to play with in the system - 5 for my decoder, 7 for my coax, 1 is
for my transceiver cable, 4 is for my transceiver and the rest is for
noise margin. Well you can start eating up your noise margin.
Obviously, if I go too far beyond 500 meters I pass my 46.4
microsecond(1) round trip and that's the end of the ball game right
there. I'm now no longer on an ethernet because you can't guarantee to
detect collisions. You'll detect some collisions but the stations out in
the suburbs will be running on a carrier sense aloha system with respect
to some of the stations. They will be carrier sense collision detect
with respect to those nearby and the stations in the center of the net
will be carrier sense collision detect with respect to every one. This
may not sound catastrophic but I would hate to do the performance
analysis of that system. I couldn't guarantee that the system is stable,
in fact. That's what I consider hitting the wall.
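
Putting the resistance figures together shows how little headroom there is
(a sketch; the per-meter figure is just the "4 ohms plus" over 500 meters
scaled linearly):

# Collision-detect headroom from the DC loop resistance figures above.
loop_r_500m = 4.0    # ohms, loop resistance of 500 m of coax (from the talk)
limit = 5.0          # ohms, beyond which collision detect isn't guaranteed

r_per_m = loop_r_500m / 500.0    # ~8 milliohms per meter (linear scaling)
headroom = limit - loop_r_500m   # ~1 ohm

print(f"headroom: ~{headroom:.0f} ohm, very roughly "
      f"{headroom / r_per_m:.0f} m of extra coax")   # ~125 m
# ...and only if the rest of the budget (transceiver count, noise,
# timing distortion) is not also being pushed to its limit.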

The 500 meter restriction results in, number one, collision detection and
without that you don't have a stable network. Also, adequate decoder
phase margin, I need 5 nanoseconds to play with. With a 100 nanosecond
bit cell you've got 25 nanoseconds of timing distortion before you start
strobing the bit in the wrong place. And I need 5 nanoseconds of that
for my phase lock loop.

VI. TRANSCEIVER CABLE LENGTH

Let's look at the transceiver cable length. This is the drop cable
between the transceiver and the controller. The tradeoffs here are
roughly the same. Currently it's 50 meters maximum and that is also a
function of the attenuation. The attenuation of that cable is a function
of the wire gauge and a number of manufacturers are making transceiver
cables of different wire gauge. The spec is for 50 meters and if you
make it out of 30 gauge that's not a big enough pipe for reasons which we
will discuss. The DC resistance is also important not because of
collision detect but because I'm powering the transceiver at the other
end of that cable out of the controller. If I have too much resistance I
have too much voltage drop in the cable and I can't guarantee that the
transceiver will power up correctly depending on wire gauge.

-PROPAGATION DELAY AND TIMING DISTORTION

The transceiver cable length also affects propagation delay although the
main contributors are coax length, fiber optic repeater length and the
repeaters themselves. The cable also affects timing distortion but here
it's much much less significant than the coax. Even if you doubled the
length the distortion would only be 2 nanoseconds which is almost
negligible. So the same parameters apply here as on the coax but in
differing degrees. It's not so sensitive to propagation delay or timing
distortion but very sensitive to DC resistance and attenuation.

-DC RESISTANCE

The 500 meter coax limit is based on attenuation while the 50 meter
transceiver cable length is based on DC resistance. The cable is spec'ed
to have maximum loop resistance between power supply and transceiver of 4
ohms. The cable is about 3.5 of those ohms and that's if it's 20 gauge
wire. If it's 22 gauge wire I don't believe you can run 50 meters so you
must be careful at least on the power pair. As you increase the cable
length the transceiver may not power up or it may blow a fuse.
Transceivers are negative resistance devices. As you decrease the
voltage it increases the current and blows the fuse (up to a certain
point).

-ATTENUATION

If you extend the length and remain functional say by having more power,
then you hit the signal to noise ratio and your error rates go back up.
It's the same situation as with the coax cable. But you have this other
nasty thing called a squelch circuit. Communications systems designers
always design squelch circuits. Any time you are at the end of a long
communications channel you don't want to do anything unless you are sure
what you are hearing is signal and not noise. You don't want to turn on
the amplifiers for noise. On the coax that's fairly easy to do since my
signaling is DC. I'm unipolar signaling and I can simply have a DC
threshold detector. There's no such thing as DC noise. If there was we
could tap into it and turn on the lights in Chicago. Noise by definition
always has zero average, there's just as much positive as negative -
something about entropy. So on the coax I'm using DC levels and I can
just pick up the DC levels and my squelch is very easy. On the
transceiver cable I'm not so lucky. Since I'm transformer coupled (that's where
isolation is done) I can't send any DC through the transformer. I have
to use the signal itself to determine if there is any signal. That's
like chasing your own tail. What it says is that I assume I am going to
get at least so much signal at the end of the transceiver cable in the
worst case. If I don't, I don't even turn on the receiver in the
transceiver. If the attenuation is too great in the transceiver cable
not only do I have a worse signal to noise ratio, I might not even have
enough signal to turn on the squelch circuits which again makes it
non-functional. With the ethernet spec numbers in the worst case with
transceiver cable with 4 db loss, etc you are guaranteed plus or minus
400 millivolts at the far end of the cable. That's not a whole lot.
Less than that and I can't guarantee the squelch circuit turns on.

50 meters of 20 gauge wire will result in more than 9.4 volts DC
available to power the transceiver. The transceiver has got to power up
with that. It does not have to power up with 9.3. If you've got half an
amp and 4 ohms that's a two volt drop that says you had 11.4 at the
sending end, which is a 12 volt supply minus 5%, which is what most of you
have. 50 meters
maintains my signal to noise ratio and degrades beyond that. I keep my
decoder phase margins and I'm guaranteed of detecting collisions (the
last wall, the one you can't run through). It's harder to stretch the
transceiver cable than to stretch the coax because stretching the coax
only gets you more errors (only if you are worst case) whereas stretching
the transceiver cable makes you non-functional. That's where you have
least flexibility but it can be done if you use heavier gauge wires or
design your own transceiver or retune the squelch circuits or have more
than 12 volts to power it.
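
The arithmetic behind that 9.4 volt figure, as a quick sketch (Python; the
half-amp draw is the talk's own example figure):

# DC power budget down the transceiver drop cable.
supply = 12.0 * 0.95   # 12 V supply minus 5% tolerance -> 11.4 V
loop_r = 4.0           # ohms, maximum loop resistance of the drop cable
current = 0.5          # amps drawn by the transceiver (example figure)

drop = current * loop_r                               # 2 V lost in the cable
print(f"at the transceiver: {supply - drop:.1f} V")   # 9.4 V
# The transceiver is required to power up at 9.4 V but not at 9.3 V,
# which is why a thinner (22 gauge) power pair over the full 50 m
# is a gamble.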

VII. NUMBER OF TRANSCEIVERS ON A SEGMENT.

There's a limit of 100 transceivers on a cable. The number is limited by
the shunt resistance of the cable (not the capacitance.) Each transceiver
is a resistor across the cable. The resistance should be as high as
possible, the spec says at least 100K ohms. The DEC H4000 is typically
250-300K minimum. Each one of those shunt resistors bleeds off a little
of that DC current I'm using to detect my collisions so I don't want too
many. Also the number of transceivers is limited by the input bias
current. When I'm powered on I'm drawing a little bit of current out of
the cable partly due to the resistance and partly due to the electronics
since I can't perfectly back bias a diode. Diodes are leaky, they have
leakage. I'm allowed 2 microamps which is not much but when multiplied
by 100 transceivers, 200 microamps is starting to be some real current.
So that limits me for collision detect reasons and for no other reasons.
Also I've got a tolerance on the drive level. I'm driving it (I think)
with 64 milliamps but I can't hold that very accurately. I defy anyone
to design a 10 megahertz, 25 nanosecond slew rate limited high frequency
current driver that's held to very tight tolerances. I can make the
receivers pretty good but it's hard to make the drivers that accurate.
So I've got weak transmitters and strong transmitters and I've got to
detect collisions between all of them. In the worst case I'm a strong
transmitter and 500 meters away is a weak transmitter and I have to make
sure that even in the presence of my own strong signal I can hear his
weak signal. So that all ties into it and into DC resistance on the
cable.
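
Those shunt and bias figures are tiny individually but add up; a rough
sketch of their effect on the DC level (this assumes the usual two 50 ohm
terminators, so the cable looks like 25 ohms at DC, and uses the 64 mA and
2 microamp figures quoted above rather than exact spec values):

# DC level lost to 100 transceiver taps on a terminated segment.
r_term = 25.0          # ohms, two 50 ohm terminators in parallel (assumed)
r_shunt_each = 100e3   # ohms, minimum allowed shunt per transceiver
n = 100                # transceivers on the segment
i_drive = 64e-3        # amps, drive current (the talk's "I think" figure)
i_bias_each = 2e-6     # amps, allowed leakage per powered transceiver

r_shunt_total = r_shunt_each / n                           # 1 kohm across the cable
r_eff = r_term * r_shunt_total / (r_term + r_shunt_total)  # ~24.4 ohms

level_ideal = i_drive * r_term                              # with no taps
level_loaded = i_drive * r_eff - n * i_bias_each * r_eff    # with 100 taps

print(f"ideal DC level : {level_ideal * 1000:.0f} mV")
print(f"with 100 taps  : {level_loaded * 1000:.0f} mV")
print(f"margin given up: {(level_ideal - level_loaded) * 1000:.0f} mV")
# Only a few percent, but the detector has to see a weak, distant
# transmitter on top of your own strong signal, so a few tens of
# millivolts of lost margin matters.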

VIII. PLACEMENT OF TRANSCEIVERS.

Where I can place the transceivers is limited by the shunt capacitance.
This is the 2.5 meter rule. I don't want big AC loads on the cable
because I get big reflections. This is the exact same problem, by the
way, as with boards in a unibus backplane or in any backplane. You don't
want all the loads in one place. You want them distributed so that
there's time between the reflections.

So how can we get around all this? The 100 transceiver limit is primarily
based on the shunt resistance. Remember we were limited by shunt
resistance, bias current and transmit level tolerance but the first wall
you hit is shunt resistance believe it or not. That 100K ohms, with 100
of them in parallel, comes to 1000 ohms and that is a lot of leak, it's a leaky
pipe. If I lose more than that I lose my assurance of detecting
collisions. The placement is limited by shunt capacitance and it assumes
the 100 transceiver limit. Actually it turns out the worst case for
placement is not 100 but 30-40 transceivers. In fact, 100 is better than
30-40 because some of the reflections start to cancel each other out.
This can be done in simulation and we can prove this. As you increase
the number of transceivers per segment over 100 I start to lose collision
detect margins and can no longer guarantee collision detection and that
blows me out of the water. If I vary from the 2.5 meter placement rule I
get reflection increases. This doesn't stop things from working. It
introduces more errors into the system. If I have error margin, i.e. I am
not using 500 meters of cable, I am not in a high noise environment and I
haven't segmented my cable in lots of little pieces, then I can start to
violate the 2.5 meter rule. I can lump a few transceivers here and some
there, etc. The 100 transceiver limit will hit the collision detect
limit. The 2.5 meter spacing hits the signal to noise ratio all other
things being equal.

IX. TOTAL NETWORK LENGTH.

Now let's look at total network length. I'm allowed 2.8K meters or 46.4
microseconds(1) end to end. That doesn't mean I can't have more than
2.8K meters of cable in the system. Within a particular topology with a
maximal end to end path I may be able to stretch, say, a 500 meter
segment but I would have to take some off somewhere else to stay within
the 2.8K limit. For example I could have two 1000 meter segments connected
with a repeater if I could live with the signal to noise ratio. But I'm
back to doing some engineering on the system.
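
As a sketch of that bookkeeping, here is a hypothetical path checked
against the 2800 meter budget quoted above (the segment layout is
invented for illustration):

MAX_END_TO_END_M = 2800.0
path = {"coax segment A": 500.0, "fiber link": 1000.0, "coax segment B": 1000.0}
used = sum(path.values())
print(f"worst-case path: {used:.0f} m, headroom {MAX_END_TO_END_M - used:.0f} m")

# Stretching one coax segment to 1000 m is only possible if something else
# on the maximal path is shortened by the same amount, and only if the
# signal to noise ratio on the stretched segment is still acceptable.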

We wanted ease of configuration. If you stick by the rules (500 m coax,
50 m transceiver cable, 1000 m fiber, no more than 2 repeaters in the
maximum path and no more than 100 transceivers spaced at 2.5 m) and don't
violate any of them you can hit the limit on every one of those rules in
one network and we guarantee the configuration works. You will have
adequate noise margin, detect collisions all the time, you will not have
excessive timing distortion, you will not exceed the propagation delay
limit and you won't have excessive reflections. It's a worst case design
system. If you need to break any of those rules it's no longer a worst
case design system. That doesn't mean that it won't work in your
configuration, only that it is no longer guaranteed to work in the
general case.
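
Those rules are mechanical enough to check automatically. A minimal
sketch, assuming exactly the limits listed above; the function and its
parameter names are invented for illustration.

def ethernet_config_ok(coax_m, drop_m, fiber_m, repeaters, taps, spacing_m):
    """True if every configuration rule from the talk is respected."""
    return (coax_m <= 500 and       # coax segment length
            drop_m <= 50 and        # transceiver (drop) cable length
            fiber_m <= 1000 and     # point to point fiber link length
            repeaters <= 2 and      # repeaters in the maximum path
            taps <= 100 and         # transceivers on a segment
            spacing_m >= 2.5)       # spacing between transceivers

# Every rule at its limit at once is still a guaranteed-to-work network:
print(ethernet_config_ok(500, 50, 1000, 2, 100, 2.5))   # True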

We wanted the minimum frame length to be as short as possible. In fact,
there was a movement while we were writing the spec to cut it from 2800
meters maximum down to 1500 meters in order to shorten that minimum frame
length and get a little more performance out of the network. But you
turn out to be on the knee of the curve at that point. You don't get
that much more performance and we concluded the extra kilometer was more
than worth it. The 2800 meters is a tradeoff between your need for a
long network and performance. With the specs it is impossible to exceed
the round trip delay constraint.
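
The trade-off can be sketched numerically. Treating the 46.4 microsecond
figure above as the delay a minimum-length frame has to outlast at the 10
megabit per second signalling rate is a simplification (the spec's exact
slot-time accounting differs, as the transcriber's note hints), but it
shows the shape of the curve.

BIT_RATE = 10e6          # bits per second
delay_s = 46.4e-6        # quoted end to end figure

min_frame_bits = BIT_RATE * delay_s
print(f"a frame must occupy roughly {min_frame_bits:.0f} bit times on the wire")

# Cutting the maximum network to 1500 m would let the minimum frame shrink
# roughly in proportion, which is the performance gain that was judged not
# worth losing the extra kilometer of reach.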

You can squeeze the sausage, in conclusion, if you know how strong the
casing is, if you know where the limits are and what happens when you
start hitting those limits. If you try and stuff too much in the sausage
the casing will pop out. How? Maybe it won't work. Maybe the
transceiver won't power up, maybe too many errors, maybe you won't detect
collisions all the time. If you exceed the 2800 meter limit you won't
guarantee collision detect by stations in the suburbs. To exceed the
2800 meter limit you have to violate something else. The configuration
constraints are based on real physical limits; speed of light, electronic
circuits, timing distortion, phase locked decoders and cable design.
Thorough engineering went into the design of this system. You can break
the rules of the system if you can understand and do over again the
engineering. You can stretch the cables, put it in high noise
environments, put on more transceivers or closer together, just make sure
you know what you are doing.

As always there's no free lunch. The free lunch they serve you here
you've paid for already. There are two kinds of free lunches in
engineering, those you have already paid for and those you have not yet
paid for.

At this point I'll open the floor to questions.

X. QUESTIONS AND ANSWERS.

Question: What's the two repeater limit? Can you go to three, four?
Answer: The spec says no more than two repeaters in tandem between any
two stations. The reason is to keep from violating the 2800 meter limit.
If I allow three, even sticking by all the other rules you could violate
the 2800 meter rule. Other nasty things start to happen when you have
more repeaters. It turns out that the interpacket gap shrinks when you
go through a repeater. It's virtually time dilation. When you go
through two properly designed repeaters the 9.6 microsecond interpacket
gap you started with shrinks to about 6.3 microseconds. If you want to
go through more repeaters you will shrink that gap more in the worst
case. You have the possibility that stations will not be able to receive
back to back packets. If you can live with that you can use more than
two repeaters. Clearly, you don't want to shrink the gap to less than
zero. If you use more than two repeaters the 2800 meter end to end
restriction still holds. A repeater in itself with its squelch circuits
and its buffers and its read time is the equivalent of about 200 meters
of cable. So to have a third repeater you are giving up 200 meters of
coax somewhere.
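
A rough sketch of that shrinkage, assuming the loss is simply half of the
3.3 microseconds quoted for two repeaters (the talk gives no per-repeater
figure, so this is only an extrapolation):

GAP_START_US = 9.6
GAP_AFTER_TWO_US = 6.3
shrink_per_repeater = (GAP_START_US - GAP_AFTER_TWO_US) / 2   # ~1.65 us

for repeaters in range(6):
    gap = GAP_START_US - repeaters * shrink_per_repeater
    print(f"{repeaters} repeaters: interpacket gap ~{gap:.2f} us")

# Somewhere past two repeaters the worst-case gap becomes too short for a
# slow station to turn around and receive back to back packets, which is
# the failure mode described in the answer.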

Q: Experimentally what coax cable lengths have you successfully
transmitted with, say, a dozen transceivers on the cable?
A: I haven't. Folks at 3COM have run transceivers over almost 1000
meters without a repeater. But that's a fairly lightly loaded net, a
small handful of transceivers. I generally don't violate the rules
because I don't trust myself. I don't want to have to go through the
analysis every time I configure an ethernet. Doing an ethernet I want it
to be expandable. I want to be able to hang 100 transceivers on it when
I want it.

Q: Presumably you could add a repeater when you hit the limit adding
transceivers.
A: Absolutely. But how do you know when you've hit it? Do you have the
maintainability primitives built into your system to detect when you are
no longer able to detect collisions? How do you know when you are not
detecting collisions? How do you know when the errors you are getting are
too many? Or what they are caused by? It's very hard to maintain a system
that's breaking the rules because you can't tell whether the system is
not working because you've broken the rules or because something is not
working.

Q: You mentioned the antenna effect. Can you comment on interference
generated by ethernet cables and the susceptibility to noise interference
in, say, the environment we have where the cable runs close to closed
circuit TV cables?
A: The ethernet system with a little good engineering easily meets FCC
requirements. Current tests show that it in fact may meet Tempest EMI
requirements. The cable shielding is unbelievable. You've got quadruple
shield on the coax and triple shield on the drop cables. I've tested the
ethernet system under 5 volt per meter and, in fact, 10 volt per meter rf
field strengths up to a gigahertz and get absolutely no errors under
those conditions. I've tested it under static discharge up to 20,000
volts directly discharging to the shield of the coax. In fact, I was
able to draw St Elmo's fire off one of the terminators and not only did I
not blow up any of the equipment, I did not get any CRC errors. It's
fairly robust.

Q: My question deals with the H4000 transceiver. Should you be able to
plug in a transceiver to the DEUNA with the system running and not cause
a transient power surge that would pull DC low down on you?
A: The DEUNA has a bulkhead assembly that comes with it which is
specifically designed to limit surge currents into the transceiver so you
can do exactly what you described. There's an SCR/RC surge limiter and
circuit breaker built into that bulkhead for exactly that reason.

Q: I'd just like to comment that I thought this was an excellent
presentation.
A: Thank you.

Q: What type of throughput could I expect given maximum packet size?
A: The throughput of an ethernet is much more a function of your higher
layer software than it is of the data rate of the ethernet. You have to
know what other stations are trying to use it, what your packet formats
are, what the controllers are doing. What I'm saying is, "That's not an
easy question to answer." The answer is, "Under what conditions?" Using
DEUNAs? Under what load? What software? It's not something that can be
answered with a number. I wish I could. It's the kind of thing that
everyone asks.

Q: Can you give an approximate answer for only two systems?
A: We have run VAXs with DEUNAs through the ethernet at continuous
transmission and reception of between 1.2 and 1.5 megabits per second.
And that clearly doesn't use up a whole lot of the ethernet. There's
room still for two more VAXs to do the same thing. That's limited more
by the DEUNA than anything else.
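
The arithmetic behind "room for two more VAXs", as a sketch (only the 1.5
megabit per pair figure comes from the answer; the 10 megabit channel rate
is the ethernet signalling rate):

CHANNEL_MBPS = 10.0
per_pair_mbps = 1.5      # one VAX/DEUNA pair, continuous send and receive
pairs = 2                # the measured pair plus two more VAXs doing the same
used = pairs * per_pair_mbps
print(f"{used:.1f} Mbps of {CHANNEL_MBPS:.0f} Mbps used "
      f"({100 * used / CHANNEL_MBPS:.0f}%)")

# The bottleneck in this measurement is the DEUNA controller, not the cable.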

Q: What effect does a DELNI have on all these considerations?
A: Good point. The DELNI is another piece of the physical channel. You
can have transceiver cable between the DEUNA and the DELNI. You can have
transceiver cable between the DELNI and the H4000. The DELNI has some
propagation delay and another copy of the squelch circuits. The net
effect is that on an absolutely maximally configured ethernet you can't
put the DELNIs on the very very ends of the horizon (the suburbs) because
of the additional delay. The configuration guidelines in the DELNI
documentation describe all that.

Q: 3COM is pushing this thin ethernet cable for their IBM PC connections
and they say all I'm giving up is maximum coax length. What kind of
trouble am I asking for if I use that in a network with real ethernet
coax?
A: Well, in fact, what 3COM is doing is prereleasing the Cheapernet
product. I was out there a couple weeks ago talking to Ron Crane who, by
the way, is one of the developers of the ethernet. They are pretty smart
people. You are giving up a little more than the maximum cable length,
you are giving up signal to noise ratio. The design center for the
Cheapernet is two orders of magnitude worse than for ethernet. The basic
error performance of the channel is significantly worse.

Q: Am I going to get bad reflections at the point at which I switch these
cables? In particular, one's tempted to run thick cable for long runs
then hit a cluster of offices and have a little bit of thin cable then
switch back to thick. Would it be bad to do that three or four times?
A: Yes.

Q: Every picture we see of legal ethernet has one point to point segment
in it exactly 1000 meters long. Is it really the case that I can have as
many of those point to point segments as I want coming off of a base rib
as long as the total of the two longest ones is not more than 1000
meters?
A: Yes. You can have 100 repeaters connected to the center segment each
with 500 meter fiber links to a separate segment and it works just fine.

Q: Thank you. Would you tell that to all the people that are selling
ethernet for DEC?
A: The reason that the charts are drawn that way is because it's easier
to draw the charts that way.

Q: The problem is that the local people don't seem to understand that
and keep telling me that's not a legal ethernet.
A: I promise the next ethernet presentation I give will have that chart
with multiple fiber optic repeaters.

Q: Variation of the same question. What is the likely impact of optical
technology on your strategy for the introduction of routers in an
internet context? I presume they are not bound by the same attenuation
delay.
A: Right. Routers do store and forward. As such they are not part of
the 46 microsecond collision detect. Fiber technology is super for long
distance point to point links. In fact, that's why we specify it for the
long distance point to point link. They are suitable for even longer
distances. The phone companies use them regularly. The problem with
fiber is it's special. You have to pull it between buildings or where
you are going to be routing. Most people aren't prepared to do that.
They want to use leased lines or X.25 or some publicly available
channel. If you are willing to pull fiber it should not be a problem
having high speed links between routers using fiber. That's a technology
we are very interested in.

Q: I can infer from that the distance consideration will be considerably
different.
A: Absolutely. That's right.

Q: I need to hook up an ethernet between three different buildings and I
hear that there can be potential grounding problems. Could you talk a
little about that?
A: Sure. You generally want to avoid grounding a piece of wire in two
separate buildings because there can be differences in the ground
potential between those buildings and you'll get lots of amperes flowing
through the shield of that cable. If the voltage difference between the
buildings is low enough and the source impedance is high enough you can
ground the cable at both ends. Where it enters the building is the
preferable place to do that for lightning protection. If you can't tell
by measurements (I would find a qualified technician or electrician to
make those measurements) I would ground it in one building and put a
lightning arrester in the other building to prevent the lightning hit
from propagating through.

Q: You connect to one of the barrel connectors and physically ground it
to the...
A: That's correct, I would put a barrel where the cable enters the
building and ground it to the frame of the building where it enters.
That's exactly what I did in a number of the field test sites for
ethernet.

Q: How about the ground rods most computer rooms have?
A: I would do it where the cable enters the building for maximum
lightning protection.

Q: We are getting a VAX cluster and hope in the next year to plan for
ethernet. I would like to know how a VAX cluster fits on to the
ethernet network and what its growth potential is.
A: The VAX cluster doesn't directly connect to the ethernet. You can
connect VAX cluster interfaces to the VAXs and you can connect ethernet
interfaces to the VAXs and the DECNET will route between the VAX cluster
and the ethernet. The VAXs will be acting as routers but there is no
direct connection between the VAX cluster and the ethernet. However, if
anyone has noticed, the cable in the VAX cluster is the ethernet cable.
It's the same coax.

Q: Is there any plan to put anything into the star coupler or the HSC
directly?
A: No. The reason is that there's an enormous amount of difference between
the two. You've got speed conversions and protocol conversions. The CI
datalink is very very different from the ethernet datalink because it
was designed for a different application and the speeds are seven to one
difference. There would have to be a computer in there.

Q: This may not be a proper forum to ask this question but how has DEC
violated the rules in its own installation?
A: I have a number of ethernets in my lab that have the transceivers too
close. I have some that have transceivers that exceed their own specs.
I generally don't stretch the transceiver cables beyond 50 meters. I have
hooked up five repeaters in tandem, two of them being fiber optic.

Q: How about cable length?
A: I usually don't go beyond 500 meters because I usually don't need to.


Transcriber's note:
(1) The Blue Book says 44.9 microseconds.




--
"I am inclined to tell the truth and dislike people who lie consistently.
This makes me unfit for the company of people of a Left persuasion, and
all women"

The Natural Philosopher[_2_] May 23rd 20 12:13 PM

Do powerline adapters work during a power cut?
 
On 23/05/2020 11:18, Tim Streater wrote:
The collision detect budget is critically dependent on the
DC resistance of the system; the high-frequency attenuation is much
less critical.


No, it isn't. This seems to have been a myth that got around

The rest is valid tho.


--
Any fool can believe in principles - and most of them do!



The Natural Philosopher[_2_] May 23rd 20 12:46 PM

Do powerline adapters work during a power cut?
 
On 23/05/2020 12:13, The Natural Philosopher wrote:
On 23/05/2020 11:18, Tim Streater wrote:
The collision detect budget is critically dependent on the
DC resistance of the system; the high-frequency attenuation is much
less critical.


No, it isn't. This seems to have been a myth that got around

The rest is valid tho.


I want you not simply to accept what an expert has said, but to explain
WHY what he says is true.
Especially since I can find no other reference anywhere to 'DC' or
'cable *resistance*' in any paper on CSMA/CD systems. Cable impedance,
yes, resistance no.

And today's systems still implement CSMA/CD but are universally coupled
in with transformers. That don't pass DC..

And DC won't propagate any faster than someone else's pulse train anyway.

AIUI collision detection is done by sensing that the output on the wire
which you have 'grabbed' is not exactly what you are putting on it.

Cable *impedance* and of course attenuation matters but as long as they
are within tolerance that's OK, what is crucial however is that you
don't have excessive propagation delays and that is what screws you with
long cable runs. Two stations - one at each end - can transmit, see
their packets go clear and unbuggered before detecting each other's
transmission if the cable is too long.

In short.

- impedance is determined as your article says by ratio of inner to
outer conductor diameter, modified by the dielectric in between - air
cored polythene usually.

- Attenuation is a function of resistance, which given the above is a
function of cable core circumference and material - usually silver plate
on quality cables, or just copper. (fat cables lose less).

- propagation delay is determined by the dielectric and the cable length
- the nearer a vacuum the nearer the speed of light the signal travels.


As it happens for all cables and electronics in use then and now,
propagation delay is the limiting factor. Because you have to have a
line that is clear of all other travelling waves to transmit on.

Collision detection is a simple matter of receiving stuff you did not
transmit.

No DC involved at all.


Same as wifi today. No DC involved at all.






--
No Apple devices were knowingly used in the preparation of this post.

NY[_2_] May 23rd 20 01:41 PM

Do powerline adapters work during a power cut?
 
"Jethro_uk" wrote in message
...
The whole thing has gone way beyond my will to dig deeply, but my
recollection was the impedance was specifically about preventing
reflections along the cable ...


But I imagine the important part of the impedance is the imaginary (*)
part - due to effects of L and C of the cable - rather than the DC
resistance. Maybe I'm wrong.


(*) The part that's multiplied by i / j / square-root-of-minus-one. I
remember getting into a heated discussion in a pub quiz which asked "what
letter is used to denote square-root-of-minus-one?" and they would only
accept "i" and not "j" - the latter being used in electronics because "i" is
used for instantaneous current. I won my point, but it was a hard fight -
Google to the rescue! I worked with a guy called John William Taylor, aka
Bill Taylor but referred to universally as J-Omega (as in the algebraic term
j-omega-t that occurs all over in electronics).


The Natural Philosopher[_2_] May 23rd 20 01:44 PM

Do powerline adapters work during a power cut?
 
On 23/05/2020 13:29, Jethro_uk wrote:
The whole thing has gone way beyond my will to dig deeply, but my
recollection was the impedance was specifically about preventing
reflections along the cable ...

that is why you terminated, and why you HAD to use a t-piece on the back
of stuff. Not a cable to a T piece from the NIC.


Having an unterminated end meant your packets interfered with
themselves. Instant non working for anything near the OTHER end

Yes, I was the network manager for a 70+ node coaxial ethernet
network..until 10baseT came along, thank Clapton..



--
If I had all the money I've spent on drink...
...I'd spend it on drink.

Sir Henry (at Rawlinson's End)

NY[_2_] May 23rd 20 02:23 PM

Do powerline adapters work during a power cut?
 
"The Natural Philosopher" wrote in message
...
On 23/05/2020 13:29, Jethro_uk wrote:
The whole thing has gone way beyond my will to dig deeply, but my
recollection was the impedance was specifically about preventing
reflections along the cable ...

that is why you terminated, and why you HAD to use a t-piece on the back
of stuff. Not a cable to a T piece from the NIC.


Having an unterminated end meant your packets interfered with themselves.
Instant non working for anything near the OTHER end

Yes, I was the network manager for a 70+ node coaxial ethernet
network..until 10baseT came along, thank Clapton..


I remember our lab had two LANs - a private one just for our project (other
projects had their own) and a building-wide and external one. For some
reason there weren't switches to allow all the networks to be connected
together, but with private traffic remaining on our project's segment.

We got used to making changes to our private LAN, such as breaking the LAN
to insert a new computer, or connecting/disconnecting a T-piece from a
computer; the difficulty was remembering *not* to make any such changes to
the public LAN. There was a sign "Anyone who fails to terminate the company
LAN will be terminated."


All that improved when the company changed to UTP Cat5 cable - we got
switches which allowed the same computers to see the outside world and yet
to keep their traffic private.

Another company I worked for had an additional problem: we were working on a
process for "building" servers (install OS, install apps, make configuration
tweaks) and these servers were configured as DHCP servers. Someone
accidentally connected such a server to the company LAN instead of the
project one and brought the company to a halt because any computers that were
turned on after that had a 50:50 chance of being given a 192.168.x.x company
address or a 10.0.x.x private address - and only 192 addresses would talk to
the outside world or to company servers (eg mail). Whoops! Mind you, I
perpetrated a similar "company-stopping" faux pas. I was working on a mechanism
for cloning a PC disc image from server to a corrupted client, to restore it
to factory state. I remembered to disable DHCP on that server and started to
run Norton Ghost to transfer the image. Unfortunately that generated traffic
at an alarming rate, saturating the 100 Mbps LAN and thus "killing"
everything else on that switch. That only "killed" our private LAN, but we
realised it would be catastrophic in a real customer environment. It was
due to this experience that I got Norton to implement a "throttling" setting
on their server software so a server could be set to use a maximum of n Mbps
for the PC-build process, leaving a usable proportion of the LAN for other
traffic.


The Natural Philosopher[_2_] May 23rd 20 02:47 PM

Do powerline adapters work during a power cut?
 
On 23/05/2020 13:41, NY wrote:
"Jethro_uk" wrote in message
...
The whole thing has gone way beyond my will to dig deeply, but my
recollection was the impedance was specifically about preventing
reflections along the cable ...


But I imagine the important part of the impedance is the imaginary (*)
part - due to effects of L and C of the cable - rather than the DC
resistance. Maybe I'm wrong.


in terms of maintaining wave shape, it is, but in terms of attenuation
it's the actual (skin) resistance that counts.

I remember doing that ghastly calculus of an infinite number of Ls in
series and Cs in parallel and showing that in the limit it looked just
like a resistor...
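
For anyone who wants to skip the ghastly calculus, a short Python sketch
of that limit. The per-metre inductance and capacitance are typical
textbook values for 50 ohm coax, not figures from this thread.

import math

L_PER_M = 250e-9     # henries per metre (assumed typical value)
C_PER_M = 100e-12    # farads per metre (assumed typical value)
SECTION_M = 0.01     # treat the cable as a ladder of 1 cm LC sections
FREQ_HZ = 10e6       # around the ethernet signalling frequency

L = L_PER_M * SECTION_M
C = C_PER_M * SECTION_M
w = 2 * math.pi * FREQ_HZ

# The infinite ladder's input impedance Z satisfies
#   Z = jwL + (Z * Zc) / (Z + Zc),  with Zc = 1/(jwC),
# which rearranges to Z**2 - jwL*Z - L/C = 0 and solves to:
z_real = math.sqrt(L / C - (w * L / 2) ** 2)
z_imag = w * L / 2

print(f"ladder input impedance: {z_real:.2f} + {z_imag:.4f}j ohms")
print(f"sqrt(L/C): {math.sqrt(L / C):.1f} ohms")

# As the section length shrinks the reactive part vanishes: the lossless
# LC line looks, at its input, just like a 50 ohm resistor.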



(*) The part that's multiplied by i / j / square-root-of-minus-one. I
remember getting into a heated discussion in a pub quiz which asked
"what letter is used to denote square-root-of-minus-one?" and they would
only accept "i" and not "j" - the latter being used in electronics
because "i" is used for instantaneous current. I won my point, but it
was a hard fight - Google to the rescue! I worked with a guy called John
William Taylor, aka Bill Taylor but referred to universally as J-Omega
(as in the algebraic term j-omega-t that occurs all over in electronics).



--
"Ideas are inherently conservative. They yield not to the attack of
other ideas but to the massive onslaught of circumstance"

- John K Galbraith


John Rumm May 23rd 20 05:50 PM

Do powerline adapters work during a power cut?
 
On 23/05/2020 12:13, The Natural Philosopher wrote:
On 23/05/2020 11:18, Tim Streater wrote:


So the guy who actually made the decision on the cable type for use on
ethernet, a world renowned IEEE expert member, who was on several
occasions the Task Force chairman and editor of the 802.3x standards
document[1],
explains why the DC resistance is an important parameter for the CD
mechanism:

The collision detect budget is critically dependent on the
DC resistance of the system; the high-frequency attenuation is much
less critical.


and some decades later, TNP says:

No, it isn't. This seems to have been a myth that got around


What a dilemma, who should we believe?

The rest is valid tho.


Well I am sure he will be pleased to have your validation.


[1]
https://ecfsapi.fcc.gov/file/1050839...20Ethernet.pdf

--
Cheers,

John.

/=================================================================\
| Internode Ltd - http://www.internode.co.uk |
|-----------------------------------------------------------------|
| John Rumm - john(at)internode(dot)co(dot)uk |
\=================================================================/

Vir Campestris May 23rd 20 09:24 PM

Do powerline adapters work during a power cut?
 
On 22/05/2020 14:08, Jethro_uk wrote:
Mists of time, I'm afraid.

Last coax ethernet I worked on had some really weird things happening.
There were certain pairs of devices that simply could not see each other
- despite seeing every other device on the network. Swapping the order of
one in the ring fixed it ... but introduced weirdness elsewhere.


Well, I think I see the problem. And I'm astonished it worked at all -
Ethernet should not be configured as a ring...

Elsewhere in the thread - Manchester encoding is used because it has no
net DC component.

And a story. I once lost complete faith in my engineering manager when
he plugged a bit of coax into a T-piece, took the entire office network
down, and didn't even connect his doing this with the way the entire
office erupted in expletives.

Perhaps that's why he made me redundant a year or so later!

Andy

Paul[_46_] May 24th 20 03:55 AM

Do powerline adapters work during a power cut?
 
The Natural Philosopher wrote:
On 23/05/2020 11:18, Tim Streater wrote:
The collision detect budget is critically dependent on the
DC resistance of the system; the high-frequency attenuation is much
less critical.


No, it isn't. This seems to have been a myth that got around

The rest is valid tho.


How does the collision detect work ?

Paul


Roger Hayter[_2_] May 24th 20 10:40 AM

Do powerline adapters work during a power cut?
 
The Natural Philosopher wrote:

On 23/05/2020 12:13, The Natural Philosopher wrote:
On 23/05/2020 11:18, Tim Streater wrote:
The collision detect budget is critically dependent on the
DC resistance of the system; the high-frequency attenuation is much
less critical.


No, it isn't. This seems to have been a myth that got around

The rest is valid tho.


I want you to not accept what an expert has said, but explain WHY what
he says, is true?
Especially since I can find no other reference anywhere to 'DC' or
'cable *resistance* in any paper on CSMA/CD systems. Cable impedance,
yes, resistance no.

And today's systems still implement CSMA/CD but are universally coupled
in with transformers. That don't pass DC..

And DC won't propagate any faster than someone else's pulse train anyway.

AIUI collision detection is dome by sensing that the output on the wire
which you have 'grabbed' is not exactly what you are putting on it.

Cable *impedance* and of course attenuation matters but as long as they
are within tolerance that's OK, what is crucial however is that you
don't have excessive propagation delays and that is what screws you with
long cable runs. Two stations - one at each end - can transmit, see
their packets go clear and un buggered before detecting each others
transmission if the cable is too long.

In short.

- impedance is determined as your article says by ratio of inner to
outer conductor diameter, modified by the dielectric in between - air
cored polythene usually.

- Attenuation is a function of resistance, which given the above is a
function of cable core circumference and material - usually silver plate
on quality cables, or just copper. (fat cables lose less).

- propagation delay is determined by the dielectric and the cable length
- the nearer a vacuum the nearer the speed of light the signal travels.


As it happens for all cables and electronics in use then and now,
propagation delay is the limiting factor. Because you have to have a
line that is clear of all other travelling waves to transmit on.

Collision detection is a simple matter of receiving stuff you did not
transmit.

No DC involved at all.


Same as wifi today. No DC involved at all.


What causes dispersion then? Does not loss (resistance) slow higher
frequencies more and reduce pulse risetime? I have no recollection of
theory, and I am quite prepared to be wrong.

--

Roger Hayter

Roger Hayter[_2_] May 24th 20 10:49 AM

Do powerline adapters work during a power cut?
 
NY wrote:

"Jethro_uk" wrote in message
...
The whole thing has gone way beyond my will to dig deeply, but my
recollection was the impedance was specifically about preventing
reflections along the cable ...


But I imagine the important part of the impedance is the imaginary (*)
part - due to effects of L and C of the cable - rather than the DC
resistance. Maybe I'm wrong.



No. The impedance is real, although brought about by Ls and Cs. But
the real energy passed into this impedance is not dissipatet in the
(hypothetically and practically) negligible resistance but propagated
away until it gets to the matched resistive load at the end of the
cable.[1]




(*) The part that's multiplied by i / j / square-root-of-minus-one. I
remember getting into a heated discussion in a pub quiz which asked "what
letter is used to denote square-root-of-minus-one?" and they would only
accept "i" and not "j" - the latter being used in electronics because "i" is
used for instantaneous current. I won my point, but it was a hard fight -
Google to the rescue! I worked with a guy called John William Taylor, aka
Bill Taylor but referred to universally as J-Omega (as in the algebraic term
j-omega-t that occurs all over in electronics).


[1] If the cable has not got a matched load at the end then you *don't*
see the characteristic impedance at the beginning. But the impedance
you see with a matched load *isn't* the load itself but the cable
impedance.

--

Roger Hayter

The Natural Philosopher[_2_] May 24th 20 11:37 AM

Do powerline adapters work during a power cut?
 
On 24/05/2020 10:40, Roger Hayter wrote:
The Natural Philosopher wrote:

On 23/05/2020 12:13, The Natural Philosopher wrote:
On 23/05/2020 11:18, Tim Streater wrote:
The collision detect budget is critically dependent on the
DC resistance of the system; the high-frequency attenuation is much
less critical.

No, it isn't. This seems to have been a myth that got around

The rest is valid tho.


I want you to not accept what an expert has said, but explain WHY what
he says, is true?
Especially since I can find no other reference anywhere to 'DC' or
'cable *resistance* in any paper on CSMA/CD systems. Cable impedance,
yes, resistance no.

And today's systems still implement CSMA/CD but are universally coupled
in with transformers. That don't pass DC..

And DC won't propagate any faster than someone else's pulse train anyway.

AIUI collision detection is dome by sensing that the output on the wire
which you have 'grabbed' is not exactly what you are putting on it.

Cable *impedance* and of course attenuation matters but as long as they
are within tolerance that's OK, what is crucial however is that you
don't have excessive propagation delays and that is what screws you with
long cable runs. Two stations - one at each end - can transmit, see
their packets go clear and un buggered before detecting each others
transmission if the cable is too long.

In short.

- impedance is determined as your article says by ratio of inner to
outer conductor diameter, modified by the dielectric in between - air
cored polythene usually.

- Attenuation is a function of resistance, which given the above is a
function of cable core circumference and material - usually silver plate
on quality cables, or just copper. (fat cables lose less).

- propagation delay is determined by the dielectric and the cable length
- the nearer a vacuum the nearer the speed of light the signal travels.


As it happens for all cables and electronics in use then and now,
propagation delay is the limiting factor. Because you have to have a
line that is clear of all other travelling waves to transmit on.

Collision detection is a simple matter of receiving stuff you did not
transmit.

No DC involved at all.


Same as wifi today. No DC involved at all.


What causes dispersion then? Does not loss (resistance) slow higher
frequencies more and reduce pulser risetime? I have no recollection of
theory, and I am quite prepared to be wrong.

theoretically dispersion doesn't happen :-)

AFAICR it's down to the effective dielectric constant not being constant
with frequency. Bit like chromatic aberration in glass.



--
"First, find out who are the people you can not criticise. They are your
oppressors."
- George Orwell

The Natural Philosopher[_2_] May 24th 20 11:40 AM

Do powerline adapters work during a power cut?
 
On 24/05/2020 10:49, Roger Hayter wrote:
If the cable has not got a matched load at the end then you*don't*
see the characteristic impedance at the beginning.


Oh, you do...until the reflection comes back!
An unterminated but infinitely long transmission line ...looks like a
terminated one


But the impedance
you see with a matched load *isn't* the load itself but the cable
impedance.


Which is the same value....


--
"It is dangerous to be right in matters on which the established
authorities are wrong."

- Voltaire, The Age of Louis XIV

Paul[_46_] May 24th 20 08:59 PM

Do powerline adapters work during a power cut?
 
The Natural Philosopher wrote:
On 24/05/2020 10:40, Roger Hayter wrote:
The Natural Philosopher wrote:

On 23/05/2020 12:13, The Natural Philosopher wrote:
On 23/05/2020 11:18, Tim Streater wrote:
The collision detect budget is critically dependent on the
DC resistance of the system; the high-frequency attenuation is much
less critical.

No, it isn't. This seems to have been a myth that got around

The rest is valid tho.


I want you to not accept what an expert has said, but explain WHY what
he says, is true?
Especially since I can find no other reference anywhere to 'DC' or
'cable *resistance* in any paper on CSMA/CD systems. Cable impedance,
yes, resistance no.

And today's systems still implement CSMA/CD but are universally coupled
in with transformers. That don't pass DC..

And DC won't propagate any faster than someone else's pulse train
anyway.

AIUI collision detection is dome by sensing that the output on the wire
which you have 'grabbed' is not exactly what you are putting on it.

Cable *impedance* and of course attenuation matters but as long as they
are within tolerance that's OK, what is crucial however is that you
don't have excessive propagation delays and that is what screws you with
long cable runs. Two stations - one at each end - can transmit, see
their packets go clear and un buggered before detecting each others
transmission if the cable is too long.

In short.

- impedance is determined as your article says by ratio of inner to
outer conductor diameter, modified by the dielectric in between - air
cored polythene usually.

- Attenuation is a function of resistance, which given the above is a
function of cable core circumference and material - usually silver plate
on quality cables, or just copper. (fat cables lose less).

- propagation delay is determined by the dielectric and the cable length
- the nearer a vacuum the nearer the speed of light the signal travels.


As it happens for all cables and electronics in use then and now,
propagation delay is the limiting factor. Because you have to have a
line that is clear of all other travelling waves to transmit on.

Collision detection is a simple matter of receiving stuff you did not
transmit.

No DC involved at all.


Same as wifi today. No DC involved at all.


What causes dispersion then? Does not loss (resistance) slow higher
frequencies more and reduce pulser risetime? I have no recollection of
theory, and I am quite prepared to be wrong.

theoretically dispersion doesn't happen :-)

AFAICR its down to the effective dielectric constant not being constant
with frequency. Bit like chromatic aberration in glass.


Did you ever look at the signal on the central coax cable
with an oscilloscope ? What did you see ? Were you surprised
at what you saw ?

https://datasheets.maximintegrated.c...8Q8392LA03.pdf

"to the transmit output driver, which drives the
transmission medium through a high impedance current source."
--------------

"the transition on TXO proceeds monotonically to zero current."
------------

Notice how they talk about current and not voltage.

The coax is 50 ohm doubly terminated, with one ground connection per cable segment.

You pump a current into it. The terminations convert that current
into a voltage.

Once you've seen an oscilloscope picture of what's on the cable,
you won't forget it.

Since V = R*I, the resistance of the cable is important to proper
function. Errors in resistance will lead to errors in developed
voltage, causing problems with collision detection and so on.
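
As a rough sketch of that arithmetic: the 50 ohm terminations are from
this post, the 64 milliamp drive figure is the one mentioned in the talk
earlier in the thread, and the factor-of-two averaging for Manchester
code is an assumption of mine.

R_TERMINATION = 50.0         # ohms at each end of the coax
R_SEEN = R_TERMINATION / 2   # two terminators in parallel = 25 ohms
I_DRIVE = 0.064              # amps sunk by a transmitter while asserted
I_AVG = I_DRIVE / 2          # average DC current with Manchester encoding

v_one_tx = I_AVG * R_SEEN    # average DC shift with one transmitter
v_two_tx = 2 * v_one_tx      # two transmitters colliding
print(f"one transmitter : about {v_one_tx:.2f} V average on the cable")
print(f"two transmitters: about {v_two_tx:.2f} V average on the cable")

# Collision detect thresholds sit between those two levels, which is why
# errors in the cable's resistance (and the shunt loading of the taps)
# erode the margin for declaring a collision.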

The transceivers I used at work, claimed a certain operating
distance, but the manufacturer claimed they would do double
that distance (with the proper coax in usage). Really quite
amazing tech, compared to the home-made transceivers some
of the engineers at work used when prototyping. The first
day I got on the job, my first task was to repair enough
coaxial sections so I could have a network :-( Our home made
transceivers were voltage sources, at a guess, and
I soon discarded them. The commercial replacements were *expensive*,
but, the difference was, they worked, and we could easily
build max-length networks with them.

There's one other thing you should have learned about that
network, as a "manager". Did you notice something ? It only
took a couple years before our first disaster. When I saw it,
a ****ing light bulb went on...

Paul

