Woodworking (rec.woodworking) Discussion forum covering all aspects of working with wood. All levels of expertise are encouraged to participate.

  #1   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 643
Default Move over, SawStop ...

... here comes BladeStop!

https://www.youtube.com/embed/NiRegdech_E?autoplay=1

http://www.bladestop.com

"BladeStop„˘ improves band saw safety, with the ability to stop a
bandsaw blade within a fraction of a second from when contact is made
with the operator, reducing serious band saw blade injuries.
BladeStop„˘ mechanically stops the bandsaw blade when the control unit
determines a person has come in contact with the blade -- stopping
the blade operation within 9 milliseconds of sensing a person's
finger or hand!"
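For scale, the quoted 9 ms stop time can be turned into inches of blade travel with a bit of arithmetic. The blade speed used below (about 3,000 ft/min) is a typical bandsaw figure assumed for illustration; it does not come from the quote.

```python
# Rough scale check on the "9 milliseconds" claim.
# Assumption: blade speed ~3000 ft/min (illustrative, not from the quote).
blade_speed_fpm = 3000
stop_time_s = 0.009  # 9 milliseconds, per the quote

blade_speed_ips = blade_speed_fpm * 12 / 60  # inches per second -> 600

# Upper bound: blade travel if it ran at full speed for the whole 9 ms.
max_travel_in = blade_speed_ips * stop_time_s

# With constant deceleration to zero, average speed is half the initial
# speed, so the travel is roughly half the upper bound.
approx_travel_in = max_travel_in / 2

print(round(max_travel_in, 2), round(approx_travel_in, 2))  # prints 5.4 2.7
```

So even a 9 ms stop lets a few inches of blade pass the contact point; the injury reduction comes from how little flesh that moving blade can reach in that time, not from the blade freezing instantly.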

  #2   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 3,287
Default Move over, SawStop ...

I have to say, I am sorry to see that.

It means that all over the internet, in a high concentration here, and at the old men's table at Woodcraft, the teeth gnashing will start.

Screams of civil rights violations, chest thumping of those declaring that their generation had no guards or safety devices and they were fine, the paranoids buying saws now before the nanny state Commie/weenies make safety some kind of bull**** issue... all of it.

Ready for the first 250 thread here for a long, long time. Nothing like getting a good bitch on to fire one up, though.

Robert

  #3   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 14,845
Default Move over, SawStop ...

On Tuesday, November 21, 2017 at 2:04:21 AM UTC-5, wrote:
I have to say, I am sorry to see that.

It means that all over the internet, in a high concentration here, and at the old men's table at Woodcraft the teeth gnashing will start.

Screams of civil rights violations, chest thumping of those declaring that their generation had no guards or safety devices and they were fine, the paranoids buying saws now before the nanny state Commie/weenies make safety some kind of bull**** issue... all of it.

Ready for the first 250 thread here for a long, long time. Nothing like getting a good bitch on to fire one up, though.

Robert


I think it's a great idea.

There you go, I just canceled out your bitch and saved us 248 posts.

You're welcome. ;-)

  #5   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 2,349
Default Move over, SawStop ...

On 2017-11-21, Ed Pawlowski wrote:

Just think of the lives of depressed people it will save.


????

nb


  #7   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 493
Default Move over, SawStop ...

replying to Spalted Walt, Iggy wrote:
Wow, stops before you even get there. Not better than SawStop with the glove
requirement, but mighty impressive in the right usage.

--
for full context, visit https://www.homeownershub.com/woodwo...op-812400-.htm


  #8   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 11,640
Default Move over, SawStop ...

On 11/21/2017 9:44 AM, notbob wrote:
On 2017-11-21, Ed Pawlowski wrote:

Just think of the lives of depressed people it will save.


????

nb


Stops this sort of thing
https://www.documentingreality.com/f...bandsaw-11688/
  #9   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 3,287
Default Move over, SawStop ...

On Tuesday, November 21, 2017 at 5:59:00 AM UTC-6, DerbyDad03 wrote:

I think it's a great idea.

There you go, I just canceled out your bitch and saved us 248 posts.

You're welcome. ;-)


¡Gracias!

Robert
  #10   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 1
Default Move over, SawStop ...

replying to Spalted Walt, Ernesto wrote:
I recently read Max Tegmark's new book "Life 3.0" and have read others by Ray
Kurzweil and other physicists and engineers that touch on this subject. As a
result I know that everything in this video is accurate and disturbingly
likely. I don't believe those who are never content with the amount of power
and wealth they have will refrain from developing this technology, especially
because in their ignorance and arrogance they will mistakenly believe they
will be able to control it once they have it. As such, I don't think the
efforts to restrict and guide the development of AI will be successful in
keeping us safe from the dark side of strong AI.


--
for full context, visit https://www.homeownershub.com/woodwo...op-812400-.htm




  #12   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 14,845
Default Move over, SawStop ...

On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
wrote:

I have to say, I am sorry to see that.


technophobia [tek-nuh-foh-bee-uh]
noun -- abnormal fear of or anxiety about the effects of advanced technology.

https://www.youtube.com/embed/NzEeJc...policy=3&rel=0


I'm not sure how this will work out on usenet, but I'm going to present
a scenario and ask for an answer. After some amount of time, maybe 48 hours,
since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
another answer.

Trust me, this will eventually lead back to technology, AI and most
certainly, people.

In the following scenario you must assume that all options have been
considered and narrowed down to only 2. Please just accept that the
situation is as stated and that you only have 2 choices. If we get into
"Well, in a real life situation, you'd have to factor in this, that and
the other thing" we'll never get through this exercise.

Here goes:

5 workers are standing on the railroad tracks. A train is heading in their
direction. They have no escape route. If the train continues down the tracks,
it will most assuredly kill them all.

You are standing next to the lever that will switch the train to another
track before it reaches the workers. On the other track is a lone worker,
also with no escape route.

You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you pull the lever, only 1 worker will be killed.

Which option do you choose?
  #13   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 783
Default Move over, SawStop ...

"DerbyDad03" wrote in message
...

Here goes:


5 workers are standing on the railroad tracks. A train is heading in their
direction. They have no escape route. If the train continues down the
tracks,
it will most assuredly kill them all.


You are standing next to the lever that will switch the train to another
track before it reaches the workers. On the other track is a lone worker,
also with no escape route.


You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you pull the lever, only 1 worker will be killed.


Which option do you choose?


As my school bus driver explained nearly 50 years ago, the lesser of evils
in this case would be to kill the lone worker... In the case of the bus, it
would be to run over a kid on the side of the road rather than have a
head-on collision with a large truck.

While troubling as a kid, it made sense then and it still makes sense...

Just to throw this in: In real life a speeding train suddenly and
unknowingly switching tracks is not a good thing... hundreds could be killed
or injured if it were a passenger train!



  #14   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 12,155
Default Move over, SawStop ...

On 11/22/2017 6:52 AM, DerbyDad03 wrote:
On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
wrote:

I have to say, I am sorry to see that.


technophobia [tek-nuh-foh-bee-uh]
noun -- abnormal fear of or anxiety about the effects of advanced technology.

https://www.youtube.com/embed/NzEeJc...policy=3&rel=0


I'm not sure how this will work out on usenet, but I'm going to present
a scenario and ask for an answer. After some amount of time, maybe 48 hours,
since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
another answer.

Trust me, this will eventually lead back to technology, AI and most
certainly, people.

In the following scenario you must assume that all options have been
considered and narrowed down to only 2. Please just accept that the
situation is as stated and that you only have 2 choices. If we get into
"Well, in a real life situation, you'd have to factor in this, that and
the other thing" we'll never get through this exercise.

Here goes:

5 workers are standing on the railroad tracks. A train is heading in their
direction. They have no escape route. If the train continues down the tracks,
it will most assuredly kill them all.

You are standing next to the lever that will switch the train to another
track before it reaches the workers. On the other track is a lone worker,
also with no escape route.

You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you pull the lever, only 1 worker will be killed.

Which option do you choose?


Pull the lever. Choosing to do nothing is the choice to kill 5.
  #15   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 11,640
Default Move over, SawStop ...

On 11/22/2017 7:52 AM, DerbyDad03 wrote:
On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
wrote:

I have to say, I am sorry to see that.


technophobia [tek-nuh-foh-bee-uh]
noun -- abnormal fear of or anxiety about the effects of advanced technology.

https://www.youtube.com/embed/NzEeJc...policy=3&rel=0


I'm not sure how this will work out on usenet, but I'm going to present
a scenario and ask for an answer. After some amount of time, maybe 48 hours,
since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
another answer.

Trust me, this will eventually lead back to technology, AI and most
certainly, people.

In the following scenario you must assume that all options have been
considered and narrowed down to only 2. Please just accept that the
situation is as stated and that you only have 2 choices. If we get into
"Well, in a real life situation, you'd have to factor in this, that and
the other thing" we'll never get through this exercise.

Here goes:

5 workers are standing on the railroad tracks. A train is heading in their
direction. They have no escape route. If the train continues down the tracks,
it will most assuredly kill them all.

You are standing next to the lever that will switch the train to another
track before it reaches the workers. On the other track is a lone worker,
also with no escape route.

You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you pull the lever, only 1 worker will be killed.

Which option do you choose?


The short answer is to pull the switch and save as many lives as possible.

The long answer, it depends. Would you make that same decision if the
lone person was a family member? If the lone person was you? Five old
people or one child? Of course, AI would take all the emotions out of
the decision making. I think that is what you may be getting at.


  #16   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 643
Default Move over, SawStop ...

DerbyDad03 wrote:

On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
wrote:

I have to say, I am sorry to see that.


technophobia [tek-nuh-foh-bee-uh]
noun -- abnormal fear of or anxiety about the effects of advanced technology.

https://www.youtube.com/embed/NzEeJc...policy=3&rel=0


I'm not sure how this will work out on usenet, but I'm going to present
a scenario and ask for an answer. After some amount of time, maybe 48 hours,
since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
another answer.

Trust me, this will eventually lead back to technology, AI and most
certainly, people.

In the following scenario you must assume that all options have been
considered and narrowed down to only 2. Please just accept that the
situation is as stated and that you only have 2 choices. If we get into
"Well, in a real life situation, you'd have to factor in this, that and
the other thing" we'll never get through this exercise.

Here goes:

5 workers are standing on the railroad tracks. A train is heading in their
direction. They have no escape route. If the train continues down the tracks,
it will most assuredly kill them all.

You are standing next to the lever that will switch the train to another
track before it reaches the workers. On the other track is a lone worker,
also with no escape route.

You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you pull the lever, only 1 worker will be killed.

Which option do you choose?


I think humans have an aversion to harming others that needs to be
overridden by something (artificial intelligence). By rational
thinking we can sometimes override it -- by thinking about the people
we will save, for example. But for some people, that increase in
anxiety may be so overpowering that they don't make the utilitarian
choice, the choice for the greater good.

  #17   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 14,845
Default Move over, SawStop ...

On Wednesday, November 22, 2017 at 10:32:54 AM UTC-5, Ed Pawlowski wrote:
On 11/22/2017 7:52 AM, DerbyDad03 wrote:
On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
wrote:

I have to say, I am sorry to see that.

technophobia [tek-nuh-foh-bee-uh]
noun -- abnormal fear of or anxiety about the effects of advanced technology.

https://www.youtube.com/embed/NzEeJc...policy=3&rel=0


I'm not sure how this will work out on usenet, but I'm going to present
a scenario and ask for an answer. After some amount of time, maybe 48 hours,
since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
another answer.

Trust me, this will eventually lead back to technology, AI and most
certainly, people.

In the following scenario you must assume that all options have been
considered and narrowed down to only 2. Please just accept that the
situation is as stated and that you only have 2 choices. If we get into
"Well, in a real life situation, you'd have to factor in this, that and
the other thing" we'll never get through this exercise.

Here goes:

5 workers are standing on the railroad tracks. A train is heading in their
direction. They have no escape route. If the train continues down the tracks,
it will most assuredly kill them all.

You are standing next to the lever that will switch the train to another
track before it reaches the workers. On the other track is a lone worker,
also with no escape route.

You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you pull the lever, only 1 worker will be killed.

Which option do you choose?


The short answer is to pull the switch and save as many lives as possible.

The long answer, it depends. Would you make that same decision if the
lone person was a family member? If the lone person was you? Five old
people or one child? Of course, AI would take all the emotions out of
the decision making. I think that is what you may be getting at.


AI will not take *all* of the emotion out of it. More on that later.
  #18   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 643
Default Move over, SawStop ...

DerbyDad03 wrote:

On Wednesday, November 22, 2017 at 10:32:54 AM UTC-5, Ed Pawlowski wrote:
On 11/22/2017 7:52 AM, DerbyDad03 wrote:
On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
wrote:

I have to say, I am sorry to see that.

technophobia [tek-nuh-foh-bee-uh]
noun -- abnormal fear of or anxiety about the effects of advanced technology.

https://www.youtube.com/embed/NzEeJc...policy=3&rel=0

I'm not sure how this will work out on usenet, but I'm going to present
a scenario and ask for an answer. After some amount of time, maybe 48 hours,
since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
another answer.

Trust me, this will eventually lead back to technology, AI and most
certainly, people.

In the following scenario you must assume that all options have been
considered and narrowed down to only 2. Please just accept that the
situation is as stated and that you only have 2 choices. If we get into
"Well, in a real life situation, you'd have to factor in this, that and
the other thing" we'll never get through this exercise.

Here goes:

5 workers are standing on the railroad tracks. A train is heading in their
direction. They have no escape route. If the train continues down the tracks,
it will most assuredly kill them all.

You are standing next to the lever that will switch the train to another
track before it reaches the workers. On the other track is a lone worker,
also with no escape route.

You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you pull the lever, only 1 worker will be killed.

Which option do you choose?


The short answer is to pull the switch and save as many lives as possible.

The long answer, it depends. Would you make that same decision if the
lone person was a family member? If the lone person was you? Five old
people or one child? Of course, AI would take all the emotions out of
the decision making. I think that is what you may be getting at.


AI will not take *all* of the emotion out of it. More on that later.


When do we get to the 'pushing the fat guy off the bridge' part of
this moral dilemma quiz? ;')

https://www.youtube.com/embed/bOpf6KcWYyw?autoplay=1

  #19   Report Post  
Posted to rec.woodworking
dpb
external usenet poster
 
Posts: 12,595
Default Move over, SawStop ...

On 22-Nov-17 10:08 AM, DerbyDad03 wrote:
....

AI will not take *all* of the emotion out of it. More on that later.


Will depend upon what is encoded as "intelligence"; there's always a bias
of some sort; just what and how much depends on who's doing the doing.

--

  #20   Report Post  
Posted to rec.woodworking
dpb
external usenet poster
 
Posts: 12,595
Default Move over, SawStop ...

On 22-Nov-17 9:39 AM, Spalted Walt wrote:
....

I think humans have an aversion to harming others that needs to be
overridden by something (artificial intelligence). By rational
thinking we can sometimes override it -- by thinking about the people
we will save, for example. But for some people, that increase in
anxiety may be so overpowering that they don't make the utilitarian
choice, the choice for the greater good.


But if the one happens to be Einstein or similar vis-a-vis the five "just
ordinary folks," what's utilitarian?

--





  #21   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 14,845
Default Move over, SawStop ...

On Wednesday, November 22, 2017 at 11:21:24 AM UTC-5, Spalted Walt wrote:
DerbyDad03 wrote:

On Wednesday, November 22, 2017 at 10:32:54 AM UTC-5, Ed Pawlowski wrote:
On 11/22/2017 7:52 AM, DerbyDad03 wrote:
On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
wrote:

I have to say, I am sorry to see that.

technophobia [tek-nuh-foh-bee-uh]
noun -- abnormal fear of or anxiety about the effects of advanced technology.

https://www.youtube.com/embed/NzEeJc...policy=3&rel=0

I'm not sure how this will work out on usenet, but I'm going to present
a scenario and ask for an answer. After some amount of time, maybe 48 hours,
since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
another answer.

Trust me, this will eventually lead back to technology, AI and most
certainly, people.

In the following scenario you must assume that all options have been
considered and narrowed down to only 2. Please just accept that the
situation is as stated and that you only have 2 choices. If we get into
"Well, in a real life situation, you'd have to factor in this, that and
the other thing" we'll never get through this exercise.

Here goes:

5 workers are standing on the railroad tracks. A train is heading in their
direction. They have no escape route. If the train continues down the tracks,
it will most assuredly kill them all.

You are standing next to the lever that will switch the train to another
track before it reaches the workers. On the other track is a lone worker,
also with no escape route.

You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you pull the lever, only 1 worker will be killed.

Which option do you choose?


The short answer is to pull the switch and save as many lives as possible.

The long answer, it depends. Would you make that same decision if the
lone person was a family member? If the lone person was you? Five old
people or one child? Of course, AI would take all the emotions out of
the decision making. I think that is what you may be getting at.


AI will not take *all* of the emotion out of it. More on that later.


When do we get to the 'pushing the fat guy off the bridge' part of
this moral dilemma quiz? ;')

https://www.youtube.com/embed/bOpf6KcWYyw?autoplay=1


After we get enough "Pull the lever" answers. ;-)

Oh, well, no sense in waiting...

2nd scenario:

5 workers are standing on the railroad tracks. A train is heading in their
direction. They have no escape route. If the train continues down the tracks,
it will most assuredly kill them all.

You are standing on a bridge overlooking the tracks. Next to you is a fairly
large person. We'll save you some trouble and let that person be a stranger.

You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you push the stranger off the bridge, the train will kill
him but be stopped before the 5 workers are killed. (Don't question the
physics, just accept the outcome.)

Which option do you choose?
  #22   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 401
Default Move over, SawStop ...

On Wed, 22 Nov 2017 04:52:04 -0800 (PST), DerbyDad03
wrote:

On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
wrote:

I have to say, I am sorry to see that.


technophobia [tek-nuh-foh-bee-uh]
noun -- abnormal fear of or anxiety about the effects of advanced technology.

https://www.youtube.com/embed/NzEeJc...policy=3&rel=0


I'm not sure how this will work out on usenet, but I'm going to present
a scenario and ask for an answer. After some amount of time, maybe 48 hours,
since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
another answer.

Trust me, this will eventually lead back to technology, AI and most
certainly, people.

In the following scenario you must assume that all options have been
considered and narrowed down to only 2. Please just accept that the
situation is as stated and that you only have 2 choices. If we get into
"Well, in a real life situation, you'd have to factor in this, that and
the other thing" we'll never get through this exercise.

Here goes:

5 workers are standing on the railroad tracks. A train is heading in their
direction. They have no escape route. If the train continues down the tracks,
it will most assuredly kill them all.

You are standing next to the lever that will switch the train to another
track before it reaches the workers. On the other track is a lone worker,
also with no escape route.

You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you pull the lever, only 1 worker will be killed.

Which option do you choose?


The problem with this is that if I am the one pulling the switch I can
see more than what is being presented.

What if all the workers are wearing prison uniforms and busy working, and
pulling the switch kills the one who has ten kids versus the 5 who have
none?

Or if I see that the one alone never left the detail and the other five
are escapees, then I leave it as is. I'm the one with the shotgun.

However, based on just your statement alone, I would leave the
switch alone. It is locked, so I couldn't change it anyhow, and the
five are working where they should not be, as the train always runs on
schedule, so the five are not supposed to be there in the first place.
  #23   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 12,155
Default Move over, SawStop ...

On 11/22/2017 8:45 AM, Leon wrote:
On 11/22/2017 6:52 AM, DerbyDad03 wrote:
On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
wrote:

I have to say, I am sorry to see that.

technophobia [tek-nuh-foh-bee-uh]
noun -- abnormal fear of or anxiety about the effects of advanced
technology.

https://www.youtube.com/embed/NzEeJc...policy=3&rel=0


I'm not sure how this will work out on usenet, but I'm going to present
a scenario and ask for an answer. After some amount of time, maybe 48
hours,
since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
another answer.

Trust me, this will eventually lead back to technology, AI and most
certainly, people.

In the following scenario you must assume that all options have been
considered and narrowed down to only 2. Please just accept that the
situation is as stated and that you only have 2 choices. If we get into
"Well, in a real life situation, you'd have to factor in this, that and
the other thing" we'll never get through this exercise.

Here goes:

5 workers are standing on the railroad tracks. A train is heading in
their
direction. They have no escape route. If the train continues down the
tracks,
it will most assuredly kill them all.

You are standing next to the lever that will switch the train to another
track before it reaches the workers. On the other track is a lone worker,
also with no escape route.

You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you pull the lever, only 1 worker will be killed.

Which option do you choose?


Pull the lever, Choosing to do nothing is the choice to kill 5.


Well, I have mentioned this before, and it goes back to comments I have
made in the past about decision making. It seems the majority here use
emotional over rational thinking to come up with a decision.

It was said you only have two choices, and who these people are or might
be is not a consideration. You can't make a rational decision with
what-ifs. You only have two options, kill 5 or kill 1. Rational for
me says save 5. The rest of you who are bringing in scenarios beyond
what should be considered will waste too much time and end up with a
kill before you decide what to do.
  #24   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 11,640
Default Move over, SawStop ...

On 11/22/2017 1:20 PM, DerbyDad03 wrote:


Oh, well, no sense in waiting...

2nd scenario:

5 workers are standing on the railroad tracks. A train is heading in their
direction. They have no escape route. If the train continues down the tracks,
it will most assuredly kill them all.

You are standing on a bridge overlooking the tracks. Next to you is a fairly
large person. We'll save you some trouble and let that person be a stranger.

You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you push the stranger off the bridge, the train will kill
him but be stopped before the 5 workers are killed. (Don't question the
physics, just accept the outcome.)

Which option do you choose?


I don't know. It was easy to pull the switch as there was a bit of
disconnect there. Now it is up close and you are doing the pushing.
One alternative is to jump yourself, but I'd not do that. Don't think I
could push the guy either.

Next, are the answers you get to the question what would actually
happen? It is easy to say "sure, I'd push the guy and save the other
lives" but IRL, would that happen? I can sit at my computer and
rationalize but if the time came, emotion may take over.


  #25   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 401
Default Move over, SawStop ...

On Wed, 22 Nov 2017 12:45:11 -0600, Leon lcb11211@swbelldotnet
wrote:

On 11/22/2017 8:45 AM, Leon wrote:
On 11/22/2017 6:52 AM, DerbyDad03 wrote:
On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
wrote:

I have to say, I am sorry to see that.

technophobia [tek-nuh-foh-bee-uh]
noun -- abnormal fear of or anxiety about the effects of advanced
technology.

https://www.youtube.com/embed/NzEeJc...policy=3&rel=0


I'm not sure how this will work out on usenet, but I'm going to present
a scenario and ask for an answer. After some amount of time, maybe 48
hours,
since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
another answer.

Trust me, this will eventually lead back to technology, AI and most
certainly, people.

In the following scenario you must assume that all options have been
considered and narrowed down to only 2. Please just accept that the
situation is as stated and that you only have 2 choices. If we get into
"Well, in a real life situation, you'd have to factor in this, that and
the other thing" we'll never get through this exercise.

Here goes:

5 workers are standing on the railroad tracks. A train is heading in
their
direction. They have no escape route. If the train continues down the
tracks,
it will most assuredly kill them all.

You are standing next to the lever that will switch the train to another
track before it reaches the workers. On the other track is a lone worker,
also with no escape route.

You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you pull the lever, only 1 worker will be killed.

Which option do you choose?


Pull the lever, Choosing to do nothing is the choice to kill 5.


Well I have mentioned this before, and it goes back to comments I have
made in the past about decision making. It seems the majority here use
emotional over rational thinking to come up with a decision.

It was said you only have two choices and who these people are or might
be is not a consideration. You can't make a rational decision with
what-if's. You only have two options, kill 5 or kill 1. Rational for
me says save 5, for the rest of you that are bringing in scenarios past
what should be considered will waste too much time and you end up with a
kill before you decide what to do.


Rational thinking would state that trains run on a schedule, the
switch would be locked, and for better or worse the five were not
supposed to be there in the first place.

So how can I make a decision more rational than the scheduler's, even if
I had the key to the lock?


  #26   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 2,143
Default Move over, SawStop ...

On Tue, 21 Nov 2017 05:18:37 +0000
Spalted Walt wrote:

"BladeStop„˘ improves band saw safety, with the ability to stop a
bandsaw blade within a fraction of a second from when contact is made


why does sawstop have to move over? do they have a bandsaw product too?


seems like a good idea but i think they need to make a razor knife
that will not cut the operator and also wood that has no splinters










  #27   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 14,845
Default Move over, SawStop ...

On Wednesday, November 22, 2017 at 1:51:05 PM UTC-5, Ed Pawlowski wrote:
On 11/22/2017 1:20 PM, DerbyDad03 wrote:


Oh, well, no sense in waiting...

2nd scenario:

5 workers are standing on the railroad tracks. A train is heading in their
direction. They have no escape route. If the train continues down the tracks,
it will most assuredly kill them all.

You are standing on a bridge overlooking the tracks. Next to you is a fairly
large person. We'll save you some trouble and let that person be a stranger.

You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you push the stranger off the bridge, the train will kill
him but be stopped before the 5 workers are killed. (Don't question the
physics, just accept the outcome.)

Which option do you choose?


I don't know. It was easy to pull the switch as there was a bit of
disconnect there. Now it is up close and you are doing the pushing.
One alternative is to jump yourself, but I'd not do that. Don't think I
could push the guy either.


And therein lies the rub. The "disconnected" part.

Now, as promised, let's bring this back to technology, AI and most
certainly, people. Let's talk specifically about autonomous vehicles,
but please avoid the rabbit hole and realize that the concept applies
to just about anywhere AI is used and people are involved. Autonomous
vehicles (AV) are just one example.

Imagine it's X years from now and AV's are fairly common. Imagine that an AV
is traveling down the road, with its AI in complete control of the vehicle.
The driver is using one hand to get a cup of coffee from the built-in Keurig
machine and choosing a Pandora station with the other. He is completely
oblivious to what's happening outside of his vehicle.

Now imagine that a 4 year old runs out into the road. The AI uses all of the
data at its disposal (speed, distance, weather conditions, tire pressure,
etc.) and decides that it will not be able to stop in time. It checks the
input from its 360° cameras. Can't go right because of the line of parked
cars. They won't slow the vehicle enough to avoid hitting the kid. Using
facial recognition the AI determines that the mini-van on the left contains
5 elderly people. If the AV swerves left, it will push the mini-van into
oncoming traffic, directly into the path of an 18-wheeler. The AI communicates
with the 18-wheeler's AI, which responds and says "I have no place to go. If
you push the van into my lane, I'm taking out a bunch of Grandmas and
Grandpas."

Now the AI has to make basically the same decision as in my first scenario:
Kill 1 or kill 5. For the AI, it's as easy as it was for us, right?

"Bye Bye, kid. You should have stayed on the sidewalk."

No emotion, right? Right, not once the AI is programmed, not once the initial
AI rules have been written, not once the facial recognition database has
been built. The question is who wrote those rules? Who decided it's OK to
kill a young kid to save the lives of 5 rickety old folks? Oh wait, maybe
it's better to save the kid and let the old folks die. They've had a full
life. Who wrote that rule? In other words, someone(s) have to decide whose
life is worth more than another's. They are essentially standing on a bridge
deciding whether to push the guy or not. They have to write the rule. They
are either going to kill the kid or push the car into the other lane.

I, for one, don't think that I want to be sitting around that table. Having
to make the decisions would be one thing. Having to sit next to the person
that would push the guy off the bridge with a gleam in his eye would be a
totally different story.
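
To make the "who wrote those rules?" point concrete: any casualty-minimizing rule ultimately boils down to weights that some human picked. Here is a purely illustrative sketch -- the maneuver names, ages, and the 2.0 weight are invented for the example, not anything a real AV vendor has published:

```python
# Toy "choose the maneuver with the lowest expected cost" rule.
# Every number in here is a hypothetical value a human had to choose.

def choose_maneuver(options):
    """options maps a maneuver name -> list of (age, count) casualty estimates."""
    def cost(casualties):
        total = 0.0
        for age, count in casualties:
            # Someone, somewhere, writes this line: a child "costs" twice
            # what an adult does. That choice is the whole argument above.
            weight = 2.0 if age < 18 else 1.0
            total += weight * count
        return total
    return min(options, key=lambda maneuver: cost(options[maneuver]))

scenario = {
    "brake_straight": [(4, 1)],   # hit the kid
    "swerve_left":    [(80, 5)],  # push the van into the 18-wheeler's lane
}
print(choose_maneuver(scenario))  # -> brake_straight under these weights
```

Flip the child's weight to 6.0 and the same code swerves left instead -- which is exactly the call someone sitting around that table has to sign off on.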

  #28   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 401
Default Move over, SawStop ...

On Wed, 22 Nov 2017 12:36:05 -0800 (PST), DerbyDad03
wrote:

On Wednesday, November 22, 2017 at 1:51:05 PM UTC-5, Ed Pawlowski wrote:
On 11/22/2017 1:20 PM, DerbyDad03 wrote:


Oh, well, no sense in waiting...

2nd scenario:

5 workers are standing on the railroad tracks. A train is heading in their
direction. They have no escape route. If the train continues down the tracks,
it will most assuredly kill them all.

You are standing on a bridge overlooking the tracks. Next to you is a fairly
large person. We'll save you some trouble and let that person be a stranger.

You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you push the stranger off the bridge, the train will kill
him but be stopped before the 5 workers are killed. (Don't question the
physics, just accept the outcome.)

Which option do you choose?


I don't know. It was easy to pull the switch as there was a bit of
disconnect there. Now it is up close and you are doing the pushing.
One alternative is to jump yourself, but I'd not do that. Don't think I
could push the guy either.


And therein lies the rub. The "disconnected" part.

Now, as promised, let's bring this back to technology, AI and most
certainly, people. Let's talk specifically about autonomous vehicles,
but please avoid the rabbit hole and realize that the concept applies
to just about anywhere AI is used and people are involved. Autonomous
vehicles (AV) are just one example.

Imagine it's X years from now and AV's are fairly common. Imagine that an AV
is traveling down the road, with its AI in complete control of the vehicle.
The driver is using one hand to get a cup of coffee from the built-in Keurig
machine and choosing a Pandora station with the other. He is completely
oblivious to what's happening outside of his vehicle.

Now imagine that a 4 year old runs out into the road. The AI uses all of the
data at its disposal (speed, distance, weather conditions, tire pressure,
etc.) and decides that it will not be able to stop in time. It checks the
input from its 360° cameras. Can't go right because of the line of parked
cars. They won't slow the vehicle enough to avoid hitting the kid. Using
facial recognition the AI determines that the mini-van on the left contains
5 elderly people. If the AV swerves left, it will push the mini-van into
oncoming traffic, directly into the path of an 18-wheeler. The AI communicates
with the 18-wheeler's AI, which responds and says "I have no place to go. If
you push the van into my lane, I'm taking out a bunch of Grandmas and
Grandpas."

Now the AI has to make basically the same decision as in my first scenario:
Kill 1 or kill 5. For the AI, it's as easy as it was for us, right?

"Bye Bye, kid. You should have stayed on the sidewalk."

No emotion, right? Right, not once the AI is programmed, not once the initial
AI rules have been written, not once the facial recognition database has
been built. The question is who wrote those rules? Who decided it's OK to
kill a young kid to save the lives of 5 rickety old folks? Oh wait, maybe
it's better to save the kid and let the old folks die. They've had a full
life. Who wrote that rule? In other words, someone(s) have to decide whose
life is worth more than another's. They are essentially standing on a bridge
deciding whether to push the guy or not. They have to write the rule. They
are either going to kill the kid or push the car into the other lane.

I, for one, don't think that I want to be sitting around that table. Having
to make the decisions would be one thing. Having to sit next to the person
that would push the guy off the bridge with a gleam in his eye would be a
totally different story.


Then there is the added input of the possible wealth of those rickety
old people or the political power the office holder holds and the
disruption to the economy or social power.

Who lives and who dies would have to be state mandated or lawsuits
filed on the programmer would ensue.
  #29   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 643
Default Move over, SawStop ...

DerbyDad03 wrote:

On Wednesday, November 22, 2017 at 1:51:05 PM UTC-5, Ed Pawlowski wrote:
On 11/22/2017 1:20 PM, DerbyDad03 wrote:


Oh, well, no sense in waiting...

2nd scenario:

5 workers are standing on the railroad tracks. A train is heading in their
direction. They have no escape route. If the train continues down the tracks,
it will most assuredly kill them all.

You are standing on a bridge overlooking the tracks. Next to you is a fairly
large person. We'll save you some trouble and let that person be a stranger.

You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you push the stranger off the bridge, the train will kill
him but be stopped before the 5 workers are killed. (Don't question the
physics, just accept the outcome.)

Which option do you choose?


I don't know. It was easy to pull the switch as there was a bit of
disconnect there. Now it is up close and you are doing the pushing.
One alternative is to jump yourself, but I'd not do that. Don't think I
could push the guy either.


And therein lies the rub. The "disconnected" part.

Now, as promised, let's bring this back to technology, AI and most
certainly, people. Let's talk specifically about autonomous vehicles,
but please avoid the rabbit hole and realize that the concept applies
to just about anywhere AI is used and people are involved. Autonomous
vehicles (AV) are just one example.

Imagine it's X years from now and AV's are fairly common. Imagine that an AV
is traveling down the road, with its AI in complete control of the vehicle.
The driver is using one hand to get a cup of coffee from the built-in Keurig
machine and choosing a Pandora station with the other. He is completely
oblivious to what's happening outside of his vehicle.

Now imagine that a 4 year old runs out into the road. The AI uses all of the
data at its disposal (speed, distance, weather conditions, tire pressure,
etc.) and decides that it will not be able to stop in time. It checks the
input from its 360° cameras. Can't go right because of the line of parked
cars. They won't slow the vehicle enough to avoid hitting the kid. Using
facial recognition the AI determines that the mini-van on the left contains
5 elderly people. If the AV swerves left, it will push the mini-van into
oncoming traffic, directly into the path of an 18-wheeler. The AI communicates
with the 18-wheeler's AI, which responds and says "I have no place to go. If
you push the van into my lane, I'm taking out a bunch of Grandmas and
Grandpas."

Now the AI has to make basically the same decision as in my first scenario:
Kill 1 or kill 5. For the AI, it's as easy as it was for us, right?

"Bye Bye, kid. You should have stayed on the sidewalk."

No emotion, right? Right, not once the AI is programmed, not once the initial
AI rules have been written, not once the facial recognition database has
been built. The question is who wrote those rules? Who decided it's OK to
kill a young kid to save the lives of 5 rickety old folks? Oh wait, maybe
it's better to save the kid and let the old folks die. They've had a full
life. Who wrote that rule? In other words, someone(s) have to decide whose
life is worth more than another's. They are essentially standing on a bridge
deciding whether to push the guy or not. They have to write the rule. They
are either going to kill the kid or push the car into the other lane.

I, for one, don't think that I want to be sitting around that table. Having
to make the decisions would be one thing. Having to sit next to the person
that would push the guy off the bridge with a gleam in his eye would be a
totally different story.


https://pbs.twimg.com/media/Cp0D5oCWIAAxSUT.jpg

LOL!
  #30   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 643
Default Move over, SawStop ...

Electric Comet wrote:

why do sawstop have to move over do they have a bandsaw product too


YBTJ https://www.youtube.com/watch?v=W3PLwNccpXU


  #31   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 499
Default Move over, SawStop ...

On Wednesday, November 22, 2017 at 2:08:27 PM UTC-6, Electric Comet wrote:

but i think they need to make a razor knife
that will not cut the operator


Hell man. I'd like a razor SHAVER that cuts the whiskers but does not cut me.
  #32   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 524
Default Move over, SawStop ...

On Wed, 22 Nov 2017 21:06:38 +0000, Spalted Walt
wrote:

DerbyDad03 wrote:

On Wednesday, November 22, 2017 at 1:51:05 PM UTC-5, Ed Pawlowski wrote:
On 11/22/2017 1:20 PM, DerbyDad03 wrote:


Oh, well, no sense in waiting...

2nd scenario:

5 workers are standing on the railroad tracks. A train is heading in their
direction. They have no escape route. If the train continues down the tracks,
it will most assuredly kill them all.

You are standing on a bridge overlooking the tracks. Next to you is a fairly
large person. We'll save you some trouble and let that person be a stranger.

You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you push the stranger off the bridge, the train will kill
him but be stopped before the 5 workers are killed. (Don't question the
physics, just accept the outcome.)

Which option do you choose?


I don't know. It was easy to pull the switch as there was a bit of
disconnect there. Now it is up close and you are doing the pushing.
One alternative is to jump yourself, but I'd not do that. Don't think I
could push the guy either.


And therein lies the rub. The "disconnected" part.

Now, as promised, let's bring this back to technology, AI and most
certainly, people. Let's talk specifically about autonomous vehicles,
but please avoid the rabbit hole and realize that the concept applies
to just about anywhere AI is used and people are involved. Autonomous
vehicles (AV) are just one example.

Imagine it's X years from now and AV's are fairly common. Imagine that an AV
is traveling down the road, with its AI in complete control of the vehicle.
The driver is using one hand to get a cup of coffee from the built-in Keurig
machine and choosing a Pandora station with the other. He is completely
oblivious to what's happening outside of his vehicle.

Now imagine that a 4 year old runs out into the road. The AI uses all of the
data at its disposal (speed, distance, weather conditions, tire pressure,
etc.) and decides that it will not be able to stop in time. It checks the
input from its 360° cameras. Can't go right because of the line of parked
cars. They won't slow the vehicle enough to avoid hitting the kid. Using
facial recognition the AI determines that the mini-van on the left contains
5 elderly people. If the AV swerves left, it will push the mini-van into
oncoming traffic, directly into the path of an 18-wheeler. The AI communicates
with the 18-wheeler's AI, which responds and says "I have no place to go. If
you push the van into my lane, I'm taking out a bunch of Grandmas and
Grandpas."


The problem with this scenario is that it assumes that the AI has only
human eyes for sensors. It sees the four year old on radar near the
side of the road, detects a possible hazard, and slows down before
arriving near the four year old.
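
A rough stopping-distance check backs that up: if the sensors flag the kid well beyond this distance, the control unit can simply slow down in time. This is only a back-of-the-envelope sketch -- the friction coefficient and reaction latency below are illustrative guesses, not real AV parameters:

```python
# Reaction distance plus braking distance: d = v*t_react + v^2 / (2*mu*g).
# mu = 0.7 (dry pavement, guess); t_react = 0.1 s (machine latency, guess).

def stopping_distance_m(speed_mps, mu=0.7, reaction_s=0.1, g=9.81):
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * mu * g)

speed = 13.4  # roughly 30 mph, in meters per second
print(round(stopping_distance_m(speed), 1))  # ~14.4 m
```

So a hazard picked up on radar 50 m out leaves plenty of margin at residential speeds; the no-win scenario only arises when the kid appears inside that last ~15 m.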

Now the AI has to make basically the same decision as in my first scenario:
Kill 1 or kill 5. For the AI, it's as easy as it was for us, right?

"Bye Bye, kid. You should have stayed on the sidewalk."

No emotion, right? Right, not once the AI is programmed, not once the initial
AI rules have been written, not once the facial recognition database has
been built. The question is who wrote those rules? Who decided it's OK to
kill a young kid to save the lives of 5 rickety old folks? Oh wait, maybe
it's better to save the kid and let the old folks die. They've had a full
life. Who wrote that rule? In other words, someone(s) have to decide whose
life is worth more than another's. They are essentially standing on a bridge
deciding whether to push the guy or not. They have to write the rule. They
are either going to kill the kid or push the car into the other lane.

I, for one, don't think that I want to be sitting around that table. Having
to make the decisions would be one thing. Having to sit next to the person
that would push the guy off the bridge with a gleam in his eye would be a
totally different story.


https://pbs.twimg.com/media/Cp0D5oCWIAAxSUT.jpg

LOL!

  #33   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 12,155
Default Move over, SawStop ...

On 11/22/2017 1:17 PM, OFWW wrote:
On Wed, 22 Nov 2017 12:45:11 -0600, Leon lcb11211@swbelldotnet
wrote:

On 11/22/2017 8:45 AM, Leon wrote:
On 11/22/2017 6:52 AM, DerbyDad03 wrote:
On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
wrote:

I have to say, I am sorry to see that.

technophobia [tek-nuh-foh-bee-uh]
noun -- abnormal fear of or anxiety about the effects of advanced
technology.

https://www.youtube.com/embed/NzEeJc...policy=3&rel=0


I'm not sure how this will work out on usenet, but I'm going to present
a scenario and ask for an answer. After some amount of time, maybe 48
hours,
since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
another answer.

Trust me, this will eventually lead back to technology, AI and most
certainly, people.

In the following scenario you must assume that all options have been
considered and narrowed down to only 2. Please just accept that the
situation is as stated and that you only have 2 choices. If we get into
"Well, in a real life situation, you'd have to factor in this, that and
the other thing" we'll never get through this exercise.

Here goes:

5 workers are standing on the railroad tracks. A train is heading in
their
direction. They have no escape route. If the train continues down the
tracks,
it will most assuredly kill them all.

You are standing next to the lever that will switch the train to another
track before it reaches the workers. On the other track is a lone worker,
also with no escape route.

You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you pull the lever, only 1 worker will be killed.

Which option do you choose?


Pull the lever, Choosing to do nothing is the choice to kill 5.


Well I have mentioned this before, and it goes back to comments I have
made in the past about decision making. It seems the majority here use
emotional over rational thinking to come up with a decision.

It was said you only have two choices and who these people are or might
be is not a consideration. You can't make a rational decision with
what-if's. You only have two options, kill 5 or kill 1. Rational for
me says save 5. The rest of you who are bringing in scenarios beyond
what should be considered will waste too much time and end up with a
kill before you decide what to do.


Rational thinking would state that trains run on a schedule, the
switch would be locked, and for better or worse the five were not
supposed to be there in the first place.


No, you are adding "what-if's" to the given constraints. This is easy, you
either choose to move the switch or not. There is no other situation to
consider.


So how can I make a decision more rational than the scheduler, even if
I had the key to the lock.


Again you are adding what-if's.
  #34   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 2,833
Default Move over, SawStop ...

On Wed, 22 Nov 2017 08:45:53 -0500, "John Grossbohlin"
wrote:

"DerbyDad03" wrote in message
...

Here goes:


5 workers are standing on the railroad tracks. A train is heading in their
direction. They have no escape route. If the train continues down the
tracks,
it will most assuredly kill them all.


You are standing next to the lever that will switch the train to another
track before it reaches the workers. On the other track is a lone worker,
also with no escape route.


You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you pull the lever, only 1 worker will be killed.


Which option do you choose?


As my school bus driver explained nearly 50 years ago, the lesser of evils
in this case would be to kill the lone worker... In the case of the bus, it
would be to run over a kid on the side of the road rather than have a
head-on collision with a large truck.


However, you have now participated in a murder. How do you look in
orange?

While troubling as a kid it made sense then and it still makes sense...

Just to throw this in: In real life a speeding train suddenly and
unknowingly switching tracks is not a good thing... hundreds could be killed
or injured if it were a passenger train!


We're not talking about reality, here. ;-)
  #35   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 2,833
Default Move over, SawStop ...

On Wed, 22 Nov 2017 08:08:26 -0800 (PST), DerbyDad03
wrote:

On Wednesday, November 22, 2017 at 10:32:54 AM UTC-5, Ed Pawlowski wrote:
On 11/22/2017 7:52 AM, DerbyDad03 wrote:
On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
wrote:

I have to say, I am sorry to see that.

technophobia [tek-nuh-foh-bee-uh]
noun -- abnormal fear of or anxiety about the effects of advanced technology.

https://www.youtube.com/embed/NzEeJc...policy=3&rel=0

I'm not sure how this will work out on usenet, but I'm going to present
a scenario and ask for an answer. After some amount of time, maybe 48 hours,
since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
another answer.

Trust me, this will eventually lead back to technology, AI and most
certainly, people.

In the following scenario you must assume that all options have been
considered and narrowed down to only 2. Please just accept that the
situation is as stated and that you only have 2 choices. If we get into
"Well, in a real life situation, you'd have to factor in this, that and
the other thing" we'll never get through this exercise.

Here goes:

5 workers are standing on the railroad tracks. A train is heading in their
direction. They have no escape route. If the train continues down the tracks,
it will most assuredly kill them all.

You are standing next to the lever that will switch the train to another
track before it reaches the workers. On the other track is a lone worker,
also with no escape route.

You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you pull the lever, only 1 worker will be killed.

Which option do you choose?


The short answer is to pull the switch and save as many lives as possible.

The long answer, it depends. Would you make that same decision if the
lone person was a family member? If the lone person was you? Five old
people or one child? Of course, AI would take all the emotions out of
the decision making. I think that is what you may be getting at.


AI will not take *all* of the emotion out of it. More on that later.


Right. If it did, it wouldn't be "AI".


  #36   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 2,833
Default Move over, SawStop ...

On Wed, 22 Nov 2017 21:06:38 +0000, Spalted Walt
wrote:

DerbyDad03 wrote:

On Wednesday, November 22, 2017 at 1:51:05 PM UTC-5, Ed Pawlowski wrote:
On 11/22/2017 1:20 PM, DerbyDad03 wrote:


Oh, well, no sense in waiting...

2nd scenario:

5 workers are standing on the railroad tracks. A train is heading in their
direction. They have no escape route. If the train continues down the tracks,
it will most assuredly kill them all.

You are standing on a bridge overlooking the tracks. Next to you is a fairly
large person. We'll save you some trouble and let that person be a stranger.

You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you push the stranger off the bridge, the train will kill
him but be stopped before the 5 workers are killed. (Don't question the
physics, just accept the outcome.)

Which option do you choose?


I don't know. It was easy to pull the switch as there was a bit of
disconnect there. Now it is up close and you are doing the pushing.
One alternative is to jump yourself, but I'd not do that. Don't think I
could push the guy either.


And therein lies the rub. The "disconnected" part.

Now, as promised, let's bring this back to technology, AI and most
certainly, people. Let's talk specifically about autonomous vehicles,
but please avoid the rabbit hole and realize that the concept applies
to just about anywhere AI is used and people are involved. Autonomous
vehicles (AV) are just one example.

Imagine it's X years from now and AV's are fairly common. Imagine that an AV
is traveling down the road, with its AI in complete control of the vehicle.
The driver is using one hand to get a cup of coffee from the built-in Keurig
machine and choosing a Pandora station with the other. He is completely
oblivious to what's happening outside of his vehicle.

Now imagine that a 4 year old runs out into the road. The AI uses all of the
data at its disposal (speed, distance, weather conditions, tire pressure,
etc.) and decides that it will not be able to stop in time. It checks the
input from its 360° cameras. Can't go right because of the line of parked
cars. They won't slow the vehicle enough to avoid hitting the kid. Using
facial recognition the AI determines that the mini-van on the left contains
5 elderly people. If the AV swerves left, it will push the mini-van into
oncoming traffic, directly into the path of an 18-wheeler. The AI communicates
with the 18-wheeler's AI, which responds and says "I have no place to go. If
you push the van into my lane, I'm taking out a bunch of Grandmas and
Grandpas."

Now the AI has to make basically the same decision as in my first scenario:
Kill 1 or kill 5. For the AI, it's as easy as it was for us, right?

"Bye Bye, kid. You should have stayed on the sidewalk."

No emotion, right? Right, not once the AI is programmed, not once the initial
AI rules have been written, not once the facial recognition database has
been built. The question is who wrote those rules? Who decided it's OK to
kill a young kid to save the lives of 5 rickety old folks? Oh wait, maybe
it's better to save the kid and let the old folks die. They've had a full
life. Who wrote that rule? In other words, someone(s) have to decide whose
life is worth more than another's. They are essentially standing on a bridge
deciding whether to push the guy or not. They have to write the rule. They
are either going to kill the kid or push the car into the other lane.

I, for one, don't think that I want to be sitting around that table. Having
to make the decisions would be one thing. Having to sit next to the person
that would push the guy off the bridge with a gleam in his eye would be a
totally different story.


https://pbs.twimg.com/media/Cp0D5oCWIAAxSUT.jpg


ROTF

LOL!

  #37   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 14,845
Default Move over, SawStop ...

On Wednesday, November 22, 2017 at 6:38:28 PM UTC-5, J. Clarke wrote:
On Wed, 22 Nov 2017 21:06:38 +0000, Spalted Walt
wrote:

DerbyDad03 wrote:

On Wednesday, November 22, 2017 at 1:51:05 PM UTC-5, Ed Pawlowski wrote:
On 11/22/2017 1:20 PM, DerbyDad03 wrote:


Oh, well, no sense in waiting...

2nd scenario:

5 workers are standing on the railroad tracks. A train is heading in their
direction. They have no escape route. If the train continues down the tracks,
it will most assuredly kill them all.

You are standing on a bridge overlooking the tracks. Next to you is a fairly
large person. We'll save you some trouble and let that person be a stranger.

You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you push the stranger off the bridge, the train will kill
him but be stopped before the 5 workers are killed. (Don't question the
physics, just accept the outcome.)

Which option do you choose?


I don't know. It was easy to pull the switch as there was a bit of
disconnect there. Now it is up close and you are doing the pushing.
One alternative is to jump yourself, but I'd not do that. Don't think I
could push the guy either.


And therein lies the rub. The "disconnected" part.

Now, as promised, let's bring this back to technology, AI and most
certainly, people. Let's talk specifically about autonomous vehicles,
but please avoid the rabbit hole and realize that the concept applies
to just about anywhere AI is used and people are involved. Autonomous
vehicles (AV) are just one example.

Imagine it's X years from now and AV's are fairly common. Imagine that an AV
is traveling down the road, with its AI in complete control of the vehicle.
The driver is using one hand to get a cup of coffee from the built-in Keurig
machine and choosing a Pandora station with the other. He is completely
oblivious to what's happening outside of his vehicle.

Now imagine that a 4 year old runs out into the road. The AI uses all of the
data at its disposal (speed, distance, weather conditions, tire pressure,
etc.) and decides that it will not be able to stop in time. It checks the
input from its 360° cameras. Can't go right because of the line of parked
cars. They won't slow the vehicle enough to avoid hitting the kid. Using
facial recognition the AI determines that the mini-van on the left contains
5 elderly people. If the AV swerves left, it will push the mini-van into
oncoming traffic, directly into the path of an 18-wheeler. The AI communicates
with the 18-wheeler's AI, which responds and says "I have no place to go. If
you push the van into my lane, I'm taking out a bunch of Grandmas and
Grandpas."


The problem with this scenario is that it assumes that the AI has only
human eyes for sensors. It sees the four year old on radar near the
side of the road, detects a possible hazard, and slows down before
arriving near the four year old.


Gee, I don't know which of my 2 comments to post first...

No, the problem is that you did not read the description of the scenario
carefully enough. "Can't go right because of the line of parked
cars." Unless the radar is airborne or can see through metal, it won't
detect the kid in time.

But - and this is a big but - it ain't about the seeing of the kid or not.
You could shoot a thousand arrows through the scenario. It was never
meant to be perfect. The point is that there are still humans involved
who have to write the rules about which person or persons to kill.

People have to decide when it's OK to push the big guy off the bridge.

  #38   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 14,845
Default Move over, SawStop ...

On Wednesday, November 22, 2017 at 7:12:18 PM UTC-5, Leon wrote:
On 11/22/2017 1:17 PM, OFWW wrote:
On Wed, 22 Nov 2017 12:45:11 -0600, Leon lcb11211@swbelldotnet
wrote:

On 11/22/2017 8:45 AM, Leon wrote:
On 11/22/2017 6:52 AM, DerbyDad03 wrote:
On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
wrote:

I have to say, I am sorry to see that.

technophobia [tek-nuh-foh-bee-uh]
noun -- abnormal fear of or anxiety about the effects of advanced
technology.

https://www.youtube.com/embed/NzEeJc...policy=3&rel=0


I'm not sure how this will work out on usenet, but I'm going to present
a scenario and ask for an answer. After some amount of time, maybe 48
hours,
since tomorrow is Thanksgiving, I'll expand on that scenario and ask for
another answer.

Trust me, this will eventually lead back to technology, AI and most
certainly, people.

In the following scenario you must assume that all options have been
considered and narrowed down to only 2. Please just accept that the
situation is as stated and that you only have 2 choices. If we get into
"Well, in a real life situation, you'd have to factor in this, that and
the other thing" we'll never get through this exercise.

Here goes:

5 workers are standing on the railroad tracks. A train is heading in
their
direction. They have no escape route. If the train continues down the
tracks,
it will most assuredly kill them all.

You are standing next to the lever that will switch the train to another
track before it reaches the workers. On the other track is a lone worker,
also with no escape route.

You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you pull the lever, only 1 worker will be killed.

Which option do you choose?


Pull the lever, Choosing to do nothing is the choice to kill 5.

Well I have mentioned this before, and it goes back to comments I have
made in the past about decision making. It seems the majority here use
emotional over rational thinking to come up with a decision.

It was said you only have two choices and who these people are or might
be is not a consideration. You can't make a rational decision with
what-if's. You only have two options, kill 5 or kill 1. Rational for
me says save 5. The rest of you who are bringing in scenarios beyond
what should be considered will waste too much time and end up with a
kill before you decide what to do.


Rational thinking would state that trains run on a schedule, the
switch would be locked, and for better or worse the five were not
supposed to be there in the first place.


No, you are adding "what-ifs" to the given constraints. This is easy: you
either choose to move the switch or not. There is no other situation to
consider.


I tried, I really tried:

"Please just accept that the situation is as stated and that you only have
2 choices. If we get into "Well, in a real life situation, you'd have to
factor in this, that and the other thing" we'll never get through this
exercise."


So how can I make a decision more rational than the scheduler's, even if
I had the key to the lock?


Again, you are adding what-ifs.


  #39   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 12,155
Default Move over, SawStop ...

On 11/22/2017 9:47 PM, DerbyDad03 wrote:
On Wednesday, November 22, 2017 at 7:12:18 PM UTC-5, Leon wrote:
[deeply nested quotes snipped -- the scenario is quoted in full earlier in the thread]
I tried, I really tried:

"Please just accept that the situation is as stated and that you only have
2 choices. If we get into "Well, in a real life situation, you'd have to
factor in this, that and the other thing" we'll never get through this
exercise."




Precisely!
  #40   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 401
Default Move over, SawStop ...

On Wed, 22 Nov 2017 18:12:06 -0600, Leon lcb11211@swbelldotnet
wrote:

[deeply nested quotes snipped -- the scenario is quoted in full earlier in the thread]

Rational thinking would state that trains run on a schedule, the
switch would be locked, and for better or worse the five were not
supposed to be there in the first place.


No, you are adding "what-ifs" to the given constraints. This is easy: you
either choose to move the switch or not. There is no other situation to
consider.


So how can I make a decision more rational than the scheduler's, even if
I had the key to the lock?


Again, you are adding what-ifs.


I understand what you are saying, but I would consider them inherent
to the scenario.