Woodworking (rec.woodworking) Discussion forum covering all aspects of working with wood. All levels of expertise are encouraged to participate.

#41   Posted to rec.woodworking
Posts: 401
Move over, SawStop ...

On Wed, 22 Nov 2017 19:47:39 -0800 (PST), DerbyDad03 wrote:

On Wednesday, November 22, 2017 at 7:12:18 PM UTC-5, Leon wrote:
On 11/22/2017 1:17 PM, OFWW wrote:
On Wed, 22 Nov 2017 12:45:11 -0600, Leon lcb11211@swbelldotnet wrote:

On 11/22/2017 8:45 AM, Leon wrote:
On 11/22/2017 6:52 AM, DerbyDad03 wrote:
On Tuesday, November 21, 2017 at 10:04:43 AM UTC-5, Spalted Walt wrote:
wrote:

I have to say, I am sorry to see that.

* technophobia [tek-nuh-foh-bee-uh]
* noun -- abnormal fear of or anxiety about the effects of advanced
technology.

https://www.youtube.com/embed/NzEeJc...policy=3&rel=0


I'm not sure how this will work out on usenet, but I'm going to present a scenario and ask for an answer. After some amount of time, maybe 48 hours, since tomorrow is Thanksgiving, I'll expand on that scenario and ask for another answer.

Trust me, this will eventually lead back to technology, AI and most
certainly, people.

In the following scenario you must assume that all options have been
considered and narrowed down to only 2. Please just accept that the
situation is as stated and that you only have 2 choices. If we get into
"Well, in a real life situation, you'd have to factor in this, that and
the other thing" we'll never get through this exercise.

Here goes:

5 workers are standing on the railroad tracks. A train is heading in their direction. They have no escape route. If the train continues down the tracks, it will most assuredly kill them all.

You are standing next to the lever that will switch the train to another track before it reaches the workers. On the other track is a lone worker, also with no escape route.

You have 2, and only 2, options. If you do nothing, all 5 workers will be killed. If you pull the lever, only 1 worker will be killed.

Which option do you choose?


Pull the lever. Choosing to do nothing is the choice to kill 5.

Well, I have mentioned this before, and it goes back to comments I have made in the past about decision making. It seems the majority here use emotional over rational thinking to come up with a decision.

It was said you only have two choices, and who these people are or might be is not a consideration. You can't make a rational decision with what-if's. You only have two options, kill 5 or kill 1. Rational for me says save 5. For the rest of you bringing in scenarios beyond what should be considered, you will waste too much time and end up with a kill before you decide what to do.

Rational thinking would state that trains run on a schedule, the
switch would be locked, and for better or worse the five were not
supposed to be there in the first place.


No, you are adding "what-if's" to the given constraints. This is easy: you either choose to move the switch or not. There is no other situation to consider.


I tried, I really tried:

"Please just accept that the situation is as stated and that you only have
2 choices. If we get into "Well, in a real life situation, you'd have to
factor in this, that and the other thing" we'll never get through this
exercise."

Snip

OK, then I opt to let 'er fly and not interfere, since morals or values cannot be a part of the scenario without it being a "what if".
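
In code terms, Leon's position reduces to a pure casualty count. A minimal sketch (illustrative Python; the two options and their body counts come from the scenario as stated, while the function and its names are invented, not anyone's real code):

# Illustrative only: the "kill 5 or kill 1" choice as a pure
# casualty-count rule.

def choose(options):
    """Return the option with the fewest deaths."""
    return min(options, key=lambda o: o["deaths"])

options = [
    {"action": "do nothing",     "deaths": 5},
    {"action": "pull the lever", "deaths": 1},
]

print(choose(options)["action"])  # -> pull the lever

Note that OFWW's "let 'er fly" answer is the other rule entirely: it refuses to rank the options at all once the count is the only input.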
#42   Posted to rec.woodworking
Posts: 12,155
Move over, SawStop ...

On 11/23/2017 1:14 AM, OFWW wrote:
On Wed, 22 Nov 2017 18:12:06 -0600, Leon lcb11211@swbelldotnet wrote:

...snip...

Rational thinking would state that trains run on a schedule, the switch would be locked, and for better or worse the five were not supposed to be there in the first place.


No, you are adding "what-if's" to the given constraints. This is easy: you either choose to move the switch or not. There is no other situation to consider.


So how can I make a decision more rational than the scheduler, even if I had the key to the lock?


Again you are adding what-if's.


I understand what you are saying, but I would consider them inherent
to the scenario.


LOL. Yeah well blame Derby for leaving out details to consider. ;~)
#44   Posted to rec.woodworking
Posts: 14,845
Move over, SawStop ...

On Thursday, November 23, 2017 at 10:21:38 AM UTC-5, Leon wrote:
On 11/23/2017 1:14 AM, OFWW wrote:

...snip...

I understand what you are saying, but I would consider them inherent to the scenario.


LOL. Yeah well blame Derby for leaving out details to consider. ;~)


The train schedule, labor contract and key access process were not available at the time of my posting. Sorry.
#45   Posted to rec.woodworking
Posts: 401
Move over, SawStop ...

On Thu, 23 Nov 2017 07:36:23 -0800 (PST), DerbyDad03 wrote:

...snip...

LOL. Yeah well blame Derby for leaving out details to consider. ;~)


The train schedule, labor contract and key access process were not available at the time of my posting. Sorry.


Thinking along the lines of being the programmer for the code, I would have to conclude there is insufficient info, and let what happens happen until such time as there is more info.
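
OFWW's programmer's answer (treat missing information as a reason not to act) might be encoded roughly like this. A hypothetical sketch; the input names are invented for illustration:

# Sketch of "insufficient info -> let what happens happen":
# intervene only when every required input is present; otherwise
# take no action.

REQUIRED = ("track_clear", "switch_unlocked", "schedule_confirmed")

def decide(inputs):
    # Any missing input means intervention cannot be justified yet.
    if any(inputs.get(k) is None for k in REQUIRED):
        return "no action: insufficient information"
    if inputs["switch_unlocked"] and not inputs["track_clear"]:
        return "throw switch"
    return "no action"

print(decide({"track_clear": False}))  # -> no action: insufficient information

The design choice here is that inaction is the default, which is exactly the opposite default from the casualty-count rule sketched earlier.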


#46   Posted to rec.woodworking
Posts: 401
Move over, SawStop ...

On Wed, 22 Nov 2017 12:36:05 -0800 (PST), DerbyDad03 wrote:

On Wednesday, November 22, 2017 at 1:51:05 PM UTC-5, Ed Pawlowski wrote:
On 11/22/2017 1:20 PM, DerbyDad03 wrote:


Oh, well, no sense in waiting...

2nd scenario:

5 workers are standing on the railroad tracks. A train is heading in their
direction. They have no escape route. If the train continues down the tracks,
it will most assuredly kill them all.

You are standing on a bridge overlooking the tracks. Next to you is a fairly
large person. We'll save you some trouble and let that person be a stranger.

You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you push the stranger off the bridge, the train will kill
him but be stopped before the 5 workers are killed. (Don't question the
physics, just accept the outcome.)

Which option do you choose?


I don't know. It was easy to pull the switch as there was a bit of
disconnect there. Now it is up close and you are doing the pushing.
One alternative is to jump yourself, but I'd not do that. Don't think I
could push the guy either.


And therein lies the rub. The "disconnected" part.

Now, as promised, let's bring this back to technology, AI and most certainly, people. Let's talk specifically about autonomous vehicles, but please avoid the rabbit hole and realize that the concept applies to just about anywhere AI is used and people are involved. Autonomous vehicles (AV) are just one example.

Imagine it's X years from now and AVs are fairly common. Imagine that an AV is traveling down the road, with its AI in complete control of the vehicle. The driver is using one hand to get a cup of coffee from the built-in Keurig machine and choosing a Pandora station with the other. He is completely oblivious to what's happening outside of his vehicle.

Now imagine that a 4 year old runs out into the road. The AI uses all of the data at its disposal (speed, distance, weather conditions, tire pressure, etc.) and decides that it will not be able to stop in time. It checks the input from its 360° cameras. Can't go right because of the line of parked cars. They won't slow the vehicle enough to avoid hitting the kid. Using facial recognition the AI determines that the mini-van on the left contains 5 elderly people. If the AV swerves left, it will push the mini-van into oncoming traffic, directly into the path of an 18 wheeler. The AI communicates with the 18 wheeler's AI, which responds and says "I have no place to go. If you push the van into my lane, I'm taking out a bunch of Grandmas and Grandpas."

Now the AI has to make basically the same decision as in my first scenario: kill 1 or kill 5. For the AI, it's as easy as it was for us, right?

"Bye Bye, kid. You should have stayed on the sidewalk."

No emotion, right? Right, not once the AI is programmed, not once the initial AI rules have been written, not once the facial recognition database has been built. The question is who wrote those rules? Who decided it's OK to kill a young kid to save the lives of 5 rickety old folks? Oh wait, maybe it's better to save the kid and let the old folks die. They've had a full life. Who wrote that rule? In other words, someone(s) have to decide whose life is worth more than another's. They are essentially standing on a bridge deciding whether to push the guy or not. They have to write the rule. They are either going to kill the kid or push the car into the other lane.

I, for one, don't think that I want to be sitting around that table. Having to make the decisions would be one thing. Having to sit next to the person that would push the guy off the bridge with a gleam in his eye would be a totally different story.


I reconsidered my thoughts on this one as well.

The AV should do as it was designed to do, to the best of its capabilities: staying in the lane when there is no option to swerve safely.

There is already a legal reason for that, that being that the swerving driver assumes all the damages that result from his action, including manslaughter.
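
OFWW's stay-in-lane rule can be read as a default in the maneuver-selection logic: discard swerves that cannot be executed safely, and fall back to braking in the current lane. A sketch under those assumptions; the maneuver names, feasibility flags and casualty estimates are invented to mirror the scenario above, not any vendor's actual code:

# Enumerate the maneuvers from the scenario, drop the unsafe ones,
# and pick the lowest-casualty option among what remains. Braking in
# lane is always executable, so it is the effective default.

maneuvers = [
    {"name": "brake in lane", "safe_to_execute": True,  "est_casualties": 1},
    {"name": "swerve right",  "safe_to_execute": False, "est_casualties": 1},  # parked cars
    {"name": "swerve left",   "safe_to_execute": False, "est_casualties": 5},  # pushes van into truck
]

def plan(maneuvers):
    feasible = [m for m in maneuvers if m["safe_to_execute"]]
    if not feasible:
        return "brake in lane"  # the legally defensible fallback
    return min(feasible, key=lambda m: m["est_casualties"])["name"]

print(plan(maneuvers))  # -> brake in lane

The point of the sketch is DerbyDad03's, though: someone still has to decide what counts as "safe_to_execute" and how the casualties are estimated and weighed.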
#47   Posted to rec.woodworking
Posts: 14,845
Move over, SawStop ...

On Wednesday, November 22, 2017 at 6:38:28 PM UTC-5, J. Clarke wrote:

....snip...

The problem with this scenario is that it assumes that the AI has only
human eyes for sensors. It sees the four year old on radar near the
side of the road, detects a possible hazard, and slows down before
arriving near the four year old.


OK, have it your way.

"To truly guarantee a pedestrians safety, an AV would have to slow to a
crawl any time a pedestrian is walking nearby on a sidewalk, in case the
pedestrian decided to throw themselves in front of the vehicle," Noah
Goodall, a scientist with the Virginia Transportation Research Council,
wrote by email."

http://www.businessinsider.com/self-...o-kill-2016-12

....snip...
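
The "slow to a crawl" point has a simple kinematic basis: a vehicle can guarantee stopping short of a pedestrian d metres away only if v <= sqrt(2*a*d), where a is its maximum braking deceleration. A quick check with illustrative numbers (the deceleration figure is an assumption, not from the article, and reaction latency is ignored):

from math import sqrt

def max_safe_speed(d_metres, decel=7.0):  # ~7 m/s^2: hard braking on dry road
    """Highest speed (m/s) from which the car can stop within d metres."""
    return sqrt(2 * decel * d_metres)

for d in (2, 5, 10, 20):
    v = max_safe_speed(d)
    print(f"{d:>3} m to pedestrian -> {v * 3.6:5.1f} km/h max")

# 2 m of margin allows only ~19 km/h, which is indeed a crawl.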
#48   Posted to rec.woodworking
Posts: 14,845
Move over, SawStop ...

On Thursday, November 23, 2017 at 11:40:13 AM UTC-5, OFWW wrote:
On Wed, 22 Nov 2017 12:36:05 -0800 (PST), DerbyDad03 wrote:

...snip...


I reconsidered my thoughts on this one as well.

The AV should do as it was designed to do, to the best of its capabilities: staying in the lane when there is no option to swerve safely.

There is already a legal reason for that, that being that the swerving driver assumes all the damages that result from his action, including manslaughter.


So in the following brake-failure scenario, if the AV stays in lane and kills the four "highly rated" pedestrians there are no charges, but if it changes lanes and takes out the four slugs, jail time may ensue.

http://static6.businessinsider.com/i...1b008b5aea-800

Interesting.
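
The two positions in the thread differ only in how they cost an active maneuver. A utilitarian rule just counts bodies; OFWW's stay-in-lane rule in effect penalizes deaths the vehicle causes by swerving, which captures the legal exposure he describes. A hypothetical sketch (the casualty counts and the penalty weight are invented for contrast):

def utilitarian(options):
    # Pure body count: swerve whenever it saves more lives.
    return min(options, key=lambda o: o["deaths"])

def stay_in_lane_biased(options, action_penalty=10.0):
    # Deaths caused by an active maneuver are weighted up, reflecting
    # "the swerving driver assumes all the damages".
    def cost(o):
        return o["deaths"] * (action_penalty if o["is_maneuver"] else 1.0)
    return min(options, key=cost)

options = [
    {"action": "stay in lane", "deaths": 5, "is_maneuver": False},
    {"action": "change lanes", "deaths": 1, "is_maneuver": True},
]

print(utilitarian(options)["action"])          # -> change lanes
print(stay_in_lane_biased(options)["action"])  # -> stay in lane

Same inputs, opposite answers; whoever picks the weight is the person "standing on the bridge".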
#49   Posted to rec.woodworking
Posts: 401
Move over, SawStop ...

On Thu, 23 Nov 2017 11:53:47 -0800 (PST), DerbyDad03 wrote:

...snip...


So in the following brake-failure scenario, if the AV stays in lane and kills the four "highly rated" pedestrians there are no charges, but if it changes lanes and takes out the four slugs, jail time may ensue.

http://static6.businessinsider.com/i...1b008b5aea-800

Interesting.


Yes, and I've been warned that by taking evasive action I could cause someone else to respond likewise, and that I would be held accountable for what happened.
#50   Posted to rec.woodworking
Posts: 524
Move over, SawStop ...

On Thu, 23 Nov 2017 18:44:05 -0800, OFWW wrote:

...snip...

So in the following brake-failure scenario, if the AV stays in lane and kills the four "highly rated" pedestrians there are no charges, but if it changes lanes and takes out the four slugs, jail time may ensue.

http://static6.businessinsider.com/i...1b008b5aea-800

Interesting.


Yes, and I've been warned that by taking evasive action I could cause someone else to respond likewise, and that I would be held accountable for what happened.


I find the assumption that a fatality involving a robot car would lead to someone being jailed to be amusing. The people who assert this never identify the statute under which someone would be jailed or who, precisely, this someone might be. They seem to assume that because a human driving a car could be jailed for vehicular homicide or criminal negligence or some such, it is automatic that someone else would be jailed for the same offense--the trouble is that the car is legally an inanimate object, and we don't put inanimate objects in jail. So it gets down to proving that the occupant is negligent, which is a hard sell given that the government allowed the car to be licensed with the understanding that it would not be controlled by the occupant, or proving that the engineering team responsible for developing it was negligent, which, given that they can show the logic the thing used and no doubt provide legal justification for the decision it made, will be another tall order. So who goes to jail?




#51   Posted to rec.woodworking
Posts: 401
Move over, SawStop ...

On Thu, 23 Nov 2017 23:10:05 -0500, J. Clarke wrote:

...snip...


I find the assumption that a fatality involving a robot car would lead to someone being jailed to be amusing. ...snip... So who goes to jail?


You've taken it to the next level, into the real-world scenario and out of the programming stage.

Personally I would assume that anything designed would have to co-exist with real-world laws and responsibilities. Even the final owner could be held responsible. See the laws regarding experimental aircraft, hang gliders, etc.

But we should be sticking to the hypothetical example given us.

#52   Posted to rec.woodworking
Posts: 524
Move over, SawStop ...

On Thu, 23 Nov 2017 20:52:09 -0800, OFWW wrote:

...snip...


You've taken it to the next level, into the real-world scenario and out of the programming stage.

Personally I would assume that anything designed would have to co-exist with real-world laws and responsibilities. Even the final owner could be held responsible. See the laws regarding experimental aircraft, hang gliders, etc.


Experimental aircraft and hang gliders are controlled by a human. If they are involved in a fatal accident, the operator gets scrutinized. An autonomous car is not under human control; it is its own operator, and the occupant is a passenger.

We don't have "real world law" governing fatalities involving autonomous vehicles. The engineering would, initially (I hope), be based on existing case law involving human drivers and what the courts held that they should or should not have done in particular situations. But there won't be any actual law until either the legislatures write statutes or the courts issue rulings, and the latter won't happen until there are such vehicles in service in sufficient quantity to generate cases.

Rather than hang gliders and homebuilts, consider a Globalhawk that
hits an airliner. Assuming no negligence on the part of the airliner
crew, who do you go after? Do you go after the Air Force, Northrop
Grumman, Raytheon, or somebody else? And of what are they likely to
be found guilty?

But we should be sticking to the hypothetical example given us.


It was suggested that someone would go to jail. I still want to know
who and what crime they committed.
#53   Posted to rec.woodworking
Posts: 1,043
Move over, SawStop ...

On Fri, 24 Nov 2017 00:37:20 -0500, J. Clarke wrote:

It was suggested that someone would go to jail. I still want to know
who and what crime they committed.


Damages would be a tort case; as to who and what crime, that would be determined in court. Some DA looking for publicity would bring charges.
#54   Posted to rec.woodworking
Posts: 524
Move over, SawStop ...

On Thu, 23 Nov 2017 23:46:52 -0600, Markem wrote:

On Fri, 24 Nov 2017 00:37:20 -0500, J. Clarke wrote:

It was suggested that someone would go to jail. I still want to know
who and what crime they committed.


Damages would be a tort case;


So why do you mention damages?

as to who and what crime, that would be determined in court. Some DA looking for publicity would bring charges.


What charges? To bring charges there must have been a chargeable
offense, which means that a plausible argument can be made that some
law was violated. So what law do you believe would have been
violated? Or do you just _like_ being laughed out of court?
#55   Posted to rec.woodworking
Posts: 401
Move over, SawStop ...

On Fri, 24 Nov 2017 00:37:20 -0500, J. Clarke
wrote:

On Thu, 23 Nov 2017 20:52:09 -0800, OFWW
wrote:

On Thu, 23 Nov 2017 23:10:05 -0500, J. Clarke
wrote:

On Thu, 23 Nov 2017 18:44:05 -0800, OFWW
wrote:

On Thu, 23 Nov 2017 11:53:47 -0800 (PST), DerbyDad03
wrote:

On Thursday, November 23, 2017 at 11:40:13 AM UTC-5, OFWW wrote:
On Wed, 22 Nov 2017 12:36:05 -0800 (PST), DerbyDad03
wrote:

On Wednesday, November 22, 2017 at 1:51:05 PM UTC-5, Ed Pawlowski wrote:
On 11/22/2017 1:20 PM, DerbyDad03 wrote:


Oh, well, no sense in waiting...

2nd scenario:

5 workers are standing on the railroad tracks. A train is heading in their
direction. They have no escape route. If the train continues down the tracks,
it will most assuredly kill them all.

You are standing on a bridge overlooking the tracks. Next to you is a fairly
large person. We'll save you some trouble and let that person be a stranger.

You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you push the stranger off the bridge, the train will kill
him but be stopped before the 5 workers are killed. (Don't question the
physics, just accept the outcome.)

Which option do you choose?


I don't know. It was easy to pull the switch as there was a bit of
disconnect there. Now it is up close and you are doing the pushing.
One alternative is to jump yourself, but I'd not do that. Don't think I
could push the guy either.


And there in lies the rub. The "disconnected" part.

Now, as promised, let's bring this back to technology, AI and most
certainly, people. Let's talk specifically about autonomous vehicles,
but please avoid the rabbit hole and realize that the concept applies
to just about any where AI is used and people are involved. Autonomus
vehicles (AV) are just one example.

Imagine it's X years from now and AV's are fairly common. Imagine that an AV
is traveling down the road, with its AI in complete control of the vehicle.
The driver is using one hand get a cup of coffee from the built-in Keurig
machine and choosing a Pandora station with the other. He is completely
oblivious to what's happening outside of his vehicle.

Now imagine that a 4 year old runs out into the road. The AI uses all of the
data at its disposal (speed, distance, weather conditions, tire pressure,
etc.) and decides that it will not be able to stop in time. It checks the
input from its 360° cameras. Can't go right because of the line of parked
cars. They won't slow the vehicle enough to avoid hitting the kid. Using
facial recognition the AI determines that the mini-van on the left contains
5 elderly people. If the AV swerves left, it will push the mini-van into
oncoming traffic, directly into the path of a 18 wheeler. The AI communicates
with the 18 wheeler's AI who responds and says "I have no place to go. If
you push the van into my lane, I'm taking out a bunch of Grandmas and
Grandpas."

Now the AI has to make basically the same decision as in my first scenario:
Kill 1 or kill 5. For the AI, it's as easy as it was for us, right?

"Bye Bye, kid. You should have stayed on the sidewalk."

No emotion, right? Right, not once the AI is programmed, not once the initial
AI rules have been written, not once the facial recognition database has
been built. The question is who wrote those rules? Who decided it's OK to
kill a young kid to save the lives of 5 rickety old folks? Oh wait, maybe
it's better to save the kid and let the old folks die. They've had a full
life. Who wrote that rule? In other words, someone(s) have to decide whose
life is worth more than another's. They are essentially standing on a bridge
deciding whether to push the guy or not. They have to write the rule. They
are either going to kill the kid or push the car into the other lane.

I, for one, don't think that I want to be sitting around that table. Having
to make the decisions would be one thing. Having to sit next to the person
that would push the guy off the bridge with a gleam in his eye would be a
totally different story.

I reconsidered my thoughts on this one as well.

The AV should do as it was designed to do, to the best of its
capabilities. Staying in the lane when there is no option to swerve
safely.

There is already a legal reason for that, that being that the swerving
driver assumes all the damages that incur from his action, including
manslaughter.

So in the following brake failure scenario, if the AV stays in lane and
kills the four "highly rated" pedestrians there are no charges, but if
it changes lanes and takes out the 4 slugs, jail time may ensue.

http://static6.businessinsider.com/i...1b008b5aea-800

Interesting.

Yes, and I've been warned that by my taking evasive action I could
cause someone else to respond likewise and that I would he held
accountable for what happened.

I find the assumption that a fatality involving a robot car would lead
to someone being jailed to be amusing. The people who assert this
never identify the statute under which someone would be jailed or who,
precisely, this someone might be. They seem to assume that because a
human driving a car could be jailed for vehicular homicide or criminal
negligence or some such, it is automatic that someone else would be
jailed for the same offense--the trouble is that the car is legally an
inanimate object and we don't put inanimate objects in jail. So it
gets down to proving that the occupant is negligent, which is a hard
sell given that the government allowed the car to be licensed with the
understanding that it would not be controlled by the occupant, or
proving that the engineering team responsible for developing it was
negligent, which, given that they can show the logic the thing used and
no doubt provide legal justification for the decision it made, will be
another tall order. So who goes to jail?


You've taken it to the next level, into the real world scenario and out
of the programming stage.

Personally I would assume that anything designed would have to
co-exist with real world laws and responsibilities. Even the final
owner could be held responsible. See the laws regarding experimental
aircraft, hang gliders, etc.


Experimental aircraft and hang gliders are controlled by a human. If
they are involved in a fatal accident, the operator gets scrutinized.
An autonomous car is not under human control; it is its own operator,
and the occupant is a passenger.

We don't have "real world law" governing fatalities involving
autonomous vehicles. The engineering would, initially (I hope) be
based on existing case law involving human drivers and what the courts
held that they should or should not have done in particular
situations. But there won't be any actual law until either the
legislatures write statutes or the courts issue rulings, and the
latter won't happen until there are such vehicles in service in
sufficient quantity to generate cases.

Rather than hang gliders and homebuilts, consider a Globalhawk that
hits an airliner. Assuming no negligence on the part of the airliner
crew, who do you go after? Do you go after the Air Force, Northrop
Grumman, Raytheon, or somebody else? And of what are they likely to
be found guilty?

But we should be sticking to this hypothetical example given us.


It was suggested that someone would go to jail. I still want to know
who and what crime they committed.


The person who did not stay in their own lane and ended up committing
involuntary manslaughter.

In the case you bring up, the AV can currently be overridden at
any time by the occupant. There are already AV vehicles operating on
the streets.


Regarding your "who's at fault" scenario, just look at the court cases
against gun makers, as if guns kill people.

So can we now return to the question or, at the least, woodworking?


  #56   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 524
Default Move over, SawStop ...

On Thu, 23 Nov 2017 23:00:51 -0800, OFWW
wrote:

Snip


It was suggested that someone would go to jail. I still want to know
who and what crime they committed.


The person who did not stay in their own lane and ended up committing
involuntary manslaughter.


Are you arguing that an autonomous vehicle is a "person"? You
really don't seem to grasp the concept. Rather than a car with an
occupant, make it a car, say a robot taxicab, that is going somewhere
or other, unoccupied.

In the case you bring up, the AV can currently be overridden at
any time by the occupant. There are already AV vehicles operating on
the streets.


In what case that I bring up? Globalhawk doesn't _have_ an occupant.
(When people use words with which you are unfamiliar, you should at
least Google them before opining.) There are very few
autonomous vehicles, and currently they are for the most part operated
with a safety driver, but that is not anybody's long-term plan. Google
already has at least one demonstrator with no steering wheel or pedals,
and Uber is planning on using driverless cars in their ride-sharing
service--ultimately those would also have no controls accessible to
the passenger.

Regarding your "who's at fault" scenario, just look at the court cases
against gun makers, as if guns kill people.


I have not introduced a "who's at fault" scenario. I have asked what
law would be violated and who would be jailed. "At fault" decides who
pays damages, not who goes to jail. I am not discussing damages, I am
discussing JAIL. You do know what a jail is, do you not?

So can we now return to the question or, at the least, woodworking?


You're the one who started feeding the troll.
  #57   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 11,640
Default Move over, SawStop ...

On 11/24/2017 12:37 AM, J. Clarke wrote:


I find the assumption that a fatality involving a robot car would lead
to someone being jailed to be amusing. The people who assert this
never identify the statute under which someone would be jailed or who,
precisely, this someone might be. They seem to assume that because a
human driving a car could be jailed for vehicular homicide or criminal
negligence or some such, it is automatic that someone else would be
jailed for the same offense--the trouble is that the car is legally an
inanimate object and we don't put inanimate objects in jail.



They can impound your car in a drug bust. Maybe they will impound your
car for the offense. We'll build special long-term impound lots for
serious offenses, just disconnect the battery for lesser ones.


You've taken it to the next level, into the real world scenario and out
of the programming stage.

Personally I would assume that anything designed would have to
co-exist with real world laws and responsibilities. Even the final
owner could be held responsible. See the laws regarding experimental
aircraft, hang gliders, etc.


Experimental aircraft and hang gliders are controlled by a human. If
they are involved in a fatal accident, the operator gets scrutinized.
An autonomous car is not under human control; it is its own operator,
and the occupant is a passenger.


The programmer will be jailed. Or maybe they will stick a pin in a
Voodoo doll to punish him.



We don't have "real world law" governing fatalities involving
autonomous vehicles. The engineering would, initially (I hope) be
based on existing case law involving human drivers and what the courts
held that they should or should not have done in particular
situations. But there won't be any actual law until either the
legislatures write statutes or the courts issue rulings, and the
latter won't happen until there are such vehicles in service in
sufficient quantity to generate cases.


The sensible thing would be to gather the most brilliant minds of the TV
ambulance chasing lawyers and let them come up with guidelines for
liability. Can you think of anything more fair than that?
  #58   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 14,845
Default Move over, SawStop ...

On Friday, November 24, 2017 at 9:11:22 AM UTC-5, J. Clarke wrote:
Snip

You're the one who started feeding the troll.


...and then you joined the meal.
  #59   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 1,966
Default Move over, SawStop ...

On Nov 24, 2017, OFWW wrote
(in ):

On Fri, 24 Nov 2017 00:37:20 -0500, J. Clarke
wrote:

Snip

Rather than hang gliders and homebuilts, consider a Globalhawk that
hits an airliner. Assuming no negligence on the part of the airliner
crew, who do you go after? Do you go after the Air Force, Northrop
Grumman, Raytheon, or somebody else? And of what are they likely to
be found guilty?


GlobalHawk drones do have human pilots. Although they are not on board, they
are in control via a satellite link and can be thousands of miles away.

http://www.aviationtoday.com/2017/03/16/day-life-us-air-force-drone-pilot/

Joe Gwinn


  #60   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 14,845
Default Move over, SawStop ...

On Friday, November 24, 2017 at 10:10:01 AM UTC-5, Ed Pawlowski wrote:
Snip

The sensible thing would be to gather the most brilliant minds of the TV
ambulance chasing lawyers and let them come up with guidelines for
liability. Can you think of anything more fair than that?


Sure. Build a random number generator into the AI. The AI simply uses the
random number to decide who to take out at the time of the incident.

"Step right up, spin the wheel, take your chances."

It'll all be "hit or miss" so to speak.
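
And yes, the wheel spin is literally a one-liner. A throwaway sketch
(names hypothetical, obviously not a serious proposal):

    import random

    def spin_the_wheel(options):
        # "Fairness" by lottery: pick an outcome uniformly at random.
        return random.choice(sorted(options))

    print(spin_the_wheel({"stay_in_lane", "swerve_left"}))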


  #61   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 643
Default Move over, SawStop ...

DerbyDad03 wrote:

Snip


USATODAY: Self-driving cars programmed to decide who dies in a crash
https://www.usatoday.com/story/money...ash/891493001/

  #62   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 1,043
Default Move over, SawStop ...

On Fri, 24 Nov 2017 00:53:07 -0500, J. Clarke
wrote:

On Thu, 23 Nov 2017 23:46:52 -0600, Markem
wrote:

On Fri, 24 Nov 2017 00:37:20 -0500, J. Clarke
wrote:

It was suggested that someone would go to jail. I still want to know
who and what crime they committed.


Damages would be a tort case,


So why do you mention damages?

as to who and what crime, that would be
determined in court. Some DA looking for publicity would bring
charges.


What charges? To bring charges there must have been a chargeable
offense, which means that a plausible argument can be made that some
law was violated. So what law do you believe would have been
violated? Or do you just _like_ being laughed out of court?


I am not looking for political office. Ever heard the saying that a DA
can indict a ham sandwich?
  #63   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 1,043
Default Move over, SawStop ...

On Thu, 23 Nov 2017 23:00:51 -0800, OFWW
wrote:

So can we now return to the question or, at the least, woodworking?


Probably not
  #64   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 524
Default Move over, SawStop ...

On Fri, 24 Nov 2017 11:58:06 -0600, Markem
wrote:

On Fri, 24 Nov 2017 00:53:07 -0500, J. Clarke
wrote:

On Thu, 23 Nov 2017 23:46:52 -0600, Markem
wrote:

On Fri, 24 Nov 2017 00:37:20 -0500, J. Clarke
wrote:

It was suggested that someone would go to jail. I still want to know
who and what crime they committed.

Damages would be a tort case,


So why do you mention damages?

as to who and what crime, that would be
determined in court. Some DA looking for publicity would bring
charges.


What charges? To bring charges there must have been a chargeable
offense, which means that a plausible argument can be made that some
law was violated. So what law do you believe would have been
violated? Or do you just _like_ being laughed out of court?


I am not looking for political office. Ever heard the saying that a DA
can indict a ham sandwich?


But when was the last time a ham sandwich was imprisoned?
  #65   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 524
Default Move over, SawStop ...

On Fri, 24 Nov 2017 11:33:41 -0500, Joseph Gwinn
wrote:

Snip


GlobalHawk drones do have human pilots. Although they are not on board, they
are in control via a satellite link and can be thousands of miles away.

http://www.aviationtoday.com/2017/03/16/day-life-us-air-force-drone-pilot/


You are conflating Reaper and Globalhawk and totally missing the
point.


  #66   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 524
Default Move over, SawStop ...

On Fri, 24 Nov 2017 10:09:56 -0500, Ed Pawlowski wrote:

On 11/24/2017 12:37 AM, J. Clarke wrote:


Snip



They can impound your car in a drug bust. Maybe they will impound your
car for the offense. We'll build special long-term impound lots for
serious offenses, just disconnect the battery for lesser ones.


And of course that impoundment was ordered by a jury. You seem to not
understand the difference between seizure of property and jail. And
also totally miss the point.

Snip


The programmer will be jailed. Or maybe they will stick a pin in a
Voodoo doll to punish him.


Which programmer? This isn't some guy working alone in his basement.
Is it the guy who wrote the code, the one who wrote the spec he
implemented, the manager who approved it? And when has anyone ever
been jailed because a device on which he was an engineer worked as
designed and someone came to harm?

Snip


The sensible thing would be to gather the most brilliant minds of the TV
ambulance chasing lawyers and let them come up with guidelines for
liability. Can you think of anything more fair than that?


You might actually have something.
  #67   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 1,043
Default Move over, SawStop ...

On Fri, 24 Nov 2017 16:23:59 -0500, J. Clarke
wrote:

On Fri, 24 Nov 2017 11:58:06 -0600, Markem
wrote:

On Fri, 24 Nov 2017 00:53:07 -0500, J. Clarke
wrote:

On Thu, 23 Nov 2017 23:46:52 -0600, Markem
wrote:

On Fri, 24 Nov 2017 00:37:20 -0500, J. Clarke
wrote:

It was suggested that someone would go to jail. I still want to know
who and what crime they committed.

Damages would be a tort case,

So why do you mention damages?

as to who and what crime, that would be
determined in court. Some DA looking for publicity would bring
charges.

What charges? To bring charges there must have been a chargeable
offense, which means that a plausible argument can be made that some
law was violated. So what law do you believe would have been
violated? Or do you just _like_ being laughed out of court?


I am not looking for political office. Ever heard the saying that a DA
can indict a ham sandwich?


But when was the last time a ham sandwich was imprisoned?


It transformed into a penicillin-based mold and could no longer be
held.
  #68   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 1,966
Default Move over, SawStop ...

On Nov 24, 2017, J. Clarke wrote
(in ):

On Fri, 24 Nov 2017 11:33:41 -0500, Joseph Gwinn
wrote:

On Nov 24, 2017, OFWW wrote
(in ):

On Fri, 24 Nov 2017 00:37:20 -0500, J. Clarke
wrote:

On Thu, 23 Nov 2017 20:52:09 -0800,
wrote:

On Thu, 23 Nov 2017 23:10:05 -0500, J. Clarke
wrote:

On Thu, 23 Nov 2017 18:44:05 -0800,
wrote:

On Thu, 23 Nov 2017 11:53:47 -0800 (PST), DerbyDad03
wrote:

On Thursday, November 23, 2017 at 11:40:13 AM UTC-5, OFWW wrote:
On Wed, 22 Nov 2017 12:36:05 -0800 (PST), DerbyDad03
wrote:

On Wednesday, November 22, 2017 at 1:51:05 PM UTC-5, Ed Pawlowski
wrote:
On 11/22/2017 1:20 PM, DerbyDad03 wrote:


Oh, well, no sense in waiting...

2nd scenario:

5 workers are standing on the railroad tracks. A train is heading
in their
direction. They have no escape route. If the train continues down
the tracks,
it will most assuredly kill them all.

You are standing on a bridge overlooking the tracks. Next to you
is
a fairly
large person. We'll save you some trouble and let that person be a
stranger.

You have 2, and only 2, options. If you do nothing, all 5 workers
will
be killed. If you push the stranger off the bridge, the train will
kill
him but be stopped before the 5 workers are killed. (Don't
question
the
physics, just accept the outcome.)

Which option do you choose?

I don't know. It was easy to pull the switch as there was a bit of
disconnect there. Now it is up close and you are doing the pushing.
One alternative is to jump yourself, but I'd not do that. Don't
think I
could push the guy either.

And there in lies the rub. The "disconnected" part.

Now, as promised, let's bring this back to technology, AI and most
certainly, people. Let's talk specifically about autonomous
vehicles,
but please avoid the rabbit hole and realize that the concept
applies
to just about any where AI is used and people are involved.
Autonomus
vehicles (AV) are just one example.

Imagine it's X years from now and AV's are fairly common. Imagine
that an AV
is traveling down the road, with its AI in complete control of the
vehicle.
The driver is using one hand get a cup of coffee from the built-in
Keurig
machine and choosing a Pandora station with the other. He is
completely
oblivious to what's happening outside of his vehicle.

Now imagine that a 4 year old runs out into the road. The AI uses
all
of the
data at its disposal (speed, distance, weather conditions, tire
pressure,
etc.) and decides that it will not be able to stop in time. It
checks
the
input from its 360° cameras. Can't go right because of the line of
parked
cars. They won't slow the vehicle enough to avoid hitting the kid.
Using
facial recognition the AI determines that the mini-van on the left
contains
5 elderly people. If the AV swerves left, it will push the mini-van
into
oncoming traffic, directly into the path of a 18 wheeler. The AI
communicates
with the 18 wheeler's AI who responds and says "I have no place to
go. If
you push the van into my lane, I'm taking out a bunch of Grandmas
and
Grandpas."

Now the AI has to make basically the same decision as in my first
scenario:
Kill 1 or kill 5. For the AI, it's as easy as it was for us, right?

"Bye Bye, kid. You should have stayed on the sidewalk."

No emotion, right? Right, not once the AI is programmed, not once
the
initial
AI rules have been written, not once the facial recognition database
has
been built. The question is who wrote those rules? Who decided it's
OK to
kill a young kid to save the lives of 5 rickety old folks? Oh wait,
maybe
it's better to save the kid and let the old folks die. They've had a
full
life. Who wrote that rule? In other words, someone(s) have to decide
whose
life is worth more than another's. They are essentially standing on
a
bridge
deciding whether to push the guy or not. They have to write the
rule.
They
are either going to kill the kid or push the car into the other
lane.

I, for one, don't think that I want to be sitting around that table.
Having
to make the decisions would be one thing. Having to sit next to the
person
that would push the guy off the bridge with a gleam in his eye would
be a
totally different story.

I reconsidered my thoughts on this one as well.

The AV should do as it was designed to do, to the best of its
capabilities. Staying in the lane when there is no option to swerve
safely.

There is already a legal reason for that, that being that the
swerving
driver assumes all the damages that incur from his action, including
manslaughter.

So in the following brake failure scenario, if the AV stays in lane
and
kills the four "highly rated" pedestrians there are no charges, but if
it changes lanes and takes out the 4 slugs, jail time may ensue.

http://static6.businessinsider.com/i...1b008b5aea-800

Interesting.

Yes, and I've been warned that by my taking evasive action I could
cause someone else to respond likewise and that I would he held
accountable for what happened.

I find the assumption that a fatality involving a robot car would lead to someone being jailed to be amusing. The people who assert this never identify the statute under which someone would be jailed or who, precisely, this someone might be. They seem to assume that because a human driving a car could be jailed for vehicular homicide or criminal negligence or some such, it is automatic that someone else would be jailed for the same offense. The trouble is that the car is legally an inanimate object, and we don't put inanimate objects in jail. So it gets down to proving that the occupant is negligent, which is a hard sell given that the government allowed the car to be licensed with the understanding that it would not be controlled by the occupant, or proving that the engineering team responsible for developing it was negligent, which, given that they can show the logic the thing used and no doubt provide legal justification for the decision it made, will be another tall order. So who goes to jail?

You've taken it to the next level, into the real world scenario and out of the programming stage.

Personally I would assume that anything designed would have to
co-exist with real world laws and responsibilities. Even the final
owner could be held responsible. See the laws regarding experimental
aircraft, hang gliders, etc.

Experimental aircraft and hang gliders are controlled by a human. If they are involved in a fatal accident, the operator gets scrutinized. An autonomous car is not under human control; it is its own operator, and the occupant is a passenger.

We don't have "real world law" governing fatalities involving
autonomous vehicles. The engineering would, initially (I hope) be
based on existing case law involving human drivers and what the courts
held that they should or should not have done in particular
situations. But there won't be any actual law until either the
legislatures write statutes or the courts issue rulings, and the
latter won't happen until there are such vehicles in service in
sufficient quantity to generate cases.

Rather than hang gliders and homebuilts, consider a Globalhawk that
hits an airliner. Assuming no negligence on the part of the airliner
crew, who do you go after? Do you go after the Air Force, Northrop
Grumman, Raytheon, or somebody else? And of what are they likely to
be found guilty?


GlobalHawk drones do have human pilots. Although they are not on board, they are in control via a satellite link and can be thousands of miles away.

http://www.aviationtoday.com/2017/03...e-drone-pilot/


You are conflating Reaper and Globalhawk and totally missing the
point.


Could you be more specific? Exactly what is wrong?

Joe Gwinn

  #69   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 2,833
Default Move over, SawStop ...

On Thu, 23 Nov 2017 23:10:05 -0500, J. Clarke
wrote:

On Thu, 23 Nov 2017 18:44:05 -0800, OFWW
wrote:

On Thu, 23 Nov 2017 11:53:47 -0800 (PST), DerbyDad03
wrote:

On Thursday, November 23, 2017 at 11:40:13 AM UTC-5, OFWW wrote:
On Wed, 22 Nov 2017 12:36:05 -0800 (PST), DerbyDad03
wrote:

On Wednesday, November 22, 2017 at 1:51:05 PM UTC-5, Ed Pawlowski wrote:
On 11/22/2017 1:20 PM, DerbyDad03 wrote:


Oh, well, no sense in waiting...

2nd scenario:

5 workers are standing on the railroad tracks. A train is heading in their
direction. They have no escape route. If the train continues down the tracks,
it will most assuredly kill them all.

You are standing on a bridge overlooking the tracks. Next to you is a fairly
large person. We'll save you some trouble and let that person be a stranger.

You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you push the stranger off the bridge, the train will kill
him but be stopped before the 5 workers are killed. (Don't question the
physics, just accept the outcome.)

Which option do you choose?


I don't know. It was easy to pull the switch as there was a bit of
disconnect there. Now it is up close and you are doing the pushing.
One alternative is to jump yourself, but I'd not do that. Don't think I
could push the guy either.


And therein lies the rub. The "disconnected" part.

Now, as promised, let's bring this back to technology, AI and most certainly, people. Let's talk specifically about autonomous vehicles, but please avoid the rabbit hole and realize that the concept applies to just about anywhere AI is used and people are involved. Autonomous vehicles (AV) are just one example.

Imagine it's X years from now and AV's are fairly common. Imagine that an AV is traveling down the road, with its AI in complete control of the vehicle. The driver is using one hand to get a cup of coffee from the built-in Keurig machine and choosing a Pandora station with the other. He is completely oblivious to what's happening outside of his vehicle.

Snip

So who goes to jail?

The software developer who signed off on the failing module.
  #70   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 524
Default Move over, SawStop ...

On Fri, 24 Nov 2017 18:39:03 -0500, Joseph Gwinn
wrote:

Snip

GlobalHawk drones do have human pilots. Although they are not on board, they are in control via a satellite link and can be thousands of miles away.

http://www.aviationtoday.com/2017/03...e-drone-pilot/


You are conflating Reaper and Globalhawk and totally missing the
point.


Could you be more specific? Exactly what is wrong?


Reaper is a combat drone and is normally operated manually. We don't
let robots decide to shoot people yet. Globalhawk is a recon drone
and is normally autonomous. It has no weapons so shooting people is
not an issue. It can be operated manually and normally is in high
traffic areas for exactly the "what if it hits an airliner" reason,
but for most of its mission profile it is autonomous.

The article mentions Globalhawk in passing but then goes on to spend
the rest of its time discussing piloting Predator, which while still
in the inventory is ancestral to Reaper.
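
In other words the split is just a mode switch. A toy illustration of the policy as described (all names invented; this is not the actual flight software):

# Toy sketch of the autonomy split described above; purely illustrative.
def control_mode(aircraft, in_high_traffic_airspace, weapons_engaged=False):
    if weapons_engaged:
        return "manual"       # we don't let robots decide to shoot yet
    if aircraft == "globalhawk":
        # manual near airliners, autonomous for most of the mission
        return "manual" if in_high_traffic_airspace else "autonomous"
    return "manual"           # Reaper is normally operated manually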


  #71   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 401
Default Move over, SawStop ...

On Fri, 24 Nov 2017 09:11:16 -0500, J. Clarke
wrote:

But we should be sticking to this hypothetical example given us.

It was suggested that someone would go to jail. I still want to know
who and what crime they committed.


The person who did not stay in their own lane and ended up committing involuntary manslaughter.


Are you arguing that an autonomous vehicle is a "person"? You
really don't seem to grasp the concept. Rather than a car with an
occupant, make it a car, say a robot taxicab, that is going somewhere
or other unoccupied.


Is not a "who" a person? and yes, I realize the optimum goal is for a
stand alone vehicle independent of owner operator. The robotic taxicab
is already in test mode.

In the case you bring up the AV can be currently over ridden at
anytime by the occupant. There are already AV vehicles operating on
the streets.


In what case that I bring up?


The case of the option for switching lanes. Your questioning as to who can be at fault. I brought up the fact that experimental aircraft carry a lifetime liability going back to the original maker and designer. It was to answer just who was culpable.

Globalhawk doesn't _have_ an occupant.
(when people use words with which you are unfamiliar, you should at
least Google those words before opining). There are very few
autonomous vehicles and currently they are for the most part operated
with a safety driver, but that is not anybody's long-term plan. Google
already has at least one demonstrator with no steering wheel or pedals
and Uber is planning on using driverless cars in their ride sharing
service--ultimately those would also have no controls accessible to
the passenger.


There are a lot of autonomous vehicles running around; it just depends on where you are. Some have already been in real-world accidents. Uber was already testing vehicles but required a person in the car, just in case.

And yes, I knew Globalhawks do not have an occupant resident in the vehicle, but they are all monitored. As to vehicles, some have a safety driver and some do not. The Globalhawks have built-in sensory devices for alarms, etc., plus all the data from radar, satellites and so on. The full technology that they and the operators have is not disclosed. It is also a secret as to who all is operating the vehicles, so the bottom line would be the government operating them.

But thank you for your comment on my knowledge and how to fix it.

Regarding your "who's at fault" scenario, just look at the court cases against gun makers, as if guns kill people.


I have not introduced a "who's at fault" scenario. I have asked what law would be violated and who would be jailed. "At fault" decides who pays damages, not who goes to jail. I am not discussing damages, I am discussing JAIL. You do know what a jail is, do you not?


Sorry, my Internet connection is down and I cannot google it.

So can we now return to the question or, at the least, woodworking?


You're the one who started feeding the troll.


Sorry, I am not privy to the list, so I'll just make this my last post
on the subject, but I will read your reply.
  #72   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 401
Default Move over, SawStop ...

On Fri, 24 Nov 2017 11:33:41 -0500, Joseph Gwinn
wrote:

Rather than hang gliders and homebuilts, consider a Globalhawk that
hits an airliner. Assuming no negligence on the part of the airliner
crew, who do you go after? Do you go after the Air Force, Northrop
Grumman, Raytheon, or somebody else? And of what are they likely to
be found guilty?


GlobalHawk drones do have human pilots. Although they are not on board, they are in control via a satellite link and can be thousands of miles away.

http://www.aviationtoday.com/2017/03/16/day-life-us-air-force-drone-pilot/

Joe Gwinn


Yes, I know. Some versions can even be refueled in air.
  #73   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 1,648
Default Move over, SawStop ...

DerbyDad03 wrote in news:1bb19287-aa33-4417-b009-
:

Snip

You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you pull the lever, only 1 worker will be killed.

Which option do you choose?


Neither one. This is a classic example of the logical fallacy "false choice", the assumption
that the choices presented are the only ones available.

I'd choose instead to yell "move your ass, there's a train coming!".
  #74   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 524
Default Move over, SawStop ...

On Fri, 24 Nov 2017 18:09:51 -0800, OFWW
wrote:

On Fri, 24 Nov 2017 09:11:16 -0500, J. Clarke
wrote:

But we should be sticking to this hypothetical example given us.

It was suggested that someone would go to jail. I still want to know
who and what crime they committed.

The person who did not stay in their own lane and ended up committing involuntary manslaughter.


Are you arguing that an autonomous vehicle is a "person"? You
really don't seem to grasp the concept. Rather than a car with an
occupant, make it a car, say a robot taxicab, that is going somewhere
or other unoccupied.


Is not a "who" a person? and yes, I realize the optimum goal is for a
stand alone vehicle independent of owner operator. The robotic taxicab
is already in test mode.

In the case you bring up the AV can be currently over ridden at
anytime by the occupant. There are already AV vehicles operating on
the streets.


In what case that I bring up?


The case of the option for switching lanes. Your questioning as to who can be at fault. I brought up the fact that experimental aircraft carry a lifetime liability going back to the original maker and designer. It was to answer just who was culpable.


Check your attributions. There are many people participating in this
discussion. I did not bring up that case.

Globalhawk doesn't _have_ an occupant.
(when people use words with which you are unfamiliar, you should at
least Google those words before opining). There are very few
autonomous vehicles and currently they are for the most part operated
with a safety driver, but that is not anybody's long-term plan. Google
already has at least one demonstrator with no steering wheel or pedals
and Uber is planning on using driverless cars in their ride sharing
service--ultimately those would also have no controls accessible to
the passenger.


There are a lot


For certain rather small values of "lot".

of autonomous vehicles running around; it just depends on where you are. Some have already been in real-world accidents.


Yes, mostly other vehicles hitting them. I believe that there has been one Google car collision that was attributed to decision-making by the software. I'm ignoring the Tesla incident because that is not supposed to be a completely autonomous system.

Uber was already testing vehicles but required a person in the car, just in case.


I believe it is the government requiring the person.

And yes, I knew Globalhawks do not have an occupant resident in the vehicle, but they are all monitored.


What do you mean when you say "monitored"? A human has to detect that
there is a danger, turn off the robot, and take control. If the robot
does not know that there is a danger it is unlikely that the human
will have any more information than the robot does.

As to vehicles, some have a safety driver and some do not. The Globalhawks have built-in sensory devices for alarms, etc., plus all the data from radar, satellites and so on. The full technology that they and the operators have is not disclosed. It is also a secret as to who all is operating the vehicles, so the bottom line would be the government operating them.


So you're saying that the entire government would go to jail? Dream
on.

But thank you for your comment on my knowledge and how to fix it.

Regarding your "who's at fault" scenario, just look at the court cases against gun makers, as if guns kill people.


I have not introduced a "who's at fault" scenario. I have asked what law would be violated and who would be jailed. "At fault" decides who pays damages, not who goes to jail. I am not discussing damages, I am discussing JAIL. You do know what a jail is, do you not?


Sorry, my Internet connection is down and I cannot google it.


And yet you can post here.

So can we now return to the question or, at the least, woodworking?


You're the one who started feeding the troll.


Sorry, I am not privy to the list, so I'll just make this my last post
on the subject, but I will read your reply.


Hope springs eternal.
  #75   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 1,966
Default Move over, SawStop ...

On Nov 24, 2017, J. Clarke wrote
(in ):

Snip

You are conflating Reaper and Globalhawk and totally missing the
point.


Could you be more specific? Exactly what is wrong?


Reaper is a combat drone and is normally operated manually. We don't
let robots decide to shoot people yet. Globalhawk is a recon drone
and is normally autonomous. It has no weapons so shooting people is
not an issue. It can be operated manually and normally is in high
traffic areas for exactly the "what if it hits an airliner" reason,
but for most of its mission profile it is autonomous.


So GlobalHawk is autonomous in the same sense as an airliner under autopilot
during the long flight to and from the theater. It is the human pilot who is
responsible for the whole flight.

The article mentions Globalhawk in passing but then goes on to spend
the rest of its time discussing piloting Predator, which while still
in the inventory is ancestral to Reaper.


Yep.

Joe Gwinn



  #76   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 524
Default Move over, SawStop ...

On Sat, 25 Nov 2017 12:25:28 -0500, Joseph Gwinn
wrote:

Snip


So GlobalHawk is autonomous in the same sense as an airliner under autopilot
during the long flight to and from the theater. It is the human pilot who is
responsible for the whole flight.


How is any of this relevant to criminal offenses regarding autonomous
vehicles?
  #77   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 1,043
Default Move over, SawStop ...

On Sat, 25 Nov 2017 12:45:15 -0500, J. Clarke
wrote:

How is any of this relevant to criminal offenses regarding autonomous
vehicles?


Thread drift; the whole thing changes, and you still have not had your question answered. Oh well.
  #78   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 12,155
Default Move over, SawStop ...

On 11/24/2017 9:20 PM, Doug Miller wrote:
DerbyDad03 wrote in news:1bb19287-aa33-4417-b009-
:

Snip

You have 2, and only 2, options. If you do nothing, all 5 workers will
be killed. If you pull the lever, only 1 worker will be killed.

Which option do you choose?


Neither one. This is a classic example of the logical fallacy "false choice", the assumption
that the choices presented are the only ones available.

I'd choose instead to yell "move your ass, there's a train coming!".

;~) BUT that was not one of the options. You have 2, and only 2, options.
  #79   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 2,559
Default Move over, SawStop ...

Leon lcb11211@swbelldotnet wrote in
news

;~) BUT that was not one of the options. You have 2, and only 2,
options


There's always the third option... Probably the only good part of that
movie:
The only winning move is not to play.

Puckdropper
--
http://www.puckdroppersplace.us/rec.woodworking
A mini archive of some of rec.woodworking's best and worst!
  #80   Report Post  
Posted to rec.woodworking
external usenet poster
 
Posts: 14,845
Default Move over, SawStop ...

On Monday, November 27, 2017 at 1:35:23 AM UTC-5, wrote:
Leon lcb11211@swbelldotnet wrote in
news

;~) BUT that was not one of the options. You have 2, and only 2,
options


There's always the third option... Probably the only good part of that
movie:
The only winning move is not to play.


If you choose not to decide, you still have made a choice. "Freewill", Rush, 1980

Not playing is the same thing as Option 1, doing nothing. 5 workers die.
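
In code terms that is the whole point: the do-nothing branch is still a branch that somebody wrote. A two-line illustration (hypothetical, obviously):

# "Doing nothing" is not the absence of a rule; it is the default rule.
def lever(pull):
    return "1 worker dies" if pull else "5 workers die"

print(lever(False))  # choosing not to decide -> "5 workers die"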