Posted to rec.crafts.metalworking
From: Tim Wescott
Subject: PID calculations

On Tue, 23 Sep 2014 20:22:38 -0500, RogerN wrote:

> "Tim Wescott" wrote in message
> ...

>> On Mon, 22 Sep 2014 21:23:51 -0500, RogerN wrote:

>>> One way would be to have an array of error[xx] and take the
>>> difference in change over a period of time spanning several passes
>>> through the loop.


>> That's a BAD way to do it. It's what's known in the more esoteric
>> corners of the trade as a non-minimum phase filter, which basically
>> means that the filter has more delay than necessary for the amount of
>> amplitude shaping vs. frequency. Delay is a Bad Thing in a control
>> loop, and is to be avoided.


> I don't necessarily agree that it's a bad way, because you can look up
> how much time has passed since the last change. For example, with slow
> 100 ms loop times and a 1 degree change in temperature: at the instant
> of the change the rate is 1 degree per 100 ms, which would trigger a
> strong response; next loop it's 1 degree per 200 ms, then 1 degree per
> 300 ms, and so on, decaying every loop until there is further change.
> I think this would be similar to the difference in average error. I
> like the average error calculation you showed me because it does
> (nearly?) the same thing without the array of previous error data.


Well, if what you want to do is find a fascinating array of subtle and
not-so-subtle pitfalls, then by all means give it a whirl.

> I know a way to do this but I'm just looking for better ideas, perhaps
> more efficient memory usage.


>> You want a band-limited derivative term. The best way to do this is
>> pretty simple, too (calculate this as one hunk-of-code each sample
>> time):
>>
>> derivative = (current_error - average_error);
>>
>> average_error = average_error + k * (current_error - average_error);


snip

> Thanks Tim, I figured you'd have a better way of doing what I'm
> wanting to do. There is a huge relative speed difference between
> applications: for example, changing a room temperature at 1 degree per
> minute is fast, but a motor moving at 1 encoder count per minute would
> be very slow for a 500 line encoder. The calculations I have seen
> before are amount of change per time period; what I thought would be
> more useful, for low rates of change, is amount of time per change.


I forgot to mention that if you're really going to sample at 1kHz and
close a loop with a settling time of a minute, then your 'k' value is
going to be damned small -- like on the order of 1/6000.

This, in turn, means that you are exposed to your "average_error" term not
having enough precision to keep track of changes. If your current_error
is good to 12 bits, then your current_error * k must be good to 25 bits or
so. That's barely within the ability of a single-precision floating point
number to keep track. Even if you're only measuring temperatures with 8-
bit ADCs, you're beyond the ability to keep track with 16-bit integers.

With modern processors, at a 1kHz sampling rate, you can take care of this
by using double-precision floating point numbers (with something like a 53-
bit mantissa). If you're using an older processor, using 32-bit numbers
and playing lots of scaling games will work.

Speaking of subtle pitfalls.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com