Electronic Schematics (alt.binaries.schematics.electronic) A place to show and share your electronics schematic drawings. |
#1
Posted to sci.electronics.design,alt.binaries.schematics.electronic,sci.electronics.cad,sci.electronics.basics
Difference Equations
It's been too long ago, I can't remember how to reduce difference
equations. Suppose I have...

    f(N+1) = 0.97*f(N)

Now I know, off the seat of my pants (I think :-), that...

    f(N) = Const*(0.97^N)

But I can't remember how to rigorously get to that conclusion.

...Jim Thompson
--
| James E. Thompson, CTO                            |    mens     |
| Analog Innovations, Inc.                          |     et      |
| Analog/Mixed-Signal ASIC's and Discrete Systems   |    manus    |
| Phoenix, Arizona  85048    Skype: Contacts Only   |             |
| Voice:(480)460-2350  Fax: Available upon request  |  Brass Rat  |
| E-mail Icon at http://www.analog-innovations.com  |    1962     |

I love to cook with wine. Sometimes I even put it in the food.
#2
Difference Equations
On 2/5/2013 11:05 AM, Jim Thompson wrote:
> It's been too long ago, I can't remember how to reduce difference
> equations. Suppose I have... f(N+1) = 0.97*f(N) Now I know, off the
> seat of my pants (I think :-), that... f(N) = Const*(0.97^N) But I
> can't remember how to rigorously get to that conclusion.

Mathematical induction. It's true for N=0, by construction, and if it's
true for N, it's true for N+1 by the use of your recurrence relation.

Cheers

Phil Hobbs
--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics
160 North State Road #203
Briarcliff Manor NY 10510 USA
+1 845 480 2058

hobbs at electrooptical dot net
http://electrooptical.net
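Phil's induction argument is easy to check numerically: iterate the
recurrence and compare against the closed form at every step. A minimal
Python sketch (the initial value f0 = 1.0 is an illustrative choice,
not from the thread):

```python
# Iterate f(N+1) = 0.97*f(N) and compare with the closed form
# f(N) = f0 * 0.97**N at every step (base case N = 0 included).
f0 = 1.0               # illustrative initial value
f = f0
for n in range(50):
    assert abs(f - f0 * 0.97**n) < 1e-12
    f = 0.97 * f       # apply the recurrence: the inductive step
```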
#3
Difference Equations
On Tue, 05 Feb 2013 11:13:45 -0500, Phil Hobbs wrote:

> Mathematical induction. It's true for N=0, by construction, and if
> it's true for N, it's true for N+1 by the use of your recurrence
> relation.

"Truth" isn't "solution" ;-)

Isn't there some rigorous way to derive the "continuous" equation from
the difference equation?

...Jim Thompson
#4
Difference Equations
On 05/02/2013 16:21, Jim Thompson wrote:
> "Truth" isn't "solution" ;-)

He is right, though. You have it for N=0: F(0), then F(1) = 0.97*F(0),
and the recurrence relation gives you F(N+1). If paranoid you can check
it out formally by substituting your guess into it explicitly:

    F(N) = k*(0.97^N)
    F(N+1) = 0.97*k*(0.97^N) == k*(0.97^(N+1))    QED

which, as Phil says, is proof by induction. This one is just a dull
geometric sequence.

> Isn't there some rigorous way to derive the "continuous" equation
> from the difference equation?

Not necessarily - the finite difference equation is always a discrete
approximation to the true full differential equation. How well the
finite difference equation reflects reality, and what other
approximations you have made along the way, affects whether or not you
can get back to the continuous version unambiguously. Numerical
approximations to derivatives are notoriously fickle.

--
Regards,
Martin Brown
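For this particular first-order recurrence, though, the discrete and
continuous pictures do correspond exactly: f(N) = f0*r^N is the
unit-step sampling of f(t) = f0*e^(k*t) with k = ln(r). A short Python
sketch of that identity (the initial value f0 = 2.5 is an assumed
example):

```python
import math

# The map f(N+1) = r*f(N) is the unit-step sampling of the ODE
# f'(t) = k*f(t) with k = ln(r): f(t) = f0*exp(k*t) gives
# f(n) = f0*exp(ln(r)*n) = f0*r**n exactly.
r = 0.97
k = math.log(r)      # continuous-time rate constant (negative: decay)
f0 = 2.5             # illustrative initial value (assumed)
for n in range(20):
    assert abs(f0 * r**n - f0 * math.exp(k * n)) < 1e-9
```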
#5
Difference Equations
On 2/5/2013 11:05 AM, Jim Thompson wrote:
> It's been too long ago, I can't remember how to reduce difference
> equations. Suppose I have... f(N+1) = 0.97*f(N) Now I know, off the
> seat of my pants (I think :-), that... f(N) = Const*(0.97^N) But I
> can't remember how to rigorously get to that conclusion.

You've received good answers so far, but another way to do it is
through generating functions. The differential equation for the
"exponential generating function" of this recurrence can be written by
inspection; it's y' = 0.97*y. Assuming F(0) = 1, the solution to this
differential equation is y = e^(0.97*x). The solution to your
recurrence is then the coefficient of x^k/k! in the Taylor series
expansion of e^(0.97*x), or 0.97^k.
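This generating-function route can be traced in code: solve
y' = a*y, y(0) = 1 term by term as a power series, then multiply the
k-th coefficient by k! to recover the recurrence's solution. A minimal
Python sketch (a = 0.97 as in the thread; the power-series recursion is
standard, not from the post):

```python
import math

# Power-series solution of the EGF equation y' = a*y, y(0) = 1:
# with y = sum_k c_k * x**k, matching coefficients of x**k gives
# (k+1)*c_{k+1} = a*c_k.  The recurrence's solution F(k) is the
# coefficient of x**k/k!, i.e. c_k * k!.
a = 0.97
c = [1.0]                         # c_0 = y(0) = 1, i.e. F(0) = 1
for k in range(10):
    c.append(a * c[k] / (k + 1))  # (k+1)*c_{k+1} = a*c_k
for k in range(11):
    assert abs(c[k] * math.factorial(k) - a**k) < 1e-12
```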
#6
Difference Equations
On Tue, 05 Feb 2013 13:41:05 -0500, bitrex wrote:

> You've received good answers so far, but another way to do it is
> through generating functions. [...] The solution to your recurrence
> is then the coefficient of x^k/k! in the Taylor series expansion of
> e^(0.97*x), or 0.97^k.

Aha! Thanks, bitrex, you rang the right chime :-} I'm slow this
morning, that should have been obvious to me :-(

...Jim Thompson
#7
Difference Equations
On 2/5/2013 1:49 PM, Jim Thompson wrote:
> Aha! Thanks, bitrex, you rang the right chime :-} I'm slow this
> morning, that should have been obvious to me :-(

You can also "solve" it just by plugging in a trial solution r^n into
the equation - any linear recurrence with constant coefficients can be
solved this way. See:

http://en.wikipedia.org/wiki/Recurre...eneral_methods
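The trial-solution method is easiest to see on a second-order example.
Here is a Python sketch using the Fibonacci recurrence (chosen as an
illustration; it isn't from the thread): plugging r^n into
x[n+2] = x[n+1] + x[n] gives the characteristic equation r^2 = r + 1,
and the constants are fixed by the initial conditions.

```python
import math

# Trial solution r**n in x[n+2] = x[n+1] + x[n] gives the
# characteristic equation r**2 = r + 1, with roots phi and psi.
phi = (1 + math.sqrt(5)) / 2
psi = (1 - math.sqrt(5)) / 2
# With x[0] = 0, x[1] = 1 the closed form is A*(phi**n - psi**n),
# where A = 1/sqrt(5) (Binet's formula).
A = 1 / math.sqrt(5)
x = [0, 1]
for n in range(2, 20):
    x.append(x[n - 1] + x[n - 2])   # iterate the recurrence
for n in range(20):
    assert abs(A * (phi**n - psi**n) - x[n]) < 1e-9
```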
#8
Difference Equations
On Tue, 05 Feb 2013 09:21:34 -0700, Jim Thompson wrote:
> "Truth" isn't "solution" ;-)
>
> Isn't there some rigorous way to derive the "continuous" equation
> from the difference equation?

I'm not sure what you mean by "continuous" equation -- a difference
equation lives in discrete time, and has no direct relationship to a
differential equation in continuous time. (You can _make_ direct
relationships, but to do so you have to specify how sampling and
reconstruction are carried out.)

I was taught in differential equations class that when you dig right
down to the bottom of things, you solve a differential equation by
guessing an answer, then proving that you were right. Finding symbolic
solutions to difference equations is the same general procedure. So I
think that's as much rigor as you're going to find in this.

Yes, there are recipes for these solutions, but all of them (including
the use of the z transform) are just the results of the
guess-then-prove technique being carried out for a whole class of
difference equations, rather than any one specific one.

In the case of a linear, shift-invariant difference equation, the
recipe is to find the auxiliary polynomial of the difference equation,
and "posit" that the solution is the sum of A_k * d_k^N, where d_k is
the k'th root of the auxiliary polynomial and A_k is a constant that
goes with it.

In your case (assuming that f(N) is the N'th element in f, which is a
possibly infinitely long vector of values), your auxiliary polynomial
is z - 0.97, your "posited" value of d is just d = 0.97, and your
"posited" solution is

    f(N) = A * (0.97)^N

Note, too, that just like differential equations, linear difference
equations can be homogeneous or non-homogeneous (yours is homogeneous).
You can find the non-homogeneous solutions to difference equations the
same way as you do for differential equations.

And, finally, if you do this a lot with linear, shift-invariant
difference equations, it pays to learn how to use the z-transform. It
simplifies things almost as much as the Laplace transform does for
linear time-invariant differential equations, and makes all of this
folderol much easier to remember.

--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com
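The homogeneous/non-homogeneous remark can be illustrated on a driven
version of the same recurrence, f(N+1) = 0.97*f(N) + u (a leaky
accumulator; the drive u and initial value below are made-up examples):
the particular solution is the steady state u/(1 - 0.97), and the
homogeneous 0.97^N term carries the transient.

```python
# Driven ("non-homogeneous") version: f(N+1) = 0.97*f(N) + u.
# Particular solution: the steady state fstar = u/(1 - 0.97);
# general solution: f(N) = (f0 - fstar)*0.97**N + fstar.
u, f0 = 1.0, 0.0       # drive and initial value are assumed examples
fstar = u / (1 - 0.97)
f = f0
for n in range(200):
    assert abs(f - ((f0 - fstar) * 0.97**n + fstar)) < 1e-9
    f = 0.97 * f + u   # step the recurrence
```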
#9
Difference Equations
On Tue, 05 Feb 2013 13:16:39 -0600, Tim Wescott wrote:

> And, finally, if you do this a lot with linear, shift-invariant
> difference equations, it pays to learn how to use the z-transform.
> [...]

Yep, It's all coming back to me... guess a solution and prove it fits
:-(

...Jim Thompson
#10
Difference Equations
On Tue, 05 Feb 2013 12:18:20 -0700, Jim Thompson wrote:
> Yep, It's all coming back to me... guess a solution and prove it
> fits :-(

The guy who taught my second term of diff eqs clearly wanted us to
remember this for all time, because he made this a mantra. Nearly every
class meeting he would put some new form of differential equation up on
the board and say, "Now, how do we solve this differential equation?"
Then (because we didn't all shout it out in unison) he'd answer
himself: "We guess, and prove that we're right!"

That was 30 years ago. It's stuck with me, so I guess he met his goal
in my case.
#11
Difference Equations
On Tue, 05 Feb 2013 13:26:23 -0600, Tim Wescott wrote:

> That was 30 years ago. It's stuck with me, so I guess he met his
> goal in my case.

That's probably what surprised me most... start the class with, "There
are these few basic equations we can solve exactly..." :-(

...Jim Thompson
#12
Difference Equations
On 2/5/2013 2:16 PM, Tim Wescott wrote:
> I was taught in differential equations class that when you dig right
> down to the bottom of things, you solve a differential equation by
> guessing an answer, then proving that you were right. [...] So I
> think that's as much rigor as you're going to find in this.

There are exceptions to this rule, especially transform methods and
variation of parameters. It's much more true with PDEs, where e.g.
there are 7 types of coordinate systems in which the Laplacian
separates, including such intuitive ones as parabolic cylindrical
coordinates.

Cheers

Phil Hobbs
#13
Difference Equations
On Tue, 05 Feb 2013 23:32:24 -0500, Phil Hobbs wrote:
> There are exceptions to this rule, especially transform methods and
> variation of parameters. It's much more true with PDEs, where e.g.
> there are 7 types of coordinate systems in which the Laplacian
> separates, including such intuitive ones as parabolic cylindrical
> coordinates.

I would contend, though, that all those methods basically came about
either because someone guessed an answer to a general class of problems
and proved its validity for that class (e.g. the Laplace transform as
used for solving LTI differential equations), or because someone found
a trick (variation of parameters, separation of variables, etc.) that
applies to a number of different problems.

So in those cases the "guess and prove" has been done for you, but you
still have to make sure that your problem fits into the class of things
that the method solves for (and heaven knows, I've seen people trying
to cram nonlinear or time-varying system descriptions into the Laplace
transform, and being all confused when their nonsensical formulations
give nonsensical results).
#14
Difference Equations
On 05/02/2013 19:26, Tim Wescott wrote:
> That was 30 years ago. It's stuck with me, so I guess he met his
> goal in my case.

I still recall my first serious brush with exotic differential
equations in my freshman year, although for different reasons. The
lecturer was an internationally famous astronomer and a brilliant
analytical solver of novel differential equations. The snag was that he
could not teach for toffee, and merely demonstrated pulling the rabbit
out of the hat again and again and again. His coursework was
impossible.

This particular course was so incomprehensible that after a while the
best of us went to the other stream of maths, on group theory, since it
was so much easier and the exam questions were likely to be possible to
solve in finite time. Some from the other course, which then became
badly overcrowded, then went to the ODE course so they could sit down.

Indirectly he probably contributed to the increased use of computers to
solve differential equations as we later moved into research.
#15
Difference Equations
On Wed, 06 Feb 2013 07:56:07 +0000, Martin Brown wrote:

> Indirectly he probably contributed to the increased use of computers
> to solve differential equations as we later moved into research.

I recall struggling through Diffy-Q, which was widely regarded as a
"bag of tricks" subject... you just had to figure out which trick to
pull out of the bag for each special case. Then next term came Laplace
Transforms, where we learned that everything that needed doing could be
done with simple algebra via Laplace... all that Diffy-Q torture had
been just for background information and building character!

Bob Masta

DAQARTA v7.21
Data AcQuisition And Real-Time Analysis
www.daqarta.com
Scope, Spectrum, Spectrogram, Sound Level Meter
Frequency Counter, Pitch Track, Pitch-to-MIDI
FREE Signal Generator, DaqMusic generator
Science with your sound card!
#16
Posted to sci.electronics.design,alt.binaries.schematics.electronic,sci.electronics.cad,sci.electronics.basics
Difference Equations
On Wed, 06 Feb 2013 13:29:16 +0000, Bob Masta wrote:
On Wed, 06 Feb 2013 07:56:07 +0000, Martin Brown wrote: [snip quoted thread]

I recall struggling through Diffy-Q, which was widely regarded as a "bag of tricks" subject... you just had to figure out which trick to pull out of the bag for each special case. Then next term came Laplace Transforms, where we learned that everything that needed doing could be done with simple algebra via Laplace... all that Diffy-Q torture had been just for background information and building character!

Using the Laplace transform is just a really versatile trick for approximately solving real-world problems. Here's the reasoning that you should keep in mind whenever you use it (or the z transform):

1: All real-world systems are nonlinear and time-varying.
2: The Laplace transform only works on systems that are linear and time-invariant.
3: Thus, I cannot use the Laplace transform to solve this problem.
4: But I can come _close_ by linearizing this here system.
5: And now I can use Laplace!

This works great a whole lot of the time -- but it doesn't always, and engineers who are steeped in Laplace (or the z transform) tend to forget that they've skipped over steps 1-3, and did 4 without questioning why, or when it is not valid to do so.

--
Tim Wescott
Control system and signal processing consulting
www.wescottdesign.com
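Tim's five steps can be seen in miniature in a sketch like the following (my own illustration, not code from the thread): the nonlinear system dx/dt = -sin(x) is linearized about x = 0 to dx/dt = -x, whose Laplace-domain solution X(s) = x0/(s+1) inverts to x(t) = x0*exp(-t). The linear answer is only trustworthy while the deviation stays small.

```python
import math

def simulate_nonlinear(x0, t_end=2.0, dt=1e-4):
    """Euler-integrate the true nonlinear system dx/dt = -sin(x)."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * (-math.sin(x))
    return x

def linearized(x0, t=2.0):
    """Solution of the linearized model dx/dt = -x (via Laplace: X(s) = x0/(s+1))."""
    return x0 * math.exp(-t)

# Small deviation: step 4 is a fine approximation.
assert abs(simulate_nonlinear(0.1) - linearized(0.1)) < 1e-3

# Large deviation: steps 1-3 bite back, and the linear answer is way off.
assert abs(simulate_nonlinear(3.0) - linearized(3.0)) > 0.5
```

The same caveat applies verbatim to the z transform and difference equations: the transform solves the linearized model, not the system.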
#17
Posted to sci.electronics.design,alt.binaries.schematics.electronic,sci.electronics.cad,sci.electronics.basics
Difference Equations
On 2/6/2013 12:59 AM, Tim Wescott wrote:
On Tue, 05 Feb 2013 23:32:24 -0500, Phil Hobbs wrote: On 2/5/2013 2:16 PM, Tim Wescott wrote: On Tue, 05 Feb 2013 09:21:34 -0700, Jim Thompson wrote: On Tue, 05 Feb 2013 11:13:45 -0500, Phil Hobbs wrote: [snip: induction proof that f(N) = Const*(0.97^N) solves f(N+1) = 0.97*f(N)]

"Truth" isn't "solution" ;-) Isn't there some rigorous way to derive the "continuous" equation from the difference equation?

I'm not sure what you mean by "continuous" equation -- a difference equation lives in discrete time, and has no direct relationship to a differential equation in continuous time. (You can _make_ direct relationships, but to do so you have to specify how sampling and reconstruction are carried out.) I was taught in differential equations class that when you dig right down to the bottom of things, you solve a differential equation by guessing an answer, then proving that you were right. Finding symbolic solutions to difference equations is the same general procedure. So I think that's as much rigor as you're going to find in this.

There are exceptions to this rule, especially transform methods and variation of parameters. It's much more true with PDEs, where e.g. there are 7 types of coordinate systems in which the Laplacian separates, including such intuitive ones as parabolic cylindrical coordinates.

I would contend, though, that all those methods basically came about either because someone guessed an answer to a general class of problems and proved its validity for that class (e.g. the Laplace transform as used for solving LTI differential equations), or because someone found a trick (variation of parameters, separation of variables, etc.) that applies to a number of different problems. So in those cases the "guess and prove" has been done for you, but you still have to make sure that your problem fits into the class of things that the method solves for (and heaven knows, I've seen people trying to cram nonlinear or time-varying system descriptions into the Laplace transform, and been all confused when their nonsensical formulations give nonsensical results).

Well, the question of how mathematical theorems are discovered is an interesting one--George Polya wrote a series of books on the subject back in (iirc) the 1950s, of which the best known is the elementary one, "How To Solve It". Great book.

Cheers

Phil Hobbs
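For the concrete recurrence that started the thread, the guess-and-prove step is easy to check numerically (a throwaway sketch of my own, not anyone's posted code): iterate f(N+1) = 0.97*f(N) and compare against the guessed closed form f(N) = Const*(0.97^N).

```python
def f_recursive(n, f0=1.0, a=0.97):
    """Iterate the difference equation f(N+1) = a*f(N) from f(0) = f0."""
    f = f0
    for _ in range(n):
        f *= a
    return f

def f_closed(n, f0=1.0, a=0.97):
    """Guessed closed-form solution f(N) = f0 * a**N."""
    return f0 * a**n

# Base case N = 0, plus agreement at every later N: the induction, numerically.
assert f_recursive(0) == f_closed(0)
assert all(abs(f_recursive(n) - f_closed(n)) < 1e-12 for n in range(1, 200))
```

Here Const is pinned down by the initial condition: Const = f(0).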
#18
Posted to sci.electronics.design,alt.binaries.schematics.electronic,sci.electronics.cad,sci.electronics.basics
Difference Equations
On 2/6/2013 10:51 AM, Tim Wescott wrote:
On Wed, 06 Feb 2013 13:29:16 +0000, Bob Masta wrote: On Wed, 06 Feb 2013 07:56:07 +0000, Martin Brown wrote: [snip quoted thread]

Using the Laplace transform is just a really versatile trick for approximately solving real-world problems. Here's the reasoning that you should keep in mind whenever you use it (or the z transform): [snip the five steps] This works great a whole lot of the time -- but it doesn't always, and engineers who are steeped in Laplace (or the z transform) tend to forget that they've skipped over steps 1-3, and did 4 without questioning why, or when it is not valid to do so.

You can often use it to derive a perturbation series for those times when linearizing isn't quite enough. I highly recommend Bender & Orszag's "Advanced Mathematical Methods for Scientists and Engineers", which has all that sort of stuff--boundary layer theory, steepest descents, and a whole lot of other asymptotic methods. Arfken's applied math book is good as well--it used to be the standard textbook for physics undergraduates.

It's often possible to derive a series solution, which may or may not converge, and then convert it to a continued fraction or apply convergence tricks such as Shanks's algorithm or Richardson extrapolation. When those work, which isn't always, they can be practically supernatural.

Cheers

Phil Hobbs
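Richardson extrapolation really can look supernatural when the error model fits. A hedged sketch (mine, not Phil's): the partial sums of sum 1/k^2 -> pi^2/6 have error approximately -1/n, so the combination 2*S(2n) - S(n) cancels that leading error term.

```python
import math

def partial_sum(n):
    """Partial sum S(n) of the series sum_{k=1..n} 1/k^2 -> pi^2/6."""
    return sum(1.0 / (k * k) for k in range(1, n + 1))

limit = math.pi ** 2 / 6

plain = partial_sum(50)                              # error ~ 1/50
richardson = 2 * partial_sum(100) - partial_sum(50)  # leading 1/n term cancelled

assert abs(plain - limit) > 1e-2
assert abs(richardson - limit) < 1e-3
```

One extrapolation step buys roughly two extra digits here; repeating the trick on a table of S(n), S(2n), S(4n), ... is the usual production form.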
#19
Posted to sci.electronics.design,alt.binaries.schematics.electronic,sci.electronics.cad,sci.electronics.basics
Difference Equations
On 2/6/2013 4:51 PM, Tim Wescott wrote:
On Wed, 06 Feb 2013 13:29:16 +0000, Bob Masta wrote: On Wed, 06 Feb 2013 07:56:07 +0000, Martin Brown wrote: [snip quoted thread]

Using the Laplace transform is just a really versatile trick for approximately solving real-world problems. Here's the reasoning that you should keep in mind whenever you use it (or the z transform): [snip the five steps] This works great a whole lot of the time -- but it doesn't always, and engineers who are steeped in Laplace (or the z transform) tend to forget that they've skipped over steps 1-3, and did 4 without questioning why, or when it is not valid to do so.

This discussion brings back some memories. I hope I don't have bad dreams tonight. After Laplace we were tortured with Hilbert transformations. After university I never had to solve a differential or integral equation.
#20
Posted to sci.electronics.design,alt.binaries.schematics.electronic,sci.electronics.cad,sci.electronics.basics
Difference Equations
On Wed, 06 Feb 2013 23:14:04 +0100, tuinkabouter
wrote: On 2/6/2013 4:51 PM, Tim Wescott wrote: [snip quoted thread]

This discussion brings back some memories. I hope I don't have bad dreams tonight. After Laplace we were tortured with Hilbert transformations. After university I never had to solve a differential or integral equation.

Pure Laplace is a PITA, but the Heaviside short-hand version is a beautiful engineering tool.

...Jim Thompson
#21
Posted to sci.electronics.design,alt.binaries.schematics.electronic,sci.electronics.cad,sci.electronics.basics
Difference Equations
On 06/02/2013 22:14, tuinkabouter wrote:
On 2/6/2013 4:51 PM, Tim Wescott wrote: On Wed, 06 Feb 2013 13:29:16 +0000, Bob Masta wrote: [snip quoted thread]

After university I never had to solve a differential or integral equation.

I still have serious hatred of Green's functions arising from that particular course. Luckily we no longer do it analytically any more.

--
Regards,
Martin Brown
#22
Posted to sci.electronics.design,alt.binaries.schematics.electronic,sci.electronics.cad,sci.electronics.basics
Difference Equations
On 06/02/2013 17:42, Phil Hobbs wrote:
You can often use it to derive a perturbation series for those times when linearizing isn't quite enough. I highly recommend Bender & Orszag's "Advanced Mathematical Methods for Scientists and Engineers", which has all that sort of stuff--boundary layer theory, steepest descents, and a whole lot of other asymptotic methods. Arfken's applied math book is good as well--it used to be the standard textbook for physics undergraduates.

Our local choice was Matthews & Walker, "Mathematical Methods of Physics".

It's often possible to derive a series solution, which may or may not converge, and then convert it to a continued fraction or apply convergence tricks such as Shanks's algorithm or Richardson extrapolation. When those work, which isn't always, they can be practically supernatural.

Always used to be popular with the turbulent flow brigade - often gave divergent power series that could only be tamed with Shanks's. The pure mathematicians used to cringe at the abuse of method, but they could not deny that the results it predicted matched experimental data well! Works best on alternating series with poor convergence or divergence!

I was once very interested in extended convergence tricks and have used the rational approximation for Log(1+x) in anger several times:

Log(1+x) ~= x*(6+x)/(6+4*x)    where -1/2 < x < 1

The other common one, sometimes useful, is

Sqrt(x) ~= (1+3*x)/(3+x)    where 1/2 < x < 2

Only really any good if you have a fast hardware divide. These tricks sometimes allow a rough polynomial approximation, soluble analytically, to provide a much better initial guess for more rapid convergence of an iterative method like N-R.

--
Regards,
Martin Brown
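A quick numerical check of the two rational approximations (my own sketch, not Martin's code; the ranges are those quoted above): both stay within about 1% relative error over their stated intervals.

```python
import math

def log1p_approx(x):
    """Rational approximation Log(1+x) ~= x*(6+x)/(6+4*x)."""
    return x * (6 + x) / (6 + 4 * x)

def sqrt_approx(x):
    """Rational approximation Sqrt(x) ~= (1+3*x)/(3+x), centred on x = 1."""
    return (1 + 3 * x) / (3 + x)

# Worst-case relative error over sample grids inside the stated ranges.
xs_log = [i / 100 for i in range(-49, 101) if i != 0]
err_log = max(abs(log1p_approx(x) - math.log1p(x)) / abs(math.log1p(x))
              for x in xs_log)

xs_sqrt = [i / 100 for i in range(50, 201)]
err_sqrt = max(abs(sqrt_approx(x) - math.sqrt(x)) / math.sqrt(x)
               for x in xs_sqrt)

assert err_log < 0.02
assert err_sqrt < 0.02
```

The endpoints are the worst spots (about 1% for both), which is exactly the "good enough to seed Newton-Raphson" regime Martin describes.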
#23
Posted to sci.electronics.design,alt.binaries.schematics.electronic,sci.electronics.cad,sci.electronics.basics
Difference Equations
Martin Brown wrote:
On 06/02/2013 22:14, tuinkabouter wrote: [snip quoted thread]

I still have serious hatred of Green's functions arising from that particular course. Luckily we no longer do it analytically any more.

I recall the materials course where the prof explained everything using tensor calculus. Unfortunately, nobody had previously bothered to teach us tensor calculus. All the lectures were pure gibberish. I'm not at all sure how I passed that course.
#24
Posted to sci.electronics.design,alt.binaries.schematics.electronic,sci.electronics.cad,sci.electronics.basics
Difference Equations
On 2/7/2013 5:40 AM, Martin Brown wrote:
On 06/02/2013 17:42, Phil Hobbs wrote: [snip quoted thread]

I was once very interested in extended convergence tricks and have used the rational approximation for Log(1+x) in anger several times. These tricks sometimes allow a rough polynomial approximation, soluble analytically, to provide a much better initial guess for more rapid convergence of an iterative method like N-R.

Good stuff. Did you ever read Forman Acton's "Numerical Methods That Work"?

Cheers

Phil Hobbs
#25
Posted to sci.electronics.design,alt.binaries.schematics.electronic,sci.electronics.cad,sci.electronics.basics
Difference Equations
On Fri, 08 Feb 2013 02:23:34 GMT, the renowned Ralph Barone
wrote: Martin Brown wrote: [snip quoted thread]

I recall the materials course where the prof explained everything using tensor calculus. Unfortunately, nobody had previously bothered to teach us tensor calculus. All the lectures were pure gibberish. I'm not at all sure how I passed that course.

Bell curve.

Best regards,
Spehro Pefhany
--
"it's the network..." "The Journey is the reward"
Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog Info for designers: http://www.speff.com
#26
Posted to sci.electronics.design,alt.binaries.schematics.electronic,sci.electronics.cad,sci.electronics.basics
Difference Equations
On 08/02/2013 02:32, Phil Hobbs wrote:
On 2/7/2013 5:40 AM, Martin Brown wrote: [snip quoted approximations]

Good stuff. Did you ever read Forman Acton's "Numerical Methods That Work"?

Yes - but a long time ago. I don't possess a copy any more.

One memorable fun version of the 3-term Shanks transform with terrible numerical stability (but you can divide it out to get something much better) is the difference of geometric mean over arithmetic mean, given the partial sums a, b, c:

x' = (a*c - b^2)/(a + c - 2*b)

And if you feed it 1, 3, 7 it gives

x' = (1*7 - 3^2)/(1 + 7 - 2*3) = (7 - 9)/(8 - 6) = -1

Users of two's complement arithmetic take note!

A numerically stable version that is formally equivalent is

x' = (a + 2*b + c)/4 - (a - c)^2/(a + c - 2*b)/4

Subject to typos and lapses of memory...

--
Regards,
Martin Brown
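Both forms are easy to exercise (a sketch of my own, following the formulas above): apply the stable form to partial sums of the alternating harmonic series for ln 2, and reproduce the 1, 3, 7 -> -1 party trick.

```python
import math

def shanks_unstable(a, b, c):
    """Raw three-term Shanks: (a*c - b^2)/(a + c - 2*b)."""
    return (a * c - b * b) / (a + c - 2 * b)

def shanks_stable(a, b, c):
    """Algebraically equivalent, better-conditioned rearrangement."""
    return (a + 2 * b + c) / 4 - (a - c) ** 2 / (a + c - 2 * b) / 4

# The 2^N - 1 partial sums 1, 3, 7 extrapolate to 1/(1-2) = -1.
assert shanks_stable(1.0, 3.0, 7.0) == -1.0

# Accelerate ln 2 = 1 - 1/2 + 1/3 - ... using the last three partial sums.
s, total = [], 0.0
for k in range(1, 11):
    total += (-1) ** (k + 1) / k
    s.append(total)

plain = s[-1]
accel = shanks_stable(s[-3], s[-2], s[-1])

assert abs(plain - math.log(2)) > 1e-2   # ten raw terms: roughly 5% off
assert abs(accel - math.log(2)) < 1e-3   # one Shanks step: far closer
assert abs(accel - shanks_unstable(s[-3], s[-2], s[-1])) < 1e-9
```

The instability in the raw form comes from the cancellation in a*c - b^2 when a, b, c are nearly equal; the rearranged form computes the small correction (a-c)^2/(a+c-2b)/4 separately, which is why it behaves better in floating point.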