Metalworking (rec.crafts.metalworking) Discuss various aspects of working with metal, such as machining, welding, metal joining, screwing, casting, hardening/tempering, blacksmithing/forging, spinning and hammer work, sheet metal work.

  #1   Report Post  
Gunner
 
Posts: n/a
Default Need a computer system for your nuclear program?

http://cgi.ebay.com/ws/eBayISAPI.dll...5579 961&rd=1


"If I'm going to reach out to the the Democrats then I need a third
hand.There's no way I'm letting go of my wallet or my gun while they're
around."

"Democrat. In the dictionary it's right after demobilize and right
before demode` (out of fashion).
-Buddy Jordan 2001
  #2   Report Post  
Miki Kanazawa
 
Posts: n/a
Default

Most excellent!!!

http://www.spikynorman.dsl.pipex.com...NCE.950313.txt

According to that article, the Cray J932 can contain up to 32
processors and run 6.4 gflops. That's a whole lotta power for the $99
opening bid... assuming the system is in complete working condition.
If it's not, good luck getting parts!
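
(A quick back-of-envelope check of that headline figure, sketched in Python;
the 200-megaflop-per-processor number is taken from the press release below:)

procs = 32
peak_per_cpu_mflops = 200                    # per-processor peak, per the press release below
peak_gflops = procs * peak_per_cpu_mflops / 1000.0
print(peak_gflops)                           # 6.4 -- the headline figure
print(0.6 * peak_gflops, 0.7 * peak_gflops)  # roughly 3.8 to 4.5 gigaflops at the claimed 60-70% of peak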

---

Cray/Media: Steve Conway, 612/683-7133
Mardi Larson, 612/683-3538
Cray/Financial: Brad Allen, 612/683-7395

CRAY RESEARCH DOUBLES TOP PERFORMANCE OF FAST-
SELLING CRAY J90 PRODUCT LINE

New CRAY J932 UNIX-Based Systems Scale Low-Cost
Supercomputer Line Up To 32 Processors, 6.4 Billion
Calculations Per Second

DALLAS, March 13, 1995 -- At the UniForum 95 open
computing conference, Cray Research, Inc. (NYSE: CYR)
announced today its new CRAY J932 supercomputer that
doubles the maximum performance of the company's fastest-
selling product line ever -- the CRAY J90 series of low-cost,
UNIX-based supercomputer systems.

The new CRAY J932 products are air-cooled, compact
supercomputer systems with 16 to 32 processors that provide
up to 6.4 billion calculations per second (gigaflops) of peak
computing speed -- twice the maximum offered in the
previously announced, binary-compatible CRAY J916 systems,
Cray officials said. The CRAY J932 systems, slated to begin
shipping in second-quarter of this year, are priced in the U.S.
from under $1 million to about $2.6 million.

Cray reported it has already received 11 advance orders for the
new CRAY J932 systems. The orders are from British Gas,
London, England; Howard University, Washington D.C.; Penn
State University, University Park, Penn.; the University of Groningen,
The Netherlands; and two German organizations, Konrad-Zuse-
Zentrum fuer Informationstechnik Berlin (Konrad-Zuse Center
for Information Technology Berlin), and the Christian-
Albrechts-Universitaet zu Kiel (University of Kiel). An
additional undisclosed customer has ordered five CRAY J932
systems that will be clustered using Cray Research's
SuperCluster software environment. Terms were not
disclosed.

The CRAY J90 Series is aimed at small- to mid-sized
organizations, or departments within larger organizations,
that do scientific and technical simulation work in the
government, industrial and university sectors. These Cray
systems are designed to operate as simulation servers for
large, complex problems that exhaust the capabilities of
workstations and workstation-based server products. A single
CRAY J932 system's large, real central memory -- up to eight
gigabytes -- can handle heavy workloads that would exhaust
the cache memory and memory bandwidths of uniprocessor
workstations, Cray said.

"Our J90 customers have anywhere from a few users to more
than a hundred running a wide range of third-party and
proprietary applications," said Robert H. Ewald, president and
chief operating officer of Cray Research. "Cray Research's
complete catalogue of more than 600 third-party
supercomputer applications is available on these systems,
compared with far fewer application offerings on competing
systems. The CRAY J90 systems are the most general-purpose
supercomputers in their price class."

The company has captured about 100 orders for the CRAY J916
supercomputer, the smaller system in the series that was
announced in September 1994, Ewald said. The CRAY J916
systems begin volume shipments this month, at U.S. list prices
ranging from $225,000 to $1.5 million for systems with four
to 16 processors and up to 3.2 gigaflops of peak performance.
He said nearly 40 percent of the 110 CRAY J90 Series orders
are from new customers.

"With the volume of CRAY J90 Series orders already in hand,
we are well on our way to, in 1995, substantially increasing
Cray Research's share in the growing open systems market for
high-performance simulation servers," Ewald said.

According to Gary Smaby, president of Smaby Group, Inc., a
high-performance computing (HPC) analyst firm in Minneapolis,
"Early order activity leads us to believe that with the J90
Series, Cray has finally hit on a compelling combination of
price, performance and application breadth to address the sub-
$2 million HPC server market. While Cray historically has
dominated the market for high-end enterprise supercomputers,
the company has not had a significant presence in this price
tier. We expect the J90 Series to finally put Cray on the map
in this billion dollar, 30+ percent growth market tier,
propelling the company from a mere three percent market
share in 1994 to a respectable 11 percent by year end.

"In our view, the catalyst will be the ability of CRAY J90
Series users to gain 'plug and play' access to Cray's impressive
third-party applications library -- including the fifty or so key
codes which drive the majority of the HPC systems sales in
this price range."

Ewald said that the CRAY J90 Series systems are scaled down
versions of Cray Research's proven supercomputing technology
combined with the cost advantages of CMOS. To maintain the
product line's unrivaled performance on real-world
applications -- which average 60 to 70 percent of peak
processor speed -- Cray has doubled the maximum memory
bandwidth over the CRAY J916 system to 51.2 billion bytes
(gigabytes) per second in the CRAY J932 system. This memory
bandwidth is more than 20 times faster than similarly priced
competing products and maintains Cray's leadership in high-
bandwidth computing, Ewald said.

The new systems are based on custom RISC microprocessors
that Cray designed and has manufactured by an undisclosed
outside vendor, Ewald said. The microprocessor is two CMOS
ASICs (Application-Specific Integrated Circuits) with
approximately 800,000 cells per chip. The microprocessor is
an innovative design that reduces the Cray Research central
processing unit (CPU) -- previously consisting of hundreds of
chips and multiple multi-layered printed circuit boards -- to
two powerful processor chips and a handful of supporting
chips, he said.

"All Cray supercomputers to date have been based on RISC
architecture. In fact, in the world of supercomputing, the Cray
RISC architecture is the most commonly used," Ewald said.
"By applying the price/performance advantages of CMOS to our
proven architecture, our new J90 line features RISC
microprocessors tuned specifically for hundreds of
supercomputing applications."

Ewald said the CRAY J932 supercomputers:

- Can be installed in an office environment with standard air
conditioning.

- Offer sustained multiple gigaflops on a wide range of
applications for under $1.5 million.

- Are compatible with workstations and designed to operate
in a client/server environment. As a central simulation
server, the CRAY J932 supercomputer is an open system and
easily connects to workstation devices made by a variety of
suppliers. It also supports the most popular workstation
data formats with transparent, automatic data conversion.

- Run the same standard UNIX operating system (UNICOS) as
larger Cray systems, with multi-user features and other
enhancements for the demanding high-end computing
market.

- Have unmatched, flexible scaling features. Scalability of
competing low-cost supercomputers is limited to adding
another processor or workstation to the cluster. Many of
these products rely on the low speeds of ethernet as their
"system" interconnect and offer limited scalability, as they
are confined to low-end configurations when total system
performance is considered. With the CRAY J932 system's
balanced design and leading bandwidth, customers can
easily and inexpensively add processors at $30,000 each
(U.S. list price) in a single system chassis and gain
significantly more from the addition of the processors.

- Are based on Cray Research supercomputer technology for
industry-leading "delivered" price/performance. Because
of the system's leading bandwidth of 51.2 gigabytes per
second, the sustained-to-peak ratio for this system on a
wide range of applications far exceeds the competition. Per
processor the system achieves from 60-70 percent of its
peak speed of 200 megaflops (million floating-point
operations per second).

- Can be clustered with Cray's SuperCluster software
environment, providing virtually unlimited, versatile
capacity upgrades and total peak capabilities as high as 200
gigaflops. With this Cray software, customers can link
multiple supercomputers for a tightly integrated
distributed memory environment to tackle even larger
workloads.

Cray today also announced that it has received Advanced
Systems Magazine's 1994 Best Products Award for the
previous-generation CRAY EL94 system that was introduced
last year. Cray officials will be given this award at a luncheon
ceremony held here at the UniForum show tomorrow.

Cray Research provides the leading supercomputing tools and
services to help solve customers' most challenging problems.

###

Editor's Note: A CRAY J916 system is being showcased in
Cray's booth (#2125) at the UniForum conference and
exhibition held here.
  #3   Report Post  
Lawrence Glickman
 
Posts: n/a
Default

On 3 Dec 2004 06:47:49 -0800, (Miki Kanazawa)
wrote:

Most excellent!!!

http://www.spikynorman.dsl.pipex.com...NCE.950313.txt

According to that article, the Cray J932 can contain up to 32
processors and run 6.4 gflops. That's a whole lotta power for the $99
opening bid... assuming the system is in complete working condition.
If it's not, good luck getting parts!


University lab might be the market.

I can't imagine running anything my 2.5 giger can't handle. It gets
the job done.

In the meantime, good luck paying the electric bill for that *thing.*

Lg
misc.survivalism

  #4   Report Post  
dalecue
 
Posts: n/a
Default


Lawrence Glickman wrote in message ...
On 3 Dec 2004 06:47:49 -0800, (Miki Kanazawa)
wrote:

Most excellent!!!

http://www.spikynorman.dsl.pipex.com...oduct/J932_ANNOUNCE.950313.txt

According to that article, the Cray J932 can contain up to 32
processors and run 6.4 gflops. That's a whole lotta power for the $99
opening bid... assuming the system is in complete working condition.
If it's not, good luck getting parts!


University lab might be the market.



I can't imagine running anything my 2.5 giger can't handle. It gets
the job done.


LOL - the Cray could do jobs in a few hours that your toy computer
would not be able to complete in the rest of your lifetime

Dale

In the meantime, good luck paying the electric bill for that *thing.*

Lg
misc.survivalism



  #5   Report Post  
Lawrence Glickman
 
Posts: n/a
Default

On Fri, 03 Dec 2004 19:40:10 GMT, "dalecue"
wrote:


Lawrence Glickman wrote in message ...
On 3 Dec 2004 06:47:49 -0800, (Miki Kanazawa)
wrote:

Most excellent!!!

http://www.spikynorman.dsl.pipex.com...oduct/J932_ANNOUNCE.950313.txt

According to that article, the Cray J932 can contain up to 32
processors and run 6.4 gflops. That's a whole lotta power for the $99
opening bid... assuming the system is in complete working condition.
If it's not, good luck getting parts!


University lab might be the market.



I can't imagine running anything my 2.5 giger can't handle. It gets
the job done.


LOL - the Cray could do jobs in a few hours that your toy computer
would not be able to complete in the rest of your lifetime


I don't need that kind of power for any applications that I can think
of. I'm not modeling a virtual nuclear detonation, for example. But
I _can_ keep track of every satellite in orbit that is still up there,
in Real Time.

You need more than that? Buy it.

2.5 gigs is screaming lightning for most *consumer* applications.

Furthermore, the fact that IBM is getting out of the PC market should
tell you something. Compaq even folded, and was bought out by HP.
Get a clue.

What people want these days is connectivity, to all the other
computers *out there.* So the ceiling on CPU processing speed is a
function now of _broadband connectivity_, not isolated Mips.

Lg



  #6   Report Post  
Doug White
 
Posts: n/a
Default

In article , "dalecue" wrote:

Lawrence Glickman wrote in message ...
On 3 Dec 2004 06:47:49 -0800, (Miki Kanazawa)
wrote:

Most excellent!!!

http://www.spikynorman.dsl.pipex.com...oduct/J932_ANNOUNCE.950313.txt

According to that article, the Cray J932 can contain up to 32
processors and run 6.4 gflops. That's a whole lotta power for the $99
opening bid... assuming the system is in complete working condition.
If it's not, good luck getting parts!


University lab might be the market.


I can't imagine running anything my 2.5 giger can't handle. It gets
the job done.


LOL - the Cray could do jobs in a few hours that your toy computer
would not be able to complete in the rest of your lifetime


Don't be so sure. The museum at Los Alamos has a Cray 1, sitting next to
a SPARC 10, which has roughly the same computing horsepower....

Doug White
  #7   Report Post  
jim rozen
 
Posts: n/a
Default

In article , Lawrence Glickman
says...

Furthermore, the fact that IBM is getting out of the PC market should
tell you something. Compaq even folded, and was bought out by HP.
Get a clue.


Gigaflops? What were those? <g>

http://en.wikipedia.org/wiki/Blue_Gene

Were they like "albums?"

Jim


--
==================================================
please reply to:
JRR(zero) at pkmfgvm4 (dot) vnet (dot) ibm (dot) com
==================================================
  #8   Report Post  
Scott Moore
 
Posts: n/a
Default

Gunner wrote:
http://cgi.ebay.com/ws/eBayISAPI.dll...5579 961&rd=1



We get that much power on a single chip now.

http://www.eet.com/semi/news/showArt...cleId=51200997

--
Samiam is Scott A. Moore

Personal web site: http:/www.moorecad.com/scott
My electronics engineering consulting site: http://www.moorecad.com
ISO 7185 Standard Pascal web site: http://www.moorecad.com/standardpascal
Classic Basic Games web site: http://www.moorecad.com/classicbasic
The IP Pascal web site, a high performance, highly portable ISO 7185 Pascal
compiler system: http://www.moorecad.com/ippas

Good does not always win. But good is more patient.
  #9   Report Post  
Ian Stirling
 
Posts: n/a
Default

In rec.crafts.metalworking Lawrence Glickman wrote:
On 3 Dec 2004 06:47:49 -0800, (Miki Kanazawa)
wrote:

Most excellent!!!

http://www.spikynorman.dsl.pipex.com...NCE.950313.txt

According to that article, the Cray J932 can contain up to 32
processors and run 6.4 gflops. That's a whole lotta power for the $99
opening bid... assuming the system is in complete working condition.
If it's not, good luck getting parts!


University lab might be the market.

I can't imagine running anything my 2.5 giger can't handle. It gets
the job done.


A 2.5Ghz pentium may only be a couple of times faster than a P100, for
some tasks.

If it requires random access to large (greater than the on-chip cache)
amounts of memory, random access speed of chips hasn't really changed
in the past decade.

Some 'supercomputer' type computers took extreme measures to get round this.
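
(If you want to see that on a desktop box, here is a rough, illustrative sketch
in Python -- the array size is just picked to be much larger than on-chip cache,
and the timings will vary from machine to machine:)

import array, random, time

N = 4_000_000                        # about 16 MB of 4-byte ints: well past any on-chip cache
data = array.array('i', range(N))
seq_order = list(range(N))
rnd_order = seq_order[:]
random.shuffle(rnd_order)

def walk(order):
    start = time.perf_counter()
    total = 0
    for i in order:
        total += data[i]
    return time.perf_counter() - start

print("sequential walk:", walk(seq_order))
print("random walk:    ", walk(rnd_order))   # usually noticeably slower once the array outgrows cache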
  #10   Report Post  
Lawrence Glickman
 
Posts: n/a
Default

On 04 Dec 2004 17:10:21 GMT, Ian Stirling
wrote:

I can't imagine running anything my 2.5 giger can't handle. It gets
the job done.


A 2.5Ghz pentium may only be a couple of times faster than a P100, for
some tasks.

If it requires random access to large (greater than the on-chip cache)
amounts of memory, random access speed of chips hasn't really changed
in the past decade.

Some 'supercomputer' type computers took extreme measures to get round this.


OK, I put in 512MB of RAM and this thing is working as advertised.
Less than 512 is not a good idea, because I can see from my stats that
I only have 170MB of unused physical memory at the moment.

Once you get enough RAM so you don't have to do HD read/writes, you're
on your way to speed. But M$ has shafted everybody, because M$ loves
huge swap files. The more RAM you have the less you will need to
read/write to this swap file, and the more smoothly things seem to
run.

For my own purposes, this is all the computer I need. I suppose if
you had to, you could put up to 1 gig of RAM into this thing, but from
what I can see, that would be a waste of money. It wouldn't be used.

Lg



  #11   Report Post  
Ian Stirling
 
Posts: n/a
Default

In rec.crafts.metalworking Lawrence Glickman wrote:
On 04 Dec 2004 17:10:21 GMT, Ian Stirling
wrote:

I can't imagine running anything my 2.5 giger can't handle. It gets
the job done.


A 2.5Ghz pentium may only be a couple of times faster than a P100, for
some tasks.

If it requires random access to large (greater than the on-chip cache)
amounts of memory, random access speed of chips hasn't really changed
in the past decade.

Some 'supercomputer' type computers took extreme measures to get round this.


OK, I put in 512MB of RAM and this thing is working as advertised.
Less than 512 is not a good idea, because I can see from my stats that
I only have 170MB of unused physical memory at the moment.

Once you get enough RAM so you don't have to do HD read/writes, you're
on your way to speed. But M$ has shafted everybody, because M$ loves


Not for all tasks.
For a few scientific computing tasks (nuclear simulations are a big one),
the big bottleneck is how long it takes you to read a dozen bytes from a
'random' location in memory.
All the information is in memory - it's just that the memory isn't fast
enough.
The increase in random access speed has not nearly matched the increase
in streaming speed.

IIRC, a 20 year old 30 pin 1Mb SIMM would typically take 150ns (billionths
of a second) for the first byte, then 100ns for the next one.

A modern 1Gb memory DIMM might read out the second (and subsequent) bytes
a hundred times faster than the 1Mb SIMM.
But the initial access to any location in that DIMM is still only around
at best 3-4 times faster.

In some tasks, this can be the limiting factor for calculation.
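
(A back-of-envelope sketch in Python using the ballpark figures above -- these
are the post's rough numbers, not datasheet values:)

old_first_ns, old_next_ns = 150.0, 100.0       # 30-pin SIMM: first access, then each following byte
new_first_ns = old_first_ns / 3.5              # initial access "at best 3-4 times faster"
new_next_ns = old_next_ns / 100.0              # streaming "a hundred times faster"

# Bytes per second if every access is a fresh random access (latency-bound):
print(1e9 / old_first_ns, 1e9 / new_first_ns)  # about 6.7 million vs 23 million bytes/sec
# Bytes per second once streaming (bandwidth-bound):
print(1e9 / old_next_ns, 1e9 / new_next_ns)    # about 10 million vs 1,000 million bytes/sec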
  #12   Report Post  
Tim May
 
Posts: n/a
Default

In article , Lawrence
Glickman wrote:

On 04 Dec 2004 17:10:21 GMT, Ian Stirling
wrote:

I can't imagine running anything my 2.5 giger can't handle. It gets
the job done.


A 2.5Ghz pentium may only be a couple of times faster than a P100, for
some tasks.

If it requires random access to large (greater than the on-chip cache)
amounts of memory, random access speed of chips hasn't really changed
in the past decade.

Some 'supercomputer' type computers took extreme measures to get round this.


OK, I put in 512MB of RAM and this thing is working as advertised.
Less than 512 is not a good idea, because I can see from my stats that
I only have 170MB of unused physical memory at the moment.

Once you get enough RAM so you don't have to do HD read/writes, you're
on your way to speed. But M$ has shafted everybody, because M$ loves
huge swap files. The more RAM you have the less you will need to
read/write to this swap file, and the more smoothly things seem to
run.

For my own purposes, this is all the computer I need. I suppose if
you had to, you could put up to 1 gig of RAM into this thing, but from
what I can see, that would be a waste of money. It wouldn't be used.


The issue isn't about the hard drive...I think we all take it as
obvious that if hard disk accesses are frequent, performance suffers.

The issue is that there are at least 3 different speed regimes for the
solid-state memory:

-- relatively slow dynamic RAM (DRAM) installed as "main memory"--this
is what people talk about when they say they have 512 MB of RAM. Access
speeds are much slower than processor cycle speeds.

-- static RAM (SRAM), which is several times faster than DRAM because
of the way the circuitry works (cross-coupled inverters rather than bit
lines and storage in depletion regions).

(The reason DRAMs and SRAMs have their niches is because DRAMs can be
built with only one transistor per memory cell, whereas SRAMs tend to
take 4 transistors per memory cell--the canonical design of
cross-coupled inverters making a latch takes 4 transistors and 2
resistors. Various clever designs keep reducing the sizes, but never
down to what a single-transistor DRAM has been at for nearly 30 years.)

The SRAM may be in a separate module, either next to the processor, or
even in the same package. Or some of it may be integrated onto the
chip, as various levels of cache.

The processor looks for references in cache (via things like "TLBs,"
standing for "translation lookaside buffer"). If it finds what it needs
in cache, especially on-chip cache (or even faster, on-chip registers,
the fastest access of all), then memory access can happen in just one
or a couple of processor cycles.

If what it needs is NOT in cache, a "cache miss" has occurred, and the
processor gets a block of memory from DRAM and puts it in cache.

The analogy is like this:

Imagine you have several pieces of things you need--papers, pens,
stationery, etc. Some are right on your desk, where they can be
accessed immediately. Some are stored in drawers or file cabinets,
where they can be gotten to quickly, but not as quickly as the "cached"
items. And some are stored in other rooms.

So you need a pen. You can't find one. You suffer a cache miss, and your
work stalls for a while. You root around in your desk and find one. But
since you have already paid the price of stalling and going into your
desk, you might as well "refill the cache" with several pens, and maybe
a stapler, etc.

(But you don't want to completely flush your old cache, as you may need
some of its items soon. So you only partly flush the cache and replace
it with stuff from your desk drawer. Strategies for what to flush depend
on things like "oldest gets flushed" and "least recently used" and even
some metrics coming from what you expect to be working on.)
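
(A toy sketch in Python of the "least recently used" policy just mentioned -- a
handful of dictionary slots standing in for the cache, nothing like real hardware:)

from collections import OrderedDict

class TinyLRUCache:
    """Toy model of an LRU cache: evicts the least recently used entry when full."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.slots = OrderedDict()

    def get(self, key, fetch_from_main_memory):
        if key in self.slots:                    # cache hit: fast path
            self.slots.move_to_end(key)          # mark as most recently used
            return self.slots[key]
        value = fetch_from_main_memory(key)      # cache miss: the slow trip out to "the garage"
        self.slots[key] = value
        if len(self.slots) > self.capacity:
            self.slots.popitem(last=False)       # evict the least recently used entry
        return value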

Worse is having to go out to a storage box in the garage for something.
This is like accessing main memory.

And even worse is having to drive over to Office Depot or Staples for
something...like staples. This is like loading from a hard disk, with
access time thousands of times slower than main memory. And even slower
is having to access from a tape drive (rare for most home users these
days) or even floppies stored in a box somewhere. Hundreds of millions
of times slower than memory accesses. And because the cost of this
access is so high, you don't just get the actual staples you'll need,
you get more. You "swap from disk into main memory" and, when you
resume work, you put some of those staples in a stapler kept on your
desk....you have swapped into main memory and then loaded cache.

Look on your processor and you'll see references to how much "L1" and/or
"L2" (levels, a measure of "closeness" to the processor core) cache it
has. One of the biggest differences between consumer-grade CPUs and
server-grade CPUs, like the difference between a Celeron and a Xeon--is
the amount of cache. Some processors have up to 8 MB of on-chip cache,
assuring higher "hit rates" on the cache.

Supercomputers use various strategies for boosting performance. Lots of
fast cache memory is one of them. Another is to have lots of
processors. The current top-ranked supercomputers use lots of cache
(one of them uses the Itanium-2, with, IIRC, 4 MB of cache per
processor).

One of the interesting trends is to emphasize the memory over the
processor and, instead of attaching memory to processors, consider the
processors to be embedded in a sea of memory. This is the "processor in
memory" (PIM) approach. And it's used in the IBM "Blue Gene"
supercomputer, currently the fastest supercomputer in the world (the
aforementioned Itanium-2-based machine is the second fastest).

--Tim May
  #13   Report Post  
Lawrence Glickman
 
Posts: n/a
Default

On 04 Dec 2004 19:00:43 GMT, Ian Stirling
wrote:

In rec.crafts.metalworking Lawrence Glickman wrote:
On 04 Dec 2004 17:10:21 GMT, Ian Stirling
wrote:

I can't imagine running anything my 2.5 giger can't handle. It gets
the job done.

A 2.5Ghz pentium may only be a couple of times faster than a P100, for
some tasks.

If it requires random access to large (greater than the on-chip cache)
amounts of memory, random access speed of chips hasn't really changed
in the past decade.

Some 'supercomputer' type computers took extreme measures to get round this.


OK, I put in 512MB of RAM and this thing is working as advertised.
Less than 512 is not a good idea, because I can see from my stats that
I only have 170MB of unused physical memory at the moment.

Once you get enough RAM so you don't have to do HD read/writes, you're
on your way to speed. But M$ has shafted everybody, because M$ loves


Not for all tasks.
For a few scientific computing tasks (nuclear simulations are a big one),
the big bottleneck is how long it takes you to read a dozen bytes from a
'random' location in memory.
All the information is in memory - it's just that the memory isn't fast
enough.
The increase in random access speed has not nearly matched the increase
in streaming speed.

IIRC, a 20 year old 30 pin 1Mb SIMM would typically take 150ns (billionths
of a second) for the first byte, then 100ns for the next one.

A modern 1Gb memory DIMM might read out the second (and subsequent) bytes
a hundred times faster than the 1Mb SIMM.
But the initial access to any location in that DIMM is still only around
at best 3-4 times faster.

In some tasks, this can be the limiting factor for calculation.


No calculation my computer is ever going to need to make.
We're talking apples and oranges here. I'm talking about getting a
motorcycle up to 200 mph, you're talking about taking a truck up to
200 mph. Yes, they are going to require different engines.

For my purposes ( motorcycle ), the load my CPU has to deal with,
things appear normal in human time. I am not simulating nuclear
detonations in virtual reality. Any nanosecond speed differences are
not perceptible to this human unit.

I just performed 170! ( factorial of 170 ). That's the limit on my
*engine*

I got an answer of 7.2574E+306 before I could even take my finger off
the enter key. Is that not fast enough for a *consumer computer*
???????

Lg

  #14   Report Post  
Tim May
 
Posts: n/a
Default

In article , Lawrence
Glickman wrote:


I just performed 170! ( factorial of 170 ). That's the limit on my
*engine*

I got an answer of 7.2574E+306 before I could even take my finger off
the enter key. Is that not fast enough for a *consumer computer*
???????



I despair.

A good example of where many consumer PCs are running out of gas is in
rendering DVDs for burning: this is the equivalent of "ripping" a CD
with music. A friend of mine, who is tech-savvy, is finding a DVD
render is taking about 11 hours on his 1 GHz PC. This is with both
Pinnacle's software and with Adobe's (which he returned when the
claimed "faster than the competition" turned out not to be true, at
least not in his application).

A 2 GHz machine might cut this time approximately in half.

This is a serious rate limiter for many applications, and illustrates
why people who are using their PCs for video editing, music work, scene
rendering, Photoshop, compilation of programs, etc., are buying the
fastest machines they can buy, such as dual-3.4 GHz Pentium 4-based
machines, loaded with RAM.


--Tim May
  #15   Report Post  
Lawrence Glickman
 
Posts: n/a
Default

On Sat, 04 Dec 2004 11:53:14 -0800, Tim May
wrote:

In article , Lawrence
Glickman wrote:


I just performed 170! ( factorial of 170 ). That's the limit on my
*engine*

I got an answer of 7.2574E+306 before I could even take my finger off
the enter key. Is that not fast enough for a *consumer computer*
???????



I despair.


Don't despair.

A good example of where many consumer PCs are running out of gas is in
rendering DVDs for burning: this is the equivalent of "ripping" a CD
with music. A friend of mine, who is tech-savvy, is finding a DVD
render is taking about 11 hours on his 1 GHz PC. This is with both
Pinnacle's software and with Adobe's (which he returned when the
claimed "faster than the competition" turned out not to be true, at
least not in his application).


This is pure manure, unlike your other post which I have saved to my
archives because of its perfection.

DVD Decrypter will make an *image* of a DVD on my computer in about 15
to 20 minutes MAX. I can play that back and not know the difference
between the original and the one I wrote to disc at 36 frames /
second.

OTOH, if I am going to BURN a DVD, well then, that is another story.
But both DVD Decrypter and DVD Shrink make short work of DVD
ripping on my 2.5 gig Celeron, you can believe it.

If it is taking 11 hours for your *tech savvy* friend to rip a DVD, I
don't want to be near this guy. He is doing something terribly wrong.

BTW, the cost of the software I mentioned was $0.00 US.

Lg


A 2 GHz machine might cut this time approximately in half.

This is a serious rate limiter for many applications, and illustrates
why people who are using their PCs for video editing, music work, scene
rendering, Photoshop, compilation of programs, etc., are buying the
fastest machines they can buy, such as dual-3.4 GHz Pentium 4-based
machines, loaded with RAM.


--Tim May




  #16   Report Post  
Tim May
 
Posts: n/a
Default

In article , Lawrence
Glickman wrote:

On Sat, 04 Dec 2004 11:53:14 -0800, Tim May
wrote:


A good example of where many consumer PCs are running out of gas is in
rendering DVDs for burning: this is the equivalent of "ripping" a CD
with music. A friend of mine, who is tech-savvy, is finding a DVD
render is taking about 11 hours on his 1 GHz PC. This is with both
Pinnacle's software and with Adobe's (which he returned when the
claimed "faster than the competition" turned out not to be true, at
least not in his application).


This is pure manure, unlike your other post which I have saved to my
archives because of its perfection.

DVD Decrypter will make an *image* of a DVD on my computer in about 15
to 20 minutes MAX. I can play that back and not know the difference
between the original and the one I wrote to disc at 36 frames /
second.

OTOH, if I am going to BURN a DVD, well then, that is another story.
But both DVD Decrypter and DVD Shrink make short work of DVD
ripping on my 2.5 gig Celeron, you can believe it.

If it is taking 11 hours for your *tech savvy* friend to rip a DVD, I
don't want to be near this guy. He is doing something terribly wrong.



I said _rendering_, as from video tapes:


"is finding a DVD render is taking about 11 hours on his 1 GHz PC."

I didn't say "make an image of," as in doing a bit-copy. Taking 13 GB
(for example) of data from a video tape and making it fit on a 4.7 GB
DVD requires processing via a codec. You can educate yourself on this
at many sites, including this introductory article:

http://www.pcworld.com/howto/article...7,pg,11,00.asp
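
(The arithmetic behind that squeeze, as a rough sketch -- the two-hour running
time here is an assumption for illustration:)

source_gb, disc_gb = 13.0, 4.7          # captured source vs. single-layer DVD capacity
running_time_s = 2 * 3600               # assume a two-hour program
avg_mbit_per_s = disc_gb * 8 * 1000 / running_time_s
print(round(avg_mbit_per_s, 1))         # about 5.2 Mbit/s average: every frame must be re-compressed to fit

That per-frame re-compression is the CPU-bound rendering step; a straight bit
copy never does it.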


--Tim May
  #17   Report Post  
DoN. Nichols
 
Posts: n/a
Default

In article ,
Martin H. Eastburn wrote:
Tim Williams wrote:


[ ... ]

Should use it to draw these in real time.

http://webpages.charter.net/dawill/Images/Mbrot20.gif

(With my QBasic program, it took about half an hour to draw the 1281 x 801
RAW format file. D'oh!)


[ ... ]

I remember doing 100! or factorial 100 in full integer precision printout.
The first time 11 1/2 hours then 3 hours. First in Advanced Disk Basic, then Machine language (not assembly).
Oh the processor 8080.


Does the advanced disk basic support that many digits of integer
precision? Or were you processing it in slices of some form? 69! is
the maximum for the HP 15C calculator, and that can only be represented
in scientific or engineering notation, not in integer format.
Scientific notation gives:

1.71122452E98

However, the unix DC command does it rather quickly -- even on my
slowest system.
--

In order of system speed. (The important time values at the end
of each are the ones ending in 'u' (user space time) and those ending in
's' (system space time -- the math part of things.) Sorry that I don't
have my old 8MHz v7 unix system still up and running. That might even
take enough time to count. :-) But -- it would take most of a day to
excavate it, and set it up. It is stored broken down into components.

SPARC-10 (quad 35 MHz CPUs), running SunOs 4.1.4 (Solaris 1.2)
======================================================================
popocat:csu 15:35 # time dc /tmp/trial
9332621544394415268169923885626670049071596826438162146859296389521759\
9993229915608941463976156518286253697920827223758251185210916864000000\
000000000000000000
0.10u 0.16s 0:00.00 0.0%
======================================================================
Total 0.26 second

SS-5 (110 MHz CPU) running OpenBSD 3.4
======================================================================
popocat-2:csu 20:14 # time dc /tmp/trial
933262154439441526816992388562667004907159682643816214685929638952175\
999932299156089414639761565182862536979208272237582511852109168640000\
00000000000000000000
0.050u 0.040s 0:00.09 100.0% 0+0k 2+0io 7pf+0w
======================================================================
Total 0.090 second

SS-5 (170 MHZ CPU) running Solaris 2.6,
======================================================================
izalco:dnichols 15:32 time dc /tmp/trial
93326215443944152681699238856266700490715968264381621468592963895217\
59999322991560894146397615651828625369792082722375825118521091686400\
0000000000000000000000
0.02u 0.05s 0:00.09 77.7%
======================================================================
Total 0.070 second

SPARC Ultra-2, Dual 400 MHz CPUs running Solaris 9
======================================================================
Hendrix:csu 0:54 # time dc /tmp/trial
93326215443944152681699238856266700490715968264381621468592963895217\
59999322991560894146397615651828625369792082722375825118521091686400\
0000000000000000000000
0.00u 0.02s 0:00.04 50.0%
======================================================================
Total 0.02 second

Celeron 2.3 GHz running OpenBSD 3.5
======================================================================
curlmakr:csu 11:33 # time dc /tmp/trial
933262154439441526816992388562667004907159682643816214685929638952175\
999932299156089414639761565182862536979208272237582511852109168640000\
00000000000000000000
0.000u 0.007s 0:00.00 0.0% 0+0k 2+1io 6pf+0w
======================================================================

Anyone feel like comparing the results to see if any one was
wrong? :-)

The "program" was:


======================================================================
100
99 * 98 * 97 * 96 * 95 * 94 * 93 * 92 * 91 * 90 *
89 * 88 * 87 * 86 * 85 * 84 * 83 * 82 * 81 * 80 *
79 * 78 * 77 * 76 * 75 * 74 * 73 * 72 * 71 * 70 *
69 * 68 * 67 * 66 * 65 * 64 * 63 * 62 * 61 * 60 *
59 * 58 * 57 * 56 * 55 * 54 * 53 * 52 * 51 * 50 *
49 * 48 * 47 * 46 * 45 * 44 * 43 * 42 * 41 * 40 *
39 * 38 * 37 * 36 * 35 * 34 * 33 * 32 * 31 * 30 *
29 * 28 * 27 * 26 * 25 * 24 * 23 * 22 * 21 * 20 *
19 * 18 * 17 * 16 * 15 * 14 * 13 * 12 * 11 * 10 *
09 * 08 * 07 * 06 * 05 * 04 * 03 * 02 * 01 *
p q
======================================================================

if anyone cares. There may be a way to let it automatically decrement
on its own, but this was quicker to set up. :-) It is a reverse Polish
notation, so two numbers are entered into the stack, then they are
multiplied, another entered, and multiplied, etc. The 'p' is "print the
results", and the 'q' is "quit now".

This was in 1975 by myself.


And this was in 2004 (with systems dating back to perhaps 1998
or so. :-)

Enjoy,
DoN.

--
Email: | Voice (all times): (703) 938-4564
(too) near Washington D.C. | http://www.d-and-d.com/dnichols/DoN.html
--- Black Holes are where God is dividing by zero ---
  #18   Report Post  
DoN. Nichols
 
Posts: n/a
Default

In article ,
Spehro Pefhany wrote:
On Sat, 04 Dec 2004 05:15:10 GMT, the renowned "Martin H. Eastburn"
wrote:


[ ... ]

I remember doing 100! or factorial 100 in full integer precision printout.
The first time 11 1/2 hours then 3 hours. First in Advanced Disk Basic, then Machine language (not assembly).
Oh the processor 8080.

This was in 1975 by myself.

Martin


Barely noticeable (in human terms) time on a modern machine:

93326215443944102188325606108575267240944254854960571509166910400
40799506424293714863269403045051289804298929694447489825873720431
1236641477561877016501813248.


Something is wrong there. The number of digits is right

158 characters

but a sanity check says that there should be a *lot* of trailing zeros.
After all, it has as factors 100 90 80 70 60 50 40 30 20 and 10, not
counting other factors which add up to more zeros, such as 2 and 5. I
get:

93326215443944152681699238856266700490715968264381621468592963895217\
59999322991560894146397615651828625369792082722375825118521091686400\
0000000000000000000000

twenty-four trailing zeros.

It looks as though your program (whatever it was) used some form
of extended precision floating-point math, with some kind of conversion
errors.

9332621544394410 (yours)
9332621544394415 (mine)

so -- we start to differ in the 16th digit.
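
(A minimal Python sketch of that effect -- not the original program, which
isn't shown here:)

import math

exact = math.factorial(100)              # arbitrary-precision integers: all 158 digits, 24 trailing zeros
approx = 1.0
for n in range(2, 101):
    approx *= n                          # double-precision floats: only ~15-16 good decimal digits

print(str(exact)[:16])                   # 9332621544394415...
print("%.15e" % approx)                  # agrees for the first dozen or so digits, then drifts
# Floats also explain the 170! ceiling seen earlier in the thread:
# 171! is about 1.24e+309, which overflows IEEE double precision (max ~1.8e+308).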

Enjoy,
DoN.
--
Email: | Voice (all times): (703) 938-4564
(too) near Washington D.C. | http://www.d-and-d.com/dnichols/DoN.html
--- Black Holes are where God is dividing by zero ---
  #19   Report Post  
DoN. Nichols
 
Posts: n/a
Default

In article ,
Lawrence Glickman wrote:
On 04 Dec 2004 19:00:43 GMT, Ian Stirling
wrote:

In rec.crafts.metalworking Lawrence Glickman wrote:
On 04 Dec 2004 17:10:21 GMT, Ian Stirling
wrote:


[ ... ]

In some tasks, this can be the limiting factor for calculation.


No calculation my computer is ever going to need to make.
We're talking apples and oranges here. I'm talking about getting a
motorcycle up to 200 mph, you're talking about taking a truck up to
200 mph. Yes, they are going to require different engines.


If you do image processing, you will need the big RAM, or a big
swap file. The image processing programs like to have the *whole* image
in RAM -- even if it has to use swap space to do it. :-)

For my purposes ( motorcycle ), the load my CPU has to deal with,
things appear normal in human time. I am not simulating nuclear
detonations in virtual reality. Any nanosecond speed differences are
not perceptible to this human unit.

I just performed 170! ( factorial of 170 ). That's the limit on my
*engine*

I got an answer of 7.2574E+306 before I could even take my finger off
the enter key. Is that not fast enough for a *consumer computer*


My SS-5 (a mere 170 MHz CPU speed) does that one at:

======================================================================
izalco:dnichols 20:09 time dc /tmp/trial-170
72574156153079989673967282111292631147169916812964513765435777989005\
61843401706157852350749242617459511490991237838520776666022565442753\
02532890077320751090240043028005829560396661259965825710439855829425\
75689663134396122625710949468067112055688804571933402126614528000000\
00000000000000000000000000000000000
0.05u 0.05s 0:00.16 62.5%
======================================================================
A total of 0.1 second between user and system time, and 0:00.16
seconds wall clock time (with the system doing other things at the same
time.)

307 digits, which makes your 7.2574E+306 pretty close, but it is
missing a lot of digits for true precision.

Do you have a way to do it in multi-precision integer math?

Enjoy,
DoN.
--
Email: | Voice (all times): (703) 938-4564
(too) near Washington D.C. | http://www.d-and-d.com/dnichols/DoN.html
--- Black Holes are where God is dividing by zero ---
  #20   Report Post  
Lawrence Glickman
 
Posts: n/a
Default

On 4 Dec 2004 20:15:09 -0500, (DoN. Nichols)
wrote:

In article ,
Lawrence Glickman wrote:
On 04 Dec 2004 19:00:43 GMT, Ian Stirling
wrote:

In rec.crafts.metalworking Lawrence Glickman wrote:
On 04 Dec 2004 17:10:21 GMT, Ian Stirling
wrote:


[ ... ]

In some tasks, this can be the limiting factor for calculation.


No calculation my computer is ever going to need to make.
We're talking apples and oranges here. I'm talking about getting a
motorcycle up to 200 mph, you're talking about taking a truck up to
200 mph. Yes, they are going to require different engines.


If you do image processing, you will need the big RAM, or a big
swap file. The image processing programs like to have the *whole* image
in RAM -- even if it has to use swap space to do it. :-)


Is why I stopped using Photoshop.
I have no need for high-resolution photographic illustrations for
magazine covers and full-page advertisements.

I process hundreds of personal digital images at a time, and with the
programs I use, the work is done completely without my involvement in
a few minutes. Of course, I had to initially set up the programming
to do what I wanted it to do, but now it is automated. Good enough or
better for 99.88% of my photos.

For my purposes ( motorcycle ), the load my CPU has to deal with,
things appear normal in human time. I am not simulating nuclear
detonations in virtual reality. Any nanosecond speed differences are
not perceptible to this human unit.

I just performed 170! ( factorial of 170 ). That's the limit on my
*engine*

I got an answer of 7.2574E+306 before I could even take my finger off
the enter key. Is that not fast enough for a *consumer computer*


My SS-5 (a mere 170 MHz CPU speed) does that one at:

======================================================================
izalco:dnichols 20:09 time dc /tmp/trial-170
72574156153079989673967282111292631147169916812964513765435777989005\
61843401706157852350749242617459511490991237838520776666022565442753\
02532890077320751090240043028005829560396661259965825710439855829425\
75689663134396122625710949468067112055688804571933402126614528000000\
00000000000000000000000000000000000
0.05u 0.05s 0:00.16 62.5%
======================================================================
A total of 0.1 second between user and system time, and 0:00.16
seconds wall clock time (with the system doing other things at the same
time.)

307 digits, which makes your 7.2574E+306 pretty close, but it is
missing a lot of digits for true precision.

Do you have a way to do it in multi-precision integer math?


Not at this time, no.
Otoh, if I needed to do that for a good reason, I would get hold of
the necessary programming.

For rendering analog video into digital video, as TM suggested, I
haven't explored that yet. Not sure I have enough interest in doing
so to justify the investment in time and money.

Maybe DVD is more permanent than tape, but it isn't forever.
Delamination is a problem, along with the growth of anaerobic
organisms that attack the media. I think you will be lucky to get 5
years out of DVD even if it is stored in a controlled environment.
The stuff isn't as archival as people would lead you to believe.

Lg



  #21   Report Post  
Tim May
 
Posts: n/a
Default

In article , Lawrence
Glickman wrote:




For rendering analog video into digital video, as TM suggested, I
haven't explored that yet. Not sure I have enough interest in doing
so to justify the investment in time and money.

Maybe DVD is more permanent than tape, but it isn't forever.
Delamination is a problem, along with the growth of anaerobic
organisms that attack the media. I think you will be lucky to get 5
years out of DVD even if it is stored in a controlled environment.
The stuff isn't as archival as people would lead you to believe.


I have DVDs older than this, with no delamination or "rust." I usually
put critical files onto two different brands of DVD, with the
assumption that while one may have problems, the chances of both
failing in the first N years are low (pace the Poisson distribution).
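
(The arithmetic, with made-up failure rates purely for illustration and
assuming the two brands fail independently:)

p_brand_a = 0.05                 # assumed chance one brand's disc is unreadable after N years
p_brand_b = 0.05                 # assumed chance for the other brand
print(p_brand_a * p_brand_b)     # 0.0025 -- losing both copies is far less likely than losing one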

My friend makes DVDs of videos for two reasons:

1. Randomly-accessible viewing, with chapters, titles, etc. A lot
faster than fast-forwarding through a tape. (Even more so with DV, as
few people have DV decks, just DV camcorders. The wear and tear on a DV
camcorder used as a playback deck is an excellent reason to make DVDs!
And store the DV as an archival "safe" version.)

2. Mainly he does the rendering (the overnight rendering I mentioned)
so that he has the source he can quickly burn onto a DVD+R (or -R, his
does both) for a parent of one of the kids in the baseball and football
programs he makes videos of. The cost of a blank DVD+R is about 50
cents, a VHS tape is more. And even more importantly, a DVD+R can be
burned in about 15-30 minutes, depending on length and burn speed, but
a VHS tape takes 1-2 hours, depending on the length of the source.

As I said, and as the article I referenced said, rendering either
home-edited video or non-digital-sourced video into a form that can be
written to DVD is notoriously time-consuming. Many sites say "Your
15-minute home video may take hours to render, even on a fast PC."
Which was my point.

Your point that you can do 130 factorial, or whatever, faster than you
need to, is not the point. Nor is it that you now have a 2.5 GHz
Celeron. My point was to refute your claim that fast PCs are not needed
for everyday use. They are. As for my friend, he also worked at Intel,
and he has 5 or 6 PCs. That he is rendering on one of his 1 GHz
machines is beside the point. The point is that rendering is
exorbitantly CPU-intensive, whether it takes 11 hours or 5 hours or 2
hours or even 10 minutes (for some professional folks, even this is too
slow, for obvious time value of money reasons).

(And, yeah, I can do 130!, in Lisp, Scheme, Mathematica, or Haskell. In
full bignum precision, needless to say. Both tail-recursion optimizing
for a recursive version, or, of course, the brute force 130 * 129 * 128
* 127... version. Mostly I now favor Haskell. See the Net for details
on why.)

Personally, I dump my DV videos directly through a Firewire port into
my Philips standalone deck burner, whose codec is set up for real time
streaming of digital data. This means no fancy editing of titles, jump
cuts, or even deletions, but it creates an archival copy of a 1-hour DV
tape in 1 hour. No muss or fuss. It's a DVD+R and DVD+RW deck.

My iMac has a DVD-R and DVD-RW burner, which is suitable for writing
content after editing or digitizing. I never use it, as I haven't used
the (admittedly sophisticated) features of iMovie (or its pro-grade
versions like FinalCutPro) to edit my home movies, insert fancy titles,
insert soundtracks, and then burn to DVD. This would presumably require
rendering times comparable to what my friend is seeing, e.g., several
times the length of the original per gigahertz of CPU speed, e.g.,
overnight for a 2-hour piece of source material.

As for your 2.5 GHz Celeron making a DVD bit copy at high speeds,
congratulations. Except you should realize that bit copies are
basically made at the writing speed of your DVD burner. So if your
burner writes at 12x, a 2-hour standard DVD gets copied in 20 minutes.
Whether your CPU runs at 2.5 GHz or at 1 GHz. Think about it.

Rendering is not the same thing as bit copying.


--Tim May
  #22   Report Post  
DoN. Nichols
 
Posts: n/a
Default

In article ,
Lawrence Glickman wrote:
On 4 Dec 2004 20:15:09 -0500, (DoN. Nichols)
wrote:


[ ... ]

If you do image processing, you will need the big RAM, or a big
swap file. The image processing programs like to have the *whole* image
in RAM -- even if it has to use swap space to do it. :-)


Is why I stopped using Photoshop.
I have no need for high-resolution photographic illustrations for
magazine covers and full-page advertisements.

I process hundreds of personal digital images at a time, and with the
programs I use, the work is done completely without my involvement in
a few minutes. Of course, I had to initially set up the programming
to do what I wanted it to do, but now it is automated. Good enough to
better for 99.88% of my photos.


Understood. I've got a (unix) shell script which takes a
directory full of images, and makes a skeleton web page around them,
producing both thumbnails and reduced size images (4X for the Nikon D70
at medium resolution, 2x for the Nikon CoolPix 950) for what you get if
you click on the thumbnail.
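
(Not the shell script itself, which isn't posted -- just a rough Python sketch
of the same idea, assuming the Pillow/PIL imaging library and a directory named
"images":)

import os
from PIL import Image

SRC = "images"                                          # directory full of camera shots (assumption)
links = []
for name in sorted(os.listdir(SRC)):
    if not name.lower().endswith((".jpg", ".jpeg")):
        continue
    im = Image.open(os.path.join(SRC, name))
    reduced = im.copy()
    reduced.thumbnail((im.width // 4, im.height // 4))  # the 4X-reduced click-through image
    thumb = im.copy()
    thumb.thumbnail((160, 120))                         # the small thumbnail shown on the page
    reduced.save(os.path.join(SRC, "small_" + name))
    thumb.save(os.path.join(SRC, "thumb_" + name))
    links.append('<a href="small_%s"><img src="thumb_%s"></a>' % (name, name))

with open(os.path.join(SRC, "index.html"), "w") as page:
    page.write("<html><body>\n" + "\n".join(links) + "\n</body></html>\n")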

Then -- all I need to do is add descriptive text -- or forget it
depending on what the subject matter is. A bunch of shots of the 4th of
July fireworks doesn't need much in the way of text, but details of a
project may need quite a bit more.

[ ... ]

307 digits, which makes your 3.2574E+306 pretty close, but it is
missing a lot of digits for true precision.

Do you have a way to do it in multi-precision integer math?


Not at this time, no.
Otoh, if I needed to do that for a good reason, I would get hold of
the necessary programming.


O.K. For unix, the tool of choice is "dc" (desk calculator)
which takes RPN commands, and handles multiple precision as needed.
I've only tried up to 500!, and that worked well. It finally got to
where I could see the time that it took. (0.50 seconds combined -- and
it took longer to print the results on the screen -- all seventeen
lines of digits. :-)

For rendering analog video into digital video, as TM suggested, I
haven't explored that yet. Not sure I have enough interest in doing
so to justify the investment in time and money.

Maybe DVD is more permanent than tape, but it isn't forever.
Delamination is a problem, along with the growth of anaerobic
organisms that attack the media. I think you will be lucky to get 5
years out of DVD even if it is stored in a controlled environment.
The stuff isn't as archival as people would lead you to believe.


Agreed -- but it is quicker access than a 5GB+ 8mm tape. It is
useful for short-term backups, and save the tapes for more serious
backups. (Perhaps time to move to an Exabyte "Mammoth" tape drive, to
get 18GB+, as my disk partitions keep getting bigger. :-)

And -- if they would just come out with an affordable DVD writer
with a SCSI interface, I could go into that world, too. :-)

Enjoy,
DoN.

--
Email: | Voice (all times): (703) 938-4564
(too) near Washington D.C. |
http://www.d-and-d.com/dnichols/DoN.html
--- Black Holes are where God is dividing by zero ---
  #23   Report Post  
Tim May
 
Posts: n/a
Default

In article , DoN. Nichols
wrote:

Maybe DVD is more permanent than tape, but it isn't forever.
Delamination is a problem, along with the growth of anaerobic
organisms that attack the media. I think you will be lucky to get 5
years out of DVD even if it is stored in a controlled environment.
The stuff isn't as archival as people would lead you to believe.


Agreed -- but it is quicker access than a 5GB+ 8mm tape. It is
useful for short-term backups, and save the tapes for more serious
backups. (Perhaps time to move to an Exabyte "Mammoth" tape drive, to
get 18GB+, as my disk partitions keep getting bigger. :-)


I disagree in almost every respect. I'll try to politely explain why.

First, I have had several tape breakages on my 8mm and Hi-8 tapes in
the 17 years I've been using camcorders. I have not yet had a breakage
in one of my DV tapes, but I am keeping my fingers crossed--any time a
thin Mylar tape is moving over heads, I am worried. And the innards of
my camcorders are filled with lots of gears and tension rollers and
other moving parts.

Second, none of my DVDs have gone bad. And because I might step on
them, or whatever, I usually make at least two (2) of each.

Third, a friend of mine got a tape drive backup system. From H-P, as I
recall. When it crapped out, he faced a $1000 repair bill, with few
alternatives. He ended up deciding not to replace the tape drive, and
was fortunate to have most of the files still on disk.

(By contrast, I have two laptops that can read DVDs, two desktops that
can read DVDs, and six machines or readers that can read CDs. And I
know I can go into the nearest Circuit City and buy a new DVD+-R+-RW
machine for $100. So I know that so long as my DVDs are OK, I can read
them. The same cannot be said of the several formats of tape drive that
have been sold over the past decade.)

Fourth, I routinely give or lend DVDs and CDs to my friends. The same
cannot be said of tape from various competing tape drive formats.


And -- if they would just come out with an affordable DVD writer
with a SCSI interface, I could go into that world, too. :-)



Fifth, I very, very, very happily gave up the world of SCSI when I went
to Firewire several years ago. I retired my stack of SCSI drives, from
Iomega to 80 MB to 200 MB to 1 GB to 5 GB and haven't turned them on
since (I transferred the contents to a higher capacity drive, then
started backing up onto CD-Rs and DVD-Rs). I started using SCSI in 1986
and, while it was an advance over mostly non-existent alternatives in
the PC world, it was a 15-year series of hassles over cable lengths,
terminators, ID switches, and diagnostics to track down problems.

Firewire, or the PC standard of USB 2.0, solves these problems.

(Or internal, ATA or IDE.)

I would not consider getting a SCSI DVD writer for all the tea in China.


--Tim May
  #24   Report Post  
Spehro Pefhany
 
Posts: n/a
Default

On 4 Dec 2004 20:02:11 -0500, the renowned (DoN.
Nichols) wrote:

In article ,
Spehro Pefhany wrote:
On Sat, 04 Dec 2004 05:15:10 GMT, the renowned "Martin H. Eastburn"
wrote:


[ ... ]

I remember doing 100! or factorial 100 in full integer precision printout.
The first time 11 1/2 hours then 3 hours. First in Advanced Disk Basic, then Machine language (not assembly).
Oh the processor 8080.

This was in 1975 by myself.

Martin


Barely noticeable (in human terms) time on a modern machine:

93326215443944102188325606108575267240944254854960571509166910400
40799506424293714863269403045051289804298929694447489825873720431
1236641477561877016501813248.


Something is wrong there. The number of digits is right

158 characters

but a sanity check says that there should be a *lot* of trailing zeros.
After all, it has as factors 100 90 80 70 60 50 40 30 20 and 10, not
counting other factors which add up to more zeros, such as 2 and 5. I
get:

93326215443944152681699238856266700490715968264381621468592963895217\
59999322991560894146397615651828625369792082722375825118521091686400\
0000000000000000000000

twenty-four trailing zeros.

It looks as though your program (whatever it was) used some form
of extended precision floating-point math, with some kind of conversion
errors.

9332621544394410 (yours)
9332621544394415 (mine)

so -- we start to differ in the 16th digit.

Enjoy,
DoN.


Good point. That explains why the function is limited to 150-odd
factorial when I spec'd hundreds of digits. It is using floating point
math to calculate some part of it rather than the 300-digit arbitrary
precision math that I asked it to use.


Best regards,
Spehro Pefhany
--
"it's the network..." "The Journey is the reward"
Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog Info for designers: http://www.speff.com
  #25   Report Post  
DoN. Nichols
 
Posts: n/a
Default

In article ,
Tim May wrote:
In article , DoN. Nichols
wrote:


[ ... ]

Agreed -- but it is quicker access than a 5GB+ 8mm tape. It is
useful for short-term backups, and save the tapes for more serious
backups. (Perhaps time to move to an Exabyte "Mammoth" tape drive, to
get 18GB+, as my disk partitions keep getting bigger. :-)


I disagree in almost every respect. I'll try to politely explain why.

First, I have had several tape breakages on my 8mm and Hi-8 tapes in
the 17 years I've been using camcorders. I have not yet had a breakage
in one of my DV tapes, but I am keeping my fingers crossed--any time a
thin Mylar tape is moving over heads, I am worried. And the innards of
my camcorders are filled with lots of gears and tension rollers and
other moving parts.


O.K. Now camcorders are subjected to a lot more shocks than
Exabyte drives built into drive cases with other drives. I have, beside
my desktop machine, a case containing (from top down)

One Dual SCSI PCMCIA reader

Two Exabyte 8505 8mm tape drives.

One Yamaha CD-R/RW drive.

These are connected to one standard (50-pin) SCSI port on the
computer.

Connected to the other are

two internal 18 GB disk drives

one internal DVD reader drive

and a Multipack housing containing twelve fast-wide SCSI disk
drives, ranging in size from 4 GB to 36 GB, with most being
either 18 GB or 9 GB. Any one of these latter drives can be
unmounted while the system is running, and then unplugged and
replaced. True hot-swapping. And I have done it several times,
as I replaced smaller drives with larger ones. That 4 GB drive
is an endangered species, and will be swapped out as soon as I
find another 18 GB or better at a hamfest.

Both the computer and the Multipack are Sun products, and the
disk drives are SCA interface (fast wide SCSI with the power and
SCSI ID selection all brought in through the drive's connector).

Second, none of my DVDs have gone bad. And because I might step on
them, or whatever, I usually make at least two (2) of each.


If I were writing them I might be able to make the same
statement. But I can't, at the present.

The only failure that I have had in the Exabyte 8505 drives (5
GB with standard tape, 8 GB with XL tapes, all without compression, and
more possible with compression built into the drives) was when a nylon
gear split on one end of the loading mechanism. This happened when the
tape was ejecting after a nightly automatic backup. So, for the short
term, I swapped over to the other drive in the housing, and in the long
term, I got a replacement drive from eBay (the other drives came from
either eBay or hamfest purchases.)

And, BTW, I was able to extract the tape without damage -- it is
still readable, but it took some careful work.

Third, a friend of mine got a tape drive backup system. From H-P, as I
recall. When it crapped out, he faced a $1000 repair bill, with few
alternatives. He ended up deciding not to replace the tape drive, and
was fortunate to have most of the files still on disk.


a) HP is overpriced. (And I think that their drives are DAT
drives, not 8mm drives, though I have seen their expensive backup
jukebox systems using drives from other vendors.)

b) He apparently did not investigate used equipment for a
replacement. (I buy almost exclusively used equipment, as I am
retired, and don't have the money I would have if I were still
earning a living with the aid of these systems.)

c) I *always* have more than one drive which can read my backup
tapes. For many of them, I have more than two drives. (Again
surplus prices -- not new prices.) But if you are going to buy
an expensive tape backup system -- buy a backup for that system.

d) Know more about your backup system than the vendor's
representative does.

(By contrast, I have two laptops that can read DVDs, two desktops that
can read DVDs, and six machines or readers that can read CDs. And I
know I can go into the nearest Circuit City and buy a new DVD+-R+-RW
machine for $100. So I know that so long as my DVDs are OK, I can read
them. The same cannot be said of the several formats of tape drive that
have been sold over the past decade.)

Fourth, I routinely give or lend DVDs and CDs to my friends. The same
cannot be said of tape from various competing tape drive formats.


I have one friend with whom I occasionally exchange 8mm tapes.
The format is set by the OS, not the drive, and for exchange purposes, I
use the tar format by choice.

Granted -- Windows systems tend to have dozens of mutually
incompatible backup systems which may even use the same physical tapes,
but not be able to read data interchangeably. This is a problem with the
combination of the OS and proprietary backup formats, not with tape
per se.

And -- if they would just come out with an affordable DVD writer
with a SCSI interface, I could go into that world, too. :-)



Fifth, I very, very, very happily gave up the world of SCSI when I went
to Firewire several years ago. I retired my stack of SCSI drives, from
Iomega to 80 MB to 200 MB to 1 GB to 5 GB and haven't turned them on
since (I transferred the contents to a higher capacity drive, then
started backing up onto CD-Rs and DVD-Rs). I started using SCSI in 1986
and, while it was an advance over mostly non-existent alternatives in
the PC world, it was a 15-year series of hassles over cable lengths,
terminators, ID switches, and diagnostics to track down problems.


The SCA forms of the SCSI drives eliminate the ID switches
consideration. (The ID is defined by the socket into which the drive
plugs.) The termination problem has pretty much been eliminated by the
vendors who continue to use SCSI as a standard part of their system.
The MultiPack drive housings, and the single-drive Unipack (with
switches on the back to define the SCSI ID) are self-terminating, and
will automatically switch off or on the termination needed as you plug
in something downstream from them. They will even terminate half of the
fast-wide SCSI bus automatically, if you plug in a transition from
fast-wide (68-pin) to standard (50 pin). *And* -- they will tell you
which halves of the bus are terminated by a glance at the LEDs on the
back of the drive housing.

SCA drives started (with Sun) in the internal drives in the SS-5
and SS-20 machines. Then they appeared in the big RAID systems (30
drives -- Sun Storage Arrays) connected to the system via an optical
interface (yet another sBus card). The SCA drives also appeared in some
DEC made RAID systems -- truly hot-swappable, including two plug-in
power supplies for each horizontal cluster of five disk drives, IIRC.
(We had those at work before I retired.)

There are also MultiPack housings which accept only six drives,
but those are Ultra-SCSI -- an even faster bus than standard fast-wide.
(And -- the system will adjust the speed automatically for the slowest
drive currently connected, so you *can* mix slower drives in there if
you really want to.) These 6-slot MultiPacks have only one switch (other
than the power switch) -- a switch which selects whether the drives will
get the lower half of the address range (0-6), or the upper half
(8-14). These are normally used on a second SCSI card in the system, and
the twelve-slot one is used on the system's built-in SCSI bus. So it
provides SCSI IDs of 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 14 and 15 --
skipping over 0 and 1 (the two internal disk drives), plus 6 (the
internal DVD-ROM drive, or the CD-ROM drive, depending on the system).

Firewire, or the PC standard of USB 2.0, solves these problems.

(Or internal, ATA or IDE.)


How many drives can you hang off a single IDE bus? Two. And it
is picky about which drive it will allow as a boot drive.
Typical systems have two IDE bus connectors, and can only boot
from half of the drives -- thanks to some combination of the
design of the interface and the design of the boot firmware in
the motherboard.

How many active drives can you hang off USB before they start
getting in each other's way? And confusing the keyboard and
mouse, which now also use the USB? (I don't know, but I have
heard some horror stories.) How fast is USB compared to a fast
wide Ultra SCSI bus?

Serial ATA is one drive per socket on the motherboard. Two
maximum in the token Windows box (which is kept away from the
outside net, so I don't have to keep patching it). It is a
purely internal-use system. (Income tax software, transforming
weird-format images from weird cameras to something which other
systems understand, and shoving them through the internal net to
the unix boxen where they actually get *used*.)

The only box which I have with firewire on the Motherboard is
running an OS which does not (yet) know how to use it. (But let
me tell you OpenBSD really *screams* on a 2.3 GHz Celeron. :-)

Could you even manage to run 14 disk drives on a single Windows
system using all of those interfaces you have mentioned together? I'm
running all of those on the single built-in fast-wide SCSI bus. If I
want more, I can plug in extra SCSI cards into the system. (And no, I
don't count an external RAID box making a cluster of drives look like a
single one as falling within the spirit of that, because there is a limit
to the number of partitions you can make on those.) I like partitions to
keep related things confined to a single partition -- which *may* be
using a single disk drive as one partition, or having up to eight
partitions on Sun Solaris (or older SunOS) systems, and up to sixteen
partitions on a single drive on OpenBSD systems (on the same hardware).

Also -- I *could* plug an external RAID system into each of those
SCSI IDs, so I would still be ahead of your disk count -- 14 RAID
arrays.

I would not consider getting a SCSI DVD writer for all the tea in China.


While I would not consider getting one in IDE, except for the
token Windows box. (Granted, the Sun does not *have* an IDE interface,
but it has an excellent SCSI system.)

But -- your experience with SCSI has been with systems which use
it as an afterthought -- not as a normal part of the OS and hardware,
where it has been made to work really well.

Obviously, we have different experiences, and thus different
perceptions and preferences.

Enjoy,
DoN.
--
Email: | Voice (all times): (703) 938-4564
(too) near Washington D.C. | http://www.d-and-d.com/dnichols/DoN.html
--- Black Holes are where God is dividing by zero ---


  #26   Report Post  
Martin H. Eastburn
 
Posts: n/a
Default

Lawrence Glickman wrote:
On 04 Dec 2004 19:00:43 GMT, Ian Stirling
wrote:


In rec.crafts.metalworking Lawrence Glickman wrote:

On 04 Dec 2004 17:10:21 GMT, Ian Stirling
wrote:


I can't imagine running anything my 2.5 giger can't handle. It gets
the job done.

A 2.5Ghz pentium may only be a couple of times faster than a P100, for
some tasks.

If it requires random access to large (greater than the on-chip cache)
amounts of memory, random access speed of chips hasn't really changed
in the past decade.

Some 'supercomputer' type computers took extreme measures to get round this.

OK, I put in 512MB of RAM and this thing is working as advertised.
Less than 512 is not a good idea, because I can see from my stats that
I only have 170MB of unused physical memory at the moment.

Once you get enough RAM so you don't have to do HD read/writes, you're
on your way to speed. But M$ has shafted everybody, because M$ loves


Not for all tasks.
For a few scientific computing tasks (nuclear simulations are a big one),
the big bottleneck is how long it takes you to read a dozen bytes from a
'random' location in memory.
All the information is in memory - it's just that the memory isn't fast
enough.
The increase in random access speed has not nearly matched the increase
in streaming speed.

IIRC, a 20 year old 30 pin 1Mb SIMM would typically take 150ns (billionths
of a second) for the first byte, then 100ns for the next one.

A modern 1Gb memory DIMM might read out the second (and subsequent) bytes
a hundred times faster than the 1Mb SIMM.
But the initial access to any location in that DIMM is still only around
at best 3-4 times faster.

In some tasks, this can be the limiting factor for calculation.
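
A rough C sketch (mine, not from the thread; the sizes and timing
method are arbitrary) of the two access patterns being contrasted --
the sequential pass is bandwidth-bound, the dependent chase is
latency-bound:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1u << 23)                /* 8M entries * 8 bytes = 64 MB, well past any cache */

int main(void)
{
    size_t *next = malloc((size_t)N * sizeof *next);
    size_t i, p;
    long long sum = 0;
    clock_t t0, t1, t2;

    if (next == NULL)
        return 1;

    /* Sattolo's shuffle builds one N-long cycle, so the chase below
       really does wander over the whole 64 MB. */
    for (i = 0; i < N; i++)
        next[i] = i;
    for (i = N - 1; i > 0; i--) {
        size_t r = (((size_t)rand() << 15) ^ (size_t)rand()) % i;
        size_t t = next[i];
        next[i] = next[r];
        next[r] = t;
    }

    t0 = clock();
    for (i = 0; i < N; i++)         /* sequential: streaming bandwidth */
        sum += (long long)next[i];
    t1 = clock();
    for (p = 0, i = 0; i < N; i++)  /* dependent reads: memory latency */
        p = next[p];
    t2 = clock();

    printf("sequential %.2fs, random chase %.2fs (%lld %zu)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, sum, p);
    free(next);
    return 0;
}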



No calculation my computer is ever going to need to make.
We're talking apples and oranges here. I'm talking about getting a
motorcycle up to 200 mph, you're talking about taking a truck up to
200 mph. Yes, they are going to require different engines.

For my purposes ( motorcycle ), the load my CPU has to deal with,
things appear normal in human time. I am not simulating nuclear
detonations in virtual reality. Any nanosecond speed differences are
not perceptible to this human unit.

I just performed 170! ( factorial of 170 ). That's the limit on my
*engine*

I got an answer of 7.2574E+306 before I could even take my finger off
the enter key. Is that not fast enough for a *consumer computer*
???????

Lg

The method I employed was to set up an array of integers of the F size.
e.g. Not so easy to guess when the numbers get large, but 1000! is under 2000 digits.

Then take the first two numbers and multiply, place the results into the array.
Multiply that by the next number - and when done, make sure each array element is a
single digit. If not - bump it up.... and re-test...

This then takes massive numbers and puts them into very simple baby talk numbers.
Any system can handle one digit vs. another. (unless it were a 4 bit o.s. :-) )
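
A minimal C sketch of the one-digit-per-element technique Martin
describes -- not his original Basic/8080 code, just an illustration:

#include <stdio.h>

#define MAXDIGITS 3000              /* 1000! needs 2568 digits */

int main(void)
{
    static int d[MAXDIGITS];        /* one decimal digit per element, d[0] least significant */
    int len = 1;                    /* current number of digits */
    int n, i;

    d[0] = 1;                       /* start from 1 */
    for (n = 2; n <= 1000; n++) {
        int carry = 0;
        for (i = 0; i < len; i++) {
            int v = d[i] * n + carry;
            d[i] = v % 10;          /* keep a single digit... */
            carry = v / 10;         /* ...and "bump up" the rest */
        }
        while (carry > 0) {         /* the number grows as needed */
            d[len++] = carry % 10;
            carry /= 10;
        }
    }
    for (i = len - 1; i >= 0; i--)  /* most significant digit first */
        putchar('0' + d[i]);
    printf("\n%d digits\n", len);
    return 0;
}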

Martin

--
Martin Eastburn, Barbara Eastburn
@ home at Lion's Lair with our computer
NRA LOH, NRA Life
NRA Second Amendment Task Force Charter Founder
  #27   Report Post  
Martin H. Eastburn
 
Posts: n/a
Default

DoN. Nichols wrote:

In article ,
Lawrence Glickman wrote:

On 04 Dec 2004 19:00:43 GMT, Ian Stirling
wrote:


In rec.crafts.metalworking Lawrence Glickman wrote:

On 04 Dec 2004 17:10:21 GMT, Ian Stirling
wrote:



[ ... ]


In some tasks, this can be the limiting factor for calculation.


No calculation my computer is ever going to need to make.
We're talking apples and oranges here. I'm talking about getting a
motorcycle up to 200 mph, you're talking about taking a truck up to
200 mph. Yes, they are going to require different engines.



If you do image processing, you will need the big RAM, or a big
swap file. The image processing programs like to have the *whole* image
in RAM -- even if it has to use swap space to do it. :-)


For my purposes ( motorcycle ), the load my CPU has to deal with,
things appear normal in human time. I am not simulating nuclear
detonations in virtual reality. Any nanosecond speed differences are
not perceptible to this human unit.

I just performed 170! ( factorial of 170 ). That's the limit on my
*engine*

I got an answer of 7.2574E+306 before I could even take my finger off
the enter key. Is that not fast enough for a *consumer computer*



My SS-5 (a mere 170 MHz CPU speed) does that one at:

======================================================================
izalco:dnichols 20:09 time dc /tmp/trial-170
72574156153079989673967282111292631147169916812964513765435777989005\
61843401706157852350749242617459511490991237838520776666022565442753\
02532890077320751090240043028005829560396661259965825710439855829425\
75689663134396122625710949468067112055688804571933402126614528000000\
00000000000000000000000000000000000
0.05u 0.05s 0:00.16 62.5%
======================================================================
A total of 0.1 second between user and system time, and 0:00.16
seconds wall clock time (with the system doing other things at the same
time.)

307 digits, which makes your 7.2574E+306 pretty close, but it is
missing a lot of digits for true precision.

Do you have a way to do it in multi-precision integer math?

Enjoy,
DoN.

I posted a reply earlier tonight - I had used an array and integer math at that!

Martin

--
Martin Eastburn, Barbara Eastburn
@ home at Lion's Lair with our computer
NRA LOH, NRA Life
NRA Second Amendment Task Force Charter Founder
  #28   Report Post  
Tim May
 
Posts: n/a
Default

In article , DoN. Nichols
wrote:


But -- your experience with SCSI has been with systems which use
it as an afterthought -- not as a normal part of the OS and hardware,
where it has been made to work really well.


A silly comment. My first SCSI machine was a Mac Plus, in 1986, where
it was indeed designed in close conjunction with the OS and hardware.
It was sensitive to termination, addresses, etc. for very basic
reasons. The PC versions arrived years later, through add-on cards, and
were so flaky that SCSI was considered a joke in the PC community.


--Tim May
  #29   Report Post  
Mark Rand
 
Posts: n/a
Default

On Sun, 05 Dec 2004 07:37:22 GMT, "Martin H. Eastburn"
wrote:



The method I employed was to set up an array of integers of the F size.
e.g. Not so easy to guess when the numbers get large, but 1000! is under 2000 digits.

Then take the first two numbers and multiply, place the results into the array.
Multiply that by the next number - and when done, make sure each array element is a
single digit. If not - bump it up.... and re-test...

This then takes massive numbers and puts them into very simple baby talk numbers.
Any system can handle one digit vs. another. (unless it were a 4 bit o.s. :-) )

Martin


What's that Martin? You got some sort of prejudice against BCD???
G

Mark Rand
RTFM
  #30   Report Post  
Lawrence Glickman
 
Posts: n/a
Default

On Sun, 05 Dec 2004 07:37:22 GMT, "Martin H. Eastburn"
wrote:

I got an answer of 7.2574E+306 before I could even take my finger off
the enter key. Is that not fast enough for a *consumer computer*
???????

Lg

The method I employed was to set up an array of integers of the F size.
e.g. Not so easy to guess when the numbers get large, but 1000! is under 2000 digits.

Then take the first two numbers and multiply, place the results into the array.
Multiply that by the next number - and when done, make sure each array element is a
single digit. If not - bump it up.... and re-test...

This then takes massive numbers and puts them into very simple baby talk numbers.
Any system can handle one digit vs. another. (unless it were a 4 bit o.s. :-) )

Martin


So there are a lot of other factors involved besides raw CPU speed.
This is what benchmarking measures. Speeds of data bus, memory bus,
disc drive I/O, and so forth. There is a cluster of components that
need to work together to produce a result, and the slowest component
in the chain is always going to be the bottleneck.

So to be fair, we do benchmarking using different algorithms, to give
us an idea of SYSTEM performance, not raw Mips. But again, I am not
going to buy an elephant rifle to kill a mosquito. Maybe scientific
laboratories can profit from these high-end machines, but unless I get
into analog to digital interpolation, it is highly doubtful I am going
to benefit from having a cannon on board where a pellet pistol would
do the job just as well.

And even then, the throughput of the A/D converter is going to
determine to a large extent what limitations my *system* would have.

I try to stay digital all the way. Keep analog out of it, and you've
solved a lot of problems before they even got off the launch pad.

I've tossed all my legacy equipment, just threw it out into the
garbage ( after sanitizing it of course, with a flamethrower ).

Lg



  #31   Report Post  
Tim May
 
Posts: n/a
Default

In article , Lawrence
Glickman wrote:

So to be fair, we do benchmarking using different algorithms, to give
us an idea of SYSTEM performance, not raw Mips. But again, I am not
going to buy an elephant rifle to kill a mosquito. Maybe scientific
laboratories can profit from these high-end machines, but unless I get
into analog to digital interpolation, it is highly doubtful I am going
to benefit from having a cannon on board where a pellet pistol would
do the job just as well.

And even then, the throughput of the A/D converter is going to
determine to a large extent what limitations my *system* would have.

I try to stay digital all the way. Keep analog out of it, and you've
solved a lot of problems before they even got off the launch pad.


You're missing the point. The ADC is _not_ what makes rendering take
many, many hours. In fact, the output of a 13.7 GB digital video (DV)
is fed directly into a computer via Firewire ports. Digital at all
times once it's in the camcorder, so digital at all times in the
computer.

What takes time is the resampling/resizing/reformatting/rewhatevering
to "fit" 13.7 GB of digital data, for example, into 4.7 GB of space on
the blank DVD. This is akin to when a printer driver says "The
image will not fit on this page...would you like to crop, scale, or
abort?" Except with 13.7 GB that's a lot of data, a lot of swapping to
disk, a lot of work.
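
(The arithmetic alone shows the scale of the job: 13.7 GB into 4.7 GB
is roughly a 3:1 squeeze, and DV is already compressed, so every frame
has to be decoded and re-encoded on the way down.)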

And I mentioned rendering only as one frequently-encountered "overnight
job." There are a lot of things which actually take measurable time.
The point being that your "I can do a factorial in the blink of an eye
so I obviously don't need a faster computer!" example is silly.

And if you never need a faster computer, why did you even buy a 2.5 GHz
machine in the _first_ place?


--Tim May
  #32   Report Post  
Gary Coffman
 
Posts: n/a
Default

On Sun, 05 Dec 2004 10:25:31 -0800, Tim May wrote:
In article , Lawrence
Glickman wrote:

So to be fair, we do benchmarking using different algorithms, to give
us an idea of SYSTEM performance, not raw Mips. But again, I am not
going to buy an elephant rifle to kill a mosquito. Maybe scientific
laboratories can profit from these high-end machines, but unless I get
into analog to digital interpolation, it is highly doubtful I am going
to benefit from having a cannon on board where a pellet pistol would
do the job just as well.

And even then, the throughput of the A/D converter is going to
determine to a large extent what limitations my *system* would have.

I try to stay digital all the way. Keep analog out of it, and you've
solved a lot of problems before they even got off the launch pad.


You're missing the point. The ADC is _not_ what makes rendering take
many, many hours. In fact, the output of a 13.7 GB digital video (DV)
is fed directly into a computer via Firewire ports. Digital at all
times once it's in the camcorder, so digital at all times in the
computer.

What takes time is the resampling/resizing/reformatting/rewhatevering
to "fit" 13.7 GB of digital data, for example, into 4.7 GB of space on
the blank DVD. This is akin to when a printer driver says "The
image will not fit on this page...would you like to crop, scale, or
abort?" Except with 13.7 GB that's a lot of data, a lot of swapping to
disk, a lot of work.


What the computer is doing is MPEGII compression of the video. That
requires having at least 7 frames of video in memory at the same time.
The algorithms look at the time domain data of the individual frames,
translate it into the frequency domain, i.e. they do a fast Fourier transform,
and then using the preceding and following frames they find the most
compact way to represent the group of frames as a MPEGII compressed
data stream, which is what's laid down on the DVD.

MPEGII is lossy compression. Some of the data is discarded, repetitive
data is compressed. The algorithms do this in such a way as not to
produce visible artifacts (most of the time anyway). That's what makes
it difficult. The algorithms are actually making *judgements* about
picture content. Moving objects can be rendered with less resolution
than stationary ones, so data is discarded for parts of the picture
determined to be moving. Fixed areas in the picture don't have
to be repeated in each frame; they're flagged for repetition instead.
Etc. Very complicated.
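
A toy C sketch of just one of those ideas -- skip what did not change,
discard imperceptible differences. It is mine, not Gary's, and nothing
like a real encoder: MPEG-II actually uses a discrete cosine transform,
motion estimation, quantization matrices and entropy coding, none of
which is attempted here:

#include <stdio.h>

#define PIXELS (720 * 480)          /* one luma plane of a standard-definition frame */
#define THRESHOLD 4                 /* differences this small get thrown away (lossy) */

/* Store a frame as differences from the previous frame, zeroing tiny
   changes.  Returns how many pixels actually had to be kept. */
static long encode_frame(const unsigned char *prev, const unsigned char *cur,
                         short *delta_out)
{
    long kept = 0;
    long i;
    for (i = 0; i < PIXELS; i++) {
        int diff = (int)cur[i] - (int)prev[i];
        if (diff > -THRESHOLD && diff < THRESHOLD) {
            delta_out[i] = 0;       /* imperceptible change: discard it */
        } else {
            delta_out[i] = (short)diff;
            kept++;                 /* a real coder would entropy-code these */
        }
    }
    return kept;
}

int main(void)
{
    static unsigned char prev[PIXELS], cur[PIXELS];
    static short delta[PIXELS];
    long i, kept;

    for (i = 0; i < PIXELS; i++) {  /* two synthetic frames, mostly tiny changes */
        prev[i] = (unsigned char)(i & 0xff);
        cur[i] = (unsigned char)(prev[i] + (i % 1000 == 0 ? 50 : 1));
    }
    kept = encode_frame(prev, cur, delta);
    printf("kept %ld of %d pixel differences (%.2f%%)\n",
           kept, PIXELS, 100.0 * kept / PIXELS);
    return 0;
}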

Decompressing MPEGII is easy. The compressed data stream has
flags embedded in it telling the decompressor exactly what to do.
There are commodity codec chips to do it on the fly in realtime.
Compressing it is hard. Even the professional, purpose-built,
massively parallel processing boxes we use to do it for DTV
broadcast lag realtime by about 2 seconds.

Doing realtime with a single processor PC having limited memory is
impractical. You'd just keep losing ground to the realtime signal.
You can do offline processing with the PC, but it takes a very long
time. 11 hours to do 1 hour of video sounds about right for a 2.5
GHz PC with a gigabyte of memory. And that's just for regular
resolution video.

Doing HDTV with a PC is out of the question. With only a gig of
memory, the PC would be thrashing the disc all the time. HDTV
MPEGII converters are very expensive. Only a few production
houses, and the major networks, have them. At the local station
level, we just pass through the already compressed streams.

Uncompressed HDTV is a data stream at 1.2 Gb/s. Compression
brings it down to 45 Mb/s. Then we do a complex modulation called
8VSB which allows us to stuff it into a 6 MHz TV channel bandwidth
for broadcast.

Gary
  #33   Report Post  
DoN. Nichols
 
Posts: n/a
Default

In article ,
Tim May wrote:
In article , DoN. Nichols
wrote:


But -- your experience with SCSI has been with systems which use
it as an afterthought -- not as a normal part of the OS and hardware,
where it has been made to work really well.


A silly comment. My first SCSI machine was a Mac Plus, in 1986, where
it was indeed designed in close conjunction with the OS and hardware.
It was sensitive to termination, addresses, etc. for very basic
reasons.


Would one of those reasons perhaps be Mac's use of the DB-25
connector for SCSI, instead of a proper 50-pin connector? The proper
design of single-ended SCSI required an individual ground wire paired
with *each* signal conductor, and ideally either twisted with that
conductor, or at a minimum alternating with the signal conductor in a
ribbon cable, so there was a shield between each of the signal
conductors and its neighbor.

Of *course* you have problems with a DB-25 connector used for
that, because it interrupts the proper ground pairing, resulting in
ground loops and other undesirable actions.

The old Sun systems used a DD-50 connector (three rows of pins)
for SCSI, and later used a miniature two-row connector from AMP (smaller
than a DB-25), but a full 50 pins with click locks. Other vendors
sometimes used the 50-pin Amphenol Blue Ribbon connectors (mis-called
"Centronics" connectors, because the printer manufacturer, Centronics,
used the 36-pin version of that connector as their standard parallel
printer interface).

Above I mentioned "single-ended SCSI", because there is also a
differential SCSI. In that, the pins previously used for ground are
driven in the opposite sense to the signals with which they are paired.
There is a differential line receiver at the end (and each intermediate
drive) on this. Standard single-ended SCSI had a length limitation, and
the higher the speed, the shorter the cable must be. This included the
length of ribbon cable within the drive housings, which could sometimes
add several feet. Differential SCSI can drive lines *much* longer than
the single-ended without problems -- but it needs a proper differential
terminator, not the normal single-ended one.

There are several styles of terminators: the passive ones (just
pairs of resistors to ground and VCC), which were adequate for shorter
runs; the active ones, which had a voltage regulator between VCC and
ground to provide a proper intermediate voltage to which a single
resistor is run from each signal; and the FPT (Forced Perfect
Termination) ones, which added sets of diodes to the active design to
ensure that the lines could not ring beyond the VCC and ground voltages.

Now -- I mentioned a relationship between total cable length and
speed. Sun systems sensed errors, and at a certain point would back off
the speed from the maximum for that interface.

Also -- cable quality makes a *big* difference. A proper SCSI
cable was twisted pairs, run within a good shield (perhaps two or three
layers -- woven, foil, and then woven again). On the other hand, using
cheap cables *did* lead to problems. Using ones marked with the Sun
logo did not.

So -- I stand by my statement about Suns being designed to
properly use SCSI, and Macs and PCs not being so designed. The very use
of the DB-25 connector asks for problems before anything else gets
involved.

The PC versions arrived years later, through add-on cards, and
were so flaky that SCSI was considered a joke in the PC community.


Most of the early PC versions used the DB-25 connector as well,
because that fit their card brackets without problems. The later cards
(by Adaptec, for example), use the AMP high-density 50-pin connector
which the Suns use, and those are quite well behaved with good cables.
(I've used them -- but only with cables from Sun.)

So -- most of the problems with home computers and SCSI stem
from trying to do it on the cheap (the DB-25 connectors), and
secondarily from the lack of an adaptive driver which knew when to drop
the speed back to make up for an over-length total cable setup. A
proper drive enclosure will have the 50-pin connector for the cable
entrance, a loop of wire past the drive, with an IDC connector to plug
into the drive, and the loop continues to another 50-pin connector. Use
of the DB-25 connector makes this impractical, so there is typically a
board with two DB-25s side by side, and a stub going off to the drive.
Each stub introduces more reflections, and thus more potential for
problems.

Apple has, from the very start, depended on making software
substitute for hardware, so the original Apple, as well as the
long-lived Apple-II, had weird artifacts, such as split memory segments
in the middle of memory for graphics interfaces, simply to save a very
few ICs in address decoding. And they used stripped floppy drives
(compared to the rest of the world), substituting software for hardware
there, too. A result of that was that they could squeeze more data into
a floppy, by changing the data rate depending on which track you were
using. This was nice -- but it meant that they were incompatible with
the rest of the world.

Then -- let us go into the Mac world. The early 68000-based
systems could only address 16 MB of space (24 address bits), though the
internals of the chip had room for 32-bit addresses, which could address
4 GB. OK, it is perhaps reasonable that they (at that time) figured
that 8MB max for RAM, and the other 8MB for memory-mapped I/O, was
sufficient. But they started a "space-saving" trick which led to a lot
of heartaches later on. They used the upper eight bits of the address
variables to store flags, and other things. When they moved past the
68010 to the 68020 (which *could* address the full 32-bit address space,
and which was *planned* for and announced when the 68000 was introduced),
all of a sudden those flags were trying to address other real spaces, so
programs from the earlier systems started breaking. Even worse, their
software vendors picked up on the flags-storage trick, so programs from
the vendors also broke when put on the 68020 (and later) based systems.
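
A hypothetical C sketch (mine, not Apple's code) of why the trick
worked on a 24-bit bus and then blew up:

#include <stdio.h>
#include <stdint.h>

#define ADDR_MASK   0x00FFFFFFu     /* the 24 bits a 68000 actually decodes */
#define FLAG_LOCKED 0x80000000u     /* example flag smuggled into the top byte */

static uint32_t addr_of(uint32_t handle) { return handle & ADDR_MASK; }

int main(void)
{
    uint32_t handle = 0x0004B000u | FLAG_LOCKED;    /* tagged "pointer" */

    /* On a 24-bit bus the tag is invisible: */
    printf("bus sees address 0x%06X, locked=%d\n",
           (unsigned)addr_of(handle), (handle & FLAG_LOCKED) != 0);

    /* A 68020 drives all 32 bits, so the raw value is a different address: */
    printf("raw value 0x%08X no longer points at 0x%06X\n",
           (unsigned)handle, (unsigned)addr_of(handle));
    return 0;
}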

So -- with this record, I am supposed to believe that they got
things right with SCSI? Sorry, I don't -- based on the evidence. For
Sun, SCSI works very well. Even back in the Sun 2/120 (my first Sun
machine), SCSI worked well. Yes -- I eventually stacked up enough
drives in separate boxes so things started to fail -- until I substituted
18" cables between boxes for the 6-foot ones which I had started with.

SCSI, properly implemented, works well. (I've used it since its
predecessor, SASI, which I used to interface drives to home-made
interfaces in a SWTP 6800 running DOS-69 and OS-9 (the *real* OS-9, not
what Mac came out with later.) :-) (Granted, I also had to write drivers
for OS-9 for my wire-wrapped interface, but it worked, and worked well.)
I was driving pairs of MFM drives connected to dual MFM controllers for
the SASI bus.

Enjoy,
DoN.

--
Email: | Voice (all times): (703) 938-4564
(too) near Washington D.C. | http://www.d-and-d.com/dnichols/DoN.html
--- Black Holes are where God is dividing by zero ---
  #34   Report Post  
DoN. Nichols
 
Posts: n/a
Default

In article ,
Martin H. Eastburn wrote:

[ ... ]

The method I employed was to set up an array of integers of the F size.
e.g. Not so easy to guess when the numbers get large, but 1000! is under
2000 digits.


Hmm ... are you *sure* about that? I get 2568 characters from
calculating 1000! -- after stripping out the '\' at the end of each line
and the newline (line feed) which followed it, so there are only digits.
(The output from dc is normally formatted for a terminal.)
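
(A quick check on that count: 1000! is about 4.02 x 10^2567, so its
common log is just over 2567, which means 2568 decimal digits.)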

Then take the first two numbers and multiply, place the results into the
array. Multiply that by the next number - and when done, make sure each
array element is a single digit. If not - bump it up.... and re-test...


Sure -- basic "bignum" techniques.

This then takes massive numbers and puts them into very simple baby talk
numbers. Any system can handle one digit vs. another. (unless it were
a 4 bit o.s. :-) )


Yep -- want to build one around an Intel 4004? :-)

Enjoy,
DoN.
--
Email: | Voice (all times): (703) 938-4564
(too) near Washington D.C. | http://www.d-and-d.com/dnichols/DoN.html
--- Black Holes are where God is dividing by zero ---
  #35   Report Post  
DoN. Nichols
 
Posts: n/a
Default

In article ,
Mark Rand wrote:
On Sun, 05 Dec 2004 07:37:22 GMT, "Martin H. Eastburn"
wrote:



The method I employed was to set up an array of integers of the F size.
e.g. Not so easy to guess when the numbers get large, but 1000! is under 2000 digits.

Then take the first two numbers and multiply, place the results into the array.
Multiply that by the next number - and when done, make sure each array element is a
single digit. If not - bump it up.... and re-test...

This then takes massive numbers and puts them into very simple baby talk numbers.
Any system can handle one digit vs. another. (unless it were a 4 bit o.s. :-) )

Martin


What's that Martin? You got some sort of prejudice against BCD???


Does the 4-bit OS (and CPU) have provisions for handling overflow --
e.g. add 9 and 9 and you get 18 (0x12), which overflows the 4-bit
values? And if you need to handle *signed* numbers, your maximum value
is 7 -- one more becomes a negative number.

Enjoy,
DoN.

--
Email: | Voice (all times): (703) 938-4564
(too) near Washington D.C. | http://www.d-and-d.com/dnichols/DoN.html
--- Black Holes are where God is dividing by zero ---


  #36   Report Post  
Martin H. Eastburn
 
Posts: n/a
Default

Mark Rand wrote:
On Sun, 05 Dec 2004 07:37:22 GMT, "Martin H. Eastburn"
wrote:



The method I employed was to set up an array of integers of the F size.
e.g. Not so easy to guess when the numbers get large, but 1000! is under 2000 digits.

Then take the first two numbers and multiply, place the results into the array.
Multiply that by the next number - and when done, make sure each array element is a
single digit. If not - bump it up.... and re-test...

This then takes massive numbers and puts them into very simple baby talk numbers.
Any system can handle one digit vs. another. (unless it were a 4 bit o.s. :-) )

Martin



What's that Martin? You got some sort of prejudice against BCD???
G

Mark Rand
RTFM

No, in fact I wrote a 64-bit to BCD converter - both directions - many years ago. The
nasty trick was to build the hardware display it used. Seven-segment displays were expensive
and so were the BCD to 7-segment decoder chips. But I used TI's instead of a
handful of discrete decode logic. Monsanto was the 7-segment maker for me at the time.

I later found out that an in-law type relative alt-tree type - was the
Fairchild VP for Opto. Such is life.

Martin

--
Martin Eastburn, Barbara Eastburn
@ home at Lion's Lair with our computer
NRA LOH, NRA Life
NRA Second Amendment Task Force Charter Founder
  #37   Report Post  
Martin H. Eastburn
 
Posts: n/a
Default

Whoops - wonder what 1745 was - maybe that was e or something.
Color me yellow :-(

Yes - 1000! (2567)4.0238 72600 77093 77345 out to a few places -
'compiled from Ballistic Research Laboratory, A table of the factorial
numbers and their reciprocals from 1 to 1000! to 20 significant digits,
Tech note 381, Aberdeen Proving Ground Md.(1951) (with permission) [to]
United States Department of Commerce. National Bureau of Standards
"Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables"
Applied Mathematics Series: 55 Fourth printing December 1965.
~8x10" x 2.5" Library of Congress Catalog Card Number : 64-60036

A very tech table book, and it has been useful since my schooling days!
If you need trig tables or log tables or Gamma tables - transcendental functions
and other fun things we had to use prior to the little plastic boxes
with built-in functions.

My 4-banger - Nixie tube - 12 digit - had taped to the top the mathematical arguments
to generate logs from scratch. That sucker cost us 600 USD and was prior to TI or HP.
It was a business machine.

Martin

DoN. Nichols wrote:

In article ,
Martin H. Eastburn wrote:

[ ... ]


The method I employed was to set up an array of integers of the F size.
e.g. Not so easy to guess when the numbers get large, but 1000! is under
2000 digits.



Hmm ... are you *sure* about that? I get 2568 characters from
calculating 1000! -- after stripping out the '\' at the end of each line
and the newline (line feed) which followed it, so there are only digits.
(The output from dc is normally formatted for a terminal.)


Then take the first two numbers and multiply, place the results into the
array. Multiply that by the next number - and when done, make sure each
array element is a single digit. If not - bump it up.... and re-test...



Sure -- basic "bignum" techniques.


This then takes massive numbers and puts them into very simple baby talk
numbers. Any system can handle one digit vs. another. (unless it were
a 4 bit o.s. :-) )



Yep -- want to build one around an Intel 4004? :-)

Enjoy,
DoN.



--
Martin Eastburn, Barbara Eastburn
@ home at Lion's Lair with our computer
NRA LOH, NRA Life
NRA Second Amendment Task Force Charter Founder
  #38   Report Post  
B.B.
 
Posts: n/a
Default

In article ,
(DoN. Nichols) wrote:

[...]

Then -- let us go into the Mac world. The early 68000-based
systems could only address 16 MB of space (24 address bits), though the
internals of the chip had room for 32-bit addresses, which could address
4 GB. OK, it is perhaps reasonable that they (at that time) figured
that 8MB max for RAM, and the other 8MB for memory-mapped I/O, was
sufficient. But they started a "space-saving" trick which led to a lot
of heartaches later on. They used the upper eight bits of the address
variables to store flags, and other things. When they moved past the
68010 to the 68020 (which *could* address the full 32-bit address space,
and which was *planned* for and announced when the 68000 was introduced),
all of a sudden those flags were trying to address other real spaces, so
programs from the earlier systems started breaking. Even worse, their
software vendors picked up on the flags-storage trick, so programs from
the vendors also broke when put on the 68020 (and later) based systems.


To clarify: Apple implemented a 32-bit mode on the later 68k
machines. If you needed backward compatibility, which I did for MS's
****, you could switch modes and reboot. The OS simply told all of the
apps it was using all of the addresses above 16MB, so apps stayed out of
there and they could get away with the 24bit memory hackery. You were
just short on RAM until you switched back and rebooted.
Since Apple was pretty reliable about selling computers with 16M or
less RAM up until they switched to Power PC, it's not too much of a
rip-off. (:

[...]

--
B.B. --I am not a goat! thegoat4 at airmail dot net
http://web2.airmail.net/thegoat4/
  #40   Report Post  
Russ Kepler
 
Posts: n/a
Default

Martin H. Eastburn wrote:

Sorry Spehro - I forgot - it was 1000! and it had 1732 ? chars in it.
Just fit on my Qume printout.


Well, that's:

40238726007709377354370243392300398571937486421071463254379991042993\
85123986290205920442084869694048004799886101971960586316668729948085\
58901323829669944590997424504087073759918823627727188732519779505950\
99527612087497546249704360141827809464649629105639388743788648733711\
91810458257836478499770124766328898359557354325131853239584630755574\
09114262417474349347553428646576611667797396668820291207379143853719\
58824980812686783837455973174613608537953452422158659320192809087829\
73084313928444032812315586110369768013573042161687476096758713483120\
25478589320767169132448426236131412508780208000261683151027341827977\
70478463586817016436502415369139828126481021309276124489635992870511\
49649754199093422215668325720808213331861168115536158365469840467089\
75602900950537616475847728421889679646244945160765353408198901385442\
48798495995331910172335555660213945039973628075013783761530712776192\
68490343526252000158885351473316117021039681759215109077880193931781\
14194545257223865541461062892187960223838971476088506276862967146674\
69756291123408243920816015378088989396451826324367161676217916890977\
99119037540312746222899880051954444142820121873617459926429565817466\
28302955570299024324153181617210465832036786906117260158783520751516\
28422554026517048330422614397428693306169089796848259012545832716822\
64580665267699586526822728070757813918581788896522081643483448259932\
66043367660176999612831860788386150279465955131156552036093988180612\
13855860030143569452722420634463179746059468257310379008402443243846\
56572450144028218852524709351906209290231364932734975655139587205596\
54228749774011413346962715422845862377387538230483865688976461927383\
81490014076731044664025989949022222176590433990188601856652648506179\
97023561938970178600408118897299183110211712298459016419210688843871\
21855646124960798722908519296819372388642614839657382291123125024186\
64935314397013742853192664987533721894069428143411852015801412334482\
80150513996942901534830776445690990731524332782882698646027898643211\
39083506217095002597389863554277196742822248757586765752344220207573\
63056949882508796892816275384886339690995982628095612145099487170124\
45164612603790293091208890869420285106401821543994571568059418727489\
98094254742173582401063677404595741785160829230135358081840096996372\
52423056085590370062427124341690900415369010593398383577793941097002\
77534720000000000000000000000000000000000000000000000000000000000000\
00000000000000000000000000000000000000000000000000000000000000000000\
00000000000000000000000000000000000000000000000000000000000000000000\
0000000000000000000000000000000000000000000000000000

(Took an unnoticeable time to calculate using bc on my home computer.
Man, it's been ages since I used bc!)