A computer components & hardware forum. HardwareBanter



PII vs PIII



 
 
#111  October 14th 03, 08:59 PM  SIOL


"w_tom" wrote in message
...

SIOL - I was probably building computer systems (at
component level with soldering iron) before you were even
born.


And that gives you some kind of seniority?
Just out of curiosity: what have you actually built by yourself?


Apparently you don't even understand the priority
system of task execution.


If there are not enough CPU cycles and programs demand impossible
combinations of CPU loads, then no prioritisation in the world can help you.

High priority task gets processed
immediately at the expense of a compiler program.


What a load of crap. Once tasks get executed, the compiler that compiled them no longer has any say in what gets executed and how.


And no
processor puts up a "do not disturb" sign.


Even interrupts have their priorities. Not every interrupt gets serviced the moment it arrives.
Hence the "do not disturb" parallel. I still think it's quite appropriate...


Task is only
executed, at most, for a prescribed amount of time - and then
another task is taken up. Or current task is immediately
interrupted to perform a higher priority task. No task can
"camp out" on a microprocessor as in SIOL's Point #2.


On Linux, AFAIK the kernel has these rights. It can reserve CPU time (for I/O etc.) at will.
On 2.6 this is somewhat remedied, I believe. That was the whole point of having RTOS kernels: the "normal" ones weren't deterministic enough...


There
is even this little thing called time slicing. Also nowhere
was mention of "random choice". Where out of the blue did
"random choice" come from?


I meant to say that a program can have a say in which CPU(s) it gets executed on.
It need not strictly be the kernel's choice, which from an outside observer's perspective could look pseudorandom.
That is, without knowing the variables that affected the kernel's decision, it would seem random...

BTW, what about time slicing? The first time I encountered it was when fiddling with my first Sinclair QL; IIRC it was something like 1989. Sinclair was so proud of its multitasking capabilities...
So what is so magic about time slicing, or job scheduling as Sinclair liked to put it?
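There is no magic in it, really; the idea is simple enough to sketch in a few lines (a toy round-robin model only, not any real kernel's scheduler; the task names, work units and quantum below are all invented):

```python
from collections import deque

def round_robin(tasks, quantum):
    """Toy round-robin scheduler. Each task is (name, remaining_work).
    A task runs for at most `quantum` units per turn, then goes to the
    back of the queue. Returns task names in completion order."""
    queue = deque(tasks)
    finished = []
    while queue:
        name, remaining = queue.popleft()
        remaining -= quantum                     # run for one time slice
        if remaining <= 0:
            finished.append(name)                # done within this slice
        else:
            queue.append((name, remaining))      # preempted, requeued
    return finished

# Three jobs needing 3, 5 and 2 units of CPU, with a quantum of 2:
print(round_robin([("burn", 3), ("divx", 5), ("ui", 2)], 2))  # → ['ui', 'burn', 'divx']
```

Note how the short "ui" job finishes first even though it was queued last to start running: that is the whole responsiveness argument for time slicing in one example.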



Because you don't fully understand how a preemptive
multitasking OS works, then you think a dual processor system
should be more responsive. If multiprocessor systems worked
as you described, then yes, the multiprocessor system would be
more responsive. But preemptive MT does not work as
described. Processors are constantly taking up new tasks even
when the current task is not completed.


What is new about that? Sure they are. They execute job after job until the next scheduler interrupt. So what?


High priority or real
time tasks - that make for system interface responsiveness -
are processed immediately.


O.K. Let's say I do:

- burn a CDR
- watch a DivX
- do some window manipulations (resizing etc.) with my mouse

To the system, all these tasks are high priority. So how does it decide what to service first: the interrupt for filling the next few sectors of the CDR, dealing with mouse clicks, or preparing the next few frames of MPEG4 for display?

Let's leave aside the extra bazillion of processes with lower priority for a moment and concentrate just on the "big ones"...
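One plausible answer (a toy sketch only; the priority numbers here are invented for illustration, and real kernels are far more involved than this): keep the runnable tasks in a priority queue and always dispatch the highest-priority one that is ready.

```python
import heapq

def dispatch(ready):
    """Return the highest-priority runnable task (lowest number wins)."""
    heapq.heapify(ready)
    _, task = heapq.heappop(ready)
    return task

# Invented priority values - not real kernel numbers:
ready = [
    (0, "cdr_burner_interrupt"),   # a buffer underrun would ruin the disc
    (1, "mouse_click"),            # interactive, must feel instant
    (2, "mpeg4_decode_frames"),    # soft deadline: next frame due in ~40 ms
]
print(dispatch(ready))  # → cdr_burner_interrupt
```

In this sketch the burner interrupt always wins; ties between equally urgent tasks are exactly where the interesting scheduler design decisions live.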


Difference: a faster processor
means that real time task will be picked up and completed
quicker - which is why a 600 Mhz processor will finish


I don't fully understand this. A real-time task either gets processed in time or it doesn't.
What do you mean to say, that with a fast P4 you can watch The Matrix in 15 minutes?

Or that a 600 MHz CPU will have extra time "to spare" in each timeslice?

Even if so (and your OS can't necessarily make sensible use of this), a 600 MHz CPU does not switch between tasks 2x faster than a 300 MHz one. It is 2x faster only with the data in cache. But in real life its L1 and L2 caches are full of tasks that compete for execution, and a task switch costs quite a few external accesses.
And DRAM doesn't get twice as fast every 18 months like CPUs allegedly do.
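To put that in rough numbers (the 40% stall fraction below is an assumption for illustration, not a measurement of any real machine):

```python
def effective_speedup(stall_fraction, clock_speedup):
    """Amdahl-style estimate: only the non-stalled fraction of the work
    scales with the clock; DRAM stalls take the same time regardless."""
    return 1.0 / (stall_fraction + (1.0 - stall_fraction) / clock_speedup)

# If (hypothetically) 40% of a task switch is spent waiting on DRAM,
# doubling the clock (300 MHz -> 600 MHz) buys only about 1.43x overall:
print(round(effective_speedup(0.4, 2.0), 2))  # → 1.43
```

The larger the memory-bound fraction, the less the faster clock helps, which is the point being made about task switches living outside the cache.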


But again, this is speculation. You have not provided
numbers for your claims AND not even provided a research
study. Therein lies the problem. Whether dual processor
system is more or less responsive is irrelevant. You have
again only provided speculative theories; some not even based
in how preemptive MT works. Then you claim those speculative
theories prove your personal, subjective, observation. You
have only an opinion which you misrepresent as scientific
fact. It is what we call junk science reasoning.


I say that THIS WORKS FOR ME! Really, that is all I'm saying.
Just today I have burned some 250+ CDRs and printed some 400+ CD stickers.
Not ONE CD lost, on three burners, and with nice Ethernet traffic.
All this while doing other things on this machine. What more do I need?
Dhrystones, SiSoft Sandra, etc.?
Why? I have tried a uni-CPU board in this machine and I have tried a dual. The dual works better.
It works as well as I need it to! That is all that counts with real-time machines!

I don't run from numbers but getting them will cost me downtime and money.
For what ?
To get a Quake framerate ?

The thing does what it is supposed to do, and all I'm telling you is that for me a dual CPU has its value.



I am not saying the single high speed CPU is more or less
responsive than slower, multiprocessor system. I am saying
you do not have numbers or even a study to make your claims.


Sure I do. Here are some numbers:

With a uni-CPU 1.7 GHz Tualatin I couldn't burn ONE CDR without ruining it. Now I can ALWAYS burn a CDR.

The difference in performance is 100%, and let's not even talk about a ratio...

How's that for a study ?


Even worse, because you could not support your claims with
numbers, then you posted insults at Lane Lewis, et al.
Personal insults are a symptom of junk science reasoning.


What insults? Can you find at least one?
How about his/her insults?


Your only proof is your emotional opinion of how you 'feel'
the dual processor system works. And you did not even
demonstrate that both single and dual processor systems are
equivalently designed - have same memory capacity - same bus
speeds - same video subsystem. Just more reasons why what you
'felt' is actually nothing more than speculation.


Some more crap from you. I DID say that the only things I exchanged were the board & CPU.
And I DID give a detailed explanation of the configurations.

Take another look and tell me what else you want to know about the configurations.


BTW, experience with current technology preemptive
multitasking OSes would indicate that 486 CPU cannot run XP.
IOW understanding how preemptive MT OSes works was not
demonstrated, AND experience is lacking. SIOL is also not
familiar with hardware required for an XP system.


"SIOL" was just putting someone else's claim to the test, about a 486 being responsive and therefore fast.
Your attention to detail is a bit shallow.

That again
is my point. Insufficient background (and numbers) to make
those claims. Conclusions are based on junk science
reasoning. Reasoning only good enough to express a personal
opinion - a relationship unique to that one person's machines.


We can't all be blessed with your deeper understanding of the universe and
everything in it...


#112  October 14th 03, 09:17 PM  SIOL


"John-Paul Stewart" wrote in message
...
Steve Wolfe wrote:


Now, use an SMP machine. One CPU gets hammered while servicing the
gigabit NIC, the other CPU's free to run the app receiving the data, so
you get next to its full ability for processing data - say, 80%.

Funny you should mention this, Steve. I was about to post very similar
information about an hour ago, but decided against prolonging this
thread. However, here it is, just to prove your point.

Earlier today I tried a rather informal test on my dual 1GHz P-III
system with a GigE card in it. I hammered it with one gigabyte of data
coming in over the network and monitored the server with xosview.
During data transfer one CPU or the other was at greater than 90% CPU
utilization, usually 100%. (This is "system" time, not user
applications.) Just from servicing interrupts from the gigabit card.
Without a second CPU, the machine would have been crippled by the
interrupt flood.

Same thing happens in the other direction, too. Transmitting that data
causes CPU utilization to go through the roof.

Of course, this is just one example. I'm sure somebody will cry: "but
those are rare circumstances that'll lead to that situation". Sure.
But there are situations where the same theory applies. When you
consider all such situations, they are surprisingly common.


I was lurking on comp.sys.sinclair, and one guy who is doing firmware for a TCP/IP layer on an Ethernet interface for the QL ran into an interesting problem. IIRC the interface has sufficient buffer memory, so the CPU doesn't have to poll it too often.

This is good, since he uses the QL's interrupt that fires every 20 ms. In 100 Mbit Ethernet terms, 20 ms is a long time to wait. But since he had a deep enough buffer, he did not worry.

But then he ran into a problem: the Ethernet card (normally) understands only Ethernet frames. To the card, TCP/IP is just data. But the TCP protocol demands an acknowledgement after each received frame. There can of course be some window between the last received and last acknowledged frame, but in practice it has been shown that no sender is willing to send frame after frame and blindly wait more than 20 ms for the first acknowledgement to come.

So he had an implementation that ran perfectly correctly, it was just a little slow. It could transfer some 50 packets per second at 1.5 kB each = some 70 kB/s ;o)

Since the core of TCP/IP has to be done by the CPU, I was kind of expecting this. It's bad enough at 100 Mbit, and it has got to be a killer at 1 Gbit. I was constantly hoping that some standard solution might emerge - like an Ethernet chip that also understands IP, for example - taking care of the elementary but labor-intensive stuff...
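That ~70 kB/s figure falls straight out of the numbers (assuming one roughly 1.5 kB frame acknowledged per 20 ms interrupt tick, as described above):

```python
FRAME_BYTES = 1500      # roughly one Ethernet frame's worth of TCP payload
POLL_INTERVAL_MS = 20   # the QL's 20 ms interrupt

# One frame sent and acknowledged per tick -> 50 frames per second:
frames_per_second = 1000 / POLL_INTERVAL_MS
throughput_kb_per_s = frames_per_second * FRAME_BYTES / 1000
print(frames_per_second, throughput_kb_per_s)  # → 50.0 75.0
```

So the link isn't bandwidth-limited at all: the 20 ms acknowledgement latency alone caps the transfer at about 75 kB/s, matching the "some 70 kB/s" observed.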




#113  October 14th 03, 09:25 PM  Steve Wolfe


That might be a rather unique situation though, if I needed constant
high-bandwidth on a LAN I'd get a more expensive Gbit adapter as soon
as a 2nd CPU/motherboard.


Ah, but therein lies the tradeoff: the adaptors which reduce interrupt
floods via interrupt coalescing do it at the expense of increased
latency. In some applications, that's alright. In others, it's a killer.

Of course, you can go even higher in cost to use switches which support
jumbo frames, but then you're speaking of such high dollar amounts that you
might as well just use SMP machines to begin with. : )

steve



#114  October 15th 03, 11:31 AM  Steve Wolfe


Since the core of TCP/IP has to be done by the CPU, I was kind of expecting this. It's bad enough at 100 Mbit, and it has got to be a killer at 1 Gbit. I was constantly hoping that some standard solution might emerge - like an Ethernet chip that also understands IP, for example - taking care of the elementary but labor-intensive stuff...


Between interrupt coalescing, jumbo frames, and SMP systems, there are
plenty of ways to make things work well. Interrupt coalescing will do
the trick, but does cost latency. Many good drivers allow adjustment
of just when the coalescing kicks in. With SMP, of course, you can handle
the interrupts without the problems, and jumbo frames let you do 64K packets,
which cuts your interrupt count by a factor of something like 40. However,
SMP systems and/or jumbo frames do cost more money, and interrupt coalescing
costs latency. But if you have the money to throw at it, the problem can be
solved easily. : ) All in all, people who have the hardware to throw at it
and do the right kernel tuning do get a full gigabit/second. Actually
*doing* something with that data is another question; processing and/or
storing 100+ megabytes/second is nothing to sneeze at.
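That "factor of something like 40" checks out arithmetically (comparing the standard 1500-byte Ethernet MTU with the 64K packets mentioned above; exact per-driver interrupt behavior varies):

```python
STANDARD_MTU = 1500        # bytes per ordinary Ethernet frame
JUMBO_PACKET = 64 * 1024   # the 64K packets mentioned above

# Each 64K packet replaces this many standard-sized frames,
# and (roughly) that many per-frame interrupts:
interrupt_reduction = JUMBO_PACKET / STANDARD_MTU
print(round(interrupt_reduction, 1))  # → 43.7
```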


steve



#115  October 15th 03, 05:56 PM  Steve Wolfe


Nice try.
I of course never said that. What's your interpretation of "just about"? Does that mean "every"?

It's time to end this when someone double-dog-dares me.


Of course it is, this time *you* were asked to back up your statements.

steve


 



