Old February 16th 16, 08:38 PM posted to alt.comp.periphs.mainboard.gigabyte
Default Upgrade from Z87X-UD4H-CF mobo, socket 1150 LGA.?

(PeteCresswell) wrote:
I currently have a Z87X-UD4H-CF mobo, socket 1150 LGA, running at 3.5
GHz with 16 gigs of memory. 64-bit Windows 7, running from an SSD.

The Question:
Has hardware progressed yet to where I could put another mobo/CPU in
this box and reap some response time benefits that I would notice
without getting ridiculous cost-wise?

The process starts with a study of Task Manager.


There is clock speed.

There is parallelism.

If a compute problem can use "divide and conquer",
then using more CPU cores might help.

We've run out of scaling on CPU clock rate.

Intel makes small adjustments to IPC (instructions per
clock) each generation, but that is a very steep hill to
climb. Just a couple of days ago, I saw an article about
a technology that attempts to dynamically allocate
resources to increase IPC, whereas Intel assigns
resources manually in their designs. So there are
people still toying in that dimension. But that
might be a 5% per year kind of improvement.

Software designers, on average, are still poor at
finding parallelism. Some problems inherently don't
scale via parallelism. Others (Bitcoin mining, Cinebench,
to some extent 7ZIP) scale nearly perfectly. Video
transcoding doesn't seem to be all that good in the
parallelism dimension.

While running your normal load, with Task Manager in
"One Graph per Core" mode, look at whether all cores
are profitably filled. If the software is poorly written,
with one thread of execution, so that one core is chock
full (running at 100%) and the other cores idle, then a
hardware upgrade would be pointless.
You'd want better software. If the pattern looked like
a 7ZIP pattern, pretty good parallelism, then you might
consider a hardware upgrade, to get 10-25% more perhaps.
And the cost would be disproportionately higher ($800
to $1400 machine).

When studying Task Manager, you have to remember that the
kernel scheduler causes threads of execution to "migrate"
and do so many times per second.

Schedulers in OSes vary, in their policy on this. Modern
Windows assigns a "cost" function to moving, so the threads
of execution no longer jump around purely randomly many times
a second. But with the right mix of programs running, the
Task Manager pattern can be pretty hard to study. So if
you were to say to me "I don't understand what I'm seeing here",
I would agree that the Task Manager display method leaves
a lot to be desired. You have to use your "imagination"
to claim you understand what you're seeing. Think of it as
a multi-dimensional problem with a dimension missing. You're
expected to fill in the gaps, using your vivid imagination.

It's possible for the designer to "force" the threads of
execution to run on particular cores. For example, the
DScaler program my WinTV card uses to put TV on my screen
runs two threads of execution, and arbitrarily puts one
thread on Core0 and the second thread on the highest core
number. When studying the pattern from that one, you would
not need to use your imagination.

The user can also force the program onto a particular core,
but there is no granularity in that control. Task Manager
has "Set Affinity" as a right-click option when in the
"Processes" view. If I start 7ZIP running on all
possible cores, and then use the affinity control to
assign 7ZIP to Core0, all the threads of execution
run on that one core, and I shoot myself in the foot.
There is no ability to manage the internal threads that way.
What I can do with the affinity control, is keep 7ZIP
off one of the cores. If I had a 4 core processor, I
could assign 7ZIP Core0, Core1, Core2, and not assign
it Core3. That keeps it off Core3, and is a way of "reserving"
25% of my four core resources, for some other (unspecified)
purpose. But generally, user control of affinity is a mistake.
I've tried steering things with affinity, and it's usually
pointless. But this should not prevent you from experimenting
yourself. I think on occasion a program will be ill-behaved,
but then, if affinity kills it, it probably would have died
at some random time on you anyway (not thread-safe, perhaps).
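For reference, the affinity control is just a bitmask where bit N set
means "allowed to run on Core N". A small Python sketch (the function
name is mine, for illustration) of the mask for keeping a program off
Core3 on a four-core machine:

```python
def affinity_mask(cores):
    # Bit N set => the process may run on Core N.
    mask = 0
    for core in cores:
        mask |= 1 << core
    return mask

# Reserve Core3: allow only Core0, Core1, Core2.
mask = affinity_mask([0, 1, 2])
print(hex(mask))   # 0x7
```

The same mask, written in hex, is what the cmd prompt's "start /affinity"
switch takes (e.g. "start /affinity 7 7z.exe"), so you can launch a
program pre-restricted instead of right-clicking it in Task Manager
afterwards.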

There used to be some games that would malfunction if you
went from a single-core CPU to a multi-core CPU.
And affinity, and runtime launch controls based on affinity,
allowed us to continue to use those programs, without them
crashing. So that's another example of where affinity paid off.
Not too many people would still play such (old) games.

Good luck,