HardwareBanter forum » General Hardware & Peripherals » Printers

How do you make small pixel photos better on printout?



 
 
#11 - January 30th 04, 09:49 PM - Paul Cooper

On 30 Jan 2004 12:28:16 -0800, (W. W.
Schwolgin) wrote:


Winfried


Fundamentally, this isn't worth the trouble. There simply isn't enough
information to make it worthwhile. The results will always be fuzzy,
and if you somehow did (by resampling and sharpening) make it look
better, it would be full of spurious artifacts.

Paul


Paul,

you are right, in this case there might not be enough information for
a good print. But why do you say "fundamentally"? In my experience the
interpolation method makes a difference. I did a test and printed an
image sized 10 x 15 cm at 300 dpi (about 6.2 MB as TIFF). I printed
the image with Photoshop Elements and with Qimage at 10 x 15 cm. Of
course there was no real difference.
Then I did a second test with the same file and printed at 100 dpi and
a size of 30 x 45 cm. In this case there was a visible difference: the
Qimage print looked smoother and sharper. But nevertheless, a 100 dpi
image will not give a good print.

Winfried



Sorry - you need a little background. Image processing and signal
processing are part of my work, and I tend to take some things for
granted!

Simply put, you can never increase the amount of information present
in the image. Furthermore, a thing called the Nyquist criterion means
that the minimum wavelength present in any sampled dataset (which
means images in 2 dimensions) is twice the minimum sample interval.
Resampling, image processing, whatever can NEVER get round that, as it
is fundamental information theory. So, if you resample and sharpen,
what you are trying to do is to shorten the minimum wavelength in the
image, and that you can't do. So, what is actually happening is that
any information at wavelengths shorter than twice the ORIGINAL sample
interval is totally spurious, an artifact of the processing you have
done and only indirectly related to the original image. Now, you may
produce results that are visually better - but they are not an
accurate reflection of the object that the camera was pointed at. If
you want to produce an accurate rendition of the original, then you
must simply oversample the image using some interpolation algorithm
(bilinear or bi-cubic spline are the commonest), but do NOT attempt to
sharpen the result. Yes, it will look a little fuzzy, but it will be a
more accurate rendition of the information you have.
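
For anyone who wants to try exactly that, here is a minimal sketch in
Python (assuming a recent Pillow is installed; the filenames and the 3x
factor are placeholders):

from PIL import Image

img = Image.open("input.tif")
w, h = img.size
# Bicubic oversampling only - the result may look slightly soft, but no
# spurious detail is invented beyond the interpolation kernel itself.
big = img.resize((w * 3, h * 3), resample=Image.Resampling.BICUBIC)
big.save("output.tif")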

Paul
#12 - January 31st 04, 12:50 AM - Kennedy McEwen

In article , Paul Cooper
writes
Sorry - you need a little background. Image processing and signal
processing are part of my work, and I tend to take some things for
granted!

Too much apparently!

Simply put, you can never increase the amount of information present
in the image. Furthermore, a thing called the Nyquist criterion means
that the minimum wavelength present in any sampled dataset (which
means images in 2 dimensions) is twice the minimum sample interval.
Resampling, image processing, whatever can NEVER get round that, as it
is fundamental information theory. So, if you resample and sharpen,
what you are trying to do is to shorten the minimum wavelength in the
image, and that you can't do. So, what is actually happening is that
any information at wavelengths shorter than twice the ORIGINAL sample
interval is totally spurious, an artifact of the processing you have
done and only indirectly related to the original image.


Rubbish - that is not *all* that is happening, and the difference
between your description and what *is* happening can result in
considerable effective image enhancement as viewed on the final print.

Like you claim, I also have several decades of experience in signal
processing - specifically of and for imaging sensors and systems. You
should, given your statement, be aware of the MTF of the original sensor
and subsequently the pixels produced. With such awareness you should
also recognise that sharpening after up-sampling enhances the MTF of the
*original* data, not just the interpolated content - which is merely
extending the spatial frequency scale with near-null data (how near
depending on the quality of the up-sampling algorithm used). Indeed,
simply comparing the two most common upsampling algorithms, bilinear and
bicubic, the latter produces less spurious (super-Nyquist) artefacts
whilst retaining more real (sub-Nyquist) data than the former!

Consequently, not only can sharpening of interpolated data improve the
contrast of the *real* image content, it can appear much more natural
and smoother than sharpening *prior* to interpolation - a process which
knocks your *artefact enhancement only* statement into a cocked hat!
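
The two orderings are easy to compare for yourself - a rough sketch in
Python with Pillow (assumed; the radius and percent values are
illustrative, not tuned):

from PIL import Image, ImageFilter

img = Image.open("input.tif")
up = (img.width * 4, img.height * 4)
usm = ImageFilter.UnsharpMask(radius=2, percent=100, threshold=0)

# Route A: upsample first, then sharpen the interpolated result.
after = img.resize(up, Image.Resampling.BICUBIC).filter(usm)
# Route B: sharpen first, then let the interpolator smear the halos.
before = img.filter(usm).resize(up, Image.Resampling.BICUBIC)

after.save("sharpen_after.tif")
before.save("sharpen_before.tif")

Print both at the same size and judge by eye.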

Now, you may
produce results that are visually better - but they are not an
accurate reflection of the object that the camera was pointed at.


As above, they can be a much more natural reflection of the scene than
implementing the process the other way around. You cannot get more than
you have to begin with, but you can certainly present it in the best
possible way for the eye to assimilate.

If
you want to produce an accurate rendition of the original, then you
must simply oversample the image using some interpolation algorithm
(bilinear or bi-cubic spline are the commonest), but do NOT attempt to
sharpen the result.


More rubbish! Learn the principles of pre-emphasis before making such
ludicrous claims. Spouting theory which directly contradicts
observations only demonstrates a failure to understand the process(es)
involved at sufficient depth!
--
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he's ****ed.
Python Philosophers (replace 'nospam' with 'kennedym' when replying)
#13 - January 31st 04, 01:09 AM - Carrie Lyons

Kennedy McEwen wrote:
Paul Cooper writes:

Simply put, you can never increase the amount of information present
in the image.

Now, you may
produce results that are visually better - but they are not an
accurate reflection of the object that the camera was pointed at.


If you want to produce an accurate rendition of the original, then you
must simply oversample the image using some interpolation algorithm
(bilinear or bi-cubic spline are the commonest), but do NOT attempt to
sharpen the result.


More rubbish! Learn the principles of pre-emphasis before making such
ludicrous claims. Spouting theory which directly contradicts
observations only demonstrates a failure to understand the process(es)
involved at sufficient depth!


Cooper is technically correct (amount of information) yet real-world
wrong. People know what things look like, and image manipulation does
indeed add information to the photograph. A person looks at the photo
and adjusts it. The person added the information.

Red eye correction adds information, for example. Cooper is using
a purist definition of information that is misapplied here.
#14 - January 31st 04, 02:51 AM - Kennedy McEwen

In article , Carrie Lyons
writes
Kennedy McEwen wrote:
Paul Cooper writes:

Simply put, you can never increase the amount of information present
in the image.

Now, you may
produce results that are visually better - but they are not an
accurate reflection of the object that the camera was pointed at.


If you want to produce an accurate rendition of the original, then you
must simply oversample the image using some interpolation algorithm
(bilinear or bi-cubic spline are the commonest), but do NOT attempt to
sharpen the result.


More rubbish! Learn the principles of pre-emphasis before making such
ludicrous claims. Spouting theory which directly contradicts
observations only demonstrates a failure to understand the process(es)
involved at sufficient depth!


Cooper is technically correct (amount of information) yet real-world
wrong. People know what things look like, and image manipulation does
indeed add information to the photograph. A person looks at the photo
and adjusts it. The person added the information.

Red eye correction adds information, for example. Cooper is using
a purist definition of information that is misapplied here.


Unfortunately that is not the case in this instance. Whilst you cannot
add image information that is not present in the original, you can
certainly make better use of it, making it pass much more effectively
through the remaining visual system components (display or printer, eye,
retina etc.). This is exactly the effect that Winfried
described. One obvious, and well exercised, mechanism to do this is to
pre-emphasise the high spatial frequency components of the image that
would otherwise be further attenuated by the MTFs of those system
components. One such pre-emphasis is general sharpening, part of which
may be integral to the interpolation method used, although other
techniques better matched to the transfer functions of the imaging
system components are also available. To suggest that such sharpening
not only does not achieve this improved information content at the
perception centre (which is, after all, the only place that the image
information content really matters), but should not be implemented at
all on interpolated data because it can only result in excessive
spurious artefacts, is complete bunkum in both the technical sense and
real world experience - purist or not.
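
The pre-emphasis idea fits in one function - a sketch in Python assuming
NumPy and SciPy are available (sigma and gain are illustrative values,
not matched to any particular printer MTF):

import numpy as np
from scipy.ndimage import gaussian_filter

def pre_emphasise(img, sigma=1.5, gain=0.7):
    # Boost the high spatial frequencies that later MTF stages
    # (printer, eye) will attenuate; img is a 2-D float array.
    img = img.astype(float)
    low = gaussian_filter(img, sigma)   # low-frequency content
    high = img - low                    # high-frequency residual
    return np.clip(img + gain * high, 0, 255)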
--
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he's ****ed.
Python Philosophers (replace 'nospam' with 'kennedym' when replying)
#15 - January 31st 04, 09:36 AM - Carrie Lyons

Kennedy McEwen wrote:
writes:

Cooper is technically correct (amount of information) yet real-world
wrong. People know what things look like, and image manipulation does
indeed add information to the photograph. A person looks at the photo
and adjusts it. The person added the information.

Red eye correction adds information, for example. Cooper is using
a purist definition of information that is misapplied here.


Unfortunately that is not the case in this instance. Whilst you cannot
add image information that is not present in the original, you can
certainly make better use of it, making it pass much more effectively
through the remaining visual system components (display or printer, eye,
retina etc.).


My concept of "adds information" does not require a higher resolution
photo, nor even an algorithm interpolating the data.

Fixing up "red eye" was an example I gave. The person taking the
picture remembered what the eye color was and used PhotoShop to
fix up the eyes.

They added information to the photo. It's a new photo.

If a small patch of nearby skin was duplicated and moved over a zit,
information was added to the photo. It's a new photo.

If you greatly enlarge the dimensions of a photo in PhotoShop, solid
colors stay solid all the way down to the pixel level, as if it had
vector underpinnings. Larger photo, information was added. Resolution
was gained in the solid color areas.
#16 - January 31st 04, 01:39 PM - Kennedy McEwen

In article , Carrie Lyons
writes

My concept of "adds information" does not require a higher resolution
photo, nor even an algorithm interpolating the data.

Perhaps, but not within the context of the subject title of this thread
and even outside of that I disagree with your examples.

Fixing up "red eye" was an example I gave. The person taking the
picture remembered what the eye color was and used PhotoShop to
fix up the eyes.

They added information to the photo. It's a new photo.

Looking at the complete operation however, information has been removed
from the original image - the eyes really were red because the flash
illuminated the blood vessels on the subject's retina - and replaced
with a view of what the user expected. Overall, as much information has
been added as subtracted. The image has indeed changed, but contains no
more information than the original.

If a small patch of nearby skin was duplicated and moved over a zit,
information was added to the photo. It's a new photo.


In this case, information has been duplicated and hence the image
actually contains *less* information than it had before.

If you greatly enlarge the dimensions of a photo in PhotoShop, solid
colors stay solid all the way down to the pixel level, as if it had
vector underpinnings. Larger photo, information was added. Resolution
was gained in the solid color areas.


ABSOLUTELY NOT!!! You are *way* off base here!

Resolution is the ability to distinguish fine features in the image.
Formally, "resolution" is defined as the minimum separation between
point sources necessary to distinguish them as individual points rather
than a single extended object. On a digital image, the minimum
theoretical separation of such points is infinitesimally greater than a
single pixel pitch - since a single pixel is required to separate them
and the points could be present at the inner edge of the adjacent pixels
in the original scene - any closer and that separating pixel is lost and
they become unresolved. Hence resolution has, in salesman speak or
other forms of slang, become synonymous with pixel density, but they are
completely different concepts. Pixel density can, and often is, higher
than resolution - it cannot, however, be lower than resolution.

In your example, *NO* resolution was added in the solid colour areas *AT
ALL*, since it is still one single large extended object and no detail
exists within it. Without detail, at whatever scale, there is *no*
resolution!
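
The definition is easy to try numerically - a toy sketch with NumPy
(assumed), using a one-dimensional row of pixels:

import numpy as np

resolved   = np.array([0, 255, 0, 255, 0])  # one dark pixel between peaks
unresolved = np.array([0, 255, 255, 0, 0])  # peaks on adjacent pixels

# Enlarging the unresolved pair adds pixels but no detail - it remains
# one extended blob, just wider:
print(np.repeat(unresolved, 4))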

No amount of enlargement, filtering, interpolation or sharpening can
increase the resolution of the image. If any two points in the scene
are just unresolved in the image then no amount of processing will ever
resolve them. However, if the same two points are just resolved in the
image then processing can increase the local contrast between them and
the background which separates them, thus ensuring that they remain
separable after subsequent imaging stages such as display, printing and
even the viewing eye.

Each of these components reduces the contrast of fine detail relative to
the overall image contrast. How much they do this is defined by their
modulation transfer function (MTF) which plots the percentage of
contrast reproduced by the component against spatial frequency,
representing fineness of detail. Hence objects which were just resolved
with the minimum detectable contrast would have their contrast reduced
further by each stage of the imaging process ensuring the contrast falls
below the level of perceptibility and they become unresolved.
Pre-emphasis by sharpening the image, whether before or after any
interpolation stage, reduces the possibility that resolved objects
become unresolved by later steps in the viewing process.

Interpolation itself has an MTF, which may reduce the contrast of the
resolved detail below a perceptible level - thus reducing the resolution
of the image. Indeed, the main image quality difference between the
common interpolation algorithms is how high the MTF is maintained below
the Nyquist limit compared with how low it is kept above that limit,
thus avoiding spurious artefacts. The ideal interpolation algorithm
would have an MTF which was completely flat up to Nyquist and zero above
it. Nearest neighbour interpolation has an MTF which is a sinc
function, ie. sin(pi.a.f)/(pi.a.f) where a is the pixel pitch and f is
the spatial frequency. If you plot this curve out you will see that it
has a high MTF up to the Nyquist limit (f=1/2a) but also a significant
amplitude above it, and the specific characteristic gives rise to the
sharp edges between original pixels. Bilinear interpolation has an MTF
of the form (sin(pi.a.f)/(pi.a.f))^2. Plot this curve and you will see
less MTF above the Nyquist limit, hence less spurious artefacts, but
also a lower MTF below the limit, so less information is retained and
some resolved detail in the original becomes unresolved. Bicubic
interpolation has an MTF which is much flatter below the limit and
almost zero above it, whilst Lanczos interpolation is *designed* to
achieve the closest approximation to the ideal flat-and-cut shape that is
possible with the original pixel density. Fractal interpolation is
unique in that it has no fixed MTF but instead creates new data based on
the scaled spatial spectral characteristics of the original image, thus
creating the *impression* of new information and increased resolution.
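
Those formulas can be checked numerically - a sketch with NumPy
(assumed), taking the pixel pitch a = 1 so that Nyquist sits at f = 0.5:

import numpy as np

f = np.linspace(0.01, 1.0, 100)
sinc = np.sin(np.pi * f) / (np.pi * f)

mtf_nearest  = np.abs(sinc)  # nearest neighbour: sinc
mtf_bilinear = sinc ** 2     # bilinear: sinc squared

i = np.argmin(np.abs(f - 0.5))
print("at Nyquist: nearest %.2f, bilinear %.2f"
      % (mtf_nearest[i], mtf_bilinear[i]))  # ~0.64 versus ~0.41

Nearest neighbour keeps more contrast below Nyquist but also passes more
above it - exactly the trade-off between retained detail and spurious
artefacts described above.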

Excluding fractal interpolation, all of these interpolation algorithms
can be converted from one to the other or to intermediate forms by
combining them with one or more sharpening or softening filters,
providing sufficient numerical precision is retained in the computation.
Indeed any such complex interpolation scheme can be implemented in a
single stage filtering process to ensure that all of the resolution
captured in the original image is maintained right up to the point of
perception.

Some of the surveillance systems, which require all of their available
resolution to be clearly reproduced and presented to the viewer with
minimum artefacts, that were recently deployed in Iraq, and previously
used in Afghanistan and Bosnia, use this combined filter technique
directly. They enlarge the image produced by a digital sensor for
optimum viewing whilst enhancing the fine detail present, all in a
*single* interpolation and sharpening stage which is exactly the
mathematical equivalent of image enlargement using bicubic
interpolation followed by a specific sharpening filter. I know, because
*I* designed them that way almost 15 years ago!
--
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he's ****ed.
Python Philosophers (replace 'nospam' with 'kennedym' when replying)
#17 - January 31st 04, 09:18 PM - Carrie Lyons

Kennedy McEwen wrote:
writes

Fixing up "red eye" was an example I gave. The person taking the
picture remembered what the eye color was and used PhotoShop to
fix up the eyes.

They added information to the photo. It's a new photo.


Looking at the complete operation however, information has been removed
from the original image - the eyes really were red because the flash
illuminated the blood vessels on the subject's retina - and replaced
with a view of what the user expected. Overall, as much information has
been added as subtracted. The image has indeed changed, but contains no
more information than the original.


To me, the original photo is *forever unchanged*.

When a new image is created from that correcting the eye color,
information has certainly been added, creating a more faithful
reproduction of the original person. Information not recorded
in the original photo was added from the "memory bank" of the
person taking the picture.

Clearly, information was added to create the new image.


If a small patch of nearby skin was duplicated and moved over a zit,
information was added to the photo. It's a new photo.


In this case, information has been duplicated and hence the image
actually contains *less* information than it had before.


A new set of information is conveyed. That the new image does not
contain some of the original image does not mean there is less
information, just less information from the original image.

Let's say the zit was removed using a small skin patch from
another photo - the new image is now a photo collage, technically.

If it's a new creation, information was added because something
new is conveyed.

If you have seven original (starting) photos of seven people, use one
as the base background, cut out the other six people and collage them
onto the base so that all seven people can be seen plus some of the
remaining background of the base image: the new image has new
information, even if it lost most of the pixels from the original images.

If you greatly enlarge the dimensions of a photo in PhotoShop, solid
colors stay solid all the way down to the pixel level, as if it had
vector underpinnings. Larger photo, information was added. Resolution
was gained in the solid color areas.


ABSOLUTELY NOT!!! You are *way* off base here!

Resolution is the ability to distinguish fine features in the image.


Take the original photo and the greatly enlarged one.

Back up twenty feet. ;-)

More detail is delivered to the eye by the larger image.

Is not 'more delivered detail' a definition of 'resolution?'
#18 - January 31st 04, 11:51 PM - Kennedy McEwen

In article , Carrie Lyons
writes

To me, the original photo is *forever unchanged*.


Quite, however, changing the photo content does not increase the amount
of information contained in it - unless you put more information in than
you take out. What you consider to be useful or important information
in this context is irrelevant, it does not make that information any
more significant in terms of the total amount of information present,
even if it is of more *importance* to you personally. So replacing your
old boyfriend with your new one in your holiday snaps does not change
the total amount of information in the photos, it just changes the
content of the photo.

There are 26 letters in the western alphabet. Changing each of these
letters for a number, from 0 to 25, is a change of medium - but *NOT* of the
total amount of information contained in any message composed of those
letters! Any message written using those 26 letters can equally well be
written by using 26 numbers - it might not mean much to you when you
read the numbers instead of the text, but those numbers can be
unambiguously converted back into text and the only information that
they will produce is the same original text. A sequence of 1 million of
those 26 numbers can contain no more, and no less, information than a
sequence of 1 million letters. This has been the basis of every
cryptographic system since the Abyssinians!
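
The substitution round-trips exactly - a few lines of Python (spaces
dropped, as in a classical 26-letter cipher alphabet):

text = "ATTACKATDAWN"
nums = [ord(c) - ord('A') for c in text]          # A->0 ... Z->25
back = ''.join(chr(n + ord('A')) for n in nums)   # recovered exactly
print(nums, back)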

Indeed, expanding the number of letters to include punctuation marks,
page formatting data such as line feeds and page feeds etc. results in
the 7-bit ASCII code that everything you are currently reading depends
on for it to get from my keyboard, through my computer, several thousand
miles of copper and fibre, a few servers, your computer and onto your
display! At no time in that process is the volume of information in the
message changed but how it is represented changes many times - from
individual key presses to voltages on 7 individual lines, to sequences
of voltages on a single line, to pulses of light down a fibre, back to
voltages in copper and finally to a single electron beam scanning the
back of the phosphor on your screen to produce the text image you read
in light patterns. Many changes of representation - no change in information.
Throughout that rather obscure route, extra information is added to the
core content at various stages - parity bits, message headers, Internet
packet identifiers etc. but that is also all removed at other stages in
the process as well, with the end result that the message you read has
the same total amount of information (and, in this instance the same
information content, albeit presented on a different medium) as I put
into it in the first place.

If you wish to argue this, you would do well to get a basic grounding in
Information Theory, starting with Claude Shannon's original paper "A
Mathematical Theory of Communication", which laid the foundations for the
entire topic yet is a very readable text even for the layman! Originally
written with telecommunications in mind, it is just as relevant to
imaging and image processing.

When a new image is created from that correcting the eye color,
information has certainly been added, creating a more faithful
reproduction of the original person. Information not recorded
in the original photo was added from the "memory bank" of the
person taking the picture.

Clearly, information was added to create the new image.

There is no question that information was added - replacing information
that was already there. Where that information came from is irrelevant
to the total amount of information contained in the image. The sum total
of that subtraction and addition is "no change" in the total amount of
information. I assume you understand the concept of subtraction,
because you certainly seem to be ignoring it in your comments so far!

If a small patch of nearby skin was duplicated and moved over a zit,
information was added to the photo. It's a new photo.


In this case, information has been duplicated and hence the image
actually contains *less* information than it had before.


A new set of information is conveyed.


No, the "set" of information is reduced - the same information as in the
original, less the patch that has been replicated and information
stating where the duplicated patch comes from. A different image is
certainly conveyed; however, since some of that information is duplicated,
the total amount of information conveyed is reduced. Furthermore,
a sufficiently intelligent lossless compression algorithm could identify
the duplicate information and transmit the image faster or store it in
less space. For example, the image could comprise 1 million pixels, but
the zit perhaps only 1000 pixels. After duplication the entire image
could be described in terms of 999,000 pixels and a few bytes stating:
a) section of the image duplicated
b) start x pixel of duplication area
c) start y pixel of duplication area
d) x size of duplication area
e) y size of duplication area
f) x source of duplication area
g) y source of duplication area

For a 1000x1000 pixel image with a maximum duplication size of 500
pixels, that corresponds to around 59 additional bits of information,
one bit being required to identify that a section of the image is to be
duplicated - in return for saving 3000 bytes of information (assuming
8 bits per colour channel, i.e. 3 bytes per pixel) - in short, the image
contains 23,941 fewer bits of
information! In practice, no general image coding scheme would be quite
as specific and hence as efficient, however some image formats will
achieve a significant proportion of that possible saving.
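
The arithmetic can be checked in a few lines of Python:

from math import ceil, log2

coord_bits = ceil(log2(1000))  # 10 bits per x or y coordinate
size_bits  = ceil(log2(500))   # 9 bits per patch dimension (max 500 px)
flag_bit   = 1                 # "a patch is duplicated" marker

added = flag_bit + 4 * coord_bits + 2 * size_bits  # 1 + 40 + 18 = 59
saved = 1000 * 3 * 8           # 1000 pixels at 3 bytes each = 24000 bits
print(added, saved - added)    # 59 bits added, 23941 fewer bits overall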

That the new image does not
contain some of the original image does not mean there is less
information, just less information from the original image.

Let's say the zit was removed using a small skin patch from
another photo - the new image is now a photo collage, technically.

If it's a new creation, information was added because something
new is conveyed.

Conversely, something *LESS* is also conveyed - the image of the zit!
Again, the information has changed but that does *NOT* mean that more
information is contained in the image. Information is added to replace
information that is already present. Once again, the sum total is "no
change". However, if the replacement data was cloned from another patch
of the same image, as is normal practice in cleaning up images, then
LESS total information certainly is contained in the image.

If you have seven original (starting) photos of seven people, use one
as the base background and cut out the other six people and collaged
them onto the base such that seven people could be seen plus some of
the remaining background of the base image, the new image has new
information, even if it lost most of the pixels from the original image.


A new image does NOT mean more information - background, as any IP
lawyer will inform you, is VERY IMPORTANT INFORMATION!!

Take the original photo and the greatly enlarged one.

Back up twenty feet. ;-)

More detail is delivered to the eye by the larger image.

Not compared to the small photo viewed at close range it doesn't! Where
does the extra detail come from? Magic'd out of thin air? You really
are way out of your depth and sinking fast if you think that you get
more information just from a larger picture!

Try this little test. Take one of your digital images and open it in
Photoshop (or any image package you care to mention). Set the
"resolution" to 1000ppi and save a copy of the file in either an
uncompressed format or in a loss-less compressed format. Now change the
"resolution" to 1ppi (no interpolation, just a simple rescaling of the
nominal pixel size). Now
save that image in the same format as before. Compare the size of the
two files - they are identical, yet one image occupies 1 million times
as much area as the other! The files are the same size because the
total amount of information contained by each image is IDENTICAL!
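
The same experiment can be scripted - a sketch with Pillow (assumed;
the dpi tag is pure metadata, so the pixel data is untouched):

from PIL import Image

img = Image.open("input.tif")
img.save("at_1000ppi.png", dpi=(1000, 1000))
img.save("at_1ppi.png", dpi=(1, 1))
# Both files hold identical pixels and differ by at most a few metadata
# bytes, while the nominal print areas differ a million-fold.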

Is not 'more delivered detail' a definition of 'resolution?'


No, resolution is only one aspect of "delivered detail" or of the total
information. Ever heard of signals and noise?
--
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he's ****ed.
Python Philosophers (replace 'nospam' with 'kennedym' when replying)
#19 - February 1st 04, 09:59 AM - Carrie Lyons

Kennedy McEwen wrote:
writes

To me, the original photo is *forever unchanged*.


Quite, however, changing the photo content does not increase the amount
of information contained in it - unless you put more information in than
you take out. What you consider to be useful or important information
in this context is irrelevant, it does not make that information any
more significant in terms of the total amount of information present,
even if it is of more *importance* to you personally.


Sure it does: how long have you been living amongst us humans, KM?

There are 26 letters in the western alphabet. Changing each of these
letters for a number, from 0 to 25, is a change of medium - but *NOT* of the
total amount of information contained in any message composed of those
letters! Any message written using those 26 letters can equally well be
written by using 26 numbers - it might not mean much to you when you
read the numbers instead of the text, but those numbers can be
unambiguously converted back into text and the only information that
they will produce is the same original text.


No, you're wrong. The new numbers (or say a new alphabet) might change
the relationships. The letters C and S and K might be reduced to two
symbols representing just the S and K sounds, freeing up a symbol to
represent something new.

Same number of symbols, something new and more expressive though.

Some encryption codes purposely do the reverse: they map the input
to fewer characters and when decrypted the message has occasional
misspellings, none of which are bad enough for a human to not be
able to understand the message.

All sorts of transformations are possible using the same number
of symbols.

Then there's Babel-17.

http://www.amazon.com/exec/obidos/tg...glance&s=books

----

Kennedy McEwen wrote:

Indeed, expanding the number of letters to include punctuation marks,
page formatting data such as line feeds and page feeds etc. results in
the 7-bit ASCII code that everything you are currently reading depends
on for it to get from my keyboard, through my computer, several thousand
miles of copper and fibre, a few servers, your computer and onto your
display! At no time in that process is the volume of information in the
message changed but how it is represented changes many times - from
individual key presses to voltages on 7 individual lines, to sequences
of voltages on a single line, to pulses of light down a fibre, back to
voltages in copper and finally to a single electronic beam scanning the
back of the phosphor on your screen to produce the text image you read
in light patterns. Many changes of content - no change in information.
Throughout that rather obscure route, extra information is added to the
core content at various stages - parity bits, message headers, Internet
packet identifiers etc. but that is also all removed at other stages in
the process as well, with the end result that the message you read has
the same total amount of information (and, in this instance the same
information content, albeit presented on a different medium) as I put
into it in the first place.


Well, that monolithic block of text certainly qualifies for a
Geek-Of-The-Year nomination! (And I've demonstrated you're wrong.)

And no, Path/Message-ID/X-Trace etc are new (not in my original post),
meaningful and retained.

I suppose you think NNTP does a straight N-way propagation too.

----

Kennedy McEwen wrote:
writes

When a new image is created from that correcting the eye color,
information has certainly been added, creating a more faithful
reproduction of the original person. Information not recorded
in the original photo was added from the "memory bank" of the
person taking the picture.


Clearly, information was added to create the new image.

There is no question that information was added - replacing information
that was already there. Where that information came from is irrelevant
to the total amount of information contained in the image. The sum total
of that subtraction and addition is "no change" in the total amount of
information. I assume you understand the concept of subtraction,
because you certainly seem to be ignoring it in your comments so far!


What you said is false. If I opaquely overlay text over someone's
forehead, I've added information regardless of the byte count.

More "signal." Our brains will assume "forehead" under the text.

Or if I secretly added a message in the low order bits but the
photo looked the same to the human eye. The original low order
bits conveyed almost no information, the new ones are a complete
new message. Not all bits are the same, information-wise, because
of the interaction with our brains.

----

Kennedy McEwen wrote:
writes

If a small patch of nearby skin was duplicated and moved over a zit,
information was added to the photo. It's a new photo.


In this case, information has been duplicated and hence the image
actually contains *less* information than it had before.

A new set of information is conveyed.


No, the "set" of information is reduced - the same information as in the
original, less the patch that has been replicated and information
stating where the duplicated patch comes from.


To anyone who had not seen the original image, the entire new image
is all new information.

And if I change a smoking friend's face to all green and put
a word balloon on it saying "I need another cigarette then I'll
feel fine", I have (once again) added information.

That you don't recognize changed bits as new information,
or even changed meanings for 26 symbols, indicates you just
haven't lived among us humans for very long. ;-)

A new image does NOT mean more information - background, as any IP
lawyer will inform you, is VERY IMPORTANT INFORMATION!!


Well, finally you've somewhat recognized meanings in bits.

Uncle Martin got one of his antennae back in his head.

Well, one halfway retracted.

Take the original photo and the greatly enlarged one.

Back up twenty feet. ;-)

More detail is delivered to the eye by the larger image.

Not compared to the small photo viewed at close range it doesn't!


WHOOOSH! If the detail can't be seen at twenty feet then de facto
the detail doesn't reach the eye and so ain't there. Same thing
for satellite photos...if you can't read the detail then the
image is lacking the detail. That a close enough look will see
the detail is irrelevant until you get such an image.

If a tree falls in a forest and there's no human there
to hear it, did it make a noise?

No! Not as far as humans are concerned.

Perceived detail equates to real detail.

Is not 'more delivered detail' a definition of 'resolution?'


No, resolution is only one aspect of "delivered detail" or of the total
information. Ever heard of signals and noise?


See Operations Research, Cybernetics, negentropy via homeostasis.

Your analysis has been leaving out the human factor of incoming
images being interpreted by the brain. One set of bits can convey
significantly different and *new* information than another set of bits,
i.e. the 'words added to their forehead' example, or the steganography
example, for us humans.
#20 - February 1st 04, 02:10 PM - Kennedy McEwen

In article , Carrie Lyons
writes
Kennedy McEwen wrote:
writes

To me, the original photo is *forever unchanged*.


Quite, however, changing the photo content does not increase the amount
of information contained in it - unless you put more information in than
you take out. What you consider to be useful or important information
in this context is irrelevant, it does not make that information any
more significant in terms of the total amount of information present,
even if it is of more *importance* to you personally.


Sure it does: how long have you been living amongst us humans, KM?

Obviously a lot longer than you! Weighting information in terms of what
is considered important is the basis of lossy compression systems, an
imaging example of which is Jpeg. It is called "lossy" because
information is lost - even the Joint Photographic Experts Group do not
consider that this declassifies it from being information. You, on the
other hand, demonstrate absolute arrogance and with it a complete
inability to discriminate between "information" and "assumption"!

There are 26 letters in the western alphabet. Changing each of these
letters for a number, from 0 to 25, is a change of medium - but *NOT* of the
total amount of information contained in any message composed of those
letters! Any message written using those 26 letters can equally well be
written by using 26 numbers - it might not mean much to you when you
read the numbers instead of the text, but those numbers can be
unambiguously converted back into text and the only information that
they will produce is the same original text.


No, you're wrong. The new numbers (or say a new alphabet) might change
the relationships. The letters C and S and K might be reduced to two
symbols representing just the S and K sounds, freeing up a symbol to
represent something new.

That is irrelevant, merely taking advantage of the redundancy present in
one specific language. It does not change the total amount of
information that can be carried by combinations of those 26 letters or
numbers!

Same number of symbols, something new and more expressive though.

No - more efficient use of the information space available, just like
lossless compression of data.

Some encryption codes purposely do the reverse: they map the input
to fewer characters and when decrypted the message has occasional
misspellings, none of which are bad enough for a human to not be
able to understand the message.

Once again, this does NOT change the amount of information that any
combination of the letters can contain - it merely exploits the
inefficiency of one particular language. Since words such as QQRXYZ
simply do not exist in the language, the information space they
represent can be eliminated or utilised for another purpose. That does
not change the amount of information that can be represented by 26
letters or numbers.
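
Shannon's measure makes the redundancy point concrete - a sketch in
Python using approximate first-order English letter frequencies (the
figures are illustrative, not an exact table):

from math import log2

def entropy(probs):
    # Shannon entropy in bits per symbol.
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy([1 / 26] * 26))  # ~4.70 bits/symbol: the uniform bound
english = [0.127, 0.091, 0.082, 0.075, 0.070, 0.067, 0.063, 0.061,
           0.060, 0.043, 0.040, 0.028, 0.028, 0.024, 0.024, 0.022,
           0.020, 0.020, 0.019, 0.015, 0.010, 0.008, 0.002, 0.002,
           0.001, 0.001]
print(entropy(english))        # ~4.2 bits/symbol: the gap is the
                               # redundancy that such recodings exploit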

Once again, I seriously suggest you read Claude Shannon's groundbreaking
paper on the topic. It may be over 50 years old but it specifically
addresses this very issue on the second page!


And no, Path/Message-ID/X-Trace etc are new (not in my original post),
meaningful and retained.

They are "wrappers" - just like envelopes and postage stamps and
franking marks on snail mail. Additional information wrapping the
original message for the purpose of delivery, and requiring additional
information to do so. The message *itself* remains unchanged and the
total amount of information contained by it does also.


What you said is false. If I opaquely overlay text over someone's
forehead, I've added information regardless of the byte count.

No - and by making such a ludicrous statement it is clear that you do
not understand what *information* actually is! You have removed
information from the original in order to place the text. In fact,
since the text you have added could have been implemented in less than
one byte per character and you have obliterated many bytes in the image
(each potentially containing unique information) to place the text, you
have actually reduced the total amount of information present. In
short, graphical text is an extremely inefficient means of encoding the
information contained by it.

More "signal." Our brains will assume "forehead" under the text.

Learn the difference between assumption and knowledge. You *assume*
that the perfect skin of a forehead is a continuance of what lies behind
the text, but you do not *know* that - the text could conceal an ugly
scar on the individual or even conceal text placed on the image by a
previous user, such as a watermark. Indeed, what is concealed by the
text may be the only discriminating feature of the individual which
identifies him from a twin brother, distant relative or anyone who
randomly has similar characteristics - you assume it doesn't, but you do
not know because you have lost the very piece of information that would
allow you to know. What you assume is irrelevant in terms of the total
amount of information present in the image because the assumption can
be, and often is, wrong - and sometimes deliberately led to be wrong, as
they obviously were in intelligence information concerning Weapons of
Mass Distraction! Knowledge that you do not know what you are assuming
is just as significant as any other knowledge - but you are ignoring
that and treating your assumptions as knowledge.

Or if I secretly added a message in the low order bits but the
photo looked the same to the human eye. The original low order
bits conveyed almost no information, the new ones are a complete
new message. Not all bits are the same, information-wise, because
of the interaction with our brains.

Well at least you have acknowledged that you are removing information in
order to do so - but you have *NO* knowledge of what the importance of
that information is. You assume it is unimportant, but you do not know
that! I could, for example, wish to modify the image levels to improve
the visibility of shadow, mid range or highlight detail, in itself
sacrificing some information, in which case the loss of those lower bits
to your hidden message would be immediately obvious.
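
That trade is easy to demonstrate - a sketch with NumPy (assumed):
writing a message into the low-order bits destroys whatever those bits
previously held.

import numpy as np

rng = np.random.default_rng(0)
pixels  = rng.integers(0, 256, size=16, dtype=np.uint8)
message = rng.integers(0, 2, size=16, dtype=np.uint8)

old_lsbs = pixels & 1              # the old information...
stego = (pixels & 0xFE) | message  # ...is overwritten, not added to
print(np.count_nonzero(old_lsbs != (stego & 1)), "LSBs changed")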

The relative importance of the information contained in the image, not
only to our brains but every stage of the imaging process is exactly
where we came into this discussion. Sharpening the image changes the
weighting of the information relating to fine detail, it does not,
however, change the total amount of information in the image.

To quote someone else:
"We know, there are known knowns; there are things we know that we know.
We also know there are known unknowns; that is to say we know there are
some things that we do not know. But there are also unknown unknowns -
the ones we don't know that we don't know."

Your example of text on the forehead is making assumptions for those
"known unknowns" - you don't know what information the text conceals,
you assume. Your subsequent suggested example amounts to making
assumptions for the "unknown unknowns". Both have been established in
history as being erroneous assumptions!

To anyone who had not seen the original image, the entire new image
is all new information.

New information introduced at the expense of old information. ie. Total
information content is unchanged!

And if I change a smoking friend's face to all green and put
a word balloon on it saying "I need another cigarette then I'll
feel fine", I have (once again) added information.


At the expense of much more information than you introduce!

That you don't recognize changed bits as new information,
or even changed meanings for 26 symbols, indicates you just
haven't lived among us humans for very long. ;-)

Changed bits is NEW information replacing OLD information - total
information content is UNCHANGED. It seems that you have lived amongst
the tricksters for far too long - unable to recognise that what is being
given by the right hand is also being taken away by the left. Which
particular minute were you born in, sucker?

Take the original photo and the greatly enlarged one.

Back up twenty feet. ;-)

More detail is delivered to the eye by the larger image.

Not compared to the small photo viewed at close range it doesn't!


WHOOOSH! If the detail can't be seen at twenty feet then de facto
the detail doesn't reach the eye and so ain't there.


And your point is what? The same detail reaches the eye from the large
image at long distance as the small image at short distance - NO CHANGE.

Same thing
for satellite photos...if you can't read the detail then the
image is lacking the detail. That a close enough look will see
the detail is irrelevant until you get such an image.

That has nothing whatsoever to do with your example! You started with a
certain information content, and modified it for different viewing
conditions resulting in exactly the same information content. Now you
are changing your comparison to one of less information to begin with
and having to change conditions to obtain more information. That is
neither unexpected nor a sequitur to your previous statement.

Your analysis has been leaving out the human factor of incoming
images being interpreted by the brain. One set of bits can convey
significantly different and *new* information than another set of bits,
i.e. the 'words added to their forehead' example, or the steganography
example, for us humans.


I suggest you go back and read my original post in this thread - sharpening
the image enhances the contrast of details in the image which would
otherwise be lost by subsequent stages of the image viewing process. I
have never denied that some parts of the information contained in the
image are more significant to its interpretation, or more resilient
against loss through stages of the imaging process, than others - indeed
that is the crux of my original method! However, that specifically
exploits the inefficiency of the image as an information medium so that
unused "information space" can be exploited to make real information
more robust to the losses in the process stages. It does not mean that
the image contains more information than it previously did - neither
resolution nor signal to noise ratio in any spatial frequency band have
changed so the amount of information contained in the image remains
unchanged. This is the entire principle of pre-emphasis!

It would be impolite to remind you that *you* introduced the term
"amount of information" to contest my comments in your post of Fri, 30
Jan 2004 19:09:00GMT reference ,
however it is there on record and publicly accessible to all who doubt
that you did! Since then that is what the discussion has addressed -
the total amount of information in the image. There is no point in now
changing your argument back to relative importance or robustness of the
information since that is exactly back on the path that *you* decided to
divert from!
--
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he's ****ed.
Python Philosophers (replace 'nospam' with 'kennedym' when replying)
 



