The article gives the impression that upscalers are mainly hardware devices

The article gives the impression that upscalers are mainly hardware devices, but upscaling is also frequently done in software. I would change the article to explain upscaling in a more device-neutral way. Kwinzman (talk) 12:08, 3 May 2021 (UTC)

LCD monitors for computers do NOT simply upscale a "VGA signal". edit

The passage I deleted in this article gave an example of upscaling as a standard LCD computer monitor upscaling a 640x480 VGA signal to a much larger resolution. This is preposterous... computers haven't output standard VGA for over a decade, perhaps two, except for the POST sequence. LCD monitors actually ADJUST to the output of modern video cards, which output a far greater resolution than 640x480. I suppose it's assumptions like this (all PCs output VGA, it's just "upscaled") that make the youth of today actually believe that true "upscaling" is even possible. Raster images are simply not scalable upward, so all that's taking place is pixel doubling.
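To make the pixel-doubling claim above concrete, here is a minimal sketch (my own illustration, not from the article): an integer nearest-neighbor upscale that simply repeats each source pixel, producing a larger image with no additional information.

```python
import numpy as np

def pixel_double(image: np.ndarray, factor: int = 2) -> np.ndarray:
    """Nearest-neighbor upscale: each source pixel is repeated
    factor x factor times. No new detail is created; the same
    samples merely cover more output pixels."""
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

# A 4x4 test image becomes 8x8, but still contains only 16 distinct samples.
src = np.arange(16, dtype=np.uint8).reshape(4, 4)
doubled = pixel_double(src)
print(doubled.shape)  # (8, 8)
```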

Another example was given about the image being processed and filtered to "retain original detail", which is completely fallacious. Most of this article does seem to acknowledge that NO lost detail can be regained by "upscaling", but that particular example was misleading at best. Image processing can only attempt to smooth out the effects of pixel doubling, but it CANNOT reconstruct original detail. Image processing algorithms, no matter how advanced, can't tell a house from a tree, and have no way of "knowing" what detail was there to begin with. The ONLY way to see 1080 lines of detail is to view a source containing 1080 lines of resolution. Upscaling a 480-line source (a standard DVD, for example) to fit a 1080-line display will only give you 480 lines that are simply LARGER than they were originally. Adding "image processing" will only BLUR those lines, not add any extra detail or improve image quality.
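As a 1-D illustration of that point (again my own sketch, not anyone's product): resampling a 480-sample scanline to 1080 samples with linear interpolation produces more samples, but the result stays band-limited by the original signal.

```python
import numpy as np

# One scanline with 480 samples, resampled to 1080 samples.
src = np.random.rand(480)                 # stand-in for one line of a 480-line source
x_src = np.linspace(0.0, 1.0, num=480)
x_dst = np.linspace(0.0, 1.0, num=1080)

# Linear interpolation: every output sample is a weighted average of two
# input samples. It smooths the pixel-doubling staircase, but the output
# is still band-limited by the 480-sample original; nothing above the
# source's Nyquist frequency can appear in it.
dst = np.interp(x_dst, x_src, src)
print(len(dst))  # 1080 samples, but still only 480 samples' worth of detail
```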

I agree with whoever suggested that this page needs MAJOR cleanup. MaxVolume (talk) 22:48, 7 December 2009 (UTC)

When I originally contributed to this entry (both content and images), I was working at a home-theater video-processor company by the name of Anchor Bay Technologies (makers of DVDO, now part of the Silicon Image unit of Lattice Semiconductor). This was my first and only contribution to Wikipedia, and it was the result of a ton of customers calling the support line with either no clue or horrible information.

As a rebuttal to your comment about an LCD scaling a VGA 640x480 VESA-standard signal to whatever resolution the LCD glass is: that is exactly what they do. Your claim that "LCD monitors actually adjust to the output of modern video cards" is actually 100% wrong (and is similar to one of the reasons I had to write this article in the first place). Displays, including CRTs, have what is called an EDID ROM, which communicates over the DDC channel of the video interface (analog RGB, DVI, HDMI, DisplayPort). This ROM contains a VESA-standard table of the display's capabilities. The video card (or really any video source that uses the standard interfaces) reads that table and outputs either what the display says is the preferred resolution, or whatever it can produce that is in the table of capabilities. I am personally familiar with the VESA, DVI, HDMI, and DisplayPort standards; however, they are covered under strict NDAs, so copying technical data verbatim is not allowed, and at the time I was having trouble working out how to present 100% accurate information without violating those NDAs. I also can't go into the technical magic that actually did the scaling for various product lines, although I can point to patents and coach another writer on what certain things mean.
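As a hedged illustration of that EDID handshake (the byte offsets follow the publicly documented VESA EDID 1.3 layout; the function name is my own): a source device can read the display's preferred native mode out of the first detailed timing descriptor of the 128-byte base block.

```python
def preferred_resolution(edid: bytes) -> tuple[int, int]:
    """Extract the preferred (native) mode from the first detailed timing
    descriptor of a 128-byte EDID 1.3 base block. Sources typically fetch
    this block from the display over the DDC (I2C) channel at address 0x50."""
    if len(edid) < 128 or edid[0:8] != b"\x00\xff\xff\xff\xff\xff\xff\x00":
        raise ValueError("not a valid EDID base block")
    d = edid[54:72]                          # first 18-byte timing descriptor
    h_active = d[2] | ((d[4] & 0xF0) << 4)   # horizontal active pixels
    v_active = d[5] | ((d[7] & 0xF0) << 4)   # vertical active lines
    return h_active, v_active
```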

My intention was to go back and fill in more details on how video scaling is actually done. There are various mathematical processes involved, the order in which they occur is very important, and I can describe those elements using some public-domain information.

The first of two interpretations of "keep the original content" that I had not developed well in the article is the face-value one: the video in = the video out, and the intention is to reduce or eliminate scaling-induced artifacts. Yes, you discovered the old adage "garbage in, garbage out", but each signal-processing step is not perfect; it can be ideal under given circumstances, but rarely ever perfect, and that was where I wanted to go with that portion.

As an example, the simplest form of a true scaler is merely a sample-rate converter: a given number of samples ("pixels on a line") needs to be converted to a different number of samples on a line in the same period of time. For example, converting a 640-pixel-wide line to a 1920-pixel-wide line that has to occur in the same frame period means that you are increasing the pixel sample frequency for that line (because we aren't generating any new content, we are limited by the content that the source provides). Take a look at the Wikipedia article on sample-rate conversion and it will get into why certain types of conversion are preferred over others (though I do note a circular reference to this article for the Apollo mission under Film/TV).

The problem is that any signal processing you do adds error to the signal, and error appears as visual noise. It could manifest as a "ring", where the scaler overshoots the target pixel intensity, then undershoots, then overshoots (etc.) as it homes in on the correct value, all the while outputting pixels (because time doesn't stand still). This can look like a ripple around an edge if not handled properly, and will make the edge look softer to the viewer (even if they can't articulate why). By changing the type of processing that is done around certain danger areas, one can mitigate various effects that are likely to happen around them, thereby "retaining the original detail". See, it's not dishonest; the reader just needs more information. Merely doubling pixels will not get you a properly scaled image all the time: 1920/1280 = 1.5, which is not quite double. Assuming that a scaler only ever doubles pixels or makes simple mathematical averages of new pixels (and yes, some very bad scalers do) is a disservice to the readers.
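Here is a minimal sketch of such a sample-rate converter (a generic windowed-sinc/Lanczos resampler of my own, not Anchor Bay's algorithm). Fed a hard edge, it also demonstrates the overshoot/undershoot "ringing" described above.

```python
import numpy as np

def lanczos_kernel(t, a=3):
    """Windowed sinc: sinc(t) * sinc(t/a), zero outside |t| < a."""
    t = np.asarray(t, dtype=float)
    out = np.sinc(t) * np.sinc(t / a)
    out[np.abs(t) >= a] = 0.0
    return out

def resample_line(line, out_len, a=3):
    """Sample-rate conversion of one scanline: out_len output samples
    are computed from len(line) input samples in the same line period."""
    in_len = len(line)
    scale = in_len / out_len
    out = np.empty(out_len)
    for j in range(out_len):
        x = (j + 0.5) * scale - 0.5              # output position in source coords
        i0 = int(np.floor(x)) - a + 1            # first of 2*a contributing taps
        taps = np.arange(i0, i0 + 2 * a)
        idx = np.clip(taps, 0, in_len - 1)       # clamp at the line edges
        w = lanczos_kernel(x - taps, a)
        out[j] = np.dot(w, line[idx]) / w.sum()  # normalize the window
    return out

# A hard edge on a 640-sample line, scaled to 1920 samples:
line = np.zeros(640); line[320:] = 1.0
up = resample_line(line, 1920)
print(up.max(), up.min())  # > 1.0 and < 0.0 near the edge: overshoot/ringing
```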

As further input on your concern that I was less than honest that original content could be preserved: there are actually several cases where this is true in a video processor. An interlaced video signal that is converted into a progressive video signal can have detectable artifacts that identify the lines that were not part of the original signal. Coupled with the cadence of the video signal, an algorithm can gain confidence that certain lines are not original, and they can be removed (restoring the original interlaced video signal). For the company I was working for at the time, this technology was in fact productized and marketed under the trade name "PReP", which stood for progressive re-processing (take a poorly de-interlaced progressive signal, strip off the bad lines that were not the original content, thereby returning it to the original interlaced signal, then re-process the interlaced signal with a de-interlacer).
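PReP itself is proprietary, so the following is only a toy sketch of the general idea under my own assumptions: a line that sits very close to the average of its two neighbors is a candidate for having been synthesized by a simple de-interlacer, and the field-wise pattern of such residuals suggests which lines to strip.

```python
import numpy as np

def synthesized_line_score(frame: np.ndarray) -> np.ndarray:
    """For each interior line, measure how far it is from the average of
    its two neighbors. A near-zero residual on every other line suggests
    those lines were interpolated rather than part of an original field."""
    f = frame.astype(float)
    interp = 0.5 * (f[:-2, :] + f[2:, :])            # neighbor average
    return np.abs(f[1:-1, :] - interp).mean(axis=1)  # residual per line

def strip_interpolated_field(frame: np.ndarray) -> np.ndarray:
    """Keep whichever set of alternating lines looks original,
    recovering one field of the interlaced source."""
    r = synthesized_line_score(frame)
    odd_score = r[0::2].mean()    # residuals of frame lines 1, 3, 5, ...
    even_score = r[1::2].mean()   # residuals of frame lines 2, 4, 6, ...
    # Low residual = likely synthesized; keep the other set of lines.
    return frame[0::2, :] if odd_score < even_score else frame[1::2, :]
```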

The last rebuttal to your concern is that there are several algorithms out in the field that can look at compression artifacts and determine what caused them. This can actually be reversed to a point, with the result that you end up seeing more of the original detail than you would by viewing the same input source unprocessed. See the Teranex line, which was a spinoff of Lockheed Martin's satellite-imagery software, implemented in hardware for video.
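Teranex's algorithms are likewise proprietary; as a generic stand-in for artifact-aware processing, here is a toy deblocking filter that softens only the 8-pixel block boundaries typical of DCT-based codecs (a real deblocker, such as the one in H.264, adapts its strength to the local gradient so true edges survive).

```python
import numpy as np

def deblock_rows(img: np.ndarray, block: int = 8, strength: float = 0.5) -> np.ndarray:
    """Toy horizontal deblocking: pull the two pixels straddling each
    8-pixel block boundary toward their mutual average, softening the
    artificial discontinuity the codec's block grid introduced."""
    out = img.astype(float).copy()
    for x in range(block, img.shape[1], block):
        left, right = out[:, x - 1], out[:, x]
        avg = 0.5 * (left + right)
        out[:, x - 1] = left + strength * (avg - left)
        out[:, x] = right + strength * (avg - right)
    return out
```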

What was removed from this article that made it clearer was the section on the video processor, which was meant to call out that a video processor contains several of the functions now listed under the "See also" section. These functions need to be designed to work together to retain the original detail; poorly designed hardware and algorithms, or algorithms combined with incompatible algorithms, will result in larger image corruption than doing it well. I agree that an article for each of those topics is and was warranted, and I'm glad that others continued the work I started. However, the way it was broken up leaves a lot to be desired.

Frame-rate conversion is different from scaling (converting from, say, 24 Hz to 60 Hz); however, a version of "frame-rate scaling" is being done now with a technology called frame-rate interpolation. It's really the same concept as scaling a line of video, applied to entire frames of video over time: an original set of frames happens at a given spacing in time, and the algorithm needs to figure out what should go between them. At that high level, frame-rate conversion and scaling are the same thing. Cheers! Tim292stro (talk) 22:22, 13 October 2015 (UTC)
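A minimal sketch of that temporal analogy (a plain two-frame blend of my own; real frame-rate interpolators use motion estimation instead of blending): each output frame is a weighted mix of the two source frames that bracket its timestamp, exactly as a pixel on a scaled line is a weighted mix of its bracketing source pixels.

```python
import numpy as np

def blend_interpolate(frames, src_hz: float = 24.0, dst_hz: float = 60.0):
    """Resample a frame sequence in time the way a scaler resamples a
    line in space: each output frame is a weighted blend of the two
    source frames that bracket its timestamp."""
    duration = len(frames) / src_hz
    out, t = [], 0.0
    while t < duration - 1.0 / src_hz:   # stop before running off the end
        pos = t * src_hz                 # position in source-frame units
        i = int(pos)
        w = pos - i                      # blend weight toward the next frame
        out.append((1.0 - w) * frames[i] + w * frames[i + 1])
        t += 1.0 / dst_hz
    return out

src = [np.full((4, 4), float(i)) for i in range(24)]  # one second at 24 Hz
out = blend_interpolate(src)
print(len(out))  # 58: this toy drops the trailing interval it cannot bracket
```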

upconversion

"Upconversion" redirects here, but the term is never used. Mathiastck (talk) 16:46, 23 June 2010 (UTC)

LOL... I just looked up upconversion here yesterday. (grin) It does redirect to the section on upconverting DVDs... is upconversion generally used in other contexts? It might be confusing if the redir didn't go specifically to that section. Doniago (talk) 18:36, 23 June 2010 (UTC)

Upconversion is an alias ("also known as") of upscaling; the article was edited in a way that made the whole collection less clear. Tim292stro (talk) 22:22, 13 October 2015 (UTC)

video "scaling" in place of "scaler" edit

as a possible article title alternative. Twipley (talk) 23:51, 21 April 2011 (UTC)

Blu-ray players are DVD upscalers too

I believe every Blu-ray player ever made is also a DVD upscaler (Are there any exceptions, however obscure?), so should the article mention this? What kind of upscaler are they? Does the quality of the upscaling vary across models? — Preceding unsigned comment added by 4.254.82.104 (talk) 01:11, 26 May 2011 (UTC)

All Blu-ray players were required by the Blu-ray standard to contain a scaler; beyond requiring that it accept certain input resolutions and output certain resolutions, that was as tight as the standard got. Each vendor was able to buy their own parts or make their own algorithms (if they had the technical prowess to do that). Yes, the quality difference varied; think of it like the difference between a Yugo and a Bugatti: they are both cars, have four wheels, and run on gasoline/petrol, but that's about where the similarities stop. You get what you pay for here, and not everyone needs a Bugatti to pick up the kids... tim292stro (talk) 21:53, 13 October 2015 (UTC)

Unsourced Material

The entire article is in desperate need of sourcing, but I'm moving some of the more troublesome sections here. Please feel free to properly reference and reincorporate into the article text. Doniago (talk) 13:28, 29 July 2011 (UTC)

I do feel that the last section here was one of the most important, and it got dumped because of me. Here I claim responsibility for doing original research: I stated a technical fact (you will never recover 1080p worth of high-frequency detail from a 720p or lower frequency-limited input signal), then I followed it up with what I had constantly heard from consumers. That said, I think at this point, with some digging, we can find formal market research that has been done to substantiate the claims.

The section on the Pioneer plasma was a direct pull from the manual; unfortunately, I cannot recall the specific model, although it was a black model (under the "Kuro" line name). Tim292stro (talk) 22:22, 13 October 2015 (UTC)
