DMA?

What would be the difference between a DMA controller and a Blitter?--Anonauthor (talk) 00:57, 1 July 2009 (UTC)

A DMA controller can't modify and combine data. —Preceding unsigned comment added by 70.222.69.80 (talk) 23:35, 11 December 2009 (UTC)
It's also more specialised for accessing a range of sources and destinations, particularly I/O devices (including non-memory-mapped ones) which have all kinds of additional sync/timing/polling requirements, rather than simply working on blocks of data within RAM. And, as already stated in the article, a Blitter is specialised for working with graphics bitplanes and other unaligned data types within the bytes/words of the memory system; a DMA controller generally blindly shifts entire, fully-aligned words of data, either as bytes or as whatever the system bus width is (16, 32 bits, etc). 146.199.60.6 (talk) 14:13, 20 April 2019 (UTC)
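
To make the contrast concrete, here's a minimal software model of the distinction (my own simplification, in C, not modelled on any particular chip): a DMA-style transfer just moves whole, aligned words, while a blitter-style transfer can bit-shift the source to an arbitrary pixel alignment and logically combine it with what's already at the destination.

    #include <stdint.h>
    #include <stddef.h>

    /* DMA-style transfer: blindly move whole, aligned words from source
       to destination, with no interpretation of the data. */
    void dma_copy(uint32_t *dst, const uint32_t *src, size_t words)
    {
        for (size_t i = 0; i < words; i++)
            dst[i] = src[i];
    }

    /* Blitter-style transfer: the source can be bit-shifted to an
       arbitrary pixel alignment and logically combined (here, ORed)
       with the destination, one word at a time. */
    void blit_or(uint32_t *dst, const uint32_t *src, size_t words,
                 unsigned shift /* 0..31 */)
    {
        uint32_t carry = 0;  /* bits shifted out of the previous word */
        for (size_t i = 0; i < words; i++) {
            uint32_t s = src[i];
            dst[i] |= shift ? ((s >> shift) | carry) : s;  /* combine, don't overwrite */
            carry = shift ? (s << (32 - shift)) : 0;
        }
    }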

Modern use of blitters

Would it be correct to say that on modern hardware blitters are still used to initialize (clear) buffers and for swapping front/back buffers when using double-buffering? 82.95.196.102 (talk) 14:29, 19 July 2009 (UTC)

I'd say that modern GPUs are mostly enhanced blitters. The principles are the same: stream chunks of data from memory, combine them, and then write the result back to memory. —Preceding unsigned comment added by 70.222.69.80 (talk) 23:38, 11 December 2009 (UTC)
Eeeeeeexcept for all that scaling, rotation, 3D rendering, T&L calculation, physics, texture mapping etc that they do, of course? It's arguable that even alpha blending isn't truly a blitter function, more general-purpose ALU. Actual blitters don't really have much use in the modern world because general-purpose processors long since matched speed with them, and in a lot of cases ended up incorporating their functionality, particularly in superscalar / SIMD instruction sets like MMX, 3DNow, SSE etc. They were of rather more use back when CPUs were simplistic and slow, and particularly were rather bad at doing memory moves (especially of large blocks) by themselves, e.g. the Z80, 8086, 68000 etc (the 6502 was somewhat blitter-ish itself, at least in Zero Page). Even by the time of the 286 and 68010, CPUs were gaining ground on single-purpose blitters (the 286 was MUCH faster at memory addressing and copying than the 8086; the 68010 had a "loop" mode that was particularly good for block memory moves), and the 386, 68020/030, etc, with (near-)zero address calculation delays and burst-mode memory access, made not only blitters but often even dedicated DMA controllers kinda pointless. 146.199.60.6 (talk) 13:45, 20 April 2019 (UTC)
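
To illustrate that point about SIMD instruction sets absorbing blitter-style work: an OR-combine over a whole buffer comes down to a couple of SSE2 intrinsics per 16 bytes. A rough sketch (plain byte buffers assumed, nothing chip-specific):

    #include <emmintrin.h>  /* SSE2 intrinsics */
    #include <stdint.h>
    #include <stddef.h>

    /* OR-combine src into dst, 16 bytes per SIMD operation. */
    void simd_blit_or(uint8_t *dst, const uint8_t *src, size_t bytes)
    {
        size_t i = 0;
        for (; i + 16 <= bytes; i += 16) {
            __m128i s = _mm_loadu_si128((const __m128i *)(src + i));
            __m128i d = _mm_loadu_si128((__m128i *)(dst + i));
            _mm_storeu_si128((__m128i *)(dst + i), _mm_or_si128(s, d));
        }
        for (; i < bytes; i++)  /* scalar tail for leftover bytes */
            dst[i] |= src[i];
    }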

Logical sprite painting operation

Can someone step through the given formula for me, because either it's wrong or I'm reading it wrong. Wouldn't (Background AND Mask) OR Sprite mean that the sprite would still end up overwriting the background, potentially in weird ways? Even if you break it up into individual bitplanes it's gonna go wrong. I'm not sure how to modify that formula to include it, but I was always shown sprite+mask rendering operations as XORs, at least for 1-bit sprites with 1-bit masks (e.g. mouse pointers in bitplane-based windowing systems).

Alternatively, as the Blitter usually takes in all three data sources before writing back out to the screen memory buffer (which can be seen as a proxy version of a dedicated sprite engine passing-through either the BG or the sprite pixel data to the output shifter/DAC based on the mask bit or the sprite pixel colour), would it not be the slightly more complex, but entirely achievable in rudimentary silicon, "(Background AND Mask) OR (Sprite AND NOT Mask)"... or a more readable condensation of it? (Sprite NAND Mask? Or is it essentially a 3-way XOR, written in a strange way?) ... That is, you only want either the background OR the sprite to potentially be "true" at any one time. If you use the currently stated formula, you run the risk of having a background that is itself "0" (in one or more planes) still being overwritten by a "1" from the sprite, regardless of what the mask says.
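
For what it's worth, here's a minimal C sketch of that separate-mask formula, using the same polarity as above (mask bit = 1 keeps the background, 0 lets the sprite through; note that real systems differ on which polarity they use):

    #include <stdint.h>
    #include <stddef.h>

    /* Per-bitplane masked blit: dest = (bg AND mask) OR (sprite AND NOT mask).
       Operates on one word of one bitplane at a time. */
    void masked_blit(uint16_t *bg, const uint16_t *sprite,
                     const uint16_t *mask, size_t words)
    {
        for (size_t i = 0; i < words; i++)
            bg[i] = (bg[i] & mask[i]) | (sprite[i] & ~mask[i]);
    }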

For the types which depend on the sprite pixel palette value instead, there would probably be a summing OR network across the (2, 3, 4...) bitplanes where entry 0 would give a "false" readout (ie, show Background), and any other entry would be "true" (show Sprite), valid for masking the bits in all of the planes at once. Essentially, the mask is inherent in the sprite data instead of being a separate plane additional to it (nb. in that case it is maybe still read from the same memory address at the same time, though, depending on bit width), and requires a little extra (rudimentary) calculation internal to the blitter/sprite engine to extract it. (...and losing one entry out of 16, or even out of 4, is still more efficient in terms of bandwidth and storage than adding a third bitplane to the original two, or a fifth to the original four.)

...actually, thinking about it, what's currently shown is probably more applicable to an inherent-mask / transparent palette entry system than a separate mask plane one. If all the plane bits of a pixel sum to 0, producing a "show the background" output, then their own bits can, both individually and collectively, only ever be zero, and so they WON'T overwrite the background no matter what colour it is. If a pixel is any other colour than "transparent", the background will be turned off (set to all 0s) and each plane can be freely ORed with it to produce the sprite colour (and at the same time, you can also e.g. switch the LUT to the background palette instead of the foreground). Essentially, the transparent LUT entry can be any colour value (not "index") you want, because it will never be shown, and is clamped to all zeroes.
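
A sketch of that inherent-mask ("colour 0 is transparent") case, for a hypothetical 4-bitplane sprite: the mask is derived by ORing all the sprite's planes together, after which each plane really can just be ORed into the cleared background, i.e. the article's (Background AND Mask) OR Sprite with Mask = NOT(OR of all sprite planes):

    #include <stdint.h>
    #include <stddef.h>

    #define PLANES 4

    /* "Cookie-cut" blit: any nonzero sprite pixel knocks a hole in the
       background; the sprite's own plane data is then ORed into the hole. */
    void cookie_cut(uint16_t *bg[PLANES], const uint16_t *spr[PLANES],
                    size_t words)
    {
        for (size_t i = 0; i < words; i++) {
            uint16_t opaque = 0;  /* 1 wherever the sprite pixel != colour 0 */
            for (int p = 0; p < PLANES; p++)
                opaque |= spr[p][i];
            for (int p = 0; p < PLANES; p++)
                bg[p][i] = (bg[p][i] & ~opaque) | spr[p][i];
        }
    }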

That setup doesn't work for separate masks, because the sprite pixel's colour can be any value you like, including zero, as the mask pattern is entirely independent of the sprite pattern. Thus the sprite data HAS to be "turned off" (via inverting the mask, then ANDing it with all the sprite pixel bits) whenever the background is "turned on" (ANDed with the non-inverted mask).

I'm going to make that slight tweak to the article formula now. If I've got it wrong... sue me. Well, OK, don't sue me, but correct it to what it should be instead of just reverting it, because it doesn't look like the current formula is correct either. 146.199.60.6 (talk) 14:05, 20 April 2019 (UTC)

History: Atari 7800, and other possible advances.

Here are some additional historical milestones, plus other systems worth looking at to see whether they offered significant improvements in blitter capabilities.

The Atari 7800 had the most advanced console graphics of its initial development period (the game crash delayed its wide release for years, but its specification was not upgraded in the meantime). The company which developed it for Atari was a professional arcade games hardware company, and so brought over arcade features. I've looked through the specifications of many home computers and consoles, and the hardware of the 7800 seemed to have a clear advantage in its day, especially in how many objects it could display per line. However, I did once see a home computer book from the 1980s describing a home system with advanced graphics of some sort, which I have never been able to find again.

Motorola produced an early desktop chipset with a blitter, which people suspect was an early collaboration with the Amiga designers before Commodore. It was to be used in the Tandy Color Computer 3, but everything was delayed, and it was dropped from the machine's design and replaced with custom silicon. I do not know if it was ever used much.

The Atari Lynx (by the Amiga designers), the Super Nintendo, and the Game Boy Advance made some interesting graphics advances, but I am unfamiliar with how much of that involved blitters. Twwwww (talk) 14:17, 9 June 2022 (UTC)

Obsolescence caused by CPU instructions is not true.

In my own work, I independently came up with notions similar to the function of the blitter, as performance enhancements: offloading the work from the CPU, allowing the CPU to do other things, and speeding up processing of all data types. One can't really say that a CPU's special instructions completely displace the benefit of a blitter, for these reasons (even though my own work has special processing in the CPU). It is a waste of a large CPU's time, and bad logic; the CPU is better off doing something else and, in terms of heat and energy consumption, not getting involved. In very small, cheap systems, again, it is a waste of limited processing resources. Twwwww (talk) 14:28, 9 June 2022 (UTC)
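
As a purely hypothetical illustration of that offload pattern (the register names and addresses below are invented for the sketch, not taken from any real chip): the CPU programs a memory-mapped blitter, kicks it off, and is then free to do unrelated work until the busy bit clears.

    #include <stdint.h>

    /* Hypothetical memory-mapped blitter registers. */
    #define BLIT_BASE   0xFFE000u
    #define BLIT_SRC    (*(volatile uint32_t *)(BLIT_BASE + 0x0))
    #define BLIT_DST    (*(volatile uint32_t *)(BLIT_BASE + 0x4))
    #define BLIT_COUNT  (*(volatile uint32_t *)(BLIT_BASE + 0x8))
    #define BLIT_CTRL   (*(volatile uint32_t *)(BLIT_BASE + 0xC))
    #define BLIT_START  1u
    #define BLIT_BUSY   2u

    /* Program the copy and start it; returns immediately so the CPU
       can spend its time (and power budget) elsewhere. */
    void blit_async(uint32_t src, uint32_t dst, uint32_t words)
    {
        BLIT_SRC   = src;
        BLIT_DST   = dst;
        BLIT_COUNT = words;
        BLIT_CTRL  = BLIT_START;
    }

    int blit_done(void)
    {
        return (BLIT_CTRL & BLIT_BUSY) == 0;
    }

    /* Usage: blit_async(...); do_other_work(); while (!blit_done()) { } */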

History and description of the increasing functionality of blitter circuitry.

There needs to be a section that further researches the functional advancement in blitter design over time. Blitters can not only move and blend the values within a pixel, but also spatially transform the placement of pixels and values. Some systems had planes or fields that rotated, skewed and transformed, able to give a 3D-perspective illusion, which may have involved blitters. But, as it's ancient and unfamiliar to me, I do not know if it was done through blitters; it is an area of research for enhancing the article. Twwwww (talk) 14:37, 9 June 2022 (UTC)