In 3D computer graphics, anisotropic filtering (abbreviated AF)[1][2] is a method of enhancing the image quality of textures on surfaces that are at oblique viewing angles with respect to the camera, where the projection of the texture (not the polygon or other primitive on which it is rendered) appears to be non-orthogonal. The name reflects this: "an" for not, "iso" for same, and "tropic" from tropism, relating to direction; anisotropic filtering does not filter the same way in every direction.

An illustration of texture filtering methods showing a texture with trilinear mipmapping (left) and anisotropic texture filtering (right)

Like bilinear and trilinear filtering, anisotropic filtering eliminates aliasing effects,[3][4] but improves on these other techniques by reducing blur and preserving detail at extreme viewing angles.

Anisotropic filtering is relatively intensive (primarily memory bandwidth and to some degree computationally, though the standard space–time tradeoff rules apply) and only became a standard feature of consumer-level graphics cards in the late 1990s.[5] Anisotropic filtering is now common in modern graphics hardware (and video driver software) and is enabled either by users through driver settings or by graphics applications and video games through programming interfaces.

An improvement on isotropic MIP mapping

An example of anisotropic mipmap image storage: the principal image on the top left is accompanied by filtered, linearly transformed copies of reduced size.
Isotropic mipmap of the same image

From this point forth, it is assumed the reader is familiar with MIP mapping.

Exploring a more approximate anisotropic algorithm, RIP mapping, as an extension of MIP mapping helps to show how anisotropic filtering gains so much texture mapping quality.[6] If we need to texture a horizontal plane that is at an oblique angle to the camera, traditional MIP map minification gives insufficient horizontal resolution because of the reduction of image frequency in the vertical axis. This is because each MIP level is isotropic: a 256 × 256 texture is downsized to a 128 × 128 image, then a 64 × 64 image and so on, halving the resolution on each axis simultaneously, so a MIP map texture probe will always sample an image of equal frequency in each axis. Thus, when sampling to avoid aliasing on a high-frequency axis, the other texture axis is similarly downsampled and therefore potentially blurred.
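The downsizing chain and the level-selection behavior described above can be sketched in a few lines of Python. This is an illustrative model, not any specific GPU's algorithm; the function names are invented for the example.

```python
# A minimal sketch of isotropic MIP mapping: both axes are halved together,
# so the level of detail (LOD) is driven by the axis with the largest
# footprint, blurring the other axis.
import math

def mip_chain(width, height):
    """List the isotropic MIP level sizes down to 1 x 1."""
    levels = [(width, height)]
    while width > 1 or height > 1:
        width, height = max(width // 2, 1), max(height // 2, 1)
        levels.append((width, height))
    return levels

def isotropic_lod(du, dv):
    """LOD from the screen-space texel footprint: the larger axis wins,
    so a footprint 1 texel wide but 8 texels tall still jumps 3 levels,
    discarding horizontal detail."""
    return math.log2(max(du, dv, 1.0))

print(mip_chain(256, 256)[:4])  # [(256, 256), (128, 128), (64, 64), (32, 32)]
print(isotropic_lod(1.0, 8.0))  # 3.0 -> both axes downsampled 8x
```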

With MIP map anisotropic filtering, in addition to downsampling to 128 × 128, images are also sampled to 256 × 128, 32 × 128, and so on. These anisotropically downsampled images can be probed when the texture-mapped image frequency is different for each texture axis. Therefore, one axis need not blur due to the screen frequency of another axis, and aliasing is still avoided. Unlike more general anisotropic filtering, the MIP mapping described here for illustration is limited to anisotropic probes that are axis-aligned in texture space, so diagonal anisotropy still presents a problem, even though real-use cases of anisotropic texture commonly have such screenspace mappings.
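The RIP-map storage scheme described above, in which every axis-aligned combination of halvings is precomputed, can be enumerated directly. This is a sketch for illustration; the function name is invented for the example.

```python
# Illustrative sketch of the RIP-map idea: store every axis-aligned
# combination of halvings, so a probe can pick a level whose two axes are
# reduced independently (e.g. 256 x 128 or 32 x 128).
def rip_map_sizes(width, height):
    """All (w, h) levels reachable by halving each axis independently."""
    sizes = []
    w = width
    while w >= 1:
        h = height
        while h >= 1:
            sizes.append((w, h))
            h //= 2
        w //= 2
    return sizes

levels = rip_map_sizes(256, 256)
print((256, 128) in levels, (32, 128) in levels)  # True True
print(len(levels))  # 81 levels, versus 9 for an isotropic MIP chain
```

The level count shows the storage cost that makes full RIP mapping unattractive in practice: 81 stored levels instead of 9 for a 256 × 256 texture.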

Although implementations are free to vary their methods, MIP mapping and its axis-aligned constraints make it suboptimal for true anisotropic filtering; it is used here for illustrative purposes only. A fully anisotropic implementation is described below.

In layman's terms, anisotropic filtering retains the "sharpness" of a texture normally lost by a MIP mapped texture's attempts to avoid aliasing. Anisotropic filtering can therefore be said to maintain crisp texture detail at all viewing orientations while providing fast anti-aliased texture filtering.

Degree of anisotropy supported

Different degrees or ratios of anisotropic filtering can be applied during rendering and current hardware rendering implementations set an upper bound on this ratio.[7] This degree refers to the maximum ratio of anisotropy supported by the filtering process. For example, 4:1 (pronounced “4-to-1”) anisotropic filtering will continue to sharpen more oblique textures beyond the range sharpened by 2:1.[8]

In practice, this means that in highly oblique texturing situations a 4:1 filter will be twice as sharp as a 2:1 filter (it will display frequencies double those of the 2:1 filter). However, most of the scene will not require the 4:1 filter; only the more oblique, and usually more distant, pixels will require the sharper filtering. This means that as the degree of anisotropic filtering continues to double, there are diminishing returns in terms of visible quality: fewer and fewer rendered pixels are affected, and the results become less obvious to the viewer.
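One way a renderer might derive the per-pixel anisotropy ratio and clamp it to the supported maximum can be sketched as follows. The derivative parameter names (du_dx and so on) are illustrative, not a specific hardware interface.

```python
# Hedged sketch of per-pixel anisotropy: the ratio of the texel footprint's
# major axis to its minor axis, clamped to the hardware's maximum degree.
import math

def anisotropy_ratio(du_dx, dv_dx, du_dy, dv_dy, max_degree=16):
    # Footprint extents along the two screen axes, measured in texels.
    len_x = math.hypot(du_dx, dv_dx)
    len_y = math.hypot(du_dy, dv_dy)
    major, minor = max(len_x, len_y), min(len_x, len_y)
    ratio = major / max(minor, 1e-9)
    return min(ratio, max_degree)

# A pixel whose footprint is 8x longer in one axis than the other:
print(anisotropy_ratio(1.0, 0.0, 0.0, 8.0))                  # 8.0
print(anisotropy_ratio(1.0, 0.0, 0.0, 64.0, max_degree=16))  # clamped to 16
```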

When one compares the rendered results of an 8:1 anisotropically filtered scene to a 16:1 filtered scene, only relatively few highly oblique pixels, mostly on more distant geometry, will display visibly sharper textures in the scene with the higher degree of anisotropic filtering, and the frequency information on these few 16:1 filtered pixels will only be double that of the 8:1 filter. The performance penalty also diminishes because fewer pixels require the data fetches of greater anisotropy.

In the end, it is this trade-off between additional hardware complexity and diminishing returns that causes an upper bound to be set on anisotropic quality in a hardware design. Applications and users are then free to adjust the trade-off through driver and software settings up to this threshold.

Implementation

True anisotropic filtering probes the texture anisotropically on the fly on a per-pixel basis for any orientation of anisotropy.

In graphics hardware, when a texture is sampled anisotropically, several probes (texel samples) of the texture around the center point are typically taken, on a sample pattern mapped according to the projected shape of the texture at that pixel,[9] although earlier software methods used summed-area tables.[10]

Each anisotropic filtering probe is often in itself a filtered MIP map sample, which adds more sampling to the process. Sixteen trilinear anisotropic samples might require 128 samples from the stored texture: trilinear MIP map filtering takes four samples on each of two MIP levels, and 16-tap anisotropic sampling takes sixteen of these trilinear filtered probes.
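The arithmetic above can be expressed as a small helper, with the factors named for clarity:

```python
# Texel fetch count for trilinear anisotropic filtering: each anisotropic
# tap is a trilinear probe, and each trilinear probe takes 4 bilinear
# texel fetches on each of 2 adjacent MIP levels.
def texel_fetches(aniso_taps, bilinear_taps=4, mip_levels=2):
    return aniso_taps * bilinear_taps * mip_levels

print(texel_fetches(16))  # 128 fetches for 16-tap trilinear anisotropic
print(texel_fetches(2))   # 16 fetches for 2-tap
```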

However, this level of filtering complexity is not required all the time. There are commonly available methods to reduce the amount of work the video rendering hardware must do.

The anisotropic filtering method most commonly implemented on graphics hardware is the composition of the filtered pixel values from only one line of MIP map samples. In general the method of building a texture filter result from multiple probes filling a projected pixel sampling into texture space is referred to as "footprint assembly", even where implementation details vary.[11][12][13]
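A rough sketch of the "one line of MIP map samples" idea can be given in Python: probes are spaced along the major axis of the pixel's projected footprint in texture space, and each probe stands in for a filtered MIP sample. Implementation details vary, as the text notes; this is illustrative only, and the function name is invented.

```python
# Sketch of footprint assembly: evenly space anisotropic probe centers
# along the major axis of the pixel's footprint in texture space. In real
# hardware each position would be a filtered (e.g. trilinear) MIP sample.
def probe_positions(center_u, center_v, major_du, major_dv, num_probes):
    """Place num_probes sample points along the footprint's major axis,
    centered on (center_u, center_v)."""
    positions = []
    for i in range(num_probes):
        t = (i + 0.5) / num_probes - 0.5   # spans -0.5 .. 0.5 across the axis
        positions.append((center_u + t * major_du, center_v + t * major_dv))
    return positions

# Four probes along a footprint stretched 8 texels in the u direction:
print(probe_positions(0.0, 0.0, 8.0, 0.0, 4))
# [(-3.0, 0.0), (-1.0, 0.0), (1.0, 0.0), (3.0, 0.0)]
```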

Performance and optimization

The sample count required can make anisotropic filtering extremely bandwidth-intensive. Multiple textures per pixel are common; each texture sample could be four bytes or more, so each anisotropically filtered pixel could require 512 bytes from texture memory, although texture compression is commonly used to reduce this.

A video display device can easily contain over two million pixels, and desired application framerates are often upwards of 60 frames per second. As a result, the required texture memory bandwidth may grow to large values. Pipeline bandwidth in the hundreds of gigabytes per second is not unusual for texture rendering operations where anisotropic filtering is involved.[14]

Fortunately, several factors work in favor of better performance:

  • The probes themselves share cached texture samples, both inter-pixel and intra-pixel.[15]
  • Even with 16-tap anisotropic filtering, not all 16 taps are always needed because only distant highly oblique pixel fills tend to be highly anisotropic.[8]
  • Highly anisotropic pixel fill tends to cover small regions of the screen (generally under 10%).[8]
  • Texture magnification filters (as a general rule) require no anisotropic filtering.
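The last point above can be sketched as a simple decision rule: when the footprint is smaller than a texel on both axes (magnification), or when it is not elongated, anisotropic sampling adds nothing and a renderer can fall back to cheaper filtering. The function and threshold are illustrative assumptions, not a specific hardware test.

```python
# Illustrative decision rule: anisotropic filtering only pays off when the
# texture is being minified (footprint larger than a texel) AND the
# footprint is elongated (one axis longer than the other).
def needs_aniso(du, dv, threshold=1.0):
    minified = max(du, dv) > threshold
    elongated = max(du, dv) / max(min(du, dv), 1e-9) > 1.0
    return minified and elongated

print(needs_aniso(0.5, 0.5))  # False: magnification, no anisotropic work
print(needs_aniso(2.0, 2.0))  # False: isotropic minification
print(needs_aniso(1.0, 8.0))  # True: minified and oblique
```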


References

  1. ^ "What is Anisotropic Filtering? - Technipages". 8 July 2020.
  2. ^ Ewins, Jon P.; Waller, Marcus D.; White, Martin; Lister, Paul F. (April 2000). "Implementing an anisotropic texture filter - ScienceDirect". Computers & Graphics. 24 (2): 253–267. doi:10.1016/S0097-8493(99)00159-4.
  3. ^ Blinn, James F.; Newell, Martin E. (October 1976). "Graphics and Image Processing: Texture and Reflection in Computer Generated Images" (PDF). Communications of the ACM. 19 (10): 542–547. doi:10.1145/360349.360353. S2CID 408793. Retrieved 2017-10-20.
  4. ^ Heckbert, Paul S. (November 1986). "Survey Of Texture Mapping" (PDF). IEEE Computer Graphics and Applications. 6 (11): 56–67. doi:10.1109/MCG.1986.276672. S2CID 6398235. Retrieved 2017-10-20.
  5. ^ "Radeon Whitepaper" (PDF). ATI Technologies Inc. 2000. p. 23. Retrieved 2017-10-20.
  6. ^ "Chapter 5: Texturing" (PDF). CS559, Fall 2003. University of Wisconsin–Madison. 2003. Retrieved 2017-10-20.
  7. ^ "Anisotropic Filtering". Nvidia Corporation. Retrieved 2017-10-20.
  8. ^ a b c "Texture antialiasing". ATI's Radeon 9700 Pro graphics card. The Tech Report. 16 September 2002. Retrieved 2017-10-20.
  9. ^ Olano, Marc; Mukherjee, Shrijeet; Dorbie, Angus (2001). "Vertex-based anisotropic texturing". Proceedings of the ACM SIGGRAPH/EUROGRAPHICS workshop on Graphics hardware (PDF). pp. 95–98. CiteSeerX doi:10.1145/383507.383532. ISBN 978-1581134070. S2CID 14022450. Archived from the original (PDF) on 2017-02-14. Retrieved 2017-10-20.
  10. ^ Crow, Franklin C. (July 1984). "Summed-area tables for texture mapping". Proceedings of the 11th annual conference on Computer graphics and interactive techniques - SIGGRAPH '84 (PDF). Vol. 18. pp. 207–212. doi:10.1145/800031.808600. ISBN 0897911385. S2CID 2210332. Retrieved 2017-10-20.
  11. ^ Schilling, A.; Knittel, G.; Strasser, W. (May 1996). "Texram: a smart memory for texturing". IEEE Computer Graphics and Applications. 16 (3): 32–41. doi:10.1109/38.491183.
  12. ^ Chen, Baoquan; Dachille, Frank; Kaufman, Arie (March 2004). "Footprint Area Sampled Texturing" (PDF). IEEE Transactions on Visualization and Computer Graphics. 10 (2): 230–240. doi:10.1109/TVCG.2004.1260775. PMID 15384648. S2CID 10957724. Retrieved 2017-10-20.
  13. ^ Lensch, Hendrik (2007). "Computer Graphics: Texture Filtering & Sampling Theory" (PDF). Max Planck Institute for Informatics. Retrieved 2017-10-20.
  14. ^ Mei, Xinxin; Chu, Xiaowen (2015-09-08). "Dissecting GPU Memory Hierarchy through Microbenchmarking". arXiv:1509.02308 [cs.AR]. Retrieved 2017-10-20.
  15. ^ Igehy, Homan; Eldridge, Matthew; Proudfoot, Kekoa (1998). "Prefetching in a Texture Cache Architecture". Eurographics/SIGGRAPH Workshop on Graphics Hardware. Stanford University. Retrieved 2017-10-20.
