Untitled

Could this page be modified to contain the term "Cache Blocking" -- it's currently a redirect but it's not clear how the terms relate. I'm assuming "Cache Blocking" is an instance of Loop nest optimization but are all the described techniques instances of cache blocking?

So far as I know, all cache blocking optimizations performed by compilers are performed on loop nests. However, I'm not qualified to say that's always going to be the case. Iain McClatchie 08:17, 13 March 2006 (UTC)
Cache blocking AKA loop tiling AKA loop blocking is one loop transformation technique. -chun 7 April 2011
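To make the equivalence of the three names concrete, here is a minimal sketch of the transformation on the matrix multiplication discussed in the article. The matrix size N and tile size TS are arbitrary choices for illustration (with N divisible by TS to keep the sketch short); the two routines compute the same product, differing only in iteration order:

```c
#include <string.h>

#define N  4   /* matrix dimension; arbitrary for illustration */
#define TS 2   /* tile (block) size; arbitrary, must divide N here */

/* Plain triple loop: C = A * B in the conventional order. */
void matmul_plain(const double A[N][N], const double Bm[N][N], double C[N][N]) {
    memset(C, 0, sizeof(double) * N * N);
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            for (int k = 0; k < N; k++)
                C[i][j] += A[i][k] * Bm[k][j];
}

/* The same computation with all three loops tiled ("blocked"): work
   proceeds over TS x TS sub-blocks, so each block of the operands is
   reused while it is still cache-resident. */
void matmul_tiled(const double A[N][N], const double Bm[N][N], double C[N][N]) {
    memset(C, 0, sizeof(double) * N * N);
    for (int ii = 0; ii < N; ii += TS)
        for (int jj = 0; jj < N; jj += TS)
            for (int kk = 0; kk < N; kk += TS)
                for (int i = ii; i < ii + TS; i++)
                    for (int j = jj; j < jj + TS; j++)
                        for (int k = kk; k < kk + TS; k++)
                            C[i][j] += A[i][k] * Bm[k][j];
}
```

Because the per-element additions happen in the same k order, the tiled version produces bitwise-identical results here; only the memory access pattern changes.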

Pentium 4

A machine like a 2.8 GHz Pentium 4, built in 2003, has slightly less memory bandwidth and vastly better floating point, so that it can sustain 16.5 multiply-adds per memory operation. As a result, the code above will run slower on the 2.8 GHz Pentium 4 than on the 166 MHz Y-MP!

16.5 multiply-adds per what memory operation? L1? L2? RAM? Taw 07:04, 10 October 2006 (UTC)
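Presumably operations to main memory: the figure reads like a "machine balance", i.e. peak multiply-add rate divided by the rate at which operands arrive from RAM. The sketch below uses assumed, illustrative figures (not measured Pentium 4 numbers) that happen to reproduce a ratio of about 16.5:

```c
/* Machine balance: how many multiply-adds the CPU can issue per operand
   moved to or from main memory.  With the assumed figures of one
   multiply-add per cycle at 2.8 GHz and ~1.36 GB/s sustained RAM
   bandwidth for 8-byte doubles, machine_balance(2.8e9, 1.36e9, 8.0)
   comes out to roughly 16.5.  All inputs are illustrative assumptions. */
double machine_balance(double madds_per_sec, double mem_bytes_per_sec,
                       double bytes_per_operand) {
    double memops_per_sec = mem_bytes_per_sec / bytes_per_operand;
    return madds_per_sec / memops_per_sec;
}
```

The same formula with L1 or L2 bandwidth substituted would give a much smaller ratio, which is why the level of the hierarchy matters to the article's claim.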

Also remember that a Cray is a supercomputer built for floating-point algebra (and vector algebra, so if you compiled this statement as vector operations you would probably be a few times faster), and the Pentium 4 was a cheap mass-market processor (not even high-end mass market). —Preceding unsigned comment added by Masterfreek64 (talk · contribs) 19:15, 23 November 2008 (UTC)

Numbers questionable

This code would run quite acceptably on a Cray Y-MP (built in the early 1980s), which can sustain 0.8 multiply–adds per memory operation to main memory. A machine like a 2.8 GHz Pentium 4, built in 2003, has slightly less memory bandwidth and vastly better floating point, so that it can sustain 16.5 multiply–adds per memory operation. As a result, the code above will run slower on the 2.8 GHz Pentium 4 than on the 166 MHz Y-MP!

According to https://en.wikipedia.org/wiki/Cray_Y-MP the machine was built in 1988, which is the end of the 1980s, not the early 1980s. — Preceding unsigned comment added by 130.149.224.23 (talk) 07:42, 24 August 2018 (UTC)

Loop skewing

Is "loop skewing" another name for the polytope model, which involves representing N nested loops as a polyhedron in N-dimensional space and then "skewing" it via affine transformations to produce a new, parallelizable loop nest? If so, should Polytope model be moved to Loop skewing, for consistency among all the loop optimization articles? (And if not, what's loop skewing?) --Quuxplusone 19:30, 12 December 2006 (UTC)

No, it is just one loop transformation available to compilers. It can be implemented within a polyhedral framework. -chun 7 April 2011

Commute Matrices

Unfortunately, the code describes the product

 C = B×A

and the entire article is about manipulation of this original code. Either the product needs to be written as the code actually computes it (unconventional), or the entire article's code needs to be updated (sadly, an error-prone task). --129.132.59.67 (talk) 09:31, 23 April 2009 (UTC)
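The discrepancy matters because matrix multiplication does not commute, so A×B and B×A are genuinely different results, not just different notations. A small self-contained check (the 2×2 values are made up purely for illustration):

```c
/* C = X * Y for 2x2 matrices, conventional row-times-column order. */
void mul2(const double X[2][2], const double Y[2][2], double C[2][2]) {
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++) {
            C[i][j] = 0.0;
            for (int k = 0; k < 2; k++)
                C[i][j] += X[i][k] * Y[k][j];
        }
}
```

For example, with A = {{1,2},{3,4}} and B = {{0,1},{1,0}} (B swaps columns when applied on the right, rows on the left), mul2(A, B, …) and mul2(B, A, …) give different matrices, so a loop nest written for one product is silently wrong for the other.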

Merge needed

The following articles are all largely about the same thing: "Locality of Reference", "Loop tiling", and "Loop nest optimization". You would not guess this from what each says about the others. The information should probably be consolidated in one place by someone who has permissions. Also, it would be of extreme benefit to this topic of matrix blocking to have pictures that illustrate what's going on. 98.119.149.245 (talk) 23:19, 27 May 2014 (UTC)

You don't need to cross-post to multiple talk pages. Loop tiling and Loop nest optimization seem like the same thing, so I have added merge tags to them. But "Locality of reference" is a much more general concept with uses in other optimization techniques; it doesn't make sense to merge it with the others. -- intgr [talk] 07:26, 28 May 2014 (UTC)

Done Klbrain (talk) 17:06, 5 April 2017 (UTC)

Is the analysis of the code in "Example: Matrix multiplication" correct?

After the second code snippet within "Example: Matrix multiplication", the article states, "During the calculation of a horizontal stripe of C results, one horizontal stripe of A is loaded, and the entire matrix B is loaded." If that's true, why does the third code snippet confer the benefit that "...ib can be set to any desired parameter, and the number of loads of the B matrix will be reduced by that factor"? The cache improvement in the second example is due to the reuse of B[k][j + 0] and B[k][j + 1] within the innermost loop. That does not change in the third example. The innermost "k" loop still reads all of B into the cache.

The only scenario in which the entirety of B does not get read during the calculation of a single stripe of C is if the cache block size is small relative to the size of a row of B. However, we are already asked to assume that a row of A can fit easily within the cache, and the reader is likely expecting A and B to be similarly sized. If the reader is to make this assumption, it should be stated explicitly in the article.

I believe that caching will only improve when the "k" loop is distributed, as in the fourth example.
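For reference, here is a self-contained sketch of the kind of k-loop blocking under discussion. The matrix size N and block size KB are arbitrary choices for illustration, not the article's actual parameters; the point is that each pass over a KB-row band of B updates every element of C with a partial sum, so only that band of B needs to be cache-resident at a time:

```c
#include <string.h>

#define N  8   /* matrix dimension; arbitrary for illustration */
#define KB 4   /* k-loop block size; arbitrary, must divide N here */

/* Reference: plain C = A * B. */
void matmul_ref(const double A[N][N], const double Bm[N][N], double C[N][N]) {
    memset(C, 0, sizeof(double) * N * N);
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            for (int k = 0; k < N; k++)
                C[i][j] += A[i][k] * Bm[k][j];
}

/* k loop blocked (distributed over the outer loops): for each band of KB
   rows of B, every C[i][j] receives a partial update.  Each element of B
   is still loaded once per full pass over the band, but only a KB-row
   stripe of B has to stay in cache at any moment. */
void matmul_kblocked(const double A[N][N], const double Bm[N][N], double C[N][N]) {
    memset(C, 0, sizeof(double) * N * N);
    for (int kk = 0; kk < N; kk += KB)
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                for (int k = kk; k < kk + KB; k++)
                    C[i][j] += A[i][k] * Bm[k][j];
}
```

Since each C[i][j] accumulates its k terms in the same order in both versions, the results match exactly; whether the cache behavior actually improves is precisely the question raised above.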

Is there something I am missing? If not, I will proceed with the edits. If so, perhaps the explanation can be edited for clarity. 76.28.101.246 (talk) 22:34, 25 September 2023 (UTC)