I think this article explains things quite well! As for the math being "ridiculous", the way I see it, it is nothing more than someone trying to explain how it works in layman's terms (simplified for those of us who are not math wizards). If you feel it is ridiculous, then I would be interested in seeing how you would lay it out in math terms. This is a subject that interests me, as I am a network admin and deal with these problems on a daily basis. Knowing the formulas and how to figure all this out would help a lot! Revision as of 13:32, 27 September 2006 by 
AFAIK, CAS does not have to be an integer. For instance, a memory chip can have CAS latency 2.5. -thealsir
- Only the first generation DDR supported non-integer CAS latency, and there, only half-integer values were possible. In practice, the only values used were 2, 2.5, and 3. Later generations omitted the option, since their higher clock speeds gave the same timing resolution. 22.214.171.124 (talk) 09:33, 9 February 2009 (UTC)
Could someone expand this page to include some info on CAS2 (CAS3, etc.)? Thanks! Ewlyahoocom 20:33, 10 April 2006 (UTC)
- Do you mean CAS 2 as in the latency speed? I use DRAM that is rated at CAS 2.5, for example. Zedmaster375 2 July 2008 —Preceding comment was added at 21:50, 2 July 2008 (UTC)
- I have serious objections to the paragraph: <quote>"For example, consider a 133 MHz CL3 device (7.5 ns per cycle, 3 cycles request latency) versus a 100 MHz CL2 device (10.0 ns per cycle, 2 cycles request latency). The first bit would be available after 22.5 ns (7.5 ns * 3) on the CL3 device and after 20.0 ns (10.0 ns * 2) on the CL2 device, demonstrating the benefit of a lower CAS latency. However when reading a burst of even 4 bits, the higher clock speed wins: 45.0 ns (7.5 * 3 latency + 7.5 * 3 bits after the first) versus 50.0 ns (10.0 * 2 latency + 10.0 * 3 bits after the first)."</quote> The math is just plain wrong, considering that DRAM is never accessible by individual bits. The smallest unit of a download from or upload to DRAM is 1 byte (8 bits). Multiplying 7.5 ns or 10.0 ns for each bit is ridiculous. 126.96.36.199 00:04, 22 July 2006 (UTC)
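(For reference: the quoted arithmetic is at least internally consistent if each "bit" is read as one full-bus-width transfer. A minimal sketch of the calculation in Python, with hypothetical helper names:)

```python
def first_word_ns(clock_mhz, cl):
    """Time until the first transfer of a read arrives: CL cycles at the given clock."""
    cycle_ns = 1000.0 / clock_mhz   # ns per clock cycle
    return cl * cycle_ns

def burst_ns(clock_mhz, cl, burst_len):
    """Time until the last transfer of a burst arrives:
    CL cycles, then one more cycle per additional transfer."""
    cycle_ns = 1000.0 / clock_mhz
    return (cl + burst_len - 1) * cycle_ns

# 133 MHz CL3 vs 100 MHz CL2, as in the quoted paragraph
print(first_word_ns(133.33, 3))   # ~22.5 ns: CL3 loses on first access
print(first_word_ns(100.0, 2))    # 20.0 ns
print(burst_ns(133.33, 3, 4))     # ~45.0 ns: CL3 wins on a burst of 4
print(burst_ns(100.0, 2, 4))      # 50.0 ns
```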
Yes, I agree. Multiplying the cycle time for each bit is ridiculous. It's not even 8 bits, but 64 bits: data buses are 64 bits wide. So once the CAS latency is incurred for the first 64 bits, every consecutive 64 bits is transferred every I/O bus cycle (or twice per cycle in the case of DDR). Also, it would be good to mention what the unit of request from the CPU is -- the L2/L3 cache line size? Say the DRAM transfers about 256 bytes in a burst: that's going to be 7.5*3 + 7.5*31 ns; however, if the load is on an aligned boundary, the load latency is going to be 7.5*3 + L2 miss time + L1 miss time + memory controller/bus arbitration cycles. —Preceding unsigned comment added by 188.8.131.52 (talk) 18:29, 3 August 2008 (UTC)
The article clearly says "burst", that is several bits sequentially accessed on each bit lane, or several words of full bus width. Bus width is 8 or 16 bits, and there are (at least were) also 4 and 32 bit wide DRAM chips, but I can't remember if they were in the JEDEC spec. So a burst of 4 on an 8-bit wide device would read 32 bits.
The OP is taking the point of view from a single bit. In practice many bits (8/byte) are returned in parallel, but the timings are the same for each. 184.108.40.206 13:55, 3 April 2007 (UTC)
Actually - I think data is read in WORDS, not bytes, and not bits either - but anyhow - the author is referring to the fact that most memory access is sequential - reading lots and lots of *consecutive* bytes, meaning that the RAS stuff doesn't happen all that often.
220.127.116.11 12:16, 2 June 2007 (UTC)
Okay, here's what I know about the issue at hand. The author is correct in the 7.5 ns and 10 ns example. Here's why: 1) Although bits are read and written in sets of 8 bits (1 byte), the timing per bit still equals those values, since all the bits in a set arrive in parallel. 2) In certain RAM sticks I've seen, such as 8x16 memory (I think that was the one, anyway), data is read in sets of 16 bits, or 2 bytes. If I'm wrong, somebody please correct me. Vedalken 02:26, 23 June 2007 (UTC)
Are there compatibility issues?
I just bought the wrong RAM: CL 3.0 instead of the already installed CL 2.5. Now I am wondering, because the motherboard manual suggests having identical latencies.
Either way, there should be a statement about compatibility between different latencies. If there are compatibility issues, this would be important to note. Though I haven't experienced any, there might be a difference in speed.
LordManu 04:00, 3 January 2007 (UTC)
- Get CPU-Z (a free download) or any program that reads the "SPD table". That will tell you all the different frequency/latency settings that each module can use. The computer will run all of them at the same frequency (the highest they support in common, unless your BIOS is set otherwise). E.g. if one module is 100, 133, 166 and another is 133, 166, 200, it will run at 166 MHz.
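(The matching rule described above, sketched in Python with made-up SPD values -- it's just the highest frequency in the intersection of what the two modules report:)

```python
# frequencies (MHz) each module reports in its SPD table (hypothetical values)
module_a = {100, 133, 166}
module_b = {133, 166, 200}

# both modules end up running at the highest frequency they support in common
common = module_a & module_b
print(max(common))  # 166
```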
- For me, I cannot execute a computer program to answer my question (see below) since I need memory to execute the program and I need an answer to my question to know what memory to get.
- If one module has a CAS of 2.5 at 166 MHz and the other has a CAS of 3 at 166 MHz, it will run both of them at 3, but it will still work. The difference is practically unnoticeable.--KX36 18:10, 29 May 2007 (UTC)
- I sure agree that we need practical information. My computer says it uses CL 2.5 but I am not sure if CL 3 or CL 2 memory will work. The technical explanation is interesting but it does not help me. So my concern is not the compatibility among memory modules; it is compatibility of memory modules with the system. —Preceding unsigned comment added by 18.104.22.168 (talk) 07:12, 29 June 2008 (UTC)
I'm not an engineer, but if I remember correctly from college, this analogy may help (please correct any errors).
Suppose you are a teacher taking a class on a field trip. A magic vehicle that will take your class is shared by all the schools in the district. There is only a one-way road that connects all the schools, and the vehicle drives from one school to the next all day, waiting for students to pick up. When it gets the order, it drives down the road, passing the other schools until it gets to yours. If it happens to be at your school when the call comes, it can pick your students up right away, but if it has just left your school, it will take time to drive all the way around to get back. The complete circle is like the CAS.
You may ask, "Why not just stay at the first school and wait?" The answer is that if you are the last school it will always take the full time to get to you. By circling, the vehicle will usually be closer than that. In fact, it will be an average of half the distance to any school at any given time. (There are other technical reasons why it is done this way too.)
Other things to consider are the Memory Bus Speed and Memory Bus Width (size). The Memory Bus Speed is like the road speed limit and the Memory Bus Width is like the number of seats on the vehicle.
Now back to the trip. The vehicle arrives at your school. It has 10 seats and you have 50 students. 10 get in and are magically transported to their destination. The (now empty) vehicle has continued to move on to the next school. The rest of your class must wait for the vehicle to return.
Now, if the vehicle had more seats (wider memory bus), or it could travel faster (memory bus speed), or the route were shorter (CAS), your students could get to their destination quicker.
It is important to remember that the CAS is not really a distance on a road. It is the number of ticks on the system clock. The analogy is only useful to help you visualize why the CAS is not the only determining factor of memory speed. —The preceding unsigned comment was added by 22.214.171.124 (talk) 18:02, 6 April 2007 (UTC).
Should Trcd, Trp etc... be capitalised as tRCD, tRP, as per the SDRAM latency article? 126.96.36.199 13:10, 23 August 2007 (UTC)
Errata is wrong
The poster who posted the errata is simply wrong. The original poster is correct. For example, with DDR the chip is actually read 64 bits at a time, not 8 bits as some others have suggested.
When the original poster said the 1st bit would be available in 22.5 ns, he meant (and it's obvious) that the 1st 1 bit (deep) x 64 bits wide would be available. When he said 4 bits he meant that the 1st 4 bits (deep) x 64 bits wide would be available.
I have taken the liberty of adding some additional explanation to the article and removing the misleading-section tag. I am an infrequent editor and do not know whether removing the tag requires consensus, but if it does, I'm sure someone can add it back.
—Preceding unsigned comment added by 188.8.131.52 (talk) 05:01, 11 September 2007 (UTC)
Thus CAS Latency (CL) is the time (in number of clock cycles) that elapses between the
In which clock cycles? CPUs or RAMs?--184.108.40.206 18:36, 2 November 2007 (UTC)
RAM ilovemrdoe 05:29, 30 December 2007 (UTC)
There are no sources for most of this info, so it's pure opinion. 220.127.116.11 (talk)
I wouldn't call it opinion, more 'general knowledge'. But some sources would be good! -ilovemrdoe 05:30, 30 December 2007 (UTC)
RAM speed increase versus CAS latency
According to the original article, I suppose that even though CAS 5 is the minimum on DDR3 RAM, the clock speed boost will at some point offset the speed difference, even on the smallest amount of data?
As in maybe DDR2-800 is faster at 4-4-4-12 but DDR3 may be faster in ALL situations at 5-5-5-15? Talrinys (talk) 11:58, 26 January 2008 (UTC)
Am I being stupid, or does the simple advice "the lower the CAS the better" directly contradict the table, in that the lower the CAS number in the table, the longer the total time?—Preceding unsigned comment added by 18.104.22.168 (talk) 22:49, 4 June 2008 (UTC)
- No, it's just not obvious. The CAS is the number of cycles needed to access the memory. Since the clock speed is rising faster than the CL (so the time per cycle shrinks faster than the cycle count grows), the total time is going down. Compare the last two entries of the first table: the MHz of the second is double that of the first (conveniently for our math), meaning each cycle takes half as much time. Since the 800 MHz memory uses 5 cycles, less than double the 3 needed by the 400 MHz memory, it needs slightly less total time. --Rindis (talk) 19:49, 24 June 2008 (UTC)
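(The comparison described above, as a quick Python check, using the comment's numbers and counting CL cycles at the stated clock:)

```python
def cas_delay_ns(clock_mhz, cl):
    """Total CAS delay: CL cycles at the given clock rate."""
    return cl * 1000.0 / clock_mhz

print(cas_delay_ns(400, 3))  # 7.5 ns  (CL3 at 400 MHz)
print(cas_delay_ns(800, 5))  # 6.25 ns (CL5 at 800 MHz: more cycles, less time)
```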
- My question is why is it better? That seems rather err, subjective. For example there is a slow food movement, how do I know some people don't prefer to wait longer for their computers to compute? Maybe "The lower the CAS, the faster the memory?" PS: the Slow Computing people are already working on their official website, but they are still waiting for their systems to load Dreamweaver... Zedmaster375 2 July 2008 —Preceding comment was added at 21:56, 2 July 2008 (UTC)
The examples given work on the assumption that 1 bit is put on the bus every cycle. Data buses are 64 bits wide, so the memory doesn't transfer one bit per I/O bus clock cycle -- it transfers eight bytes (64 bits). —Preceding unsigned comment added by 22.214.171.124 (talk) 18:23, 3 August 2008 (UTC)
OK, look: if you have an 8-bit bus, it moves 1 byte at a shot. A 16-bit bus moves a word (double byte) at a shot. A 32-bit bus, a double word at a shot. And lastly, a 64-bit bus moves a quad word at a shot -- unless you're running a 32-bit OS, in which case you're right back to a double word again. This refers to "WIDE". 1st bit, 2nd bit, 3rd bit refers to consecutive (bus-width) data transfers, "DEEP". This is in an ideal situation where the transfer starts at the beginning of a 'page' in the memory module. Just remember, whether you have 256k or 4 gig of memory loaded, it is consecutive, sequential, ONE LONG STRING as seen by the processor, not a checkerboard! Also, the starting address might not be at the start of a 'page' in the memory module. Just something to keep in the back of your mind. The tube guy (talk) 10:29, 14 May 2009
Clock Cycles/Command Rate
"In the table below, data rates are given in million of transfers per second (MT/s), while clock rates are given in MHz, cycles per second."
The foregoing sentence references the table and "clock rates". However, the table uses "Command rate" as a header. This is the first occurrence of the term "command" in the article. Perhaps a parenthetical or description associating "command rate" with "clock rate" would clarify the use of "Command rate" in the table. Or perhaps just changing the header to "Clock cycle rate" would suffice? Andage01 (talk) 15:50, 3 June 2009 (UTC)
Doesn't seem to state what, if anything, CAS stands for (which might help to explain it) or what it's actually measured in.
Quite a fully-featured article but it seems to skip the basics for people who just want to know what it is and how to make sense of a CAS rating. — Preceding unsigned comment added by 126.96.36.199 (talk) 10:44, 7 September 2011 (UTC)
Example of "a typical 1 GiB SDRAM": shouldn't there be a 13-bit (not 10-bit) column address?
The example goes like this:
- As an example, a typical 1 GiB SDRAM memory module might contain eight separate one-gibibit DRAM chips, each offering 128 MiB of storage space. Each chip is divided internally into eight banks of 2^27 = 128 Mibits, each of which comprises a separate DRAM array. Each array contains 2^14 = 16384 rows of 2^13 = 8192 bits each. One byte of memory (from each chip; 64 bits total from the whole DIMM) is accessed by supplying a 3-bit bank number, a 14-bit row address, and a 10-bit column address.
Well, shouldn't it read
- , and a 13-bit column address.
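(A quick sanity check in Python suggests the article's 10-bit figure is right if each column is one chip-width of 8 bits, rather than a single bit:)

```python
import math

banks      = 8        # 3 bank-address bits
rows       = 2**14    # 14 row-address bits
row_bits   = 2**13    # 8192 bits per row
chip_width = 8        # x8 chip: 8 data pins, one byte per access

# a column address selects one chip-width's worth of bits, not one bit
columns = row_bits // chip_width
print(int(math.log2(columns)))  # 10 column-address bits

# total addressable capacity should equal the chip's one gibibit
assert banks * rows * columns * chip_width == 2**30
```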
Also, I think there is a terminological inconsistency in the usage of the words "module", "chip", and "bank" between the 1st section (the example above) and the 2nd section. Namely, the 2nd section (Effect on memory access speed) starts its 2nd paragraph with
- Because memory modules have multiple internal banks, ...
Either (A) the "memory modules" should be "memory chips" or (B) the "banks" should be "chips". Which is it, (A) or (B) (or none)? — Preceding unsigned comment added by 188.8.131.52 (talk) 13:50, 22 June 2012 (UTC)