Talk:X86-64/Archive 1


Background color of Operating modes diagram

Wouldn't it be more logical to have a red background for "Yes" and a green background for "No" in the "Compiled-application rebuild required"-field of the "Operating modes"-diagram? —Preceding unsigned comment added by 63.80.93.152 (talk) 21:58, 30 July 2008 (UTC)

"To Clone"

The best term to describe what happened is: "Intel has cloned AMD64 under the name Intel 64". In the early 80s, PCs were referred to as "IBM PC clones". Cloning is taking a technology (e.g. an instruction set) and re-implementing it from scratch. I have taken the liberty of rewording the introductory paragraph accordingly, because the term that was used before ("Intel has also adopted AMD64 under the name Intel 64") appeared too weak IMHO. Comments? -- R. Duxx 67.53.116.122 03:23, 8 September 2007 (UTC)


I agree about the terminology. It is the proper term for the actions that were taken. 68.42.67.162 05:31, 26 October 2007 (UTC)

Error Corrected

Please note: This information was incorrect: "traditionally, operating systems take one half of the address space for themselves (usually the higher half, named kernel space) and leave the other to applications (user space)". It's highly unusual to put the kernel at the top. Usually it's at the bottom, below the program code and data segments, and the stack is usually at the top.

I don't know how Unix does it (except that some variants use an entire separate process for the kernel) but the description was correct for the NT family. By the way, how does your idea fit with the fact that there are multiple user address spaces (one per process), multiple stacks (one per thread) within each, multiple heaps per process, a kernel mode stack in kernel space for every thread in the system in addition to its user mode stack (the k-mode stacks are in the kernel space), and "data segments" haven't existed since NT 3.1? Well, they exist, but they have base address 0 and size (size of VAS), just like the code and stack segments. Things are a bit more complicated than you represent and the notion of basing "the" stack at FFF...FFF so it "won't grow into other things" is, well, just not applicable. Jeh 08:16, 21 April 2007 (UTC)
Most operating systems I know of put the kernel at the top of the virtual address space on x86 platforms, usually splitting the virtual address space 3:1 or 2:2 with the kernel getting the top half. It is theoretically possible to put the kernel in a separate address space (as is done on SPARC processors, for example) but the cost of doing so is much greater than the benefits would warrant. The standard Unix process layout puts the text segment at the bottom of the address space (but a little bit above address 0), the data and bss segments above that, mapped files above that, and then stack at the top of user space (growing downward). 18.26.0.5 20:09, 12 June 2007 (UTC)
Well, there you go. :) Nevertheless, putting "stack" at the top of user space doesn't prevent "it" from growing into anything, since each thread in the process needs its own stack. You can't put more than one of them at the top! Jeh 20:07, 13 June 2007 (UTC)

Merging AMD64, EM64T, and x64

After weeks of discussion on the Talk:X64 page, there was little opposition to merging the three articles. The biggest debate was about what the name should be: AMD64, EM64T, and x64 or something else, namely x86-64. --Charles Gaudette 21:47, 18 September 2006 (UTC)

IMHO it should be x86-64 as it is the most general term for the architecture. x64=> windows AMD64=>amd processors EM64T=>Intel processors. 68.42.67.162 05:33, 26 October 2007 (UTC)

While it's generalized, it's not the most correct name, IMHO. Naming AMD64 x86_64 is like calling Einstein's Theory of Relativity "The Theory of Relativity". While it's neutral, it doesn't give credit to the creators. Should we, in taking your advice, thus name i*86 x86? No. While some people do in fact do that, most stick with i386 or i686, including Windows, which owns about ~80% of the home computer market. KeineLust90 (talk) 03:05, 30 July 2008 (UTC)
I agree. The fact that i386, IA-32, and IA-64 reference Intel in their names but for some reason AMD64 has to be called by a name that's been rarely used since AMD changed it is just odd. --Evice (talk) 22:32, 29 August 2008 (UTC)

Move from AMD64

Woah. Was there a consensus to rename this page? I for one oppose this. AMD64 is the name of the architecture, like IA-32 is the name of an architecture. AMD64 is in wide use by many linux distributions, BSD systems, Microsoft, etc, to refer to this architecture. We need a vote or a consensus on such a drastic change that seems to take neutrality overboard to the point of changing the meanings of words. samrolken 07:25, 22 September 2006 (UTC)

Are you saying you have read the comment above and the Talk:x64 discussions? And are still unhappy? --Charles Gaudette 18:56, 22 September 2006 (UTC)

Donald: AMD64 incorporates most if not all x86 features, therefore x86-64 seems a good choice. In the intro we could mention amd64 as the first name, created by AMD. —Preceding unsigned comment added by Donald j axel (talkcontribs) 08:54, 16 April 2008 (UTC)

Paging hierarchy size

I've just replaced some text that misleadingly stated that a full 64-bit PML4 would be 256 MiB long. In fact, the size is right for the PML4, but I think a better impression would be given if the size of the whole page mapping hierarchy is shown. These are my results for the current 48-bit hierarchy:

  • One PT: 512 entries x 8 bytes = 4096 bytes
  • One PD: 512 entries x (8 bytes + size of the associated PT) = 2101248 bytes = 2052 KiB
  • One PDPT: 512 entries x (8 bytes + size of the associated PD) = 1075843072 bytes = 1026 MiB
  • The PML4: 512 entries x (8 bytes + size of the associated PDPT) = 550831656960 = 513 GiB

Which yields about 0.2% of the 256 TiB space. Phew! Anyone wants to try calculating this for a possible 64-bit scheme? Habbit 09:22, 9 October 2006 (UTC)

Not really. The size of the PML4 was enough to scare me. :)
By the way, do you have a source for this change:
"In implementations supporting larger virtual addresses, this latter table would either grow to accomodate sufficient entries to describe the entire address range, up to a theoretical maximum of 33,554,432 entries for a 64-bit implementation, or be overranked by a new mapping level, such as a PML5"
The AMD docs suggest PML4 will be extended, specifically this one, on page 133:
"Note: The sizes of the sign extension and the PML4 fields depend on the number of virtual address bits supported by the implementation."
Now obviously as there is no implementation with more than 48 bits available yet this is still subject to change, but I think that's a pretty clear statement that the PML4 will be expanded in such cases. Unless another source contradicts it, in which case we'll just have to wait and see... JulesH 15:07, 14 October 2006 (UTC)
I don't think that means the PML4 is to be expanded on >48b implementations. It sounds like "don't take the 48 bits for granted, could be less in some sadistic but still amd64-compliant platforms". Think about an expanded PML4 and you'll see an unmanageable 32 million entry behemoth. Habbit 18:45, 14 October 2006 (UTC)
Not that it really matters now, but I think my figures were wrong because IIRC each descriptor entry is 16 bytes long, not 8, in amd64. Which means all figures must double: the whole 48-bit paging scheme would eat up 1 TiB of memory! Can somebody please confirm this? My final sum is 2^40 + 2^31 + 2^22 + 2^13, or 1,101,663,313,920 bytes. Habbit 01:19, 2 April 2007 (UTC)

x86-64 or AMD64 ??

Why has the name been changed to x86-64?

AMD stated and officially declared the rename of the technology from x86-64 to AMD64...

The title of this article must be AMD64, and anyone who enters x86-64 must be redirected to AMD64.

The name of the page was changed to x86-64 because the page covers both AMD's implementation of the instruction set, which they now call AMD64, and Intel's implementation, which they've called, at various times, IA-32e, EM64T, and Intel64. The name x86-64 is a vendor-neutral way of referring to the instruction set, and not one specific to particular OSes, as x64 is. See Talk:x64 for the full discussion on the name change.
Given that AMD aren't the only company implementing the instruction set, the fact that they now call it AMD64 to promote their invention of it does not in any way impose any sort of requirement that the article be called AMD64, so, no, the title of this article is not required to be AMD64. Guy Harris 11:55, 14 November 2006 (UTC)
The name of the architecture is AMD64 in the same way that the name of the architecture it expands upon is i386 (i for Intel). When a company develops something, they get to name it - the fact that Intel calls their implementation of the instruction set something else is amusing, but it doesn't change the fact that the proper name to use for the architecture / instruction set in an encyclopedic work is AMD64. Chandon 02:05, 16 November 2006 (UTC)
Agreed. Because AMD started it, it was thus named amd64, which is why most operating systems used by the public (including Windows) use the term amd64. Quite honestly, if you asked someone, you're more likely to have them acknowledge the term "AMD64" due to it being on most of their stickers since about 2004ish. Intel, on the other hand, doesn't exactly state EM64T or x86_64 on their stickers, now do they? Any source tree for a *nix OS is amd64, because that's what it was defined as, just as i*86 was for the 32-bit implementation of the x86 platform. Seconded for reverting the name to AMD64, the original name. It's more confusing to the public to use the term x86_64. KeineLust90 (talk) 03:00, 30 July 2008 (UTC)
In fact, it is referred to in most literature as x86_64, not x86-64, but maybe Wikipedia conventions do not allow such a name? If they do, a redirect from x86_64 to x86-64 could be of help for some people. 81.65.26.7 00:47, 14 December 2006 (UTC)
Did you try it? It's there (since 2004, apparently). Because of the way Wikipedia names are mapped, it's called "x86 64", but if you type in x86_64, it does the expected thing. --NapoliRoma 01:59, 14 December 2006 (UTC)
The use of the underscore instead of a hyphen (ie. x86_64 versus x86-64) is purely due to the syntactic limitations of identifiers in the C programming language and/or C preprocessor. The main places where you'll find "x86_64" are in the Linux kernel and the GNU Compiler Collection. Letdorf 11:59, 23 May 2007 (UTC).

Why is this page called x86_64 and x86 called IA-32? Seems to be a bias on wikipedia —Preceding unsigned comment added by 74.73.6.35 (talk) 07:40, 2 June 2008 (UTC)

x86 != IA32 and if you type x86 into the box you'll end up at a suitable article for that term. IA32 could have been called i386, but that's hardly vendor neutral and would also be confused with the i386 CPU.--Anss123 (talk) 07:49, 2 June 2008 (UTC)
How is IA-32 more vendor-neutral than i386? They both reference Intel, something AMD64 (which is the article title I'd prefer) does with AMD. Even the article itself says that AMD changed the architecture's name from x86-64 to AMD64, so it doesn't make sense to call it by the old name. Should we rename the Windows Vista article to Windows Longhorn? --Evice (talk) 22:41, 29 August 2008 (UTC)

History of AMD64

It would be nice to have a history subsection of the AMD64 section. When was the technology announced for the first time? When was the first emulator available? when was the first piece of hardware available? --Jarl Friis 07:45, 26 January 2007 (UTC)

Date or year of initial linux support.

Since Linux was the first OS to run x86-64 in long mode, it would be very relevant to know when that happened. Can someone tell? When and which was the first Linux distribution to officially release a version with x86-64 support? I think it was SuSE, but I am not sure. --Jarl Friis 07:50, 26 January 2007 (UTC)

x86-64 support was merged into the development kernel at 2.5.5 (released 20th Feb 2002, see Changelog-2.5.5). A beta version of x86-64 SuSE was made available a few months before the actual release of AMD's first x86-64 Opterons ("Hammer") in April 2003 as per [1]. Debian merged x86-64 to Sid in mid-2004 and there was an "unofficial" x86-64 Sarge release in 2005, but as of yet, no official version (pending the release of Etch). Redhat made an x86_64 release sometime after SuSE. Andrew Rodland 23:40, 28 February 2007 (UTC)


I'm fairly sure that Gentoo was the first distribution to release 64-bit support in a stable release. I sadly cannot find the date or the page I originally read that on. Gaurdro 05:40, 26 October 2007 (UTC)

neutrality tag on differences section

It seems to me that the differences are consistently worded in a non-neutral way. The first few are worded like 'intel does not support', but then the later ones are worded as 'intel does support'. It seems like they should be parallel in their wording. Intel does not do blah, AMD does not do blah. Jabencarsey 21:05, 16 April 2007 (UTC)

I can at this point only find one example of what you're saying:
"Intel 64 supports the MONITOR and MWAIT instructions, used by operating systems to better deal with Hyper-threading."
That line could perhaps be worded "AMD64 lacks the MONITOR and MWAIT instructions, ..." for consistency with some of the Intel 64 differences, but otherwise I can't see much that feels wrong there myself. Maybe some things have been updated since you made your comment though. Otherwise, if you're still seeing problems, just be bold and make the edits you feel are more neutral. As long as the sentence is telling the same thing afterwards, they're edits with a low "risk" of upsetting people anyway. ;) -- Northgrove 23:10, 29 April 2007 (UTC)


x87 / SSE2

According to the Intel® 64 and IA-32 Architectures Software Developer’s Manual, x87 is supported in long mode (not just compatibility mode). Could someone please provide a link for the assertion that Win XP x64 won't allow x87 instructions in long mode? Additionally, SSE2 is (strictly speaking) not a replacement for x87 because some math capabilities are only exposed through x87. For example, there is no SSE2 equivalent for FCOS / FSIN - OTOH you have addsd (SSE2) that does the same thing as fadd (x87). Commutator 23:38, 25 May 2007 (UTC)

x86_64 FS and GS registers

When I tagged the "Removal of older features" paragraph as needing a citation, my intention was to bring attention to the "fact" about AMD keeping the FS and GS registers in their x86 64 bits design for compatibility with Windows. I couldn't find any mention of this in internet, and neither of Jeh's statement in the changelog of the page ("they were retained at the request of the Windows kernel team"). What I did find somewhat contradicts what he said (taken from http://msdn.microsoft.com/msdnmag/issues/06/05/x64/default.aspx , emphasis mine): "In x86 versions of Windows, the FS register is used to point at per-thread memory areas, including the "last error" and Thread Local Storage (GetLastError and TlsGetValue, respectively). On x64 versions of Windows, the FS register has been replaced by the GS register". So, I still think that the x86_64 article needs some references regarding this information. Azrael81 14:42, 31 May 2007 (UTC)

The debugger tells me that both I and the MSDN article are incorrect; FS (or rather the segment descriptor selected by the contents of FS) still points to the TIB under x64 in long mode; but GS seems to contain the same selector value as DS, hence base address 0. "Hmmm." As for "retained for compatibility with Windows," I got this information originally in private conversation with one of the kernel team, one who was in a position to know -- but I don't have a reference wiki can point to. I'll see if I can find one. Jeh 17:14, 31 May 2007 (UTC)
Since original research is not welcome in wikipedia, and there's still no reference for the "compatibility with Windows" bit, I'm going to remove that piece of information from the article. Azrael81 12:50, 16 August 2007 (UTC)
Since the F segment register and descriptor most certainly are there, and are used on x64 as they were under x86 Windows, I am restoring the easily-verifiable information about the F segment. As you had it the item is incomplete at best: the segmenting mechanism is NOT completely gone in long mode. Jeh 17:43, 30 August 2007 (UTC)
I put the original item back, with a reference to the AMD64 documentation. If the problem is with the "compatibility with Windows" part, then just remove that - and remove the note about the FS and GS registers from the section on Windows support, which also mentions this; the FS and GS registers (and, it appears from that section, the CS register) do have some effect, so we should at least mention that. Guy Harris 19:29, 30 August 2007 (UTC)

I've done something that User:Intgr points out is a little unusual: rather than adding the above category link directly to this page, I've added it to the AMD64 redirect page instead. The reason I've done this is so that on the category page itself, it will appear as "AMD64" rather than "X86-64", since the AMD product is indeed named "AMD64". There is no other way I'm aware of to make this happen (category pipes only change sort order, not the name that appears), although there's apparently a request pending for such a feature.--NapoliRoma 17:41, 2 June 2007 (UTC)

Windows Vista x64 build number

"... Windows Vista x64 was released in January 2007. Internally they are actually the same build (5.2.3790.1830 SP1)" - isn't Vista build number different? --0xF 15:57, 10 July 2007 (UTC)

"IA64t"

In today's edits, I removed a reference to Intel 64 once being known as "IA64t". I found very few references at all to this term (or to "IA-64t"), and many of the ones I did find seemed to equate it to IA-64 rather than Intel 64, including in a couple of 1998 speeches by Intel execs, which would have predated AMD's x86-64 announcement. (These latter may also be a mistranscription of IA-64™, since one of them also refs "MercedT".) If anyone has backup for this term ever being used officially by Intel to refer to x86-64, please update the article.--NapoliRoma 14:45, 3 August 2007 (UTC)

Fair use rationale for Image:AMD64 Logo.svg

Image:AMD64 Logo.svg is being used on this article. I notice the image page specifies that the image is being used under fair use but there is no explanation or rationale as to why its use in this Wikipedia article constitutes fair use. In addition to the boilerplate fair use template, you must also write out on the image description page a specific explanation or rationale for why using this image in each article is consistent with fair use.

Please go to the image description page and edit it to include a fair use rationale. Using one of the templates at Wikipedia:Fair use rationale guideline is an easy way to ensure that your image is in compliance with Wikipedia policy, but remember that you must complete the template. Do not simply insert a blank template on an image page.

If there is other fair use media, consider checking that you have specified the fair use rationale on the other images used on this page. Note that any fair use images uploaded after 4 May, 2006, and lacking such an explanation will be deleted one week after they have been uploaded, as described on criteria for speedy deletion. If you have any questions please ask them at the Media copyright questions page. Thank you.

BetacommandBot 04:52, 16 September 2007 (UTC)

Stupid question

Is there any chance to run a 64-bit program on a 64-bit (AMD64) machine with a 32-bit version of Windows? —Preceding unsigned comment added by 212.149.216.233 (talk) 14:35, 14 November 2007 (UTC)

Yes, with VMware you can run a 64-bit OS, and with that 64-bit apps, on top of 32-bit Windows.--Anss123 17:36, 14 November 2007 (UTC)
Good to know. Thank you. --212.149.216.233 18:51, 14 November 2007 (UTC)
Wikipedia's not a support forum, so questions such as this aren't what talk pages are for (they're for discussing the corresponding article, not for general questions about what the article describes).
However, I'll respond anyway, as I'm not sure the answer to your question is "yes". VMware doesn't seem to have a clear answer to the question. Hardware and Firmware Requirements for 64-Bit Guest Operating Systems says, not surprisingly to me, that "Workstation and VMware Server require a 64-bit CPU to run a 64-bit guest operating system", noting, correctly, that "While it is theoretically possible to emulate a 64-bit instruction set on 32-bit hardware, doing so most likely results in unacceptable performance degradation." However, the "Supported and Unsupported Guest Operating Systems" page of the Guest Operating System Installation Guide says "To run a 64-bit guest operating system on 32-bit Intel hardware with VT support, you must enable VT on the host machine BIOS." - and then points you to Hardware and Firmware Requirements for 64-Bit Guest Operating Systems which, as noted, says you can't run a 64-bit guest OS on 32-bit processors.
So I suspect the answer to your question is "no"; unless somebody's successfully run a 64-bit guest on a machine with a 32-bit processor, I'll continue to have that suspicion. (Running a 64-bit guest on a machine with a 64-bit processor running a 32-bit OS might be possible. I'm doing so on Mac OS X v10.4, but that OS does support some 64-bit userland code, so the actual virtual machine code in VMware Fusion might be running in 64-bit mode.) Guy Harris 19:19, 14 November 2007 (UTC)
Dunno about various VMware versions, but I ran Vista x64 on top of XP32 to test my apps. It worked fine then, although I was using a free beta version of VMware. But it's 'possible'. Too bad Vista x64 doesn't support the surprisingly many 16-bit apps I still use. sigh. --Anss123 20:15, 14 November 2007 (UTC)
Did the machine running 32-bit XP have a 32-bit processor or a 64-bit processor? (I missed the "64-bit (AMD64) machine" in the original question; given that, the answer to that question is probably "yes", modulo the constraints mentioned in the two VMware documents I cited.) Guy Harris 20:32, 14 November 2007 (UTC)
Oh, the CPU was 64-bit. I think qemu can run 64-bit on a 32-bit CPU.--Anss123 20:42, 14 November 2007 (UTC)

Incorrect statement about cmpxchg16b

Article says:

Without CMPXCHG16B the only way to perform such an operation is by using a critical section.

This is not true. There are lock-free algorithms to do it. For example this paper describes how to do it with only a pointer-sized compare-and-swap. So I'm going to go ahead and remove this claim. – 128.151.69.131 (talk) 16:38, 21 March 2008 (UTC)

Also the sentence above it suggests that cmpxchg16b is only useful for updating a global counter. This isn't true either. –128.151.69.131 (talk) 16:40, 21 March 2008 (UTC)

I've changed this now to reflect my concerns. Hopefully someone else can also look at it. –128.151.69.131 (talk) 16:57, 21 March 2008 (UTC)

"Renamed" should be "named", AMD deserves credit.

The current (A80416) article version says: "x86-64 was designed by AMD, who have since renamed" ...

I propose: "designed by AMD engineers, and the architecture was accordingly named AMD64".

Weren't there other names for the actual processors? (like "Hammer"?)

That's not correct; AMD originally called the 64-bit extended instruction set "x86-64", and later renamed it to AMD64 for marketing reasons. Alexthe5th (talk) 05:31, 19 May 2008 (UTC)


Add a "Caveats / Drawbacks" section?

1. Mainstream application problems:
- No Adobe Flash for x64 - means you need to run the web browser in 32-bit mode. (I know about npwiever, it's a hack.)
- No Skype (Ubuntu), only with workarounds.
- etc.

2. Double memory usage of some apps - i.e. Java. (Basically means that it makes no sense to upgrade from a 4GB 32-bit system to a 64-bit platform unless you jump right to >8GB 64-bit.)

3. Not really faster than 32-bit, AFAIK. Really the only reason is to run memory extremely intensive applications.

x86-64 is great, but for the average consumer it was strongly overhyped (by AMD), and it is still nowhere near the 32-bit marketshare (IMHO). —Preceding unsigned comment added by 88.101.193.91 (talk) 23:37, 30 June 2008 (UTC)

x86_64 prerelease of Flash for Linux. Hullo exclamation mark (talk) 07:52, 15 December 2008 (UTC)
If it's well referenced, sure. Criticism sections are explicitly discouraged (see WP:CRITICISM) unless you can find reliable sources that explicitly state the criticisms cited in the article. Otherwise criticism is generally considered WP:OR, particularly by the "original synthesis" interpretation. Even if well referenced one has to be careful to keep within WP:NPOV. Jeh (talk) 08:05, 15 December 2008 (UTC)

Tebibyte vs. Terabyte

Why Terabyte, Exabyte, etc. instead of Tebibyte, Exbibyte, which are the natural prefixes? —Preceding unsigned comment added by 87.13.70.45 (talk) 18:47, 14 July 2008 (UTC)

Have a look at WP:MOSNUM#Editing with byte and bit prefixes. Fnagaton 20:03, 14 July 2008 (UTC)
The wording of the MOSNUM is being discussed here. Your opinion is welcome. Thunderbird2 (talk) 05:12, 15 July 2008 (UTC)

x86 hardware can run this arch?

I was just wondering if it were possible to install/run an x86_64 arch type onto an x86 hardware computer. I think I've tested it out before, but I can't remember the results. 66.168.19.135 (talk) 23:41, 26 July 2008 (UTC)

Depends on what you mean with "x86 hardware computer". If you’re talking about an old 386 in the closet you'll need to use an emulator, like qemu. If you’re talking about modern Core 2 and Athlon64 computers then they support x86_64 unless deliberately disabled.--Anss123 (talk) 00:07, 27 July 2008 (UTC)
If your processor is any of those listed in the "AMD64 Implementations" and "Intel 64 Implementations" sections, then your processor can run an x86-64 OS, provided that your BIOS also supports 64-bit operation. Jeh (talk) 00:41, 27 July 2008 (UTC)
I'm talking about 686 computers like Pentium 4s, Core Duos, etc. Newer processors. 66.168.19.135 (talk) 14:32, 5 August 2008 (UTC)
Again, see the "Intel 64 Implementations" section in the article. Core 2s can do it, Core cannot. For Pentium 4, it depends on the model. Jeh (talk) 05:14, 6 August 2008 (UTC)
Centrino Duo is what I was using actually. 66.168.19.135 (talk) 23:34, 17 August 2008 (UTC)

Naming: AMD64 vs x86-64

"x86-64 was designed by AMD, who have since renamed it AMD64."

Have they renamed the instruction set or is AMD64 just the name for their implementation? Also, this claim is central to the article and should have an inline reference. If someone has one, please add it. Otus (talk) 12:44, 8 August 2008 (UTC)

Suggested article name

Would AMD64/Intel64 (using the names from both AMD and Intel, separated by a slash) be too awkward of a name? I thought I'd suggest it since the two largest manufacturers of the architecture don't call it x86-64, and I have yet to see a distributor of software who uses that name. If not, I personally think AMD64 would be a better name, since AMD created the first version of the architecture. --Evice (talk) 21:36, 27 August 2008 (UTC)

Intel 64 Implementations

I am reading http://en.wikipedia.org/w/index.php?title=X86-64&oldid=238428961#Intel_64_Implementations

Am I alone in finding the first paragraph barely readable? The initial description of development seems correct, but then the history of the launch (this chip has it, but this one does not, but this one had it, etc.), intermixed with other timelines (K8, Xeon, Windows x64, which appeared in March 2005, BTW) and marketing variants (servers vs. desktops vs. mobile etc.), makes it really complex to understand, and more so nowadays since several years have passed. And it concludes with "All [...] CPUs have Intel 64 enabled, as do the Core 2 CPUs, and as will all future Intel CPUs", which is now just wrong if we consider the mobile processors (see Atom). So I edited it fairly extensively. You can check the diff, and put back what may be needed. 212.111.102.30 (talk) 11:55, 17 September 2008 (UTC)

Too many facts can indeed dilute an article. You don't have to justify cleaning them up, even if you remove facts. Have done it myself enough times.--Anss123 (talk) 14:38, 17 September 2008 (UTC)

x86 privilege levels under x86-64

The article does not mention if protected mode privilege levels were retained in x86-64 long mode, and to what degree. AMD's white paper seems to state that the CS register is retained, although it does not mention how the CPL bits are treated.

My Google-Fu found a piece of code for the Xen hypervisor that stated that only rings 0 and 3 were used, and that rings 1 and 2 were to be ignored, but it wasn't 100% clear if it was Xen ignoring the rings or x86-64 simply did not provide them.

It might also be interesting to know the level of compatibility with rings 1 and 2 in 'compatibility mode' versus 'legacy mode'.

Dinjiin (talk) 01:21, 25 September 2008 (UTC)

Introduction edits

I altered the introduction a bit, any comments/criticisms? Monolith2 (talk) 23:31, 26 February 2009 (UTC)

A couple:
  1. I removed the section on PowerPC, as it isn't relevant to x86-64.
  2. The section on AMD/Intel cross-licensing is way too detailed for a lede section, and is somewhat conjectural in a way that isn't supported by your source. If it's in the article at all, it should be somewhere other than the intro. Regards, NapoliRoma (talk) 01:27, 27 February 2009 (UTC)

Thanks for the input! I only added the stuff on PowerPC because there was already mention of IA-64, which is not an x86 architecture either. I figured I might as well mention the one other major 64-bit processor and how it was incompatible with x86-64 as well. You're right as far as the part about AMD/Intel cross-licensing -- do you think that topic is worth adding as a section in this article? Or is it irrelevant and better placed somewhere else? Cheers, Monolith2 (talk) 14:54, 27 February 2009 (UTC)

No prob.
There are several other major 64-bit processors (SPARC and POWER at the very least, but also MIPS and others), but the relevance of Itanium in the intro is not that there are other 64-bit architectures, but that people were confusing the two Intel 64-bit architectures. That line actually started as a hatnote, I believe.
As for the licensing info, it could have a place in this article, but I think first it needs a more definitive source. It could go in the existing #Intel section, or it could even merit its own separate section. Regards, NapoliRoma (talk) 15:58, 27 February 2009 (UTC)

The end of the #Legal issues section is poorly sourced at the moment. One of the sources doesn't say what the article claims except in the comments, which are obviously not a reliable source. The other, making a specific claim, only uses the legal agreement (i.e. a primary source), which has been significantly redacted for confidentiality reasons, so it comes rather close to OR. Nil Einne (talk) 22:46, 24 October 2009 (UTC)

From the cited legal agreement:
6.2. Termination for Cause.
---------------------
(a) A party may terminate the other party's rights and licenses
hereunder upon notice if the other party hereto commits a
material breach of this Agreement and does not correct such
breach within sixty (60) days after receiving written notice
complaining thereof. In the event of such termination, the
rights and licenses granted to the defaulting party shall
terminate, but the rights and licenses granted to the party
not in default shall survive such termination of this
Agreement subject to its continued compliance with the terms
and conditions of this Agreement.
---------------------
The specific claim based on the legal agreement seems to be a straightforward summary of this section (which is not redacted), and not original research. 72.226.73.121 (talk) 02:55, 17 February 2010 (UTC)

Archiving

Does anyone object to me setting up automatic archiving for this page using MizaBot? Unless otherwise agreed, I would set it to archive threads that have been inactive for 60 days.--Oneiros (talk) 23:11, 29 December 2009 (UTC)

 Done--Oneiros (talk) 01:00, 2 January 2010 (UTC)

Now you can download OLEDB x64

2010 Office System Driver Beta: Data Connectivity Components http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=c06b8369-60dd-4b64-a44b-84b371ede16d --211.127.229.23 (talk) 11:42, 16 March 2010 (UTC)

Wrong/Conflicting values for maximum physical memory

The article mentions two different maximum limits for physical memory possible under amd64

First is mentioned under section 1.2 (Architectural features) as below:
Larger physical address space: Current implementations of the AMD64 architecture can address up to 1 TiB (2^40 or 1,099,511,627,776 bytes) of RAM

then, near the end of the page, it is mentioned under section 4.2 (Older implementations):
Recent AMD64 implementations now provide 256 TiB (281,474,976,710,656 bytes) of physical address space

I think the second one is right; it is also corroborated by the following link:
Long_mode#Current_limits

Please check. —Preceding unsigned comment added by Gurpreet007 (talkcontribs) 09:40, 1 January 2010 (UTC)

Fixed, with references to the AMD Architecture reference (reference 1). I haven't found a "horse's mouth" reference for the recent extension of the implemented physical address to 48 bits. Page 4 of reference 1 still cites the 40-bit limit, but that doc is three years old. Jeh (talk) 19:38, 25 May 2010 (UTC)
Found that reference for 48 bit PAs under 10h. Added to article, also added to AMD K10 article. Jeh (talk) 19:53, 29 May 2010 (UTC)

Lede and statement of compatibility

The sentence in the lede on compatibility is based on one of the fundamental design features of x86-64: the existing x86 instruction set remains in silicon, thus applications built to the existing x86 instruction set will run on an x86-64 chip just as on any other x86 implementation in silicon.

Yes, there are edge cases where a bad OS implementation or a funky bit of incompatibility may not cause this to always be the case, but you can say the same thing for mask revisions. The point is that these are edge cases, and the lede is not the place to hang an ornament for every conceivable edge case. If you want to add discussions of this type, feel free to do so in the body.--NapoliRoma (talk) 17:11, 11 April 2010 (UTC)

The problem is that the old statement allowed the flawed interpretation that 64-bit code is no better than 32-bit. We are making progress now, but your last edit was a step backwards; I will undo it. 17:31, 11 April 2010 (UTC)~ —Preceding unsigned comment added by GODhack (talkcontribs)

Virtual and physical address limits

I have added the fact that it is possible for a computer to use more than 4 Gigabytes of RAM with an x86-64 CPU compared to an x86-32 CPU. This is one of the most commonly cited advantages I have seen in articles I have read on 64-bit CPUs and 64-bit operating systems, but for some reason this was not mentioned in the lead section of the article. A 64-bit CPU theoretically allows for a maximum of 16 Exabytes (16 billion Gigabytes) of RAM. --GrandDrake (talk) 21:27, 11 April 2010 (UTC)
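(Editorial aside, not part of the original exchange: the raw arithmetic behind the sizes being quoted in this thread can be checked in a few lines. This is just illustration; the function name is mine.)

```python
# A minimal sketch: the address-space size, in bytes, implied by a given
# address width. These are the theoretical figures quoted in this thread;
# as discussed below, implemented limits are smaller.
def address_space_bytes(bits):
    """Size of an address space addressed by `bits`-wide addresses."""
    return 2 ** bits

GIB = 2 ** 30
TIB = 2 ** 40
EIB = 2 ** 60

print(address_space_bytes(32) // GIB)   # 32-bit: 4 GiB
print(address_space_bytes(48) // TIB)   # 48-bit: 256 TiB
print(address_space_bytes(64) // EIB)   # 64-bit: 16 EiB
```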

I'm afraid the articles you've been reading are examples of the fact that this is one of the most widely misreported aspects of these CPUs.
x86 CPUs are 32 bit and support a 32-bit virtual address space; x64 CPUs, in long mode, support (theoretically) a 64-bit virtual address space (48 bits in current implementations). However, virtual addresses do not address RAM directly. They always go through a translation step, and it is the width of the entries in the translation table (specifically, the width of the "PFN" field in the page table entries), not the width of the original virtual address, that determines the maximum supported RAM. For example, an x86 CPU in PAE mode happily supports up to 64 GB RAM even though it is still a 32-bit CPU; the page table entries in that mode supply 24 bits of physical page number, add 12 bits of byte offset from the original V.A., and there are your 36 bits. Correspondingly, you'll notice that the address pins on x86 CPUs go up to A35; they don't stop at A31. This has been true since the Pentium Pro.
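(Editorial illustration of the point above; the helper name and constants are mine, not from any OS or manual.)

```python
# Sketch of PAE translation as described above: a 24-bit page frame
# number from the PTE, plus the 12 low bits of the 32-bit virtual
# address as the byte offset, yields a 36-bit physical address.
PAGE_SHIFT = 12                      # 4 KiB pages
PAE_PFN_BITS = 24                    # frame-number bits in a PAE PTE

def pae_physical_address(pfn, virtual_address):
    assert pfn < (1 << PAE_PFN_BITS), "frame number exceeds PAE's 24 bits"
    offset = virtual_address & ((1 << PAGE_SHIFT) - 1)
    return (pfn << PAGE_SHIFT) | offset

# A frame near the top of a 64 GiB physical space, reached through an
# ordinary 32-bit virtual address:
pa = pae_physical_address(0xFF_FFFF, 0x0040_1ABC)
print(hex(pa))                       # → 0xfffffabc... fits in 36 bits
```

So the physical address is wider than any register the program ever sees; the extra width lives entirely in the page table entry.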
On x64 CPUs: the spec provides for up to 52 bits of physical address (12 from the byte offset, 40 from the page table entry) however current implementations are limited to 48 bits. By coincidence this happens to be the same as the implemented width of virtual addresses, but that's all it is: coincidence.
This is all detailed in the "architectural features" section in the "Larger physical address space" entry, and backed up by the AMD architecture reference manuals.
As for operating systems, it was simply a decision of Microsoft to limit their 32-bit "client" OS SKUs to < 4 GB of RAM. Many of the 32-bit Server versions of Windows have supported more than 4 GB of RAM, up to 64 GB of RAM in fact, on x86 CPUs since Windows 2000, using PAE mode. And no, PAE mode is not a "hack" as some writers claim; it is simply an alternate page table format supporting wider physical addresses. In fact the page table format on x64 in long mode is essentially PAE mode with the PDPT extended and a fourth table (the PML4 table) added, so anyone who dismisses PAE mode as being a "hack" should by rights also dismiss x64 long mode as even worse. Jeh (talk) 23:12, 11 April 2010 (UTC)
It is really only a coincidence that on x86, when not in PAE mode, the virtual and physical addresses are the same width. There have been many architectures in which this was not the case. For example the PDP-11 supports 16-bit wide virtual addresses but up to 22 bits of physical address. The first VAXes had 32-bit virtual addresses but only 30-bit physical addresses - and half of the physical address space was taken up by I/O space, resulting in only 512 MB for RAM. (It seemed like a lot at the time.) Again, the physical address width depends on the format of page table entries and is largely independent of virtual address width. Jeh (talk) 00:34, 12 April 2010 (UTC)
I will look at further references on this matter but I would mention that even one of the AMD references states that "The need for a 64-bit x86 architecture is driven by applications that address large amounts of physical and virtual memory". As such why not mention the larger amount of RAM possible with the x86-64 architecture in the lead section of the article? --GrandDrake (talk) 12:37, 13 April 2010 (UTC)
"Large" is fine but to claim that x64 can give us 64 bits' worth of virtual space now, or of physical space ever, is wrong. Jeh (talk) 21:12, 29 May 2010 (UTC)
What they mean is "Address large amounts of memory directly and efficiently, as a flat address space available to the application." Various kludges mentioned above (and more) allow applications and OSes to go beyond the limit given by the number of bits, but it is messy. It's not the first time; there was a transition from 16-bit to 32-bit before, and it always looks similar: when applications and the OS push the address space limits, various ugly workarounds appear, and then they go away in the transition. Jmath666 (talk) 17:20, 13 April 2010 (UTC)
Jmath666: PAE is not in any way, shape, or form a "kludge" nor an "ugly workaround." What you may be thinking of is AWE (Address Windowing Extensions) in Windows, and similar things in other OSs; this is similar to the concept of "Extended Memory Support" back in the bad old DOS days.
On the other hand, x86 PAE simply adds a third level of page table lookup (the PDPT) and widens the PTEs to allow more bits of physical address. In long mode on x64, the PTEs have the same format as they do under x86 PAE, and to PAE's three level lookup scheme, long mode adds a fourth table, the PML4 table. So if you think PAE is a kludge then you must believe that long mode on x64 is even more of a kludge, because it is sort of "PAE, even more so." Fortunately, neither is a kludge. They are just extensions of the page table structure from the two-level, 32-bit-wide form used on x86 without PAE. And they are completely transparent to applications and the overhead is very small (on the order of a percent or two).
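(Editorial illustration of the structure just described; a toy model using Python dicts in place of real page-table memory, entirely my own construction.)

```python
# Long mode's four-level lookup, as described above: PML4 -> PDPT ->
# page directory -> page table. Each level consumes 9 bits of the
# 48-bit virtual address; the final 12 bits are the byte offset.
# PAE is the same scheme minus the outermost (PML4) level.
PAGE_SHIFT = 12
INDEX_BITS = 9          # 512 entries per table

def walk(pml4, va):
    """Translate a virtual address through four table levels."""
    table = pml4
    for level in range(4):                          # four lookups
        shift = PAGE_SHIFT + INDEX_BITS * (3 - level)
        index = (va >> shift) & ((1 << INDEX_BITS) - 1)
        table = table[index]                        # descend one level
    pfn = table                                     # leaf holds the frame number
    return (pfn << PAGE_SHIFT) | (va & ((1 << PAGE_SHIFT) - 1))

# Map virtual page 0 to physical frame 0x1234:
pt   = {0: 0x1234}
pd   = {0: pt}
pdpt = {0: pd}
pml4 = {0: pdpt}
print(hex(walk(pml4, 0x0ABC)))   # → 0x1234abc
```

The point of the sketch: the width of the final physical address is set by how wide the leaf entry's frame number is allowed to be, not by the width of the incoming virtual address.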
GrandDrake: A 4 GiB virtual address space does limit the amount of RAM that any one application can use at one time to 4 GiB, because you can't use more physical addresses than you have virtual. It would be like trying to have more than 10 million direct-dial phones in a North American area code even though the phone numbers within an area code are only seven digits wide; there would just be no way to dial - address - those other phones. (AWE softens the "at one time" point.)
However the 32-bit virtual address space does not limit the amount of RAM that can be used by the OS plus all of the running apps, because each app - process - gets its own instance of at least part of the total VAS. In Windows the "part" is normally half, so the total amount of RAM that could be made use of on a 32-bit system is equal to 2 GiB for the OS plus 2 GiB for each process - a total only limited by the number of processes, for which there is no absolute limit.
If you don't believe that... in Windows, go to the "process" object in Performance Monitor and look at the "Virtual bytes" counter for the "total" instance. On most Windows systems, even 32-bit systems with only moderate loads, the total "virtual bytes" defined for all processes will easily exceed 4 GiB. And that's not even counting the OS, only the user-mode address spaces of processes. That number will be a little misleading due to "reserved" memory and shared memory; still, the total is pretty startling. (It's also usually a good argument against "I now have n GiB of RAM, so I can disable my pagefile, right?")
If this were not the case then there would have been no reason and no benefit for Windows Server editions to support up to 64 GiB RAM on x86.
Note that PAE mode could be easily extended to even larger RAM sizes while still staying with a 32-bit virtual address. (And is, on x64 processors in compatibility mode)
Therefore 64-bit architecture is not primarily about extending RAM size. It is about wider virtual addresses.
A lot of x64 systems are running just fine with 4 GiB or less of RAM; they can still provide 256 TiB of virtual address space. RAM size and virtual address space really are very much independent.
The statement in the abstract from the Stanford talk is at best an oversimplification of the situation. At worst it is misleading, ignoring as it does PAE mode. For some reason large numbers of people in the business, even people who should know better, seem to believe that PAE mode is some sort of awful hack similar to EMS from the DOS era. (Or else they ignore it completely.) In selling x64 as a solution for supporting more RAM, AMD is capitalizing on this misunderstanding.
Again I mention a counterpoint: PDP-11. A 16-bit architecture; addresses generated by instructions are always 16 bits wide, hence you have a 64 KiB virtual address space. This was later extended to 128 KiB via separate instruction and data spaces, much like the CS and DS default segments in x86. How much RAM did it support? Up to 4 MiB (22 bits). How could you use it all? By running multiple tasks, each with its own 64 or 128 KiB, of course.
One could say that larger RAM size is a secondary characteristic of x86-64, but so are lots of other things, like the enhanced multimedia instruction set. Not everything needs to go in the article lede. If it did it wouldn't be the lede any more. Jeh (talk) 20:31, 13 April 2010 (UTC)
Jeh, you have posted some useful information on this issue but multiple articles (including articles from AMD, Apple, and Microsoft) have stated that the virtual address space limit is one of the main reasons for the change from 32-bit to 64-bit CPUs. As such I have gathered several reliable references and added information about this into the lead section of the article. --GrandDrake (talk) 04:24, 25 May 2010 (UTC)
Nothing I have written here contradicts the assertion that the increased virtual address space limit is one of the main reasons for the change. But I'm afraid that your addition to the lede shows your continued confusion on this point. Earlier you asked...

As such why not mention the larger amount of RAM possible with the x86-64 architecture in the lead section of the article

...but here you are talking about the virtual address space limit. They are not at all the same thing; references to the increased virtual address limit do not support claims of increased RAM limits. Of course x64 supports increased RAM limits also, but the increase is not from 4 GB to something larger, because x86 already supports 64 GB RAM and has done so since the Pentium Pro.
For both RAM and virtual address space, there are many reasons for not giving the exact limits for either in the lede. But a good one is that the true situation is complex (architectural vs. implementation limits, old limits that were imposed by specific operating systems on x86 but which were not actually inherent in x86, etc., etc.). The full explanation does not belong in the lede and a simple statement of "it was increased from x to y" is misleading, so that does not belong in the lede either. Jeh (talk) 05:08, 25 May 2010 (UTC)
I am clear on the issue now and I am going by the information in the articles I have read, which includes articles from AMD, Apple, and Microsoft. Do you believe the information is wrong, and if so, can you provide reliable references in support of that? If you believe the information would benefit from further explanation then by all means add to it, but that is not a reason for deleting information supported by several reliable references. --GrandDrake (talk) 06:27, 25 May 2010 (UTC)
No, you're not. You are still confusing virtual vs. physical addresses, and RAM vs. physical address space. And you are ignoring the fact that no x64 processor supports either 64-bit virtual addresses or 64-bit physical addresses. This is completely documented in "horse's mouth" references (the AMD and Intel processor architecture manuals) already given later in the article, where all of the details about architectural vs. implementation limits are spelled out. You are flatly ignoring references already given. I am a subject matter expert in this field and I have seen misstatements and misapplications of these concepts many, many times; your references and/or your applications of them are examples. I really don't want to take the time to teach the whole of x86 and x64 memory management here, and I am not going to go through all of the text in all of your references -- but if you will cite the specific sentences from each of your "references" that you think support your case, I'll tell you what's wrong with each one as far as the text you want to put in the lede is concerned. WITH pointers to the architecture books or other truly reliable references for each. Jeh (talk) 10:02, 25 May 2010 (UTC)
And after all that, if you're still not convinced... it's still too much detail for the lede. Jeh (talk) 10:20, 25 May 2010 (UTC)
You state the information is refuted by AMD and Intel processor architecture manuals but you have provided no links to such information saying that the theoretical limit isn't possible. Do you believe that the theoretical limit is wrong and if so could you provide reliable references in support of that? --GrandDrake (talk) 03:43, 29 May 2010 (UTC)

I have added and improved the references to the architecture manual (reference 1) in the article. These document the true virtual and physical address limits. Except for the reference in the lede, they now all have page numbers. Please fetch that PDF, read and understand the referenced pages, and see how they support the article text. The virtual address width is theoretically 64 bits - but no AMD64 or Intel64 processor actually implements more than 48 bits. (And Windows only populates 44 bits' worth...) The physical address width is architecturally limited to 52 bits; there are just not any more bits available in the page table format, and current implementations only allow 44 bits. Period, full stop.

n.b.: The only reference that can trump Reference 1 is a later version of Reference 1.

Now it is true that a major motivation for moving to x64 is indeed larger VAS and PAS, and so in acknowledgement of your original point, I have added this point to the lede. I have even stated this as "vastly larger" and put this as the first point in the lede. But they are absolutely not 64 bits wide (and x86 was not limited to 32 bits physical, either). They should not be cited as 64 bits wide in the lede, as that is wrong; simply mentioning the true limits without explanation would violate the "principle of least astonishment", as many (like yourself) believe that they are 64 bits wide; and explaining the true limits, the difference between architectural vs. implementation limit, etc., would be too much detail for the lede: by the time you are done explaining it all adequately you have basically duplicated the text that appears later. No, "vastly larger" will suffice completely for the lede. (In general, anything that requires detailed refs doesn't belong in an article lede, particularly for an article as long and as detailed as this one.) Jeh (talk) 19:56, 25 May 2010 (UTC)
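(Editorial illustration of the 48-bit implemented virtual address width discussed above: such implementations accept only "canonical" addresses, where bits 63..48 are copies of bit 47. The helper is my own, for illustration.)

```python
# Sketch of the canonical-address rule for a 48-bit implementation:
# a 64-bit address is valid only if it equals the sign extension of
# its low 48 bits, leaving a large non-canonical "gap" in the middle.
VA_BITS = 48

def is_canonical(va):
    low = va & ((1 << VA_BITS) - 1)
    if low & (1 << (VA_BITS - 1)):       # bit 47 set: sign-extend with ones
        extended = low | ((2**64 - 1) & ~((1 << VA_BITS) - 1))
    else:                                # bit 47 clear: high bits must be zero
        extended = low
    return va == extended

print(is_canonical(0x0000_7FFF_FFFF_FFFF))   # top of lower half: True
print(is_canonical(0xFFFF_8000_0000_0000))   # bottom of upper half: True
print(is_canonical(0x0000_8000_0000_0000))   # in the gap: False
```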

Do any of the references refute the theoretical limit stated by the seven references I posted? I can understand wanting to make clear the current CPU design limits but do you have any references that state that it would be impossible to extend the physical address width of x86-64 CPUs beyond 52-bits? --GrandDrake (talk) 03:43, 29 May 2010 (UTC)
Yes. The reference I gave cites the 52 bit limit as an architectural limit, not an implementation limit. Jeh (talk) 04:58, 29 May 2010 (UTC)
Where in that reference does it state that extending the physical address width would not be possible? After all as you previously mentioned an extension to the physical address width was done by Intel from 32-bit to 36-bit as seen in this timeline. --GrandDrake (talk) 05:44, 29 May 2010 (UTC)

The x86 architecture provides support for translating 32-bit virtual addresses into 32-bit physical addresses (larger physical addresses, such as 36-bit or 40-bit addresses, are supported as a special mode). The AMD64 architecture enhances this support to allow translation of 64-bit virtual addresses into 52-bit physical addresses, although processor implementations can support smaller virtual-address and physical-address spaces.

That's from the bottom of page 115. Now, true, that does not promise that it could "never" be extended beyond that limit. But that isn't how specs are written. If you are claiming that since it is not explicitly excluded then it must be mentioned as a possible future enhancement, then you are firmly in original research and even crystal ball territory. You cannot look at bits 52 through 62, currently unassigned in the PTE format, and say "hm, the physical addresses could grow by 11 more bits!" Well, you can say that to yourself; you could write it in an article at PC Magazine or Wired; but you can't put it in a Wikipedia article based solely on your speculation. We have to take the documented limit for what it is.

Another point is that in the page table, etc., entry formats on page 126, the high 12 bits of the fields (which is where more bits of physical page number would have to go) are documented as "reserved, MBZ" (must be zero). That's in legacy mode. Over on page 131, though, it says that bit 63 is the NX (No eXecute) bit, as we expect, and that bits 52-62 are "available", which means that (like bits 9-11) they are available for the OS to use to stash information. Now once OS code uses them as such, what do you think would happen if a future edition of the processor expanded the physical page address by a few more bits? Things would break. Once AMD has said that field is "available" they can't use it after that without causing a lot of people a lot of pain. The Intel doc similarly says that these bits are "ignored" by the processor in both legacy mode (they call it 32-bit mode, figure 4-7) and "ia32e mode" (table 4-14 through 4-19).
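(Editorial illustration of the PTE layout as described in this paragraph: bit 63 is NX, bits 52-62 are OS-"available", and the frame number occupies bits 12-51, which is exactly why the architectural physical limit is 52 bits. A sketch only, not a complete PTE decoder.)

```python
# Pull apart a long-mode PTE along the bit boundaries discussed above.
def decode_pte(pte):
    return {
        "present":   pte & 1,                        # bit 0
        "pfn":       (pte >> 12) & ((1 << 40) - 1),  # bits 12-51: 40-bit frame
        "available": (pte >> 52) & 0x7FF,            # bits 52-62: OS-owned
        "nx":        (pte >> 63) & 1,                # bit 63: No eXecute
    }

# NX set, frame 0x1234, present:
pte = (1 << 63) | (0x1234 << 12) | 1
print(decode_pte(pte))
```

Since the frame number is 40 bits and pages are 4 KiB, the widest physical address this format can express is 40 + 12 = 52 bits; there is nowhere for more bits to come from without redefining the "available" or NX fields.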

Regarding your references:

  • Your reference 5, to PC Magazine: An article in the popular press is at best a third-party source. And this one contains several wrong statements. For example: "A 32-bit system is limited to utilizing 4GB of RAM (2^32 addresses)." Flatly wrong, since even the now-ancient Pentium Pro can use up to 64 GB of RAM. It then says "The theoretical maximum of ram that a 64 bit OS can address is 16 exabytes, or about 16 billion GB, but Microsoft currently puts a 16TB limit on address space and allows only 128GB of physical RAM." Kudos to them for using the word "theoretical", but then they fail to mention the current "actual" hardware implementation limits on RAM, making this a poor reference. The mention of Windows' 16 TiB VAS limit is similarly confusing: they should have said virtual address space, and even overlooking that, this ref doesn't help at all if you're looking for a reference for the current 256 TiB limit. It certainly does not provide any more references than we already have for any of the limits in the article.
  • Your reference 6, to Wired Magazine. Even more of a popular press magazine than PC Mag, hardly a place I would look to as an authoritative source. Starts out with the same problem: "32-bit processors like Intel's Pentium III/IV and AMD's Athlon have a memory limit of 4 GB per CPU." Wrong. (PAE) And then "A 64-bit computer can address 16 exabytes of memory." Well, maybe some 64-bit computers can, but x86-64 certainly cannot, not today. Note that they don't specify whether they mean virtual or physical "memory" here, which further weakens the case for trusting this "source."
  • Your reference 7, to a page at apple.com. This is a marketing piece, pure and simple. The statement that "32-bit applications (that run on x86 machines) can address only 4GB of RAM at a time" is phrased wrong; an application running in a virtual memory OS does not "address RAM" at all, it addresses virtual memory. Second, again we have the mention of a theoretical limit of 16 exabytes for 64-bit machines. But... they were just talking about RAM, so I have to assume that this statement refers to RAM also, and there's no way x64 is going to be talking to any more than 2^52 bytes of RAM. It's interesting, though, that this page refutes the claim that 32-bit machines are limited to 4 GB of RAM! (But they mention the Apple limit of 32 GiB... don't know where that came from, but Intel has never made an x86 with that limit. It was 4 GiB without PAE, and 64 GiB with.)
  • Your reference 8, to PC World. This mag is somewhere between PC Magazine and Wired in technical chops. Same bogus claims, asserting that 32-bit CPUs are limited to 4 GB of "memory" (whatever they mean by that: true for virtual, false for physical), and the same at best imprecise claim of 16 exabytes of "memory" for "a 64-bit processor." Not any x64's we can buy today...
  • Your reference 9, to an article at MSDN by Matt Pietrek. Now at last we have a reliable source; Matt Pietrek is a well known developer (he wrote the original versions of SoftIce, among other things) and author in the field. And he correctly distinguishes between virtual and physical address space. (finally! But I'd expect nothing less of him.) The problem here is that though he's a very reliable source, the article is not really required since we have the AMD and Intel docs. He does not support any contentions about 64-bit physical addresses; indeed he confirms the 52-bit architectural limit. The additional information of the physical memory limits under various versions of Windows is only going to be confusing here; similarly for the disparity between the processor's 48 bits of virtual address and Windows' implementation of only 44 bits. Note also that his tables do confirm that x86 systems are not limited to 4 GB RAM. All in all, a great source (as is any article by Matt) but at best, not needed as references for this article.
  • Your reference 10, to a page at amd.com. Surely AMD would get this right? Uh-oh: "64-bit computers have a memory limit of 16-exabytes of memory (2^64)". Here we go again. Maybe some do, but as we have seen, x64 isn't documented as being able to address that much RAM, ever (even if you did have a way to hook it up) and at present is a long way from that much virtual address space. Indeed Windows (the subject of this article) only uses 1/16 of the 48-bit v.a.s. that current x64 CPUs do implement. How can an article at amd.com be this far off? Ah-ha. Look at the attribution: "Andy Patrizio is a freelance journalist based in Los Angeles. He has covered the high tech sector for more than a decade and has written for InformationWeek, Dr. Dobb's Journal, SD Times, and JavaPro." Not what I'd call a technical expert.
  • Your reference 11, to a page at Microsoft's technet. Very few problems here, except for the claim of 43 bits for addressing on Windows x64 - it's actually 44 bits. (Obviously: 2^44 bytes = 16 TiB.) However, given the references already in the article, there is really no need for this one as well; furthermore, the fact that it is citing the Windows v.a.s. limit of 16 TB total will only serve to confuse someone looking for a reference for the 48-bit limits (256 TiB).
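(Editorial aside: the three distinct limits that keep getting conflated in the reviewed sources can be checked directly. Illustration only; the function name is mine.)

```python
# The limits discussed in the reference critique above, side by side:
# the Windows x64 virtual address space, the implemented virtual
# address width, and the architectural physical address width.
def size_for_bits(bits):
    return 2 ** bits

TIB = 2 ** 40
PIB = 2 ** 50

print(size_for_bits(44) // TIB)   # Windows x64 VAS: 16 TiB
print(size_for_bits(48) // TIB)   # implemented virtual: 256 TiB
print(size_for_bits(52) // PIB)   # architectural physical: 4 PiB
```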

In sum, I'm afraid that most of your references are evidence that articles like this one need to be edited only by subject matter experts. As I said many days and a few thousand talk page words ago, there's a lot of confusion out there about these points, and a lot of confused people are writing things on the net. I'm afraid it does take a SME to figure out which references can be trusted and which are "not even wrong." And even of the ones you've given that are reliable, none of them really come out and say "the architectural limit is this, and the current implementation limit is that" in a straightforward manner. The AMD (and, now referenced, Intel) architecture books do so: they directly, obviously, and unambiguously support the article text. And so those are the references that should remain.

I am however going to add your refs 9 and 11 to the section on Windows. Jeh (talk) 11:45, 29 May 2010 (UTC)

Nothing in that reference you used argues against the theoretical RAM limit; it merely states what the current RAM limit is for x86-64. I agree to various degrees with your criticism of the Apple, AMD, PC Magazine, PC World, and Wired references. The other two references, though, I would definitely consider reliable enough to include, and I do not understand the arguments that "The problem here is that though he's a very reliable source, the article is not really required since we have the AMD and Intel docs" and "given the references already in the article, there is really no need for this one as well", since one reason for these references is to add additional information to the article. If you can find me a current reference for the 16 exabyte theoretical RAM limit, by all means I will use it, but at the moment you are ignoring two references in support of that information even when you agree that those references are reliable. --GrandDrake (talk) 00:33, 30 May 2010 (UTC)
Actually, yes, the references state that the architecture defines a maximum available 52 bits for physical address. That's the theoretical limit under this architecture definition. There are no more bits available in the page table entries than that, just as there are no bits available in the integer registers to support more than 64 bits of virtual address. Look for example at the Intel doc at table 4-1: "Physical address width - up to 52."
The claim that the theoretical limit is 2^64 bytes does not need to be "argued against." It is documented as not true for x64. It may be true for some "theoretical" processor, but that is not what this article describes. If you are arguing that the theoretical limit on x64 is 64 bits, then you need to find a reliable source that supports that claim... in the face of the manufacturers' own reference material that state it is 52 bits.
Neither of the sources you found that we've agreed are reliable support your contention. Matt Pietrek says

Although a 64-bit processor could theoretically address 16 exabytes of memory (2^64), ... current x64 CPUs typically only allow 40 bits (1 terabyte) of physical memory to be accessed. The architecture (but no current hardware) can extend this to up to 52 bits (4 petabytes).

Note that he does not say "x64 processors could theoretically address 16 exabytes." He says "a 64-bit processor could theoretically...." And he states that the architectural limit is 52 bits. And really, as we have seen, the "bitness" of the processor relates to integer register size and to virtual address size, not to physical memory size. Answer me this: if a 64-bit processor could theoretically address 16 exabytes, then why is x86 (a 32-bit processor) not limited to 4 GB of RAM?
The Technet article by Chris St. Amand does not mention 16 exabytes as even a theoretical limit. Doesn't mention the 52-bit limit either.
By the way, I don't know where you're going to put bit 63 as that bit in the PTE is already occupied by the NX bit. -- Jeh (talk) 05:08, 30 May 2010 (UTC)
At the moment the source you are using to argue against the theoretical RAM limit being 16 exabytes is a 400+ page primary source document. It does look like you may be right but it would be much more conclusive if you could give a secondary source in support of what you have said. You have stated several times that this is a common error but if so shouldn't there at least be one secondary source that mentions that? --GrandDrake (talk) 09:46, 30 May 2010 (UTC)
No. The sources (plural) I am using are individual, referenced pages, not "400+ pages". There probably are some secondary sources that get it right, but as I will explain below, we don't need them. Jeh (talk) 11:32, 30 May 2010 (UTC)
As for the argument against secondary sources, I would mention that based on Wikipedia policy secondary sources are recommended and primary sources (such as architecture books on x86-64) should be used with care. In fact, if you use an interpretation of a primary source you are supposed to include a secondary source in support of that interpretation. As such I will not get into a debate about interpretations of primary source material and will simply refer to the Wikipedia policy on this matter. --GrandDrake (talk) 00:33, 30 May 2010 (UTC)
No "interpretation" is happening here; the page table formats and the numbers of bits they define are there for all to see; the definitions of "available" bits and "unused" bits are standard throughout the industry and are even confirmed in the respective manuals' definitions of terms. I recall a similar argument over a simple statement of fact in the Parallel ATA standards: Someone was insisting that that was a primary source (correct) and that we needed a secondary source instead (wrong). If an editor is interpreting a primary source then yes, that is WP:OR and a secondary source needs to be found to confirm it. But it is not WP:OR to look at the page table formats and see that there is no way for them to provide any bits to a physical address beyond bit 51. Nor is it WP:OR to then compute that 2^52 bytes = 4 petabytes. On simple statements of fact, primary sources trump secondary. That's why they're called "primary." I don't need a secondary source to confirm that the periodic table tells me that the atomic number of carbon is 6. Your contention here is at that same level.
And, something I neglected to mention before: Even if AMD and Intel redefined those "available" or "unused bits", it cannot possibly go to 64 bits because the 64th bit in the PTE formats is occupied by the NX bit.
Now of course it is possible that AMD or Intel might come up with another mode that defines a different page table format, one that allows for more bits of physical address than 52... just as PAE defined a new page table format that extended x86 physical addresses from 32 bits to 36. But to mention that possibility would be pure speculation. Unless you can find a reliable source that says AMD or Intel has this in the works...
And if that speculation was mentioned, then there would be no reason to suspect that the limit would then have to be 64 bits. Again drawing on the PAE example: PAE extended the physical address size to 36 bits, four bits larger than x86's natural integer size. Physical addresses simply do not come directly from machine registers and so are not limited by, nor automatically extended to, those registers' widths. The physical address limit derives from the address translation mechanism... in the case of x86 and x64, from the page table format.
In fact, this could even be done on 32-bit x86! x86 could support an "extended PAE mode" that would support, for example, 96-bit physical addresses. We'd just make the PTEs 16 bytes wide and change the lookup scheme again.
So why do you think that 64 bits is an eventual upper limit for physical address size for x64? Because it's a 64-bit machine? Do you not see that that has nothing to do with it? Do you also not see that it would require some future address translation mode that existing documentation does not even hint at?
So I ask again: Why are you so stubbornly insisting that this article mention 16 exabytes as a possible upper limit on physical memory size for x64? Clearly it would need a different page table format to do so. Do you have some prescient knowledge of what that format would be, and that it would indeed provide exactly 64 bits of physical address, and no more? Why couldn't it provide more? Heck, why not mention that "a future architecture extension could support 256-bit physical addresses"? It's just as "true." (And just as unreferenced.) Jeh (talk) 05:08, 30 May 2010 (UTC)
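The arithmetic underlying the 52-bit claim is easy to verify. A minimal Python sketch, with bit positions taken from the PTE layout described in this discussion (illustrative only, not a substitute for the AMD/Intel manuals):

```python
# Sketch of the long-mode PTE bit layout discussed above (4 KiB pages).
# Bit positions follow the talk-page discussion; treat as illustrative.
NX_BIT = 63          # the no-execute flag occupies the top bit of the PTE
PHYS_TOP = 51        # highest bit a PTE can contribute to a physical address

def pte_frame_address(pte: int) -> int:
    """Extract the physical frame base (bits 12..51) from a 64-bit PTE."""
    mask = ((1 << (PHYS_TOP + 1)) - 1) & ~((1 << 12) - 1)
    return pte & mask

# 2^52 bytes is 4 PiB -- the limit under discussion.
print((1 << 52) == 4 * 1024**5)                    # True
pte = (1 << NX_BIT) | 0x1234_5678_9000 | 0x1FF     # NX set, frame bits, low flags
print(hex(pte_frame_address(pte)))                 # 0x123456789000
```

The NX bit and the low flag bits are masked away; no matter what the other fields hold, the frame address can never extend past bit 51.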
Just want to say that Jeh's argument here is absolutely spot on. There is no reason whatsoever to assume that the physical address space must be less than, equal to, or greater than 64 bits. The only limitation is the page translation tables, which are in no way constrained to the use of only 64 bits. Jeh, you explained this very well (and managed to keep it civil). HumphreyW (talk) 05:19, 30 May 2010 (UTC)
Jeh, I am simply trying to follow the Wikipedia policy on information being reliably referenced and secondary source confirmation of what you have said is what I would most like to see. Also what Wikipedia policy states that "On simple statements of fact, primary sources trump secondary"? During a featured article candidate process I was told by several people that secondary sources were recommended over primary sources and was pointed to the Wikipedia policy about "Wikipedia articles should be based on reliable, published secondary sources and, to a lesser extent, on tertiary sources." --GrandDrake (talk) 09:46, 30 May 2010 (UTC)
You are selectively quoting and interpreting. Let's read a bit further. First: "Primary sources are permitted if used carefully." Second: "Any interpretation of primary source material requires a reliable secondary source for that interpretation." Fine, but all of the article's statements that are referenced to the sources in question are practically direct quotes; no "interpretation" is occurring. I ask again: Would you insist on a secondary source to "interpret" the periodic table, to support a claim that carbon has an atomic number of 6? Or that two of them would therefore have a total of 12 protons? Nonsense.
Let's go on. "Deciding whether primary, secondary or tertiary sources are appropriate on any given occasion is a matter of common sense and good editorial judgment, and should be discussed on article talk pages." i.e. editors do have freedom to elect to use primary sources where appropriate - which is to say, where no interpretation or analysis is occurring.
Now here's the joker in the deck: "Secondary sources are second-hand accounts, at least one step removed from an event. They rely for their material on primary sources, often making analytic or evaluative claims about them." Hmmm... did any of your proposed secondary sources actually refer to either the AMD or Intel docs? Not that I can see. Maybe if those authors had actually read the primary sources we're talking about they'd have come to a different conclusion, instead of just blindly assuming that a 64 bit processor implies a 64 bit physical address.
I have to ask this: Would you be searching this hard for a justification for including "theoretical limit of 64 bit physical addresses" if it was not text you had written? Or might you, upon learning that physical address size is not related to integer size in an address translation environment, say to yourself "Hm! I never realized that!" and go on to something else, having learned something new? Jeh (talk) 11:32, 30 May 2010 (UTC)
It does appear that I was mistaken in the amount of RAM that can be supported by x86-64 CPUs. From my perspective though you were arguing with something I have seen stated in well over a dozen articles spanning more than 5 years and much of your argument was at first based not on links to references but on statements. Some good has come from this discussion though since I notice that you have improved the number of references and in line citations in the "Architectural features" section in the last few weeks. --GrandDrake (talk) 13:32, 30 May 2010 (UTC)

Evaluating sources

In general, any source must be considered of low quality (at least as far as referencing these points is concerned) if any of the following apply:

  • The source does not clearly state when it is talking about physical address space vs. virtual address space; in particular, the source simply uses the vague term "memory".
  • The source actively conflates virtual and physical addresses. For example, "expanded virtual address space allows the processor to address more RAM." The two are not directly related, as evidenced in x86 with PAE: 4 GB virtual, same as always, but 64 GB physical (RAM). In fact it is the expanded physical address space that allows the processor to address more RAM.
  • The source claims that the physical address limit of x86 processors is 4 GB, blithely ignoring PAE, and also ignoring the fact that many 32-bit OSs do use PAE to allow more than 4 GB RAM on x86 (just not Windows client editions; this is a software restriction, not hardware).
  • The source notes that "64 bit processors" allow "16 exabytes" (or "18 exabytes" - it's 18 × 10^18, 16 × 1024^6) without noting the current implementation limits of x64 processors that you can actually buy. i.e. if someone is really moving to x64 because of "64 bit address space" or "the ability to use 16 exabytes" they're going to be disappointed, because it isn't there. This one applies whether they're explicitly talking about virtual or physical, and also if they're not saying which.
  • The source is talking about Windows or is addressed to Windows users, but does not cite (or incorrectly cites) the Windows limits on virtual and physical address space, which are much lower than what current processors support.

Jeh (talk) 21:05, 29 May 2010 (UTC)
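The PAE numbers used in the list above are simple to spell out; a trivial check of the "4 GB virtual, 64 GB physical" point:

```python
# Virtual and physical limits are separate quantities, as the list above insists.
pae_physical_gib = (1 << 36) // (1 << 30)   # PAE: 36-bit physical addresses
x86_virtual_gib = (1 << 32) // (1 << 30)    # x86: 32-bit virtual addresses, PAE or not
print(pae_physical_gib, x86_virtual_gib)    # 64 4
```

Same processor, same 4 GiB per-process virtual space, yet sixteen times as much addressable RAM: the two limits are set by different mechanisms.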

I am willing to debate the reliability of sources based on Wikipedia policy. --GrandDrake (talk) 00:33, 30 May 2010 (UTC)
You are questioning the notion that when a "source" gives incorrect, misleading, or incomplete information, it should not be regarded as reliable? Wow.
I ask again: Why are you insistent on including unreferenced, misleading, and wrong statements in the encyclopedia? Jeh (talk) 05:08, 30 May 2010 (UTC)
I am willing to discuss the reliability of sources based on Wikipedia policy but I have no reason to agree to your personal list of rules about what you consider to be an acceptable source. For instance believing that a source is "low quality" because it doesn't include certain information you consider important is an opinion. --GrandDrake (talk) 09:46, 30 May 2010 (UTC)
How about when it is directly contradicted by not just one, but several primary sources? Mind you, no "interpretation", the primaries just say one thing and the other sources say another - you'll still take the secondaries over the primaries? Would you never entertain the notion that the other material just might be wrong? You don't realize that an important role of editors here is to evaluate sources against just such an issue? Jeh (talk) 11:32, 30 May 2010 (UTC)
Whether I would take one source over another would depend on several factors with the factor of the source being primary or secondary only being one of them. Also after reading additional sources I now agree with you about the amount of RAM that can be supported by x86-64 CPUs so I believe that the discussion is now settled. --GrandDrake (talk) 13:32, 30 May 2010 (UTC)

Moved page numbers for several references into the references

During a discussion I had concerning information about theoretical RAM size for the x86-64 CPU I was told that there were page numbers attached to the references in question. It was only late in that discussion that I noticed that the page numbers for that reference had been put into the article as small numbers after the reference link. Though the Template:Rp can be used it states that:

Warning

This template should not be used unless necessary. In the vast majority of cases, citing page numbers in the [ref ...]...[/ref] code is just fine. This template is only intended for sources that are used many, many times in the same article, to such an extent that normal citation would produce a useless line in [references /] or too many individual ones. Overuse of this template will make prose harder to read, and is likely to be reverted by other editors. Used judiciously, however, it is much less interruptive to the visual flow than full Harvard referencing and some other reference citation styles."

I do not see any references in question being used to such an extent that it would make sense to use this template and in one case this template is used for a reference that is only used once. --GrandDrake (talk) 07:50, 1 June 2010 (UTC)

Jeh, note that "A minor edit is one that the editor believes requires no review and could never be the subject of a dispute". Note that besides the one cited reference that uses the rp template I also added a page number to the first AMD reference which it did not previously have. Also please tell me where you think I removed a comment in that edit. --GrandDrake (talk) 08:04, 1 June 2010 (UTC)
So I forgot to uncheck the "minor edit" button. It happens.
I was speaking of the comment re. "10h" vs. "10th". I didn't see it in the "after" column in the diff. But now I see it was there. Again, my mistake.
I will defer to the referenced style guideline for rp except for the AMD Architecture manual. (And the use of rp for the one ref was a mistake; I thought I was going to use that one again.) It is now referenced 11 times. That's more than enough to justify rp. Should that document be replaced by an updated version (as is likely) someone is going to have to be sure to find n different citations (might be more than 11 by then) and edit them all, making certain they are all consistent. Speaking of that, the existing citation is of low quality, lacking all but the most basic information about the document; I don't want to have to correct them all for that, either. Please restore the use of the rp template for that reference. I would also refer you to WP:CITEHOW:

You should follow the style already established in an article, if it has one; where there is disagreement, the style used by the first editor to use one should be respected.

I believe this applies to the first use of rp. Jeh (talk) 08:33, 1 June 2010 (UTC)
...and, I missed clearing the "minor edit" button again. Jeh (talk) 08:33, 1 June 2010 (UTC)
Very well; though I believe it is 8 references (a few are cited more than once), I will return the rp template for the AMD Architecture manual. As for that reference having "the most basic information" you were the one that increased the in line citations with that reference from 2 to 10 in the last month. Also I was the first person in over 2 years to add additional information about that reference so it would be great if someone did improve the information about the references in this article. --GrandDrake (talk) 10:15, 1 June 2010 (UTC)
Tomorrow. Jeh (talk) 11:04, 1 June 2010 (UTC)

mmapping of large files

In the section about larger virtual address space, there was a comment about mmapping being generally faster than using "read" calls. This is not true, although lots of people think it is. It is faster if you access just a few pages of the whole file, so that the rest of the file can stay on the disk. However when you access MOST of the file, read usually wins out. This is because the operating system as well as the disk tend to catch on to the fact that you're reading the whole file, and tend to do appropriate read-aheads. This doesn't happen if you access the file through a mmapped region.... I changed "generally" to "sometimes". -- REW

Actually, yes, it does happen, at least in Windows. That intelligent readahead mechanism you're talking about? It's also in the pager. In fact it's the same mechanism, as the file cache uses the VMM to move things in and out of cache, which is mapped to the file. So really, you're using file mapping whether you use the file mapping calls or not. Jeh (talk) 05:47, 9 June 2010 (UTC)
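For anyone who wants to experiment with the two access styles being debated here, a minimal Python sketch (the file size and chunk size are arbitrary; real performance depends on the OS pager and readahead behavior, as discussed above):

```python
# Minimal sketch of the two access styles discussed above. Which one wins
# depends on the OS readahead heuristics, not on anything in this code.
import mmap
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 1_000_000)
    path = f.name

# Style 1: explicit read() calls -- kernel readahead sees sequential reads.
with open(path, "rb") as f:
    total_read = sum(len(chunk) for chunk in iter(lambda: f.read(65536), b""))

# Style 2: memory mapping -- pages are faulted in on first touch.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        if hasattr(mmap, "MADV_SEQUENTIAL"):   # hint the pager, where supported
            m.madvise(mmap.MADV_SEQUENTIAL)
        total_mapped = sum(m[i] == ord("x") for i in range(0, len(m), 4096))

os.remove(path)
print(total_read)      # 1000000
print(total_mapped)    # 245 (one probe per 4096-byte page)
```

The `madvise` hint is exactly the kind of mechanism by which a mapped region can get the same sequential readahead treatment as `read()` -- which is Jeh's point about the pager above.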

Why did AMD limit the physical address space to 52-bits with the x86-64?

Why did AMD limit the physical address space to 52 bits with the x86-64? Of the physical address bits, only bits 0 through 51 are currently used, while bits 52 through 62 are either reserved with AMD64 (as seen on page 128) or ignored with Intel64 (as seen on page 4-36), and bit 63 is used for the NX bit. This currently limits the physical address space to 52 bits even though the virtual address space can go up to 64 bits. Has either AMD or a reliable source given a reason for why the physical address space is limited to 52 bits with the x86-64? --GrandDrake (talk) 05:23, 3 June 2010 (UTC)

Who cares? Knowing this would not improve the article anyway so the discussion is irrelevant. Wikipedia is not a forum. HumphreyW (talk) 07:05, 3 June 2010 (UTC)

To GrandDrake:

One: You need to completely decouple the concepts of "integer register width" and "physical address width" in your mind. They are not necessarily related. There is no real reason to think that physical addresses on x64 "ought to be" the same width as the integer register width... so there is no point in looking for reasons that the sizes don't match.

Indeed, the fact that they do match on x86 without PAE is really just a coincidence of the PTE format. If, for example, x86's page tables had one more "reserved for software" bit, or if they had thought of the NX bit and put it in bit 31 of the PTE, we would have a 31-bit PA limit there.

I suspect that generalizing from x86's 32-bit PA is leading you astray. But that is really more of an exception than a typical case.

I've given examples of VA/PA mismatches before. I'll give you a few more. We'll start with the PDP-11. It is of course a 16-bit processor; all integer registers, including those that are always used to provide memory addresses (such as the program counter and the stack pointer) are 16 bits wide. Now on the most primitive PDP-11's there was no MMU... but on others the MMU allowed 18-bit physical addresses. On the higher-end Q-bus versions, and on still others with a dedicated memory bus like the 11/70, they supported 22-bit physical addresses.

For yet another example, this one from Intel: The 8088 was a 16-bit processor with a segmenting mechanism that allowed 20-bit PAs.

And yet another: One of the most successful architectures in history, IBM System/360, originally was a 32-bit processor with 24-bit physical addresses.

You can find similar examples throughout the history of MMU-equipped machines.

Two: I don't consider a limit that is "only" 1024 x 1024 x the typical 4 GB of a modern "largish" PC to be very "limited." That's six orders of (decimal) magnitude. It has taken about 20 years to expand typical RAM configurations by just three orders of magnitude (from 4 MB in the early 90s to 4 GB in 2010). Assuming a factor of 10 every 6 years, that gives 52-bit addresses about 36 years before they become "limiting." Even if the rate suddenly doubles, we still have almost 18 years. Assuming that x86-64 survives that long, there is ample time for a PAE-like extension to be added.
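The back-of-the-envelope estimate above can be checked in a couple of lines (the 10x-every-6-years growth rate is, of course, an assumption):

```python
# Back-of-the-envelope check of the growth estimate above.
import math

typical_ram = 4 * 1024**3          # 4 GiB, a "largish" 2010 PC
pa_limit = 1 << 52                 # 52-bit physical address limit

headroom = math.log10(pa_limit / typical_ram)   # orders of decimal magnitude
years = headroom * 6                            # assuming 10x every 6 years
print(round(headroom, 1))   # 6.0
print(round(years))         # 36
```

Six decimal orders of magnitude of headroom at that growth rate gives the roughly 36 years cited.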

And finally, Three: There is a reason for leaving bits 52 through 62 available to the OS. I don't have a RS for all of this but I do have "insider information"; searches for this might help you find a RS: AMD consulted extensively with various OS kernel engineers, including to my certain knowledge some from MS and some from the Linux kernel community, in architecting the "system programming" features of x86-64. One thing they heard often was that OS memory management code could benefit from having more bits in the page table entries in which to store information, even for valid PTEs. So, they got them. Windows and, I believe, Linux both use those bits for a "working set index."

Searching for "working set index" and "page table entry" will find some evidence that this is true. For example, one of the reason codes for Windows bugcheck 1A means "the working set index in the page table entry is corrupted." Reliable evidence that AMD deliberately reserved those bits for that and similar purposes, at OS engineers' request, will be tougher or likely impossible to find.
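What "using the available bits" might look like can be sketched in a few lines. The field position (eleven software-available bits at 52..62) follows the discussion above; the "working set index" packing is illustrative, not a statement of any OS's actual PTE layout:

```python
# Hypothetical packing of an OS "working set index" into the
# software-available PTE bits (52..62) mentioned above. Illustrative only.
SW_SHIFT = 52
SW_MASK = (1 << 11) - 1            # eleven available bits: 52..62

def set_ws_index(pte: int, idx: int) -> int:
    """Store idx in bits 52..62, leaving all other PTE bits alone."""
    return (pte & ~(SW_MASK << SW_SHIFT)) | ((idx & SW_MASK) << SW_SHIFT)

def get_ws_index(pte: int) -> int:
    return (pte >> SW_SHIFT) & SW_MASK

pte = set_ws_index(0x1234_5001, 1023)   # frame bits + valid flag, plus an index
print(get_ws_index(pte))                # 1023
print(hex(pte & 0xFFFF_FFFF))           # 0x12345001 -- low bits untouched
```

Because the hardware ignores (or reserves) these bits during translation, the OS can stash bookkeeping there without a separate lookup table.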

But really, back to point one: Looking for a reason why "it's not the same as the machine bit width" is really misunderstanding the principle. It isn't supposed to be the same. There is no reason, other than a mistaken extrapolation from one example in recent history, to expect them to be the same. So I don't consider that it's needful to find an explanation for why they're not the same. The PTE format allows for a total of 52 bits of physical address, and that is that. Conversely, the point that there's no reason to expect them to be the same is not specific to x64 and so doesn't belong here. Jeh (talk) 07:32, 3 June 2010 (UTC)

Placing the subject in context is noted in the Wikipedia guidelines for both the featured article criteria and in writing better articles. If there is a reliable source that explains why AMD decided on 52-bits for the physical address space I think that information would be worth including in the article. Also if what Jeh said is right there might be a reliable source out there that would explain why bits 52 through 62 were ignored/reserved. --GrandDrake (talk) 10:16, 3 June 2010 (UTC)
But there are a myriad of design choices taken by AMD/Intel when designing a chip. To ask why for each and every choice would be folly. IMO there is already too much attention on the address space, and adding more by talking about why 52-bit was chosen and other bits ignored etc. is just too much weight given to only one factor. Should we also answer the unasked question of why AMD chose to disallow LAHF/SAHF (only to be reinstated later)? Should we also answer the unasked question of why AMD chose to zero the high 32-bits during 32-bit operations but not for 16-bit operations? All of these design choices are probably very complex and detailed, presumably involving trade-offs and budget considerations etc., and going into deep detail has a limited audience and only serves as stuffing to increase the article size without increasing the article's utility as a resource. Decisions were made, we don't really know why, but they were made. So instead of revisiting the "why", I think we should focus upon the "what", i.e. what it does, not why it does it. HumphreyW (talk) 10:28, 3 June 2010 (UTC)
I agree. See WP:NOTTEXTBOOK. That isn't "placing the subject in context." "Context" is the larger picture into which the subject of an article fits. What you're asking for here, on the other hand, are justifications for some very fine-grained details. These sorts of questions and the answers thereto, the tradeoffs, the other possibilities that were considered but rejected, etc., etc... these would be part of a case study of the design process, not a description of the actual architecture that emerged, the latter being the subject here. That is valuable information to some, yes, but it belongs in the textbook for a processor architecture design class in a computer engineering curriculum, not Wikipedia. Jeh (talk) 11:33, 3 June 2010 (UTC)

Addressing over 4GB and max array size

In Talk:X86-64#Why does the physical address space matter? several well-informed editors made convincing arguments that 32bit computers can access more than the 2^32=4GiB memory through various tables and address translations. But I am just a programmer, and the way it looks to me is that on a 32 bit system I get a 32bit flat address space and cannot allocate over 2 or 3 or 4GB (depending on Windows or Linux flavor), my pointers are 32bit integers. My process simply cannot address more memory than 4GB. (Sure the hardware and the OS can create many flat addressable spaces each 4GB in size. Useful to the OS but not to my process.) On a 64bit system (hw+os) I do not have this limitation, my pointers are 64bit integers and I can go over the 4 GB limit, even in a single malloc to allocate a large array. So... isn't this a good reason to say in the article lead that 64bit allows access to more than 4GB of memory? Jmath666 (talk) 07:54, 9 June 2010 (UTC)
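The limit being described is visible from inside any running process; a minimal Python sketch (Python shown for convenience, but the same arithmetic applies to C pointers):

```python
# A process's pointer width bounds its flat virtual address space.
import ctypes
import sys

ptr_bits = ctypes.sizeof(ctypes.c_void_p) * 8      # 32 on x86 builds, 64 on x64

print(ptr_bits in (32, 64))                        # True on common builds
print((1 << 32) == 4 * 1024**3)                    # the 4 GiB a 32-bit pointer implies
print(sys.maxsize == (1 << (ptr_bits - 1)) - 1)    # Python's limit tracks pointer width
```

On a 32-bit build no single allocation can exceed what a 32-bit pointer can span, regardless of how much RAM the hardware can address; that is exactly the programmer's-eye view described above.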

Sigh... It's not that simple. The entire long weary discussion with GrandDrake would not have happened if it were not for the fact that far too many writers have been imprecise on this very point. Some simply say "memory" without specifying virtual or physical; some refer to a "4 GB memory limit" and many readers think they're talking about RAM; others explicitly claim that x86 is limited to 4 GB RAM "because it's a 32-bit system"; and yet others actively confuse virtual and physical, saying one when they should be saying the other, e.g. "x86 limits processes to 4 GB of RAM." I work with this stuff and present it to my clients every day and I run into these misunderstandings all the time. Wikipedia absolutely should not contribute to this mess. If it takes a little more verbose and technical language to be precise, so be it. This isn't "Wired" or "PC World" or Moss Waltburg's column, it's supposed to be an encyclopedia. Jeh (talk) 08:30, 9 June 2010 (UTC)
I've added a bit to the lede indicating the benefit to the programmer. By the way, one of the reasons the details are not in the lede is that accurate details are too complicated, and really, too surprising. See, x64 does not currently support a 64-bit address space, either virtual or physical, and 64-bit physical is not even architecturally possible... so saying, in the lede or elsewhere, that it "supports 64-bit addresses" is just wrong. The real, complicated, surprising (but completely referenced) details are further down. Now if you can come up with a way to accurately summarize that information in the lede, with lede-like brevity, yet without being in any way misleading or imprecise about virtual vs. physical memory, please do so, but I'm just not seeing it. Remember, the lede is supposed to be an extremely brief summary; for a long article like this it is only supposed to be enough to tell the reader if they're at the right article or not. Jeh (talk) 08:42, 9 June 2010 (UTC)

Agreed. Well, can you think of a way to put 4GiB somewhere in the article lede, please? Overcoming the 4GiB limit (of what user code can malloc) is the whole point of the 64bit transition to my clients. They just want to solve on cheap Windows or Linux machines the big engineering models they did on expensive Unix workstations, which went through the 64bit transition many years earlier. We just use the computers, rather than designing them. Thanks, Jmath666 (talk) 08:58, 9 June 2010 (UTC)

...and someone else will think THIS detail really belongs in the lede also... eventually the lede contains the whole article. Really, the details are already in the "Architectural features" section, item "Larger virtual address space", and in the OS support sections (note that Windows gives you an 8 TB VAS per process, most other OSs give you 128 TB... another detail I don't want to put in the lede). Jeh (talk) 09:21, 9 June 2010 (UTC)

Why does the physical address space matter?

HumphreyW, on the issue of undue weight, if anything the information on the physical address space is lacking. Despite being one of the top three issues mentioned by articles I have read on the x86-64, at the moment the physical address space is given a single paragraph in the middle of the "Architectural features" section, with no explanation given for why the physical address space matters. --GrandDrake (talk) 23:14, 3 June 2010 (UTC)

The paragraph in question does spell out the number of bytes of RAM addressable and compare it to the limits on x86. I don't see how adding "justification for the 52-bit limit" (a limit not likely to be an issue for many years) will add to this. ... I suppose I can add a sentence explaining why more RAM addressability is a good idea, but this is not at all a point that's unique to x86-64. Jeh (talk) 05:17, 4 June 2010 (UTC)
I never said that it was unique to x86-64 and I know that you and HumphreyW believe that there is not a simple explanation for why AMD decided on 52-bits for the physical address space (which might be true). My previous post was only addressing the opinion that the address space had too much weight given to it which is something I disagree with in regards to the current information given on the physical address space. --GrandDrake (talk) 23:55, 5 June 2010 (UTC)
"I never said that it was unique to x86-64". No, but you said "with no explanation given for why the physical address space matters", as a criticism of the article's current coverage of this point. But "why the physical address space matters" is not unique to x86-64 and therefore isn't something this article should go into in much, if any, detail. I really don't think anything more than "increased physical address space allows more RAM" is required. Jeh (talk) 06:17, 6 June 2010 (UTC)
Based on Wikipedia policy a consensus develops from agreement of the parties involved. Considering you previously said that "I suppose I can add a sentence explaining why more RAM addressability is a good idea" I decided after several days to add some information since I thought you wouldn't be opposed to it. I would consider two short sentences to be concise and an explanation for why RAM is beneficial explains why the ability to access more RAM is notable. --GrandDrake (talk) 00:03, 8 June 2010 (UTC)

Well, I have since re-thought that position. In fact I think that what the article needs are referenced statements noting that increased physical address space beyond x86's 36 bits is really not a big deal for the vast majority of users.

First: A statement, no matter how well referenced, that RAM is faster than a hard drive is irrelevant to a description of x64's architectural changes; it rather belongs in an article comparing various data storage technologies. To put it the other way around - If it applies here then it applies just as well to the articles on PAE, on Itanium, and on other architectures with larger physical address spaces than their predecessors... going all the way back to 8088 vs. 8008 in the microcomputer field, and much farther than that in computing in general.

As for "consensus", consensus is represented by the previous state of the paragraph in question, which has been essentially stable for many many months. Yes, consensus can change, and for that, agreement is essential. You need to reach agreement with other parties for this change and I for one certainly am not in agreement that this material belongs here.

Second: a statement that "RAM is faster than a hard drive" (Really? You don't say? Imagine that!) is obvious to the point of being trite. Providing four different references to "defend" it, and then making essentially the same statement again, and using the same four references for the restatement, is being pedantic to the point of obnoxiousness; it's like making a statement that "trains roll better if they use round wheels instead of hexagonal" and then looking up a bunch of "references" to back it up. Even in an article where this statement needs to be made (perhaps, again, an article comparing various data storage technologies) these "references" are supporting a statement that no one is ever likely to challenge and therefore need not be supported by references at all. At most one reference would be completely sufficient... preferably to something like performance specifications.

Third: x64's vastly increased physical address limit is a moot point for the vast majority of users, except for the way Microsoft has chosen to support large RAM configurations, and the notion that we needed x64 to "break the 4 GB RAM barrier" is at best misleading. Allow me to explain.

Very few computers with x86 CPUs these days have CPUs that are not capable of PAE; netbooks are the only common exception. In other words, most x86 CPUs these days can address 64 GB RAM. They have had this capability since the Pentium Pro. Remember that many of the x86 "server" editions of Windows 2000 and 2003 Server do support more than 4 GB RAM - they do this via PAE, of course.[1]

It is only an implementation decision of Microsoft's to not support PAs more than 32 bits wide on their "client" versions of Windows (XP, Vista, and 7).[2]

The "4 GB RAM" limit of x86 is therefore largely a myth; it is a constraint imposed by the Windows OS, nothing more.

And the notion that we needed x64 at a hardware level to let us break this supposed 4GB barrier is therefore also a myth.

The way in which x64 does allow most users (those running Windows "client" OSs like Windows Vista or Windows 7) to use more than 4 GB RAM is very indirect: By using an x64 processor with a Microsoft x64 OS, you are using an OS which Microsoft has not "hamstrung" to 32-bit physical addresses.

The x64 physical address myth goes further than that. It is true that x64 allows RAM to go to 4 petabytes instead of the "mere" 64 GB permitted by PAE under x86, but really, 64 GB is already far outside of the amount that is likely to be put in most PCs anytime soon; or in fact, is even possible for installation, given chipset and DIMM slot limitations. I do happen to have a server class Xeon mobo here that can take up to 144 GB RAM with currently available DIMMs, but it is very much the exception, and even that would require 8 GB DIMMs, which are like hen's teeth and not supported by most consumer boards. The typical "desktop" or "deskside" motherboard has perhaps four or six DIMM slots supporting at most 4 GB DIMMs.

SO... In sum, I agree (even more than I did before) with HumphreyW: undue weight is even now being given to x64's 52-bit PA space. Yes, x64 allows users of Windows client OSs to use more than 4 GB of RAM for the first time. But they could have done that on x86 by using Windows Server OSs, or any of a number of Linux releases. Yes, x64 users will be able to use more than 64 GB of RAM... but for most applications, motherboards and DIMM configurations supporting even 64 GB are well in the future and more than 64 GB are farther away still. Jeh (talk) 02:56, 8 June 2010 (UTC)

I see no reason to explain, in this article, that RAM is faster than HDDs. That sentence just does not fit into the article about x86-64. Should we also explain that SRAM is generally faster than SDRAM? And we would then have to explain about why SDRAM is used over SRAM etc. The whole thing just becomes a mess with trying to justify all the different ways to store and access data remote from the CPU.
Explaining "why" 52-bit address width is also of no value to the article. No other articles on CPU architecture go into the "why"'s of various implementation details and I have not yet seen an argument here that convinces me that this article should break with that trend. HumphreyW (talk) 03:19, 8 June 2010 (UTC)
Well... this article does include a few statements of "what this is good for." For example, the item on increased number of registers. However the point that "RAM is faster than hard drives" is completely a) obvious and b) misplaced here. And as I've pointed out above, breaking the 4 GB "barrier" was done years ago in x86 space by the Pentium Pro. x64's increased PAS (vs. PAE's 36 bits) will remain a theoretical advantage for almost all users for quite some time. It therefore just isn't that big a deal. Jeh (talk) 04:01, 8 June 2010 (UTC)
Jeh, I don't see where you get the idea that "consensus is represented by the previous state of the paragraph in question"; if that were how Wikipedia worked, nothing could ever be changed, since there would always be a consensus against change. You can make the argument that we need to work towards a consensus, since there is a disagreement on this issue, and based on the Wikipedia policy on consensus that would be true.
Ok, call it "previous consensus" if that makes you feel better. Agreement, i.e. a new consensus, is still needed to support a change. This could be in the form of "nobody objects to the change," but that clearly is not the case here. Btw, according to WP:BRD the cycle goes "be bold" (you were), "revert" (I did), "discuss" (we are). It does not include you stubbornly restoring the original "bold" change as you insist on doing. Similarly, according to Consensus as a result of the editing process, the next step is supposed to be further discussion and compromise, not for the original "changer" to stubbornly insist on his change until talked out of it. As such you are not in a very good position to be citing WP policy here. Jeh (talk) 06:24, 8 June 2010 (UTC)
I try to follow Wikipedia policy but I do make mistakes. When you stated that I did not heed the consensus I should have explained why a consensus had not been reached in an edit summary without reverting your revert. Note though that I did check on the discussion page for an explanation from you before reverting your revert, that I was the first one to post about the revert of my edit to the discussion page, and that I was the first one to ask for a discussion about it. --GrandDrake (talk) 06:08, 9 June 2010 (UTC)
Since one of the major changes of the x86-64 allows for a vastly larger amount of RAM to be accessed I think mentioning why that is notable makes sense. After all the virtual address space explains why it is beneficial so why shouldn't the physical address space? If you think a simpler explanation would be better than I might agree to that but I do think that some sort of explanation for why RAM is beneficial should be given. --GrandDrake (talk) 04:27, 8 June 2010 (UTC)
As you state, "one of the major changes of the x86-64 allows for a vastly larger amount of RAM", and it is just that: only "one of" the many changes. So why give so much weight to just this change? There is no reason to state that 16 GPRs are beneficial, there is no reason to state that HDDs are slower than SDRAM, there is no reason to state that a larger physical address space is beneficial. All are plainly obvious, of course, and choosing to comment on only one of those is unnecessary and unhelpful to the reader. There is no need to bloat up the article with word-stuffing about how great larger address spaces are; the article is not a forum. Just the facts is enough; including opinions/anecdotes/discussions about the wonders of large address spaces is not useful. HumphreyW (talk) 04:55, 8 June 2010 (UTC)
GrandDrake, I spent over an hour writing about 900 words (with highly authoritative references for the fundamental facts), explaining to you why it isn't particularly notable and isn't beneficial to the vast majority of users: Despite many mistaken claims to the contrary, we didn't need x64 to get us beyond 4 GB RAM (e.g. Windows Server x86 supports more, and the 4 GB address limit in Windows x86 client versions is purely artificial); and the "increase" from 64 GB maximum to (whatever) will not matter to the vast majority of users for quite some time, because even though the CPU chips support RAM beyond 64 GB, most of the chipsets, DIMM slot configurations, etc., not to mention budget limits vs. RAM prices, do not. Did you just ignore it? Jeh (talk) 06:24, 8 June 2010 (UTC)
I disagree with the opinion that the increase in the physical address space isn't notable. The idea that most consumers won't reach the PAE hardware limit of 64 GB of RAM for several more years is true but even with that hardware limit being several years away for most consumers we are talking about a widely reported limit that has been discussed for over 5 years. I do not think it would make sense to ignore a widely reported issue simply because the hardware limit has not yet been reached by consumer computers. Also stating that the real issue is the operating system doesn't change the fact that it can be an issue for consumers today. A consumer in North America can currently buy 8 GB of RAM for under $200 (US dollars) and the vast majority of consumers are using an operating system (32-bit consumer Windows OS) that not only limits the computer to less than 4 GB of RAM but also normally limits a process to 2 GB of RAM (3 GB of RAM if you don't mind that it "may cause some drivers and/or services to fail"). From what I have read on Apple's website 32-bit programs on Mac OS X have a per process limit of 4 GB so this is an issue that affects that operating system as well. These are the reasons I think the physical address space for the x86-64 is a notable issue. --GrandDrake (talk) 06:08, 9 June 2010 (UTC)
No one is saying it is not notable. Just that there is no need to go into why it is 52-bit (or even why the exposed address bus is less than 52-bit). Having lots of RAM is great, but the inability of an OS to take advantage of that has no bearing on the x86-64 architecture. Things like 2 GB and 3 GB limits should be in an article about OSes, not in this article. When I read this article I don't care about what an OS can or cannot take advantage of, I care about what the architecture can or cannot do. If I wanted to know about Windows or Apple's OS then I would go to those articles and read there about various limitations they might have. All that is really needed IMO is to say that some OSes cannot take advantage of the expanded address capabilities and that even some CPU chips don't give the full 52-bit bus. To go any further, and get into details about each and every implementation, is going too far. HumphreyW (talk) 06:43, 9 June 2010 (UTC)
I think the limits of current and past x86-64 implementations do show how the implementation has changed over time, and at least the major changes (40-bit physical address width increased to 48-bit physical address width) I would consider notable enough to include in the article. --GrandDrake (talk) 21:42, 9 June 2010 (UTC)
Well, GrandDrake, I have added a whole new section discussing the significance (or not) of the extended physical address space, with many high quality references (Intel, Microsoft). If you disagree, please come up with opposing references of at least the same quality, or else any changes you make to that section or to related text in the rest of the article will not stand. Jeh (talk) 07:07, 9 June 2010 (UTC)
I hope you were going overboard with the implication about "any changes you make to that section" being reverted without "opposing references" since there are some changes I might make in the tone and wording of that section. --GrandDrake (talk) 21:42, 9 June 2010 (UTC)
By the way your citations of the 2 and 3 GB process limit on x86 Windows, and the 4 GB limit on Mac OS, are at root virtual address limits, not physical. Now it is true that a process cannot use more RAM than its virtual address space, so by having a virtual address space limited to 2 GB a process's physical RAM usage is practically limited to 2 GB as well (and it will usually be considerably less due to other uses of RAM). But the primary limiting factor is virtual address space - the "32-bit" nature of an x86 CPU.
And this really supports my point: Yes, x64 does extend this per-process limit, but not because it supports a larger physical address space. As I hope I have adequately established, we already have a larger physical address space available on x86 systems, much larger than 4 GB. But x86 is a 32-bit architecture, so virtual address space is limited to 2^32 bytes. The point that x64 systems (hardware + OS) allow processes to grow beyond these virtual address space limits is completely true. This however does not argue for the significance of x64's larger physical address space, but rather of its larger virtual address space... a benefit very thoroughly already covered in the article.
To put it the other way around, x64 processors with a 36-bit physical address limit would have the same practical benefit, for almost all users, as actual x64 CPUs do that support 40- or 48-bit physical addresses: They all allow use of a 64-bit OS, and of 64-bit executables that could use much more than 4 GB of v.a.s.; on Windows they even allow 32-bit processes to use 4 GB instead of being limited to 2 or 3. And they would all support considerably more RAM than almost all users are likely to buy.
Yes, 8 GB or even 12 GB of RAM is affordable for a medium-high-end system. But 64 GB of RAM would alone cost about twice as much as most people put into their entire computer and even larger configurations are of course correspondingly farther out of reach. There are fanout and timing issues with loading up a set of DIMM slots with so much RAM, too; the more DDR3 chips you have on the same channel, the slower the channel has to run. It therefore isn't always desirable to max out your RAM even if you can afford it. Jeh (talk) 07:21, 9 June 2010 (UTC)
The total of all RAM usage on an x86 system can of course vastly exceed 4 GB, as a portion of each process's virtual address space will be in RAM, and a portion of the system's virtual address space as well. The 2GB, or 3GB, or 4 GB per-process virtual limit is indeed per-process - each process can have its own virtual allocation of up to 4 GB, depending on the OS. And that is why large x86 systems can not only address RAM beyond 4 GB, they can use it effectively, even though each process is constrained to 2, 3, or 4 GB. Jeh (talk) 07:34, 9 June 2010 (UTC)
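The various bit widths argued over in this thread translate into sizes by simple arithmetic; the following sketch (purely illustrative, not article text) checks the figures quoted above:

```python
# Address-space sizes implied by the bit widths discussed above.
def space_bytes(bits):
    """Size in bytes of an address space with the given width."""
    return 2 ** bits

GB = 2 ** 30  # gigabyte in the binary sense used for memory
TB = 2 ** 40  # terabyte, likewise

print(space_bytes(32) // GB)  # x86 virtual address space: 4 GB
print(space_bytes(36) // GB)  # PAE physical address space: 64 GB
print(space_bytes(48) // TB)  # current x64 physical limit: 256 TB
print(space_bytes(52) // TB)  # architectural x64 physical limit: 4096 TB (4 PB)
```

The 4096-fold jump from PAE's 36-bit limit to the architectural 52-bit limit is the "increase" whose practical weight is being debated here.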
The fact that for five years (or whatever) people have been erroneously claiming that it was necessary at the hardware level to go to 64-bit systems to support more than 4 GB of RAM is, I think, quite interesting (though not really a property of x64). In the interest of putting x64's properties into proper context I have covered this point in my new section: I have used a couple of your former references to establish that point, before going on to note (with "horse's mouth" references) that x86 has supported more than 4 GB RAM for about 15 years. (The Pentium Pro was introduced in 1995.)
If you are still at odds on some of these points, can I recommend some reading? Get a copy of Windows Internals by Russinovich and Solomon; even some of the earlier editions (called "Inside Windows 2000" and later) will be sufficient, and should be available used for not much money. (Just not the very early "Inside Windows NT" by Helen Custer.) Read and understand the chapters on processes and on memory management. I feel the material there will be of great benefit to you. Until you are able to discern which web writers are confusing virtual vs. physical addresses you are going to continue to fall into these traps.
If you will undertake that bit of self study I will be happy to answer point questions on the material - but I really don't have time to continue presenting it, one piece at a time, here.
In the meantime, can we please move on to something more productive for the encyclopedia? Jeh (talk) 07:27, 9 June 2010 (UTC)
I looked through the x86-64 article and I can not find any mention of the 4 GB per process limit based on the 32-bit virtual address space. Is that limit mentioned in the article? --GrandDrake (talk) 21:42, 9 June 2010 (UTC)
Yes. "Architectural details", "virtual address space" bullet point. "This is compared to just 4 GB (2^32 or 4,294,967,296 bytes) for 32-bit x86.[5]" Of course that's really more a topic for the x86 article. Should we also, while we're naming all of the x86 vs. x64 GPRs, note which ones did not have byte-addressable forms on x86, but do now on x64? Jeh (talk) 23:00, 9 June 2010 (UTC)
Since you do agree that the 32-bit virtual address space limit is an important limit could there be a one sentence mention of that or an extension of the current sentence to something like "This is compared to the 4 GB (2^32 or 4,294,967,296 bytes) per process limit for 32-bit x86"? --GrandDrake (talk) 23:58, 9 June 2010 (UTC)
With the exception of the words "per process" that is what it says now. The "per process" aspect, and in fact the very concept of processes, is an operating system-dependent detail. And the x64 OS-imposed process space limits are already given in the OS-specific sections. If you insist, that would be the place to go into equivalent detail for each OS for x86, for comparison. But as far as the earlier sections of the article go, they are just describing the processor architecture, and the only thing the processor architecture imposes is a 4 GB virtual address space limit. The section on "physical address details" doesn't cover the various RAM limits imposed by the various editions of Windows Server x64 either. Jeh (talk) 00:11, 10 June 2010 (UTC)

The statement "up to a point" is ambiguous and makes it sound like there is a hard limit

The statement that increased RAM helps "up to a point" is ambiguous and makes it sound like there is a hard limit. Since the phrase "which depending on the programs that are running can improve performance" is already conditional I don't see the need to add an ambiguous statement to the end of it. And different programs, depending on different situations, will vary in whether and how beneficial additional RAM would be (as is the case with the 64-bit version of Photoshop). So instead of the article saying "which depending on the programs that are running can improve performance, up to a point", how about "which depending on the programs that are running can improve performance; though programs will have various points of diminishing returns"? Would that be acceptable?

Also in regards to the references one links to what I believe is the wrong article since it doesn't have the quote that the reference uses and the information about the reference doesn't match up. The other reference is discussing the issue of the point of diminishing returns based on current price and performance for the Mailbox server role of Exchange Server 2010. --GrandDrake (talk) 01:28, 10 June 2010 (UTC)

Of course it's ambiguous; you can't say where the point will be without measuring performance under the particular workload (which doesn't just depend on what programs are running, btw). Re the refs, the first was a cut-and-paste error, and you could have just asked that I fix it instead of starting yet another round of edits. The second is quite applicable regardless of your read of it. It is well known that for any given workload there is a point at which adding more RAM will not improve performance much, if at all; this point is made in the first "operating systems" class in any competent Computer Science curriculum. If you persist in this objection I can easily come up with dozens of references for that and you will eventually have to back down, just as you have on previous points. Or you can save us both the time. Your choice. Jeh (talk) 05:46, 10 June 2010 (UTC)
Jeh, you didn't explain what you had against my suggested compromise, and I have no idea why you think that I was asking for a particular point. I was asking that it be made clear that there is a point of diminishing returns and that it depends on the workload in question. Both of your references even use the phrase "a point of diminishing returns", so I see no reason why "diminishing returns" should be left out of the statement. I would recommend that instead of simply trying to get me to "back down" you at least consider the idea of compromise. How about replacing "up to a point" with "though a particular workload will reach a point of diminishing returns"? --GrandDrake (talk) 06:33, 10 June 2010 (UTC)
Hm, yes, I misread your suggestion. Apologies.
But, really, I'm now disliking the whole thing. I guess it's just that (as with many other things) it's a case of having to be so verbose to be accurate, and a brief statement is misleading, so I'm going back to the idea that it's silly to have to explain here that more RAM capacity can be better, or why that might be so. (Would anyone really assume that it might be a liability? Come on!) In an article discussing paging dynamics and performance measurement and tuning, that would be different. i.e. this article is not the place to explain how virtual memory works, or the ramifications on how it works, which is what we're talking about. Think about this: Does an article on a new model of a car have to explain that a higher power engine will probably make the car go faster? Or that a larger gas tank will give it more range? Jeh (talk) 07:51, 10 June 2010 (UTC)
So along those lines I'm arguing for just

Current AMD64 implementations support a physical address space of up to 2^48 bytes of RAM, or 256 TB (281,474,976,710,656 bytes).[3] This is a large increase over the limit imposed by x86 processors. In practice, the importance and meaning of this increased RAM capacity depends on a variety of factors, including the operating system chosen.

and that's all. I even think the "this is a large increase over..." sentence can go, as it's been mentioned already in the bullet list. (Gosh, a 4096-fold increase is large! No bleep!) Jeh (talk) 09:10, 10 June 2010 (UTC)
Just an idea, but this could be done by moving the other information into a subarticle called "x86-64 physical address space" (to move most of the information there) or "x86-64 physical address space implementation" (to move the implementation information there for processors and operating systems). What do you think about doing something like that? --GrandDrake (talk) 07:46, 11 June 2010 (UTC)
Meh, I don't think there's nearly enough for another article. I propose just leaving it be for a while and see if anyone else has a contribution. It's now "more inclusive" of information and since that stuff has been written it's probably better to leave it there until a clearly better alternative arises. Jeh (talk) 10:08, 11 June 2010 (UTC)

Should x86 be used to refer to 32-bit x86 for processors and/or programs?

Since there is a disagreement on this point should x86 be used to refer to 32-bit x86 for processors? I ask since there should be a consistent term used when referring to 32-bit x86 in the article and at the moment the article has "x86", "32-bit x86", and "x86-32". Also should this apply to statements about programs where x86 is mentioned such as with "32-bit x86 executables"?

In most of the contexts used here (I haven't made an exhaustive check) these are all effectively synonyms. Technically speaking the "32-bit" is redundant once you've said "x86"; it is probably useful to mention now and then just to emphasize the point. In contexts where we might be talking about 16-bit code running on an x86 CPU, one would usually speak of "16-bit executables" to make that distinction. Contrariwise, "32-bit" by itself is, rigorously speaking, insufficient to mean "x86" as there are of course other 32-bit processors than x86. But none likely to be thought of in this context, in comparison with x64.
I have to say that rationalizing all of these occurrences (not just in this article) is not the most important WikiJob I can think of for anyone to do right now. Jeh (talk) 00:49, 10 June 2010 (UTC)
x86 is now used consistently in the article. If there are any statements where anyone thinks it is necessary you can change them back. Just an idea, but if anyone thinks that it isn't clear that x86 refers to 32-bit x86 then it might make sense to consistently use 32-bit x86 instead. --GrandDrake (talk) 04:29, 12 June 2010 (UTC)

Oft-repeated disambiguation of binary prefixes considered ugly

Every time a memory (p. or v.) size limit is mentioned there is text like this:

therefore can address up to 256 TB (2^48 or 281,474,976,710,656 bytes) of RAM.

It's repetitious in the extreme, very visually annoying.

Isn't there a better way to do this?

Idea: there are only a few different values actually used. Each of them could have its own note (and the notes and references section would be split into two, keeping the notes separate). So for example, 256 TB would appear as 256 TB[note 1], and note 1 would say "256 TB equals 2^48 or 281,474,976,710,656 bytes (approximately 280 × 10^12)." The current footnote template that says that since we're talking about semiconductor memory, these units are being used in their binary sense, would go in the notes section as well.

This might be good enough to start a new trend. Comments?

Jeh (talk) 08:59, 10 June 2010 (UTC)

WP:COMPUNITS recommends disambiguation but there is no preference for how it is done, and the notation style only needs to be consistent within the article. At the moment this article uses two notation styles; how about just going with the shorter one? Doing that would turn "256 TB (2^48 or 281,474,976,710,656 bytes) of RAM" into "256 TB (2^48 bytes) of RAM". --GrandDrake (talk) 08:10, 11 June 2010 (UTC)
I don't believe every use of GB/TB/PB/EB needs to be disambiguated (otherwise there's really no point in using the units), just the first use per unit. The rare case of mixing binary & decimal schemes would need this, but not the general case.
Actually, I really like your suggestion as a way out of the KiBs v. KBs dilemma. I suggested something along these lines here: simple disambiguation scheme. A footnote per unit use (not per value) would work great and is in common use today. Jeberle (talk) 16:21, 13 June 2010 (UTC)
For that matter, this article already has a footnote (4) with unit values and a link to Binary prefixes. Why not just add the footnote to GB, EB, PB and remove the inline expansions? Is this acceptable or grounds for a swift revert? Jeberle (talk) 19:12, 13 June 2010 (UTC)
The problem is that using one footnote per use of a unit (megabyte, gigabyte, terabyte, etc...) does not make it clear. For instance, if you see 256 TB (without a footnote) and with no indication of whether it is binary or decimal, then how do you know which one it is? Only by reading the notation for one of the values with a footnote would you see that all of the memory values in this article use binary. That works today but could also change in the future, which makes it unreliable. I have to say that the footnote method would only make sense if it was used with every value, or it would create the very ambiguity that it is supposed to remove. Also I checked on the discussion page for WP:COMPUNITS and the consensus was against the footnote method. As such I believe you need to first get a consensus in support of this new footnote method before it can be used. --GrandDrake (talk) 06:37, 18 June 2010 (UTC)
Hmm, the entire article uses binary units. Trying to be polite here, but I think only a simpleton would be confused. Now the article is littered w/ line noise that serves no purpose. Great. Jeberle (talk) 07:35, 18 June 2010 (UTC)
I checked on WP:COMPUNITS and several Wikipedia articles and after consideration I agree that one notation per unit is allowed. That still leaves the issue that the footnote method you are proposing has not been approved. If the desire is simply to reduce the amount of repetition in the article, then I believe that power notation would work, and I have edited the article to use that. --GrandDrake (talk) 09:47, 18 June 2010 (UTC)
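The binary-vs-decimal prefix distinction being debated in this section is easy to illustrate numerically; the following sketch (illustrative only, not proposed article text) shows how far apart the two readings of "256 TB" are:

```python
# Binary vs. decimal interpretation of "256 TB".
binary_tb = 2 ** 40    # 1 TiB, the binary sense used for semiconductor memory
decimal_tb = 10 ** 12  # 1 TB in the decimal (SI) sense

print(256 * binary_tb)   # 281,474,976,710,656 bytes, i.e. exactly 2**48
print(256 * decimal_tb)  # 256,000,000,000,000 bytes

# The gap grows with the prefix: about 10% at the tera scale.
print(round(100 * (256 * binary_tb / (256 * decimal_tb) - 1), 1))
```

This is why the article's memory sizes need disambiguation at all: "256 TB" read decimally undercounts the 2^48-byte space by roughly ten percent.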

Prefetch

The AMD 64-bit architecture specifically included the prefetch instructions in the interface; the early Intel 64-bit processors didn't support this, and at least a couple of the first 64-bit revs of Linux didn't run on Intel CPUs for that reason. Alan Cox (talk) 22:03, 18 June 2010 (UTC)

Bias in how the physical/virtual address spaces are portrayed

One problem I notice in this article is that there is a bias in how the physical/virtual address spaces are portrayed. For comparison, this is how the details of the 4 GB per process virtual address space limit are given:

"This is compared to just 4 GB (2^32 bytes) for the x86"

Not exactly a detailed explanation and compare that to how the physical address space limit is given:

"For comparison, x86 processors are limited to 64 GB of RAM in Physical Address Extension (PAE) mode,[7] or 4 GB of RAM without PAE mode." "In fact almost all x86 processors (from the Pentium Pro onward) can address up to 64 GB of RAM via physical address extension (PAE), a modification of the address translation scheme that is otherwise used.[7] Many x86 operating systems, including some versions of Windows Server, support this.[13][14][15] Provided that the operating system supports more than 4 GB of RAM, the increased physical addressing capability of AMD64 would therefore only be needed in systems requiring (and physically able to accommodate) at least 64 GB of RAM." "...even though x86 processors do not actually impose a 4 GB limit."

The 64 GB physical address space limit is noted clearly in several sentences while the virtual address space limit of 4 GB per process is not explained clearly in even one sentence. Attempts to clearly note the 4 GB per process virtual address space limit have been deleted. --GrandDrake (talk) 06:48, 19 June 2010 (UTC)

Note that the following two statements are unreferenced and repeat what earlier referenced statements had said:

"Provided that the operating system supports more than 4 GB of RAM, the increased physical addressing capability of AMD64 would therefore only be needed in systems requiring (and physically able to accommodate) at least 64 GB of RAM." "...even though x86 processors do not actually impose a 4 GB limit."

When I tried to delete them I was told that they were part of the narrative but at the moment that narrative looks biased by repeating information about one fact while ignoring a different fact. The average person who reads this article could easily come away from it thinking that they don't need x86-64 in regards to RAM since current 32-bit CPUs support 64 GB of RAM while never knowing that there is a 4 GB per process limit due to the 32-bit virtual address space. --GrandDrake (talk) 07:18, 19 June 2010 (UTC)

Oh for crying out loud. Yeah right, the article is "biased" against a discussion of virtual address space in favor of a discussion of physical address space. I think this is a completely ridiculous criticism.
If I remember right you started out your long series of edits here not even knowing that PAE existed or that it was relevant to the comparison with x64, not knowing the true PAS limits of x64 (to the point of being unbelieving even after being beaten over the head with the references), and also not being clear on the difference between virtual and physical addressing - I doubt you are clear on the latter even now, but that aside... You complained that a major talking point for x64 was the vast increase in RAM addressability - you thought at the time it would allow addressing 2^64 bytes and that x86 only allowed 2^32 - and you wanted that point put in the lede and more coverage of this issue in general.
Hm, yes, seems that I DO remember correctly:

HumphreyW, on the issue of undue weight if anything the information on the physical address space is lacking. --GrandDrake (talk) 4:14 pm, 3 June 2010, Thursday (16 days ago) (UTC−7)

Well, as we have established at great length, PAS is not that simple (on either side, x86 or x64). It's taken many hours to come up with text and references that properly describe the x64 PAS situation; it is now mentioned prominently in the lede and explained, relatively completely I think, in the body. It has also taken easily tens of hours of writing in the talk page, often answering the same points from you over and over again, to convince you that yes, the new text was correct and the references did back it up.
Now you're complaining that the coverage of PAS is too much?
Point 1 - re relative amounts of explanation: The 4 GB limit of VAS for x86 really only requires one sentence. I'm not sure how it could be "explained" any more "clearly". It's a 4 GB limit, period, full stop. It just isn't that complicated, and no more details (the origin of the limit, etc.) are necessary. Particularly as this article is not about x86, it is about x64. Nor are there, as far as I can tell, rampant misunderstandings about that limit, the way there are for x86's PAS limit.
Note that on OSs like Windows that use a single VAS map for both user mode and kernel mode, it is a little more complicated in that each process only gets to use part of the 4 GB, the rest being cross-process shared space used by the OS... should we explain that too, in the discussion of processor architecture? Of course not, that's an attribute of that operating system, not part of the processor architecture.
On the other hand the limit on PAS for x86 is not so uncomplicated. It depends on whether you're using PAE or not, and furthermore is subject to wide misunderstanding (as demonstrated by references you provided), and therefore needs more explanation (and, as demonstrated by how long it took to convince you, it needs explicit disabuse of preconceived notions). It would be grossly incorrect, would it not, to simply state that x86 is limited to 4 GB PAS? A more complex set of facts deserves more explanation.
Point 2: "the virtual address space limit of 4 GB per process is not explained clearly" - I tell you three times: Anything regarding "processes" is an operating system-specific detail. The CPUs have no concept of "processes." Any such explanation therefore does not belong in a discussion of the processor architecture.
The relationship of the 4 GB VAS per-process limit to usable RAM is also an OS-specific detail. And it's an oversimplification. Even if a process has defined the entire 4 GB VAS that does not mean it will be using 4 GB of RAM. But the collection of processes, plus the operating system, in an x86 system can most certainly use more than 4 GB RAM! If that were not the case there would not be Windows Server x86 editions supporting up to 64 GB RAM.
"4 GB RAM" is not even a true limit for a single process on a 32-bit OS. It is perfectly possible for an x86 OS to assign more than 4 GB RAM to a process. It won't all be addressable at once, of course, due to the VAS limit, but it can be assigned. It's possible in Windows, in fact, although it does require special application coding to do it (AWE). For another example, an x86 OS could designate RAM to be used for file buffers for a particular process, without mapping that RAM into the process's virtual address space. This again would be an OS-dependent function. That a user's OS of choice does not do this is not a limitation of the processor architecture.
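The windowing idea described above - a process whose small virtual window can be remapped over a much larger pool of physical pages, as AWE allows on Windows - can be sketched as a toy model. To be clear, this is NOT the Windows AWE API (that involves calls such as AllocateUserPhysicalPages), just an illustration of the bank-switching principle:

```python
# Toy model of windowed physical memory access: a small "virtual window"
# of slots that can be remapped over a larger pool of physical pages.
class WindowedMemory:
    def __init__(self, physical_pages, window_pages):
        # Pool of physical pages, larger than the window can see at once.
        self.physical = [bytearray(4096) for _ in range(physical_pages)]
        # window[slot] -> index of the physical page currently mapped there.
        self.window = list(range(window_pages))

    def map(self, slot, physical_page):
        """Remap one window slot onto a different physical page."""
        self.window[slot] = physical_page

    def read(self, slot, offset):
        return self.physical[self.window[slot]][offset]

    def write(self, slot, offset, value):
        self.physical[self.window[slot]][offset] = value

# A "process" with only a 2-page window can still use any of 8 physical pages:
mem = WindowedMemory(physical_pages=8, window_pages=2)
mem.write(0, 0, 42)   # write through slot 0 (currently mapped to page 0)
mem.map(0, 7)         # remap slot 0 onto physical page 7
mem.write(0, 0, 99)   # this lands in page 7, not page 0
mem.map(0, 0)         # map page 0 back in; its data survived the remapping
print(mem.read(0, 0)) # 42
```

The point of the model is the same as Jeh's: the virtual window (2 pages here) limits what is addressable at any one instant, not how much physical memory the process can be assigned in total.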
Really this is just another example of the more general point that it is incorrect to think of VAS and PAS as being as directly related as you seem to think they are. And it is misleading to connect the two. Describing these two limits as if they are related only contributes to confusion.
Finally I have to mention that the vast majority of apps people are running on x64 operating systems are still 32-bit apps... and on Windows they do not have the "large address aware" flag set... so they will be limited to 4 GB VAS (or 2 GB VAS on Windows) anyway! Haw!
Point 3: "the following two statements are unreferenced and repeat what earlier referenced statements had said" - if it's repeating a referenced statement then it should not be deleted for being "unreferenced". At that point it is not a claim that is controversial or likely to be challenged, so it doesn't require a ref citation... but if you insist that yet more superscripts will make the article more useful it is certainly easy to add the ref tags.
Point 4: "The average person who reads this article could easily come away from it thinking that they don't need x86-64 in regards to RAM since current 32-bit CPUs support 64 GB of RAM while never knowing that there is a 4 GB per process limit due to the 32-bit virtual address space." It is not generally the function of WP articles to advise people on what they need to buy; it is fine if that happens as a side effect, but that they do not inform of every last factor for a buying decision in every article section is not a valid criterion for criticism, particularly when such factors are dependent on things not directly the subject of the article (e.g. operating systems).
I say again: PAS and VAS are different. The "Physical address details" section you are focusing on here is describing PAS, and discussions of VAS would be irrelevant therein, and even misleading by association.
And the expansion of VAS on x64 and comparison with x86 is expounded upon in the "operating system compatibility" section, particularly so for Windows. Did you just not read that far? Wait, I'm pretty sure you did; After all, you found a reference for one of the points re. VAS in that section. Although since I'm feeling in a particularly nitpicky mood tonight I'm going to correct something there in a moment.
So... to sum up: I don't think your criticism is valid. One, the article does already address the per-process virtual address limits of x86, in much more detail than what you quoted above! Just not in the "Physical address details" section, nor in the "architectural features" section, where they do not belong. Two, that there is more said about PAS than VAS is not due to "bias". It's simply that the situation with PAS is more complicated, and is the subject of considerable misunderstanding, therefore merits more explanation. Jeh (talk) 10:01, 19 June 2010 (UTC)
I understand that you want to make it very clear to people that the physical address space (PAS) of the x86-64 isn't needed until you go over 64 GB because of the physical address extension (PAE) but you get to the point of hammering that point home with unreferenced and negative statements. On the other hand you have removed, and threaten to remove, any information regarding the 4 GB per process limit of the virtual address space (VAS) simply because you don't think it should be mentioned outside of the operating system section. Why then are you okay with multiple statements about the operating system limits in the PAS details section while fighting so hard against having even one sentence about the 4 GB per process limit of 32-bit applications in the VAS details section? --GrandDrake (talk) 01:17, 20 June 2010 (UTC)
I repeat - since you seem to have ignored this point above - that stuff is only there in the PAS section because you insisted that the PAS changes be more completely described (and I won't tolerate an incomplete presentation). I am trying to resist further pollution of the architecture discussion with OS-specific details. I'd be very happy to move the OS-specific PAS details down to the OS-specific sections. Jeh (talk) 10:57, 20 June 2010 (UTC)
I notice you changed your earlier post, and note that Wikipedia talk page guidelines recommend telling the other person in the discussion about that and using insertion markup to show that the comment was altered. I don't care when it comes to fixing grammar but you did a bit more than that. Also I am not really sure why you did that except for adding a quote I made from an old discussion (which was part of an even earlier discussion that you made into a new section), making some of your statements sound more certain, and telling me that the PAS and VAS are not related. Once again, considering you are okay with multiple statements about the operating system limits in the PAS details section, why are you against having even one sentence about the 4 GB per process limit of 32-bit applications in the VAS details section? --GrandDrake (talk) 05:11, 20 June 2010 (UTC)
Re the edit, I had started the edit before you commented, and you apparently finished your edit long before I did mine, so I wasn't aware you had read it as it was. You have just said that it was a minor change (not "substantially altering", which is a qualification on the guideline to which you refer (WP:REDACT), and you obviously looked at the history and found the changes... so what is the problem?
Back to the real topic at hand - your new text, while well-referenced, absolutely does not belong where you put it. It's like a very large speed bump. There is clearly no consensus for that change and you have not even attempted to address my points above, instead you just went ahead and did what you wanted. Well WP does of course support that, as in WP:BOLD; but it also supports my subsequent reversion, as in WP:BRD. That's "bold, revert, discuss" - so I expect further discussion here as the next step. To state a question that was implied above, and really should not have to be stated again: Do you have some substantive replies to my points above? Or should I go ahead and move the PAS details down to the OS-specific sections where they belong?
Just by the way, MacOS supports 4 GB per-process address space on x86 too. It can do this because they use a completely separate process address space for the OS, rather than having the app and the OS share an address space. Jeh (talk) 10:57, 20 June 2010 (UTC)
I wouldn't consider that a minor edit (if I did I wouldn't have mentioned it) and what I didn't understand was whether you were trying to reply to my new post with that edit. I now know that it wasn't a response and that it was done accidentally. As for why editing a past post is a problem after someone has responded to it: even when the changes are noticed, it is harder to respond to the new information when an old post is edited than when a new post is made.
As for moving the operating system details into the operating system section I have done that. You mention that you believe that I have not given "substantive replies" to your points and since you want a more detailed explanation I will give one.
Since you didn't have a problem with having an entire paragraph on PAE and several paragraphs on Windows in the PAS details section I was curious why there was no mention made about the 32-bit application limit in the VAS details section. Your response is that no operating system information belongs in those sections since it is unrelated to the architecture and that is why you were keeping out any mention of the VAS limit with 32-bit applications. That though doesn't explain why you repeated the information about PAE multiple times when that information was already given in the architectural features section, where it clearly states that PAE allows for 64 GB of RAM with x86. As such there was no need to repeat that information twice in the physical address space details section. Also you added a somewhat noticeable point of view. After all there isn't a sentence stating that you would need AMD64 if you wanted an application to address 5 GB of virtual address space without using special coding or a sentence stating that you would need AMD64 if you wanted to physically address 96 GB of RAM. In other words by going beyond stating the facts you added a point of view. --GrandDrake (talk) 02:40, 21 June 2010 (UTC)
On further thought I think the whole issue of "x86 can address up to 64 GB via PAE, but Windows client OSs won't let you go beyond 4 GB" is too x86-specific for this article entirely. And it did come across as sounding something like "yeah, but this isn't really the improvement you think it is." I had first moved it to the end of the Windows section (really, you shouldn't just move text into a section without considering how it flows with the preceding and following paragraphs) but then I thought the article is fine without it. (And this thing is getting pretty big, so removing nonessential material is good for that reason too.) I did add some more non-OS specific PAS issues, specifically motherboard limits, to the PAS details section. Now I admit that product lists from one manufacturer's web pages are not particularly compelling, but all that can be done there is to give more examples. It is absolutely true that no standard motherboard products support 256 TB; this is due to the combination of the number of DIMMs (actually the number of "ranks") that can be put on a memory channel and the DIMM standard configurations that have so far been published by JEDEC. I haven't found one that even goes to 1 TB - though Microsoft at least must have one as they do support 2 TB RAM on the highest Server product and they won't make that claim unless they've tested it. Anyway, that bit would have sounded a little pejorative, but I added a note that even the x64 consumer level motherboards, while not supporting even 64 GB RAM, nevertheless handle a lot more RAM than x86 consumer mobos did - that should ameliorate the "yeah-but" tone. Jeh (talk) 07:09, 21 June 2010 (UTC)
If the physical address space details section is about x86-64 then it would make sense to use the limit of current x86-64 CPUs as the point of comparison. Other points of comparison that involved x86 or PAE would simply involve limits that aren't directly related to x86-64. --GrandDrake (talk) 03:25, 23 June 2010 (UTC)
As I left it, nothing there was using x86 or PAE as a point of comparison, except "this is a large increase over the limit imposed by x86 processors". Are you now objecting to that as well? You said previously

I have added the fact that it is possible for a computer to use more than 4 Gigabytes of RAM with a x86-64 CPU compared to a x86-32 CPU. This is one of the most commonly cited advantages I have seen in articles I have read on 64-bit CPUs and 64-bit operating systems but for some reason this was not mentioned in the lead section of the article.

--GrandDrake (talk) 21:27, 11 April 2010 (UTC)

Are you now saying that you don't want such comparison made? x64 is an extension of and market replacement for x86 - comparisons and discussions of the improvements are entirely appropriate here. Jeh (talk) 07:32, 23 June 2010 (UTC)
If PAE wasn't being used as a point of comparison why then did one sentence state that "most consumer-level motherboards for x86-64 processors are limited to considerably less than 64 GB"? That number is the PAE limit, so why wasn't "32 GB" or "48 GB" used instead? Why even have a second point of comparison when it has already been noted that no motherboard supports 256 TB which is the current limit for x86-64 CPUs? On the issue of comparisons you have been opposed to including any comparison with the 4 GB limit of 32-bit VAS in the VAS details section. Less than a week ago you said that "The 4 GB limit of VAS for x86 really only requires one sentence" and "Particularly as this article is not about x86, it is about x64". If there is no need for a comparison with the 4 GB limit of 32-bit VAS in the VAS details section then logically there is no need to have a comparison for the 36-bit PAE limit in the PAS details section. --GrandDrake (talk) 03:51, 24 June 2010 (UTC)
I think the comparison is justified in the "Larger physical address space" section. Without the comparison there is no basis to know how much larger it is over the preceding architecture. HumphreyW (talk) 04:15, 24 June 2010 (UTC)
Why though is a comparison with the preceding architecture justified in the PAS details section but not the VAS details section? The limits for both the 32-bit VAS and 36-bit PAS are already noted in the architectural features section. --GrandDrake (talk) 05:19, 24 June 2010 (UTC)
The PAS details section included the comparison with the x86 PAS limit because that section is describing a clarification of the x64 PAS limit: "It's theoretically much larger than x86, but depending on the rest of your system it probably isn't as large as the processor alone would suggest."
Now if that comparison is to be made at all, well, 64 GB is the maximum PAS for x86 (it is clearly wrong to say that the x86 limit is 4 GB, as if PAE did not exist, and there were x86 motherboards that supported 64 GB RAM, just not in the consumer space), so that is the proper point of comparison. The fact that it requires "PAE mode" to get there is irrelevant. (But as I said above, I did decide that the fact that Windows x86 client OSs don't let you use that much is really not relevant here as it is too much x86- and Windows-specific.)
However, the comparison with the x86 PAS limit is not essential to that point, so I've deleted it. The point about even smaller maximum RAM configs with current non-server-oriented motherboards will return, however, as soon as I have gathered more examples.
There is no such clarification point made in the VAS details section (nor need there be; the 48-bit VAS is just as large as the processor implements), so a comparison with x86 VAS would be off topic there. Adding it into the VAS details section just to achieve some sort of "parity" or "lack of (imagined) bias" between the two sections would be disruptive. (Reader: "Why is this article telling me this, again? And how is it related to the implementation of 'canonical addresses', which is the point of this section?")
There is simply no need for parity between these two sections. They are describing very different things and the points that need to be covered in those sections are not necessarily parallel or analogous. So "these two sections do not contain parallel points" is simply not a valid point of criticism, and your notion of "bias" isn't even wrong. It's like saying that someone is "biased" in favor of purple vs. wind. As I've tried to explain over and over, even though they might seem similar, VAS and PAS are fundamentally different.
What I am strictly opposed to, by the way, is any mention in the "VAS details" section of VAS limits being per-process limits. Because, as I have stated many times, the very concept of a "process" is an operating system-specific concept. The processor does not implement any such concept as a "process." If there is a valid reason to make the comparison of VAS on x64 vs. x86 in the VAS details section, fine (just don't mention "processes"). But "Because the PAS details section makes the analogous comparison" is not a valid reason. And the PAS details section doesn't make the analogous comparison now, so even if that had been a valid reason, it no longer exists. Jeh (talk) 08:44, 24 June 2010 (UTC)

The 4 GB VAS limit for x86-64 legacy mode is not mentioned in the article

The 4 GB VAS limit for the x86-64 legacy mode is not mentioned in the article. The AMD architecture specs is very clear that the VAS in legacy mode is limited to 32-bits. Since legacy mode is a part of the x86-64 architecture I think the 4 GB VAS limit in legacy mode should be mentioned at least once in the article. After consideration I agree that it would make the most sense to put that information in the legacy mode section. Are there any objections to including this information in the legacy mode section of the article? --GrandDrake (talk) 08:46, 24 June 2010 (UTC)

None at all from here. And the "legacy mode" section is the right place for it. Just don't say "per process". Jeh (talk) 08:50, 24 June 2010 (UTC)
No objection from me. If things are in the right place then of course. HumphreyW (talk) 09:01, 24 June 2010 (UTC)
GrandDrake, please do not make non-edit edits to the article page just to point people to the discussion page. The edit history is cluttered enough as it is. When you add an article page to your watchlist the corresponding talk page gets added too, so anyone who has the article on watch will see the talk page update with no further help... Unless of course they've changed that option or explicitly removed the talk page from their WL, in which case it's pretty clear that they don't care about the talk page anyway... Jeh (talk) 09:21, 24 June 2010 (UTC)
I have added the information to the legacy mode section. Jeh, when it comes to a dummy edit in my experience it is the best way to tell people that a section in the discussion page has been made on an issue that is under dispute and do you know of any Wikipedia policy that states that it should not be used in that way? --GrandDrake (talk) 05:48, 25 June 2010 (UTC)
No, but not every less-than-ideal practice is stated as such in a WP policy or guideline. Personally I find it annoying to look at a diff on the article, only to find out that there's no diff. Do you have any reason to believe that simply editing the talk page is insufficient? That anyone interested in an article you were working on, or this article in particular, was not following the article's talk page? Do you not agree that non-edit edits in the article history simply add clutter? If not we will just disagree on this point, I do not intend to discuss it further. Jeh (talk) 07:07, 25 June 2010 (UTC)
I believe that only editing the talk page of an article can be insufficient since it makes the assumption that the other people have the talk page on their watchlist, or that they periodically check it, which could make a dispute worse. In fact there have been incidents where I have seen that happen. As such I prefer to be cautious on this issue. --GrandDrake (talk) 00:19, 26 June 2010 (UTC)

I have an issue with the statement about AMD losing the rights to manufacture x86 architecture chips. The original x86 patents are definitely expired now. Anyone can manufacture x86 compatible chips now. I haven't seen the agreement between AMD and Intel. That would be controlling, but I doubt the contract can enforce rights on expired patents.--Celtic hackr (talk) 17:19, 28 November 2010 (UTC)

Implementation-relative registers should not be used to compare different architectures

"However, AMD64 still has fewer registers than many common RISC ISAs (which typically have 32–64 registers) or VLIW-like machines such as the IA-64 (which has 128 registers); note, however, that because of register renaming the number of physical registers is often much larger than the number of registers exposed by the instruction set."

Processors such as the Atom have no out-of-order execution engine, so they have no internal renaming registers at all. Meanwhile, even though the Itanium architecture has 128 general registers, implementations could contain many more registers than this without exposing them to the user interface, similar to SPARC implementations with the register-windowing mechanism used to access them. So the actual number of physical registers differs among implementations, and should not be cited for a specific architecture.

On the other hand, even though processors such as the Intel Core 2 potentially have many more physical registers to accelerate the internal out-of-order pipeline engine, the lack of a large set of architecture-level registers means most compute operations are achieved by accessing system memory. Meanwhile, Itanium has a huge set of architecture-level registers, so some operations can be completed without touching external memory. The implementation registers cannot change this semantic behaviour. — Preceding unsigned comment added by 221.9.20.7 (talk) 22:10, 11 June 2011 (UTC)

x87 support dropped?

Is the x87 instruction set still supported in native 64-bit long mode? Or not? The question arises because x64's native SSE2 floating-point instructions can only handle 8-byte floating point. x87 could have more precision, even if with less speed. But is it possible at all? It seems that the Delphi XE2 Win64 compiler does not emit x87 code. Is it impossible due to hardware, or did they just decide to simplify things for themselves? 79.111.218.128 (talk) 20:09, 30 September 2011 (UTC)

Address space limits under x64 (again)

Unregistered editor 83.108.118.234 (talk) edited the Linux section of the article as shown here, twice with the notably uncivil comment

Pretty sure the alledged [sic] 64-TB limit is bullshit.. afaik the linux64 kernel can handle close to 17 billion gigabytes of ram (aka ~16600000 TB))

If you're using "decimal billion" consistently that number would actually be 18 billion billion, that is, about 18.4×10^18 bytes, i.e. 2^64.

However no operating system on x64 can support that much RAM (or that much virtual address space either), because the processors don't support it. Under the current architecture definitions these CPUs can provide a maximum of 256 TiB (2^48 bytes) of virtual address space, and 4 PiB (2^52 bytes) of physical address space (RAM); current implementations are further limited to 256 TiB of RAM (48 bits of physical address). That's all the bits that are implemented.

This is well described, with good references to the AMD docs, in the relevant sections of the article ("Virtual address space details" and "Physical address space details"). There is also a very lengthy discussion of why this is so in the talk page archive here, section 33. (In response to someone else who didn't believe it.)

It is possible of course that a future change to the architecture could allow more than 48 bits of virtual address, or more than 52 bits of physical. But in the meantime, any statement regarding any OS running on x64 that claims implementation of more than 256 tebibytes of virtual address space, or of more than 4 pebibytes of physical address space (RAM), is simply wrong. It just isn't possible.

I'm removing the "dubious" template but leaving the "citation needed". Even though the IP's notion that a full 64-bit address space (either one) is supported by Linux64 simply cannot be true, a citation would still be nice for the limit that is claimed there. Jeh (talk) 03:40, 3 October 2011 (UTC)

The Wikipedia article currently states in the "Architectural features" section that "The architecture definition allows this limit to be raised in future implementations to the full 64 bits, extending the virtual address space to 16 EB (2^64 bytes)." Do you believe that this sentence should be changed? There are multiple statements made in the AMD64 Architecture Programmer's Manual (May 2011 revision) which state that the maximum limit for the virtual address space will be 64 bits. The statements can be found on pages 2, 3, 4, 13, 117, 120, and 130. A 64-bit virtual address space isn't currently possible with AMD64 since the translation mechanism has not yet been defined, but AMD states that the maximum limit for the virtual address space will be 64 bits. --GrandDrake (talk) 22:19, 29 October 2011 (UTC)
The article says

The AMD64 architecture defines a 64-bit virtual address format, of which the low-order 48 bits are used in current implementations.[1](p120) This allows up to 256 TB (2^48 bytes) of virtual address space. The architecture definition allows this limit to be raised in future implementations to the full 64 bits...

and the AMD doc you've cited also says the maximum limit will be 64 bits. Perhaps I'm slow this afternoon but I'm not seeing a conflict. Jeh (talk) 23:04, 29 October 2011 (UTC)
More specifically... the doc on page 120 says

The AMD64 architecture enhances the legacy translation support by allowing virtual addresses of up to 64 bits long to be translated into physical addresses of up to 52 bits long. Currently, the AMD64 architecture defines a mechanism for translating 48-bit virtual addresses to 52-bit physical addresses. The mechanism used to translate a full 64-bit virtual address is reserved and will be described in a future AMD64 architectural specification.

Again, I'm just not seeing a conflict with the article as it now stands.
By the way... be aware, when reading the AMD doc, of the distinction between the size of an address and the number of bits that can actually be translated (which in turn limits the maximum size of virtual address space that can actually be used). For example, on page 130, the doc you cited says

The PAE-paging data structures support mapping of 64-bit virtual addresses into 52-bit physical addresses.

That means that virtual addresses are 64 bits wide, i.e. it takes 64 bits to store a virtual address, and the result of an LEA instruction in long mode will change all 64 bits of the destination. This is true even on today's implementations. But (as it says on page 120) not all 64 bits are actually translated. Since only 48 bits of the virtual address are meaningful (bits 48 through 63, being copies of bit 47, convey no additional information) that means that 2^48 different virtual addresses can be used. If the statement on page 120 isn't enough, the address translation diagrams confirm it. The high 16 bits are checked to see that they're equal to bit 47, but that's all that happens with them... today. Until AMD comes out with a chip that translates more bits of virtual address, it seems to me that the article is correct as written.
How would you suggest changing the sentence you cited? Jeh (talk) 23:22, 29 October 2011 (UTC)
I believe the sentence I quoted from the Wikipedia article is accurate but based on your initial post I didn't know at that time whether you believed that the sentence was accurate. --GrandDrake (talk) 00:28, 30 October 2011 (UTC)
I still don't see why... but that's ok. Jeh (talk) 05:12, 30 October 2011 (UTC)
In your initial post when you made the statement comparing the memory limits between the "current architecture definitions" and the "current implementations" at the time I didn't know what you meant by "current architecture definitions". --GrandDrake (talk) 06:24, 30 October 2011 (UTC)
From the AMD doc (again): Currently, the AMD64 architecture defines a mechanism for translating 48-bit virtual addresses to 52-bit physical addresses. Sounds consistent with "current architecture definitions" to me. "Implementations" means actual chips you can buy and these do translate 48 bit addresses, so no conflict there either.
The difference between the current architecture spec and curent implementations is in the physical address side (52 vs. 48 bits), not virtual. The article's statement to that effect is not in conflict with anything quoted here from AMD. Nothing in the AMD spec indicates that physical address size can be increased beyond 52 bits.
btw, until AMD tells us exactly how wider virtual addresses are going to work, all the article can say is that it's been promised or claimed as a future revision (and obviously no OS developer can claim "support", because they don't know how to code for it). Claims of future product plans, even if they're in a WP:RS, fall into WP:CRYSTAL territory unless carefully worded to indicate that these are merely statements of future intent. Jeh (talk) 12:36, 30 October 2011 (UTC)
I was simply giving an explanation for why I made a post showing that the maximum limit for the virtual address space will be 64 bits for AMD64 and I understood what you meant by "current architecture definitions" when you made your second post. To be clear this post and my last two posts were only meant to be explanations for why I made my first post. --GrandDrake (talk) 20:44, 30 October 2011 (UTC)
Well, I think the article's current wording, "The architecture definition allows this limit to be raised in future implementations to the full 64 bits", is just fine on this point. It is well supported by the AMD doc yet does not state with any certainty that the limit will be raised in the future, only that it can be. Jeh (talk) 21:53, 30 October 2011 (UTC)
I agree with that and from what I have read the sentence from the article is accurate. --GrandDrake (talk) 00:25, 31 October 2011 (UTC)

x32 ABI

Is there an article about x32 ABI?--78.49.64.83 (talk) 01:05, 1 November 2011 (UTC)

I created a short article on x32 ABI. --GrandDrake (talk) 01:59, 2 November 2011 (UTC)

Which one is used more often: x64 or x86-64?

Frankly, I have never seen x86-64 outside Wikipedia. All software vendors seem to like to use x64. But how about verifiable facts? Which one is used more often? x64 or x86-64? Fleet Command (talk) 12:28, 21 June 2011 (UTC)

Mind you, I tried to find out using search engines. Bing yielded 415,000 results for "x64" but 9,580,000 results for "x86-64" (23x). But Google yielded 20,700,000 search results for "x86-64" while it found 184,000,000 for "x64" (8.88x). Conflicting, isn't it? Fleet Command (talk) 12:39, 21 June 2011 (UTC)
And AMD now calls it "AMD64" and Intel uses "Intel64".
I think this article was originally "x64" but we went through a vote to rename it as "x86-64" was AMD's original name. Indeed there are a lot of references to "x64" in Google, but we tried to find someplace that said that someone who actually owned the IP in question had declared that "x64" was an "official" name, and failed. Since we could find references that AMD (who invented it) had originally called it "x86-64", but had since dropped that name, that seemed like the best vendor-neutral answer that was nevertheless verifiable to the inventor's literature. (At least, that's how I remember the discussion... it sounds good to me now! You can find it in the archives if you really want to know.)
You could always propose a move if you felt strongly about it. Personally I don't think it matters, since x64 redirects here anyway.
Btw, Intel doesn't officially call their 32-bit architecture x86 either, to them it's "IA32"... I'm not even sure what AMD calls their version. Jeh (talk) 23:53, 29 October 2011 (UTC)
x86-64 is used heavily in the Unix world; actually, nearly everywhere, where Microsoft, Windows and consumer-level products are not the topic. It is the most correct industry term. — Preceding unsigned comment added by 95.220.134.217 (talk) 19:32, 10 March 2012 (UTC)

Why mention IA-64?

Is there a strong reason for mentioning that x86-64 and IA-64 are not the same thing? If we do mention that, then we should also mention that x86-64 likewise does not relate to Alpha, MIPS64, POWER/PowerPC 64-bit, and other important 64-bit architectures. It is either all or none.

Note: I saw someone revert my edit with a comment regarding that Windows can run both on x86-64 and IA-64. So what? Linux does too, and on all of the architectures mentioned above. The world doesn't revolve around Windows, so we won't make useless remarks just because of it and generic consumer ignorance. The second point of the aforementioned person was that IA-64 and Intel 64 are both made by Intel. It could be somewhat relevant if this article were about Intel 64, but it isn't; it is about x86-64 in general — an ISA subset common to both AMD64 and Intel 64, i.e. a vendor-neutral specification.

We already have a line by the header, which reads: "[...] For the Intel 64-bit architecture called IA-64, see Itanium." — That is quite enough. 95.220.134.217 (talk) 19:35, 10 March 2012 (UTC)

Well, I think there is far more potential confusion with IA64 than with the others you mentioned - for the reasons I mentioned.
Your request for a "strong reason" to retain long-standing content is unreasonable. Long-standing content is assumed to be supported by consensus. Now of course consensus can change, but your disagreeing with the previous consensus is not evidence that it has. Nor is your disagreement with my reasons for supporting the long-standing text. You should give good reasons for making your change and your reasons must achieve general acceptance. They have not. That you think you have good strong reasons for your change is not sufficient.
Also btw, your re-reverting is not supported by guidelines. See for example WP:REVERTING: If you make a change which is good-faith reverted, do not simply reinstate your edit - leave the status quo up. If there is a dispute, the status quo reigns until a consensus is established to make a change. See also WP:BRD. Jeh (talk) 10:37, 11 March 2012 (UTC)

Changed a section name to make it more objective and removed some speculation and unrelated information

To make it more objective I changed the title of the section "Legal issues of the AMD-Intel duopoly" to simply "Licensing issues". I also removed some speculation and unrelated information. --GrandDrake (talk) 06:16, 24 March 2012 (UTC)

Windows server 200X memory

Recent edits have mentioned the maximum RAM for Windows Server 2008 and 2012. It looks to me, based upon this:

http://blogs.technet.com/b/schadinio/archive/2011/02/23/windows-server-2008-2008-r2-max-memory.aspx

...that there is no one number we can put in the article. So, should we try to show the range of numbers, or just not mention Windows server 200X memory? --Guy Macon (talk) 17:29, 2 June 2012 (UTC)

This article is way too confusing for a non technical person like myself (also re. "Too Technical" tag)

Just saying. 136.169.53.84 (talk · contribs) 01:15, 23 September 2012 (UTC)

Which word didn't you understand? ... just asking.
Seriously, a comment like that is not really helpful to anyone who is looking to improve the article. Exactly what confused you? Was it a particular section? Were you maybe looking for a more general treatment of 64-bit processors rather than details of the x64 architecture? Do you think parts of the article contradict each other? If so, which parts? ...etc. Jeh (talk) 02:10, 23 September 2012 (UTC)
And I have the same set of questions for the person who added the "Technical" tag without starting a discussion here. It's a technical subject and as a SME I don't see how much content here could be removed or simplified without damaging the coverage. It is possible that some more introductory material could be added but in my opinion that stuff is pretty well covered in the 64-bit article. Similarly, where this article discusses specifics of how x86-64 implements certain aspects of a 64-bit CPU, the general concepts underneath those specifics (like registers, address space, etc.) are well described in other articles WL'd from here. So I have to ask... what specifically are the problems you see? If it's just "I don't understand it," I think the problem is likely that this is not the right place for you to begin your study of this topic. Jeh (talk) 03:30, 23 September 2012 (UTC)
It's a technical subject, true, but the article provides no explanation of what these technical terms mean. For example: "The AMD K8 core was the first to implement the architecture" Awesome, but what is a core? The article doesn't explain this, and neither does the AMD K8 article. "x86-64 also provides 64-bit general purpose registers and numerous other enhancements." Sweet! But what is a general purpose register? What is a register at all? You can click the first wikilink on the page and then click on Instruction set to learn what architecture means, but that isn't readily apparent in this article, and nowhere does the article link to that page or try to explain in any way what an architecture is. It's not unreasonable to assume someone doesn't know what architecture means in that regard, yet no explanation is given. The "Architectural features" section is very technical; most people aren't going to have the slightest clue what "the number of 128-bit XMM registers (used for Streaming SIMD instructions) is also increased from 8 to 16" means. I'm not saying that means we should just remove anything "difficult to understand", but why is that a "feature"? Why is increasing the number of registers a good thing? Explaining that doesn't take anything away from the article, and gives a wider audience the ability to understand the subject. If the article is too technical to understand, that is the article's problem, not the reader's. - SudoGhost 03:59, 23 September 2012 (UTC)
"But what is a general purpose register?" An item described as such by the processor register page, to which general purpose register redirects; I've made "general purpose register" in the lede link there. (Obviously, this article should not itself describe what a general-purpose register is, or even what a register is, any more than the x86 page or the System/360 page or the SPARC page or the Motorola 68k page or the page for any other instruction set - that's the job of the processor register page.) Other technical terms that are not x86-specific or x86-64-specific should be handled in the same fashion. Guy Harris (talk) 07:21, 23 September 2012 (UTC)

Confusing beginning - about 32-bit or 64-bit

Hi, I just came across this page from a link on the Windows Server 2008 page, since I wasn't sure what the different abbreviations were for the architectures under Editions and System requirements. I thought x86 was always about 32-bit and x64 was 64-bit. This is how Microsoft labels their downloads, at least. When I got to this page (in English) I got confused just looking at the beginning, where it says "x86-64 is an extension of the IA-32 32-bit version". An extension can be many things, so I got confused, but when reading another language (Danish) I figured out that it actually is 64-bit. So my suggestion is to change the intro a bit and start by saying "this is 64-bit" and then go on. What do you think about that? If you agree, then just update it. I don't have a suggestion right now about how to write it. /PatrikN (talk) 23:05, 15 November 2012 (UTC)

I've changed the first sentence to mention early on that x86-64 is 64-bit. Guy Harris (talk) 00:02, 16 November 2012 (UTC)
Thanks Guy for updating. Now it's clear ;-) This talk can now be archived. /PatrikN (talk) 00:06, 16 November 2012 (UTC)

Added information on video game consoles that use x86-64

Added information on video game consoles that use x86-64. I was able to find information on how RAM is reserved for the Xbox One but was not able to find that information for the PlayStation 4. --GrandDrake (talk) 19:53, 3 June 2013 (UTC)

Windows on x86-64 (x64) populating 256 TB?

No. The reference given shows 128 TB usable for kernel space under Windows 8.1, but still only 8 TB in user space. The 128 GB in user space is only for Itanium versions. The needed phraseology is not straightforward because this is nevertheless using all 48 bits.

Perhaps a good wording would be this: Windows, prior to Windows 8.1, only populated 16 TB of the virtual address space. Jeh (talk) 17:26, 20 January 2014 (UTC)

So... looking at the ref'd article again... I think it is saying that we can use 128 TB in user space on x64! But I still prefer the wording I suggested above. For one thing, it is unquestionably backed by the reference (as well as others, like Windows Internals). For another, it won't need to be changed in the future. Finally, it is more relevant to the point in time when AMD made the decision to make canonical addresses only 48 bits. Jeh (talk) 17:38, 20 January 2014 (UTC)
If I'm following this right, the total is reached by adding "User-mode virtual address space for each 64-bit process" to "Kernel-mode virtual address space", right? My concern is that the source says "for each 64-bit process" which indicates each process hypothetically gets its own 8TB. - Josh (talk | contribs) 18:14, 20 January 2014 (UTC)
Whatever the size of user space is, yes, each process gets its own. If you consider all of the processes at once, then yes, the total VAS might be much larger than 8 TB (if that is the limit)... assuming of course that there is RAM+backing store to put it all! But only one process's VAS can be mapped at a time—that is, only one can be addressable at a time. They all use the same user space addresses as each other. So the total virtual address space addressable at any moment (on any one logical processor) is that of one process's user mode address space plus the kernel mode address space. x64's 48-bit canonical addresses limit that total to 256 TB no matter how many processes Windows defines. Prior to 8.1, Windows limited the total to 16 TB, for reasons well described here. (And this is why Windows 8.1 won't run on x64 CPUs that don't have that instruction.) Jeh (talk) 18:52, 20 January 2014 (UTC)
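To make the "canonical address" rule above concrete: with 48-bit canonical addresses, bits 63 through 47 must all equal bit 47 (i.e. the 48-bit value is sign-extended to 64 bits), which yields two usable halves totaling 2^48 bytes = 256 TB. A minimal sketch of that check (the function name and structure are mine, not anything from the AMD spec):

```python
def is_canonical_48(addr: int) -> bool:
    """True if addr is a canonical 48-bit x86-64 virtual address.

    Canonical means bits 63..47 are all copies of bit 47
    (sign extension of the 48-bit address)."""
    top = addr >> 47                      # the 17 bits 63..47
    return top == 0 or top == (1 << 17) - 1

# The two canonical halves together span 2**48 bytes = 256 TB,
# the total Jeh cites above.
assert 2 ** 48 == 256 * 2 ** 40
```

For example, 0x00007FFFFFFFFFFF (top of the lower half) and 0xFFFF800000000000 (bottom of the upper half) are canonical, while 0x0000800000000000 falls in the noncanonical hole between them.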
Hi. Virtual machines in NT 6.2 and later can have their own user spaces, right? (That's what I get from the Windows Server 2012 article. Am I right?) Best regards, Codename Lisa (talk) 04:34, 21 January 2014 (UTC)
Every virtual machine gets its own user space and its own kernel space. But these don't add to the virtual address space that's available or visible or addressable in the host (or "real") system. Jeh (talk) 08:06, 21 January 2014 (UTC)
Well, Hyper-V doesn't run in user space, does it? Best regards, Codename Lisa (talk) 13:29, 21 January 2014 (UTC)
Hyper-V runs in what is whimsically called "ring negative one" in the host. (Whimsically because there is not actually a value of "minus one" in a privilege level field anywhere.) It implements both kernel and user virtual address space in each virtual machine. Jeh (talk) 19:52, 21 January 2014 (UTC)
Yes, it is rather a whimsical way of saying "it is a bare-metal hypervisor that runs beneath the supervisor." But if the matter is merely licensing, even running Hyper-V wouldn't make it a matter of more memory per CPU; it would make it more memory per CPU per license. So, I guess there is nothing to be done. Best regards, Codename Lisa (talk) 08:48, 22 January 2014 (UTC)
While we're here... I disagree with your revert of my edit. That Windows eventually got around to allowing use of the 256 TB address space in 2013 is not particularly relevant to a design decision that was made in 1999 or 2000. Jeh (talk) 08:06, 21 January 2014 (UTC)
Er... you don't? Well, I think it is important to indicate that the first version of Windows to support the entire limited 48-bit subset came a decade later. So, I propose this compromise: [2]. Best regards, Codename Lisa (talk) 13:26, 21 January 2014 (UTC)
Telling what happened a decade later would be fine, but to put it in context of "later" would benefit from description of the "earlier" situation. I would argue that the lay reader might look at this and say "well, the design is short-sighted, Windows has already expanded to use the entire 256 TB!" (not realizing that there would have to be 256 TB of RAM + backing store before such address space could actually be put to use). And I'm afraid your compromise misstates the situation: The 1 TB limit is for RAM, not virtual address space. The two limits are not related. This is why, in a previous edit, I eschewed the use of the unadorned word "memory." The RAM limit can be larger or smaller than the VAS limit. Jeh (talk) 19:52, 21 January 2014 (UTC)
I was careful not to mention either memory or VAS; just to mention the end results. (If you think I am referring to VAS, I am afraid that was not my intention.) But yes, why, your compromise is lovely. It's always a pleasure working with you. Best regards, Codename Lisa (talk) 08:48, 22 January 2014 (UTC)
Ok, thanks! Not meaning to beat up on you over the "1 TB of memory" thing... but I encounter confusion of virtual vs. physical addressing and address limits and so on all the time, so I tend to notice anyplace where the usage is ambiguous. It doesn't help at all that MS also commonly uses the unqualified term "memory" ambiguously, sometimes meaning one and sometimes the other. Jeh (talk) 09:05, 22 January 2014 (UTC)

Diagram change requested

Ok, who's good with editing these SVG files? I would like to request that Image:AMD64-canonical--48-bit.svg and Image:AMD64-canonical--56-bit.svg be modified to include the text "(not to scale)" under the text "Noncanonical addresses".

If we were not trying to be so encyclopedic we could write "very much not to scale" instead. In the 48-bit diagram, the two canonical halves would be each 1/131072 of the total height. i.e. a tiny fraction of a pixel high! In the 56-bit picture the two halves would each be 1/512 of the total height, which still would not be an entire pixel on most screens... but it would be getting close to one.

Maybe we just need a "relative sizes not to scale" notation at the bottom of the whole thing. Jeh (talk) 09:15, 22 January 2014 (UTC)
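For anyone checking the fractions above: each canonical half spans 2^47 bytes under 48-bit addressing (and 2^55 under 56-bit), out of a 2^64 diagram, which is where 1/131072 and 1/512 come from. A quick sketch of the arithmetic:

```python
from fractions import Fraction

# Fraction of the full 64-bit address-space diagram occupied by
# each canonical half, drawn to true scale.
frac_48 = Fraction(2 ** 47, 2 ** 64)   # 48-bit canonical addressing
frac_56 = Fraction(2 ** 55, 2 ** 64)   # 56-bit canonical addressing

assert frac_48 == Fraction(1, 131072)
assert frac_56 == Fraction(1, 512)
```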

To Codename Lisa: Yeah, that'll do :) Jeh (talk) 11:03, 22 January 2014 (UTC)

Short forms of registers

Even without the excessive lower-byte registers, the table is not working for me. It is not the case, for example, that there is one register of which EIP is the low 32 bits and RIP is the high. But that's what the table/diagram shows. I think all of the short forms should be removed in the short term. e.g. RIP would just show RIP. Jeh (talk) 21:18, 14 February 2014 (UTC)
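To illustrate the relationship the table should show: EAX is the low 32 bits of RAX (likewise EIP and RIP), and in 64-bit mode a write to the 32-bit form zero-extends into the full register, while 16-bit writes leave the upper bits alone. A toy model of one register (the class and method names are mine, purely illustrative):

```python
MASK64 = (1 << 64) - 1

class GPR:
    """Toy model of one x86-64 general-purpose register (e.g. RAX)."""
    def __init__(self):
        self.value = 0                    # the full 64-bit contents

    def write32(self, v):
        # A 32-bit write (e.g. to EAX) zeroes bits 63..32 in 64-bit mode.
        self.value = v & 0xFFFFFFFF

    def write16(self, v):
        # A 16-bit write (e.g. to AX) preserves bits 63..16.
        self.value = (self.value & ~0xFFFF & MASK64) | (v & 0xFFFF)

    def read32(self):
        # EAX is simply the low 32 bits of RAX.
        return self.value & 0xFFFFFFFF
```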

Long mode vs. real mode and virtual 8086 mode

The Long mode section contained this

Real-mode programs and programs that use virtual 8086 mode at any time cannot be run in long mode unless they are emulated in software.

Which was deleted by Someone not using his real name (talk · contribs) with a rather WP:POINTy edit comment.

I have restored it. Here's why: The method described in Virtual 8086 mode is

The addition of VT-X has added back the ability to run virtual 8086 mode from x86-64 long mode, but it has to be done by transitioning the (physical) processor to VMX root mode and launching a logical (virtual) processor itself running in virtual 8086 mode.

Westmere and later Intel processors usually[c] can start the logical processor directly in real mode using the "unrestricted guest" feature (which itself requires Extended Page Tables); this method removes the need to resort to [the nested] virtual 8086 mode simply to run some MS-DOS application.

See, you're still not running the real mode or virtual 8086 mode app under long mode! Of course you can run, as in start, such a program while using a long mode operating system... as long as you do it by creating a whole new virtual processor, which itself can run in the desired mode (Virtual 8086 or real mode). Heck, you could also have a second machine handy running an x86 OS and send the old binary to it for execution...

...But the restriction on running such programs in or under long mode is still there. As it says in the AMD doc (whose cite I just updated), page 11, note 3:

Long mode supports only x86 protected mode. It does not support x86 real mode or virtual-8086 mode.

I did acknowledge the possibility of getting around this restriction via VT-X, by appending this to the restored sentence:

However, such programs may be started from an operating system running in long mode on processors supporting VT-X, by creating a virtual processor running in the desired mode.

I believe this covers it, better than the original sentence did, and certainly better than the article did without that sentence.

However, I'm not sure what the big deal is about VT-X here. You could run a DOS app under x64 Windows (or, I believe, Linux) by using any of a number of virtual machine hosts—DOSbox having been specifically promoted for that purpose—long before VT-X existed. Jeh (talk) 07:19, 18 February 2014 (UTC)

Thanks for fixing this. I'd just like to point out that my edit wasn't pointy. The material was unsourced, and its (original) claim is not supported even by the sources you brought up, specifically "Real-mode programs and programs that use virtual 8086 mode at any time cannot be run in long mode unless they are emulated in software." Besides (1) the vagueness as to what counts as a program (does an OS kernel/loader qualify? You can surely start an x86-64 kernel in real mode and then switch to long mode, so "at any time" doesn't seem to apply in this case), (2) "unless they are emulated in software" is also false even for the typical scenario, as there are other ways, such as hardware-assisted virtualization, which isn't normally described as "emulated in software". That DOSBox doesn't do it any other way is irrelevant, really. Someone not using his real name (talk) 10:52, 18 February 2014 (UTC)
Hi. There is a problem that I saw in the original edit. Let me explain with an analogy. Take OpenShot Video Editor, for example. It runs on Linux only. Suppose someone comes along and says it runs on Windows. I say, "oh yeah?" He says, "yeah, just download VirtualBox and Ubuntu and you are set! It runs on Windows." Wrong! "To run on [OS name]" implies that if you take away the said OS, it won't run. Take Windows out of this configuration and I can still run OpenShot straight on Ubuntu. Take Ubuntu out and the OpenShot binary won't run on my computer, no matter what.
So when you virtualize a processor, the software is not running in the long mode. It is running in real mode. Take away the long mode and the program still works. Take away the real mode and no, it won't run. You can see the weak point in my argument right now: "...unless they are emulated in software." I should've removed this too, right?
Best regards,
Codename Lisa (talk) 01:13, 19 February 2014 (UTC)
Regarding the example of an OS loader... oh, please. You know perfectly well what is meant by "programs." But if you insist, then I will insist on pointing out that an OS loader that starts in real mode, loads the 64-bit OS, and then switches to long mode before transferring control to the OS is still not running its real-mode code under long mode. So that isn't a valid counterexample. Jeh (talk) 01:38, 19 February 2014 (UTC)
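To summarize the constraint being debated, per the AMD manual note quoted above: long mode supports only protected-mode sub-modes (64-bit and compatibility), so real-mode and virtual-8086 code needs either a switch back to legacy mode or a separate (virtual) processor. A toy lookup capturing that, with the dictionary structure and labels being mine:

```python
# Sub-modes the CPU can directly execute while in each operating mode,
# following the AMD note: "Long mode supports only x86 protected mode.
# It does not support x86 real mode or virtual-8086 mode."
SUPPORTED_SUBMODES = {
    "long":   {"64-bit", "compatibility"},            # protected-mode only
    "legacy": {"protected", "real", "virtual-8086"},
}

def can_run_directly(cpu_mode, program_mode):
    """True if program_mode code can run under cpu_mode without a
    separate virtual processor or software emulation."""
    return program_mode in SUPPORTED_SUBMODES[cpu_mode]
```

Creating a virtual processor (the VT-x route discussed above) works precisely because the guest is a different logical processor that need not be in long mode at all.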

Lead image suggestions?

I think we should put an image in the lead section. Any ideas? The supercomputing cluster adoption chart looks best in its own section. Maybe we could use the logos of AMD64 and Intel64, or a photo of the first x86-64 processor (AMD Opteron). What would you suggest? Sofia Koutsouveli (talk) 01:27, 20 March 2014 (UTC)

Hello there! If you ask me, that would be an image of a processor. — Dsimic (talk | contribs) 01:42, 20 March 2014 (UTC)

Hi. I'm thinking about low-resolution versions of "AMD64 Architecture Programmer’s Manual" and/or "Intel 64 and IA-32 Architectures Software Developer’s Manual", if that's allowed by the law. Dannyniu (talk) 03:10, 14 June 2014 (UTC)
