Intel Underestimates Error Bounds by 1.3 quintillion
October 10, 2014 4:44 AM

Intel’s manuals for their x86/x64 processor clearly state that the fsin instruction (calculating the trigonometric sin) has a maximum error, in round-to-nearest mode, of one unit in the last place. This is not true. It’s not even close.
posted by Proofs and Refutations (65 comments total) 16 users marked this as a favorite
 
Fight the power (s of x)
posted by clvrmnky at 5:12 AM on October 10, 2014 [2 favorites]


I discovered this while playing around with my favorite mathematical identity. If you add a double-precision approximation for pi to the sin of that same value then the sum of the two values (added by hand) gives you a quad-precision estimate (about 33 digits) for the value of pi. This works because the sin of a number very close to pi is almost equal to the error in the estimate of pi. This is just calculus 101, and a variant of Newton’s method, but I still find it charming.

Is it wrong of me that I found this paragraph charming itself?
posted by GenjiandProust at 5:16 AM on October 10, 2014 [19 favorites]
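
The identity in the quoted paragraph is easy to check for yourself. Here is a minimal sketch in Python, using the standard-library decimal module as the high-precision reference (the hard-coded digits of pi are only there for comparison, and the result assumes your platform's sin is accurate):

    import math
    from decimal import Decimal, getcontext

    getcontext().prec = 50  # plenty of digits for the comparison

    pi_double = math.pi               # the double-precision approximation of pi
    correction = math.sin(pi_double)  # ~= (true pi - pi_double), since sin(pi - e) ~= e

    # Add the two exactly, outside of double precision (the "added by hand" step).
    improved = Decimal(pi_double) + Decimal(correction)
    true_pi = Decimal("3.14159265358979323846264338327950288419716939937510")

    print("double pi:  ", Decimal(pi_double))
    print("improved pi:", improved)
    print("error:      ", abs(improved - true_pi))  # around 1e-32, i.e. pi to ~32 digits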


Someone on Reddit gives some more details and points out that AMD processors have the same bug.
posted by maskd at 5:23 AM on October 10, 2014


The interesting thing about AMD is that they improved the computation of fsin and fcos in the K5 FPU, but then went back to being bit-for-bit compatible with the x87 behavior in their next generation CPUs.
posted by Slothrup at 5:28 AM on October 10, 2014 [4 favorites]


FOOPS.
posted by localroger at 5:31 AM on October 10, 2014 [5 favorites]


It's always interesting to be reminded that the way computers store numbers isn't exactly as numbers, and that, deep down, computers are weird and a little bit uncanny.
posted by You Can't Tip a Buick at 5:35 AM on October 10, 2014 [20 favorites]


I found posts about this from at least 2000, though it flew under my radar. Sounds like a doc bug more than anything.

Not as bad, though, as the FDIV Crisis.
posted by RobotVoodooPower at 5:38 AM on October 10, 2014 [2 favorites]


deep down, computers are weird and a little bit uncanny.

Finite math isn't hard, but it's not taught much and you really shouldn't be allowed near a floating point library until you thoroughly understand how the computer deals with integers. Floats are convenient when they work but will bite you in the ass hard in very unexpected ways if you don't understand them.
posted by localroger at 5:39 AM on October 10, 2014 [12 favorites]
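
A couple of the classic surprises, sketched in Python (any language with IEEE 754 doubles and big integers shows the same thing):

    # Python integers are exact at any size; doubles are not.
    print(10**20 + 1 - 10**20)   # 1 -- integer arithmetic is exact
    print(1e20 + 1 - 1e20)       # 0.0 -- the +1 is below the resolution of a double near 1e20

    # And the one everyone trips over eventually:
    print(0.1 + 0.2 == 0.3)      # False -- none of these values is exactly representable in binary
    print(0.1 + 0.2)             # 0.30000000000000004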


Too many benchmarks and other programs depend on the current, less-correct approximation for it to be improved now; fsin's behavior is enshrined unless somehow all those code authors can be convinced to change, which is generally considered impossible in an ecosystem as large as Wintel. The number of broken games alone would probably be staggering.

Also, I was always under the impression that trig calls even as far away from 0 as pi would be crappy, and it's up to the programmer to transform the argument into the range of 0 to pi/2, so hopefully that's more accurate.

But the documentation being wrong makes it well worth fixing!

Is anybody else really really upset with typographer's choices for pi in sans serif fonts? It just looks so wrong!!
posted by Llama-Lime at 5:40 AM on October 10, 2014 [1 favorite]
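
Range reduction is in fact where fsin goes wrong: per the article, the hardware folds the argument into range using a roughly 66-bit internal value of pi, so inputs near multiples of pi lose accuracy. A rough sketch of the same effect in Python, comparing a naive reduction against whatever careful reduction the platform's libm does internally (modern glibc gets this right):

    import math

    x = 1.0e10  # a largish argument, far from the edge of double range

    # Naive reduction: fold x into [0, 2*pi) using the double-precision value of 2*pi.
    # fmod itself is exact, but 2*math.pi is off from the true 2*pi by ~2.4e-16,
    # and that error is multiplied by the ~1.6e9 periods being removed.
    reduced = math.fmod(x, 2.0 * math.pi)
    naive = math.sin(reduced)

    accurate = math.sin(x)  # libm reduces the argument with extra-precision pi

    print(naive, accurate, naive - accurate)  # the difference is on the order of 1e-7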


Yeah, it's hard to call this a "bug" beyond the documentation of it. It was a dumb-ish decision back in the day (likely driven by a combination of ignorance and limited silicon, probably more the latter) that just has to remain that way. Anyone who cares much is using other, more accurate, routines.

Bruce Dawson's blog (the FPP link) is a pretty regular gold mine of neat things like this. If you work with toolchains, CPUs, floating-point math, etc., he's well worth following. The topics are usually pretty narrow, but he's thorough and smart.
posted by introp at 5:46 AM on October 10, 2014 [1 favorite]


Also, this heading took me right out of Math and back to Catechism class: Actually calculating sin.
posted by GenjiandProust at 5:56 AM on October 10, 2014 [14 favorites]


Anybody else ever try to sell a "math coprocessor" with the x386 boxes for extra? Or was that the 486DX that made that obsolete?
posted by infini at 5:58 AM on October 10, 2014


The interesting thing about AMD is that they improved the computation of fsin and fcos in the K5 FPU, but then went back to being bit-for-bit compatible with the x87 behavior in their next generation CPUs.

Yeah, a processor monoculture was such a swell idea. The irony is that it's likely built into the x86 instruction emulator that runs on these processors - yeah, neither AMD nor Intel makes CISC processors anymore. They're post-RISC designs that run a freakin' CISC emulation layer, because the lingering hangover we got from Wintel Forever kind of broke the market for modern processors on mid-range servers and high-end workstations.

If the only logical solution is something objectively sucktastic because network effects mean it's cheap, it's not really a win. It stifles overall innovation. Hell, we can't even move to ARM because x86 is so entrenched.

MIPS is making a play for the mobile space, and I hope it takes off, as the processor family scales up as neatly as it scales down - it has a history of breathing fire in 64-bit supercomputers and graphics workstations, as well as running cheap wifi routers and cable set-top boxes. The ARM ecosystem is having trouble making its case for the server room, but a resurgent MIPS is flexible enough that it wouldn't have the same problem. A Chromebook that could spank a MacBook Pro in visualization tasks would be neat, too.
posted by Slap*Happy at 5:59 AM on October 10, 2014 [5 favorites]


As a Mac user, I have to laug---- *remembers we switched from PPC to Intel nearly 10 years ago*

um, I mean, this is no big deal.
posted by entropicamericana at 6:31 AM on October 10, 2014 [14 favorites]


As a (former and somewhat current) Mac user I'm not sure that the alternative is that great. Having binary compatibility all the way back into what's effectively PC prehistory is pretty neat. Apple's whatever-we-fucking-feel-like-this-week architectural changes (Motorola 68k! PPC! Intel! ARM!) drive a really, really obnoxious upgrade treadmill. By breaking all software from the previous architecture, suddenly you become dependent on emulation layers which can be removed at will. And in the last migration (PPC to Intel) Apple pulled the rug out from under anyone running PPC software quite quickly by killing Rosetta — going so far as to ensure you can't even legally virtualize a version of MacOS that has it — after only a few versions.

The Wintel platform has a lot about it that's crap, but at least it gives you the capability of running hilariously old (32-bit Win8 will still run 16-bit DOS applications) software.

After having to throw out far, far too much good software on the Mac side of things just because the hardware breaks down and suddenly the new hardware won't run an old OS and the new OS won't run the old software, Intel's dogged commitment to the x86 instruction set and binary compatibility uber alles seems rather charming.
posted by Kadin2048 at 6:45 AM on October 10, 2014 [11 favorites]


But the documentation being wrong makes it well worth fixing!

Yes, and Intel is in fact fixing the documentation.
posted by The Bellman at 6:48 AM on October 10, 2014


so if you keep reading, you realize that some compilers (many? a few? I dunno, there are a whole lot of them) don't use that CPU instruction. (Sidebar: this was one of the RISC vs. CISC arguments for years -- instruction bloat and how some instructions are rarely, if ever, used.) What isn't answered:

So what ?

Documentation is wrong, you aren't getting the expected result from the instruction. Buildings are still constructed, spacecraft still launch into orbit and land on Mars (so long as there are no metric vs. imperial problems), the Higgs boson is detected.

Do shuttle builders et al. use their own custom math library (which I think some would; arbitrary/infinite-precision libs exist), or is the CPU's answer good enough, or do most compilers ignore this instruction anyway?
posted by k5.user at 6:54 AM on October 10, 2014


Anybody else ever try to sell a "math coprocessor" with the x386 boxes for extra?

Sure. It was common to get mobos with an empty 387 socket, which could be upgraded by the customer later. There were even third-party 80387 look-alikes from Weitek and Cyrix. More than you needed to know here.

was that the 486DX that made that obsolete?

Pretty much, is my recollection.
posted by bonehead at 6:58 AM on October 10, 2014 [1 favorite]


thank god, I'm always afraid the time has come for the memory to go... that was from my first job, selling computers door to door - specialist CAD set ups (first on the list), until AutoCAD killed the market
posted by infini at 7:06 AM on October 10, 2014


k5.user: So what ?
So, we have no idea what.

Weak Analogy: Imagine that some random person on the planet is given your username and password to one website. So what?

Probably nothing. Maybe you lose BIG. We don't know... yet. And, we may never know what was made different by this butterfly's wing beats.

But it's important that, going forward, EVERYONE who works with high precision on computers knows about this.
posted by IAmBroom at 7:07 AM on October 10, 2014


After having to throw out far, far too much good software on the Mac side of things just because the hardware breaks down and suddenly the new hardware won't run an old OS and the new OS won't run the old software, Intel's dogged commitment to the x86 instruction set and binary compatibility uber alles seems rather charming.

Yeah, but that's an OS issue, not a CPU issue. Processor emulation, as we saw with Rosetta, is a solved problem - by forcing it down the stack to the silicon, we're stifling new ways software and silicon can interact, and forcing everything new to fit into a square peg hole is acting as a bottleneck to both hardware and OS design. Smartphones are wildly diverse, and do a lot of fun SoC things not possible with x86. I'd like to see a more robust ecosphere on the desktop and in the server rack.

IBM is killing it in the server room with POWER and Z-series silicon, and Oracle's doing some fun new things with SPARC, enough to keep those processor families afloat in the age of commodified cloud server farms. Dunno. Maybe we need to wait for quantum systems to hit before we can break out of the Wintel/Lintel mode completely.
posted by Slap*Happy at 7:11 AM on October 10, 2014 [1 favorite]


But it's important that, going forward, EVERYONE who works with high precision on computers knows about this.

And that's what I mean - spaceships still launch, dock and land. The Higgs boson still gets found, buildings (even ugly I.M. Pei monstrosities) are built and remain standing...

That means either people are already compensating by using better math libs (my guess #1), or compilers aren't actually using the instruction (guess 2, though TFA implies variability), or it really isn't that big of a deal (guess 3).

The schlock here about wintel/mac/os etc is noise.
posted by k5.user at 7:13 AM on October 10, 2014 [1 favorite]


I don't see the big deal, no one would ever need a number that large or precise.
posted by blue_beetle at 7:34 AM on October 10, 2014


It's always interesting to be reminded that the way computers store numbers isn't exactly as numbers, and that, deep down, computers are weird and a little bit uncanny.

Computers can store and manipulate a subset of the integers correctly. If you run out of bits, you run into a problem.

The issue here is that you can't conflate real numbers with floating point numbers. Floats are *approximations* of real numbers. We use floats because our current silicon can't do real numbers. This really shows up when you use irrational numbers like π.

Floating-point addition is commutative (A+B = B+A), but it isn't automatically associative ((A+B)+C doesn't always equal A+(B+C)). Floats aren't automatically distributive either ((A+B) x C doesn't always equal AxC + BxC). Real numbers are commutative, associative and distributive.

The error here, though, is pretty astounding. Math at the edge of the floating-point range is understood to be, well, different. Subtracting two floats that are almost exactly equal tends to cause a massive loss of accuracy, because the most significant digits cancel to zero and leave only the least significant digits. But those are known cases.

This one, well, the fact that the docs say it's accurate when in fact it has massive spaces of inaccuracy? That's a problem.

Or was that the 486DX that made that obsolete?

There was a 486SX, and you could get a coprocessor socket. Amusingly enough, the 487SX was, in fact, a 486DX. When you plugged it in, it shut down the 486SX and took over both CPU and FPU functions.

The Pentium was the first X86 series processor that was always sold with a floating point unit.

Similarly, the 68000 series CPUs didn't have either MMUs or FPU on die until the 68030 gained an onboard MMU and the 68040 gained an onboard FPU. There were versions of the 68040 that didn't have the FPU (68LC040, "low cost") or lacked both FPU and MMU (68EC040, "Embedded Controller") -- the 68000 series always had a large footprint in the embedded controller market, and an FPU was mostly just a waste of silicon in that realm.
posted by eriko at 7:57 AM on October 10, 2014 [7 favorites]
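
Both floating-point gotchas above (lost associativity, and cancellation when subtracting nearly equal values) take one line each to demonstrate; a quick Python sketch:

    # Associativity fails: the grouping decides which low-order bits get rounded away.
    print((0.1 + 0.2) + 0.3)                       # 0.6000000000000001
    print(0.1 + (0.2 + 0.3))                       # 0.6
    print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False

    # Catastrophic cancellation: the leading digits cancel and only noise survives.
    eps = 1e-15
    print((1.0 + eps) - 1.0)  # ~1.11e-15, off from eps by about 11%
    print(eps)                # 1e-15, for comparison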


The schlock here about wintel/mac/os etc is noise.

It's relevant, because it ties in to the overall narrative of backward compatibility, which is a very big deal to some people. Intel and Microsoft value it very highly, presumably because many of their clients do as well.

That being said, I'm not entirely sure that Intel would actually break existing real-world code if they fixed this bug in newer silicon. While it's generally impossible for Intel to fix hardware "bugs," for fear of breaking code that relies on the buggy implementation, I'm having a hard time seeing how you'd break something by improving floating point precision.
posted by schmod at 8:26 AM on October 10, 2014 [1 favorite]


I had totally forgotten about the 486sx. I even had one -- actually, 2, i think. my friends and I thought it was hilarious, but you know what? back when it mattered (and we were all poor), we could save $50 by getting a 486sx versus a 486dx.
posted by lodurr at 8:33 AM on October 10, 2014 [1 favorite]


But it's important that, going forward, EVERYONE who works with high precision on computers knows about this.

I think this is right, but would add: while it's potentially important that people realize that the error exists, very few people who do computing will need to know. Virtually no one is writing assembly FSIN instructions these days. Those who are write the libraries that abstract this stuff from people who do the computing. If you fire up R or Matlab or grab SciPy or whatever your favorite tool is, they'll likely handle sin(x) correctly enough for your application. Someone who knows about this problem has likely already dealt with it. But, as you point out, the key word here is "likely." :(

That means either people are already compensating by using better math libs (my guess #1), or compilers aren't actually using the instruction (guess 2, though TFA implies variability), or it really isn't that big of a deal (guess 3).

1. Yes, e.g., see the article's mention of newer glibc versions behaving well by default. There, I believe you can still get the inline (inaccurate) instructions if you compile with -ffast-math, which is basically you telling the toolchain, "I REALLY value speed over accuracy and error-checking/-reporting".
2. Yes (well, many of them), e.g., using SSE2 instructions instead of FSIN.
3. Yes; it's a very particular intersection of conditions where you have both an error-inducing input and a requirement for very high accuracy output.
posted by introp at 8:36 AM on October 10, 2014 [2 favorites]
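
If you want to spot-check whether the sin() your own stack hands you is the accurate one, it's cheap to do. A sketch assuming the third-party mpmath package for the reference value (mpmath isn't part of any of the tools named above):

    import math
    import mpmath

    mpmath.mp.dps = 50  # compute the reference to 50 decimal digits

    x = math.pi                 # the double closest to pi: the worst case for fsin
    fast = math.sin(x)          # whatever your platform's libm (or fsin) produced
    reference = mpmath.sin(mpmath.mpf(x))

    rel_err = abs((mpmath.mpf(fast) - reference) / reference)
    print(fast)
    print(reference)
    print("relative error:", rel_err)  # near 1e-16 with a good libm; vastly larger via raw fsin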


> Anybody else ever try to sell a "math coprocessor" with the x386 boxes for extra?

The 80386 had an on-chip coprocessor. The cheapo version nicknamed the 386SX didn't, and there were other differences (like it would only connect to a 16 bit data bus.) All 486 processors had an onboard math coprocessor but the SX version OMG had the coprocessor disabled. I would like to think this only happened to chips that tested out with coprocessor errors but I don't know that.
posted by jfuller at 8:57 AM on October 10, 2014


What server-side software uses trig anyway? Bugs that surface in 2014 but have been around forever - bugs in the normal operation of functions, rather than security-style misbehaviours when fed malicious input - would seem to indicate that there's a lot of stuff in silicon that's just not seeing much action...
posted by Devonian at 9:01 AM on October 10, 2014


*starts mooning over IRQ numbers*
posted by infini at 9:02 AM on October 10, 2014 [1 favorite]


Since there's a lot of CISC-RISC debate going on in this thread, some of you may want to check out the OpenRISC project, which is trying to create an open source RISC specification and chip implementation that allows for vendor-specific extensions.

The idea is that you can extend your OpenRISC compiler to take advantage of vendor-specific magic if you need it. If your vendor specifies fsin, use it. If your vendor then admits it's buggy, recompile and don't use it.

It's not an impossible future.
posted by sixohsix at 9:04 AM on October 10, 2014 [1 favorite]


The bugs were probably "never identified before now" because the compiler designers who made the original decision to do the math another way are either in management or retired by now.

I.e., these bugs were identified. They were worked around. And then they were forgotten. That's the way culture works. It's the history of science.
posted by lodurr at 9:06 AM on October 10, 2014 [1 favorite]


If I ever need to find all the geeks on MetaFilter, now I know where to look!
posted by rabbitrabbit at 9:07 AM on October 10, 2014 [1 favorite]


Devonian: "What server-side software uses trig anyway? Bugs that surface in 2014 but have been around forever - bugs in the normal operation of functions, rather than security-style misbehavious when fed malicious input - would seem to indicate that there's a lot of stuff in silicon that's just not seeing much action..."

MOBILE APP (to SERVER): Please return all points of interest within a five mile radius.
posted by pwnguin at 9:10 AM on October 10, 2014 [4 favorites]


The Pentium was the first X86 series processor that was always sold with a floating point unit.

Strictly speaking (the best way to speak), when the 80486 was introduced in 1989 it was always sold with an FPU. They didn't introduce the 486sx until a few years later.
posted by ROU_Xenophobe at 9:11 AM on October 10, 2014


The 80386 had an on-chip coprocessor. The cheapo version nicknamed the 386SX didn't, and there were other differences (like it would only connect to a 16 bit data bus.) All 486 processors had an onboard math coprocessor but the SX version OMG had the coprocessor disabled. I would like to think this only happened to chips that tested out with coprocessor errors but I don't know that.
I think you're confusing the 386 and 486. The 386 (later called the 386DX) did not have an on-board floating-point coprocessor. The 386SX was a 386 with a 16-bit data bus (!!!) for cost savings; it was a nightmare for performance. There were two versions of the 387, one for the DX bus and one for the SX bus.

As mentioned above, the 486DX was the first (mainline?) Intel processor with an on-board floating-point coprocessor.

I helped build a college machine for my brother and we put a shiny bleeding-edge 386 (a.k.a. 386DX) in it and had to ship him the 387 for it when they became available.
posted by introp at 9:16 AM on October 10, 2014


(the short answer to processor history: just remember anything with "sx" suxxed.)
posted by k5.user at 9:18 AM on October 10, 2014 [1 favorite]


The first 486SXs actually had the FPU present but disabled - whether because it was a way of using die with failed FPUs, or just handy price banding I'm not sure. Later designs, I seem to remember, omitted the FPU - with yields up, you get more per wafer...

And do server-side GIS services for mobile apps do full trig? I thought they did Pythagorean distance estimates across matrices of cartesian co-ords, and swallowed the inaccuracy in the name of efficiency. But I really do not know, and really would like to!
posted by Devonian at 9:23 AM on October 10, 2014
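
For what it's worth, the usual trade-off looks something like the sketch below (illustrative only, not any particular GIS package's code): full spherical trig via the haversine formula versus the flat-earth Pythagorean shortcut. Over a few-mile radius the two agree to within a metre or so, which is why services can get away with the cheap version.

    import math

    R = 6371000.0  # mean Earth radius in metres (treated as a constant sphere here)

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance: the full-trig version."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = p2 - p1, math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * R * math.asin(math.sqrt(a))

    def flat_m(lat1, lon1, lat2, lon2):
        """Pythagorean estimate on locally flattened coordinates: cheaper, slightly off."""
        x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
        y = math.radians(lat2 - lat1)
        return R * math.hypot(x, y)

    # Two points a couple of miles apart (made-up coordinates in San Francisco).
    a, b = (37.7749, -122.4194), (37.7599, -122.4148)
    print(haversine_m(*a, *b), flat_m(*a, *b))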


devonian, i saw an online app recently, for calculating distances from point to point using Google Maps, that explained the method in the terms you use. It seemed to be a person familiar with the industry. So I think you're right for at least some server-side GIS apps.
posted by lodurr at 9:37 AM on October 10, 2014


People pointing out that this isn't a bug or that the bug is in the docs, not the instruction, should read the article: he comes to the same conclusion.

Documentation bugs are bugs. When what you're working with does not behave in the way you have been told it should, the discrepancy between expectation and result is buggy behavior that has to be resolved.
posted by ardgedee at 9:40 AM on October 10, 2014


OK, sure, fine, as a developer I appreciate that it's hella inconvenient when shit isn't documented.

But as a student of anthropology, I find it fascinating to realize that at some point someone (and probably a lot of someones) knew about this, and not only didn't bother to write it down, but did bother to make product architecture and product management decisions based on it -- still without bothering to write it down.
posted by lodurr at 10:56 AM on October 10, 2014 [2 favorites]


The real error can be traced back to the failure to pass Indiana's Pi Bill in 1897, which clearly defined pi = 3.2
posted by JackFlash at 11:00 AM on October 10, 2014 [2 favorites]


I mean, it's a great illustration of how problems can be hidden from us because we're hardly ever put into a situation where they matter. The bug has never surfaced because people worked around it so successfully that in the ensuing decades people doing production work have not needed to know it exists.

There's an urban fantasy plot in here just waiting to get out.
posted by lodurr at 11:01 AM on October 10, 2014


... and on preview, i think JackFlash just started writing it.
posted by lodurr at 11:02 AM on October 10, 2014


The first version of the 486SX was a 486DX with a faulty copro. But as yields improved and 486SX sales improved it became wasteful to devote so much die space to a faulty FPU or just a plain disabled FPU. Later 486SX variants did actually have a die without a copro.
posted by Talez at 11:30 AM on October 10, 2014


We use floats because our current silicon can't do real numbers.

It's a pretty safe bet that future silicon won't do real numbers either, since that would require an infinite amount of silicon.
posted by localroger at 11:39 AM on October 10, 2014 [2 favorites]


"The first version of the 486SX was a 486DX with a faulty copro. But as yields improved and 486SX sales improved it became wasteful to devote so much die space to a faulty FPU or just a plain disabled FPU. Later 486SX variants did actually have a die without a copro."

Heh. I remember when this was happening. There were Usenet groups where hacks to enable the "faulty" copro were passed around. If you were "lucky" enough, you could enable the coprocessor and run a program to test whether it was actually faulty, or just part of a batch that had a high enough "failed" rate to get sold as an SX processor.

I happened to be one of the "lucky" ones, which was great. Sadly, I never got into CS (at the time) to put it to any use.
posted by daq at 11:45 AM on October 10, 2014 [2 favorites]


localroger: It's a pretty safe bet that future silicon won't do real numbers either, since that would require an infinite amount of silicon.
Not if symbolic math is implemented in the hardware, allowing values like "e^pi" to be returned as a complete answer at the hardware level.
posted by IAmBroom at 11:53 AM on October 10, 2014


Heh. I remember when this was happening.

That was the chip I could afford when I bought my first box, assembled for me by a friend's cousin for the equivalent of 10 months' salary and a loan from mom for another month and a half's salary. In rupees. Otoh, I was account manager responsible for the Creative Labs SoundBlaster Multimedia launch so I got the board for cost price and installed it myself. Yay sound and speakers for my games!! ;p

Honestly can't do anything anymore with these boxes after Win 95 abstracted the intelligence out of the system.


Why yes, I do indeed still have my "Multimedia issss Creative" black polo tshirt.
posted by infini at 11:56 AM on October 10, 2014 [3 favorites]


Real numbers and symbolic processing aren't the same, though, as will become apparent just as soon as you want to feed a sensor reading into one of the IO pins.
posted by Devonian at 12:06 PM on October 10, 2014 [3 favorites]


I discovered this while playing around with my favorite mathematical identity.

Like you do.
posted by bicyclefish at 12:21 PM on October 10, 2014 [2 favorites]


Thanks for the correction, Devonian. However - the input from a sensor reading isn't a real number, either. It's a noise-ridden real-world phenomenon, and now we're getting into the semantics of what "accuracy" in measurement really means...
posted by IAmBroom at 12:54 PM on October 10, 2014


Really, though, symbolic processing doesn't buy you much. If you don't reduce operator symbols like SQRT(val) and PI to actual numeric values, you aren't really evaluating expressions; you are just writing them. It's not so much taking the value from a sensor that's the problem as generating accurate output for an expression like SQRT(2)*PI. If you can't reduce that to a numeric value for input to other functions, those functions just become new symbols, and if you do anything complex you end up with a wad of symbols that can't be disentangled calling itself a result.
posted by localroger at 1:06 PM on October 10, 2014 [1 favorite]
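
The software version of that trade-off is easy to poke at with SymPy (assumed installed here; no hardware does this). Expressions like SQRT(2)*PI stay symbolic until you explicitly ask for digits, which is exactly the "wad of symbols until you reduce it" behaviour being described:

    import sympy as sp

    expr = sp.sqrt(2) * sp.pi   # stays as the exact symbolic object sqrt(2)*pi
    print(expr)                 # sqrt(2)*pi
    print(sp.sin(sp.pi))        # 0, exactly -- no rounding, because pi here is a symbol

    # Nothing numeric happens until you ask for digits, at whatever precision you like.
    print(expr.evalf(30))           # roughly 4.4428829381...
    print(sp.exp(sp.pi).evalf(30))  # roughly 23.1406926327...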


Accuracy in floating point calculations is not just down to hardware instructions, but is also tied to software — algorithms — that can generate "impossible" answers from classically-derived mathematical functions, when applied naively on modern processors of any sort.

John D. Cook's blog is often cited on Stack Overflow, where questions about floating point numbers come up. Here is a post where he shows three different ways to calculate a running standard deviation, and how easy it is to get a nonsensical negative result due to inaccuracies that get amplified by virtue of the number of calculations being done.

This is a critical matter now, where scientific research, from drug discovery to particle physics, is generally about Big Data: an experiment can regularly crunch millions or billions of numbers as a matter of course. Choosing the right algorithm is not only a matter of speed or memory, but also of being able to trust the answer at all.
posted by a lungful of dragon at 1:49 PM on October 10, 2014 [4 favorites]
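
A concrete illustration of the failure mode described there (a hypothetical sketch, not Cook's code): the textbook one-pass sum-of-squares formula for variance can go negative on data with a large mean and a small spread, while Welford's running update stays sane.

    import math
    import random

    def naive_std(xs):
        """Textbook one-pass formula, E[x^2] - E[x]^2: prone to catastrophic cancellation."""
        n, s, sq = len(xs), 0.0, 0.0
        for x in xs:
            s += x
            sq += x * x
        var = (sq - s * s / n) / (n - 1)
        return math.sqrt(var) if var > 0 else float("nan")  # var can come out negative!

    def welford_std(xs):
        """Welford's running update: numerically stable one-pass variance."""
        mean, m2, n = 0.0, 0.0, 0
        for x in xs:
            n += 1
            delta = x - mean
            mean += delta / n
            m2 += delta * (x - mean)
        return math.sqrt(m2 / (n - 1))

    random.seed(1)
    # A large offset with a tiny spread: exactly the regime where cancellation bites.
    data = [1e9 + random.random() for _ in range(100000)]
    print(naive_std(data), welford_std(data))  # naive is garbage (often NaN); Welford ~0.289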


And that's what I mean - spaceships still launch, dock and land. The Higgs boson still gets found [...]

If you don't find the minutiae of processor instruction sets interesting, this is perhaps not going to be an especially interesting topic for you. No, a misdocumentation of an obscure part of the x86 instruction set is not going to bring the world to its knees. But that doesn't mean it's not important to people who work on that scale, either.

When errors like this happen, it matters how they're dealt with because that affects backwards compatibility. Even if most compilers don't use that instruction, and thus most software doesn't touch it, the instruction still exists and therefore could still be used. Somebody's software out there (probably hand-rolled assembly, which isn't that uncommon) might use it.

An operation that's predictably, consistently wrong is in many ways better than one that's only inconsistently correct across a processor family that shares the same instruction set and is claimed to be compatible.

You can't go and just "fix" the behavior in the next version of silicon, because by then somebody might have used the 'broken' instruction and their code will work differently on a processor that allegedly implements the same instruction set. Or at least, if you do that, you have to be careful to document the change so that the behavior is predictable.

That is the real cost of maintaining the x86 CISC architecture, and I think Intel deserves some credit for managing it for so long. The silicon cost of the CISC preprocessor / translation layer isn't especially high, and with each new process generation it goes down in "cost". In other words, back in the 90s you could make a really strong argument for RISC vs. CISC because you freed up a lot of transistors, percentage-wise, by dumping all those weirdo instructions. But on a modern processor it's not that big of a deal, and it becomes less of a deal every year. Hence why CISC "won" on the desktop despite being arguably inferior architecturally. (Offer not valid on mobile platforms or in places where backwards-compatibility doesn't matter.)
posted by Kadin2048 at 1:51 PM on October 10, 2014 [6 favorites]


Not if symbolic math is implemented in the hardware, allowing values like "e^pi" to be returned as a complete answer at the hardware level.

If "at the hardware level" just means, like, "in the CPU registers," (as opposed to the FPU registers like this bug,) then this already happens all the time, just shaving a few binary integers out of your program's usable range so that they are now e, pi, Graham's number etc. I guess what you really want is some analogue to the FPU that does abstract algebra, meaning, I don't know, its opcodes are themselves registers?

Has someone done this for a lark perhaps?
posted by LogicalDash at 3:23 PM on October 10, 2014


The silicon cost of the CISC preprocessor / translation layer isn't especially high, and with each new process generation it goes down in "cost".

The cost in this case is locking down the hardware to the point where software is stagnating as well. There are two reasons the microcode is there -

1) Backwards compatibility with software the maintainers don't understand anymore. (This should be abstracted to the OS layer.)

2) Letting the "133+ h4Xx0rZ" pretend they can "code to the metal" rather than abstract it to a modern language/compiler design that knows what the fuck it's doing. (This should be laughed out of the industry.)

Neither is a particularly good excuse. Meanwhile, crusty, slow old SPARC is rolling out chips that can sniff out and snuff buffer overruns and accelerate database read operations exponentially. Larry Ellison is one of the Old Guard visionaries - his take on processor design should open some eyes in the next few iterations.
posted by Slap*Happy at 6:51 PM on October 10, 2014 [1 favorite]


Letting the "133+ h4Xx0rZ" pretend they can "code to the metal" rather than abstract it to a modern language/compiler design that knows what the fuck it's doing.

Yeah I recently exercised my 133+h4Xx0rZ skills on an embedded platform and my assembly code was only 100 times faster than the compiled C code I converted. I truly expected it to be 500 times faster, that shit is getting better all the time.
posted by localroger at 12:34 PM on October 11, 2014


Also, a quote I wish I'd said but I didn't and I don't remember who did: "If you don't know how to solve your problem with integer math, you don't know how to solve your problem."
posted by localroger at 2:57 PM on October 11, 2014


on an embedded platform

Which isn't what we're talking about, here.
posted by Slap*Happy at 3:35 PM on October 11, 2014


Also, dude? C? Really? 1968 C? What the fuck were you developing on?
posted by Slap*Happy at 8:54 PM on October 11, 2014


Slap*Happy: "Also, dude? C? Really? 1968 C? What the fuck were you developing on?"

Probably an embedded platform?
posted by pwnguin at 11:40 PM on October 11, 2014


The C compiler was GCC. Which, if I'm not mistaken, is the tool used to compile both Linux and OS X nearly in their entirety. If that's ca. 1968 then maybe Philip K. Dick was right that we're all reliving 70 A.D.
posted by localroger at 7:24 AM on October 12, 2014


Don't let him know Linux is written in C. He'll have a coronary.
posted by Talez at 8:35 AM on October 12, 2014 [1 favorite]


Devonian: "And do server-side GIS services for mobile apps do full trig? I thought they did Pythagorean distance estimates across matrices of cartesian co-ords, and swallowed the inaccuracy in the name of efficiency. But I really do not know, and really would like to!"

I'm only a novice at GIS applications, but this page about the history of PostGIS suggests they're using floating-point calculations in some places:
"indexes were re-done to use bounding boxes defined with 32-bit floats instead of 64-bit doubles. The experimental system was part of PostGIS as an option, but was not made the default geometry implementation until the 1.0 series."
The PostGIS dev manual has a Special Functions index with language suggesting floating point is occasionally relied upon. The reality of mapping political boundaries often includes difficult mathematical constructs, whether it be the 49th parallel or a twelve-mile arc centered on New Castle, Delaware. One use of PostGIS is as the official state record of parcels of land, and in contractual disputes it would be unfortunate if the most commonly used mapping software tools demanded efficient subsecond queries with no way to make a different tradeoff.
posted by pwnguin at 11:18 AM on October 12, 2014
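
The 32-bit-float versus 64-bit-double point is easy to make concrete. A quick sketch of what a round trip through a 32-bit float does to a longitude (fine for a coarse bounding-box index, not for a parcel boundary of record); the coordinate is made up:

    import struct

    lon = -122.41941550  # a longitude carrying sub-metre detail
    as_f32 = struct.unpack("<f", struct.pack("<f", lon))[0]  # round-trip through a 32-bit float

    print(as_f32)             # about -122.4194183: the low digits are gone
    err_deg = abs(as_f32 - lon)
    print(err_deg * 111320)   # rough degrees-to-metres: a few tenths of a metre lost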



