- Theories
Especially theories from when the computer first became commercially available to the public. What were some bizarre things people believed about these machines and what they do?
- Computers don’t make mistakes.
This is true only
in a very limited, technical sense. If you give a modern computer a list of
numbers to add up, for instance, it will do so correctly basically 100% of the
time. But most of the things we use computers for now involve making inferences
about something out in the world. This is an entirely different kind of
problem, and one in which computers make mistakes all the time.
In fact, not only do computers make a lot of the same mistakes that humans make (like overgeneralizing a rule and failing to recognize when it no longer applies), in many cases they actually make a lot more. It's a very big deal in AI research if you can build a complex image recognition algorithm that performs anywhere near human level, for instance.
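As a concrete (if toy) sketch of that distinction, consider a tiny nearest-neighbour classifier in plain Python with made-up data: every arithmetic step is carried out flawlessly, yet the inference it draws about something new in the world can still be wrong.

```python
# Toy 1-nearest-neighbour "classifier" on invented data (hypothetical example).
training = [
    ((0.5, 1.0), "cat"),    # (body weight, ear pointiness) -- made-up features
    ((0.6, 0.9), "cat"),
    ((30.0, 0.2), "dog"),
    ((25.0, 0.3), "dog"),
]

def classify(features):
    # The distance arithmetic is computed exactly, every time...
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda item: dist(item[0], features))[1]

# ...but a chihuahua (small and pointy-eared) is confidently labelled a cat.
print(classify((2.0, 0.9)))   # -> "cat"
```

The sums and squares never go wrong; the conclusion drawn from them does.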
Argument: do computers make mistakes?
They do not. Computers do exactly what they are told to do, nothing more, nothing less. If you do encounter a glitch or bug, it is likely the result of the humans who wrote the code for that software. (Related is the myth that most programmers write machine code, i.e. that we type literal 1s and 0s into PCs to write software.)
A computer cannot be wrong, it cannot misbehave, it cannot do anything on its own, and it is not smart; it is just a dumb piece of hardware that needs to be told every single thing, every single time. It does not learn from its mistakes.
- Hardware and software
There remains to this day a general misunderstanding of the line between hardware and software, one I find worrying because I've seen several very successful scams prey on it. People understand basic things, like for instance that you can't just
download extra RAM or a few more gigabytes of free space. But the things that
are really important from a development standpoint, like the relative retrieval
speeds between a remote source (The cloud), a local hard disk, system RAM, and
the processor cache, that's all gibberish to them. And unethical developers are
willing to exploit that ignorance to sell them on all manner of products that
any sophomore CS student who has completed their class in machine architecture
could tell you are bunk.
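To give a rough sense of scale, here is a back-of-the-envelope sketch of those relative speeds; the figures are generic order-of-magnitude assumptions, not measurements of any particular machine.

```python
# Rough, order-of-magnitude access latencies (ballpark assumptions only).
latency_ns = {
    "L1 processor cache": 1,
    "system RAM": 100,
    "local SSD read": 100_000,
    "spinning hard disk seek": 10_000_000,
    "round trip to 'the cloud'": 100_000_000,
}

cache = latency_ns["L1 processor cache"]
for source, ns in latency_ns.items():
    print(f"{source:28} ~{ns:>12,} ns  (~{ns // cache:,}x cache latency)")
```

The point is the span of roughly eight orders of magnitude between the cache and a remote server, a gap no downloadable "booster" product can bridge.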
- Cables
Another popular one is this weird belief people have regarding cables. People tend to jump at cables designed using methods that are good for analog signals. They're stiff, gold-plated, heavily shielded, and more often than not identical in performance to (or even worse than) a five-dollar one off Amazon. Your TV or monitor
only reads 0s and 1s. 0s and 1s cannot get fuzzy due to your microwave being
turned on. Having a fancy cable to move those 0s and 1s will not make colors
any brighter, or sound any sharper.
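A minimal sketch of why, assuming an idealized link where interference simply adds noise to the signal level: the receiver thresholds each sample back to an exact 0 or 1, so the data arrives bit-perfect until the noise exceeds the decision margin, at which point the link fails outright rather than getting gradually "fuzzier".

```python
import random

def send_digital(bits, noise):
    # Bits travel as nominal levels 0.0 / 1.0 plus random noise; the
    # receiver thresholds each sample back to an exact 0 or 1.
    received = [b + random.uniform(-noise, noise) for b in bits]
    return [1 if v > 0.5 else 0 for v in received]

bits = [random.randint(0, 1) for _ in range(10_000)]
for noise in (0.1, 0.3, 0.6):
    errors = sum(a != b for a, b in zip(bits, send_digital(bits, noise)))
    print(f"noise +/-{noise}: {errors} bit errors out of {len(bits)}")
```

Below the margin the cheap cable and the expensive one deliver the same bits; a pricier cable can at best move the cliff, not make the bits look better before it.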
- Story concerning Charles Babbage
There is a famous
story concerning Charles Babbage, before whom few realized that machines could
be built capable of even a fraction of the computational power of modern
devices. In his memoirs, Babbage described the sort of conversations he had
when discussing the functionality of his analytical engine design with his
peers:
On two occasions I have been asked, "Pray, Mr. Babbage,
if you put into the machine wrong figures, will the right answers come
out?" ... I am not able rightly to apprehend the kind of confusion of
ideas that could provoke such a question.
— Charles Babbage, Passages from the Life of a Philosopher
It's an understandable misconception. To such people, the
machine was a black box which spat out answers to problems. The Mechanical Turk, the best model they had for such a machine up to that time, had been known for
wiping all the chess pieces off the board if its opponent made an illegal move.
Even if they knew that it had been a human inside, they could be forgiven for
expecting that Babbage's machine might be designed to recognize and reject
invalid input.
Such misconceptions persist to the present day—even more so,
perhaps, now that software does do an excellent job of recognizing and
rejecting invalid input. There are people who see this input validation
occurring everywhere and think that this means that the software somehow knows
what they want to be done with their data. But the maxim remains as true today
as it ever has.
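A small, hypothetical example of the difference between rejecting invalid input and knowing what you meant:

```python
def read_age(text: str) -> int:
    # Validation can catch *malformed* input...
    age = int(text)                       # raises ValueError for "twelvty"
    if not 0 <= age <= 130:
        raise ValueError(f"{age} is not a plausible age")
    return age

# ...but it cannot catch *wrong* input: if you meant to type 34 and
# typed 43, the program happily computes with 43.
print(read_age("43"))
```

The software knows what a valid age looks like; it has no idea what your age actually is.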
- Computers are clairvoyant.
Computers are,
sadly, unable to predict the future. Even those running sophisticated,
specialized forecasting models aren’t 100% accurate.
- Monitors are not the problem.
People are often
frustrated when their computers crash or do other unexpected things. Even I get
frustrated when things go wrong, especially if it’s not the direct result of
something that I did. But being frustrated is no excuse for the things that
many say and do to their completely innocent computer monitors. Users yell in frustration at their screens when a program crashes, a file is lost, or the network goes down. Meanwhile, the real culprit often sits nearby, blinking
its hard drive light in sadistic glee as its peripheral receives all of the
abuse.
- Computers are smart.
Computers are not
smart. Computers are incredibly dumb. They do exactly what they’re told to do,
even when the instructions are so obviously wrong that they could not possibly
be intentional. Computers have no intuition, they do not refer to previous
experience, and they can’t really interpret meaning. They just follow orders,
regardless of how dumb those orders are.
They are very good at doing exactly what you tell them to, to the letter. This is also why programmers have to use very strict, precise language. Computers seem smart because they can repeat what we tell them many times over and very fast, but they are unable to figure out whether what we told them was wrong. If you write a program that adds two numbers and always produces the result 1, the computer will run it. It won't be bothered that 142 and 651 do not add up to 1.
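A minimal illustration of that last point:

```python
def add(a: int, b: int) -> int:
    # An obviously wrong "addition" -- the computer runs it anyway, with
    # no sense that the answer should have been a + b.
    return 1

print(add(142, 651))   # prints 1, not 793
```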
We’re making
progress - the Curiosity rover on Mars has sophisticated software that prevents
it from falling off of cliffs and the like. But Stuxnet convinced nuclear
centrifuges to spin so fast they tore themselves apart. That’s because most
computers, including the ones running those centrifuges, are not smart.
- Programmers know binary
I’ve heard this
from a few (clearly non-techy) people, probably because everyone hears that
computers run on 1s and 0s, so they assume that’s what programmers must be
typing. Programmers basically just write in a version of English that’s
customized to make what we’re trying to do easier. It’s not that crazy. Anyone
could code without having to know binary.
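For instance, here is a trivial Python function and, via the standard dis module, the lower-level bytecode the interpreter actually runs. The programmer only ever writes the readable part; the rest is generated automatically.

```python
import dis

def greet(name):
    # The programmer writes something close to English...
    return "Hello, " + name

# ...and the toolchain translates it down toward the machine level.
# dis prints the bytecode CPython executes; nobody typed these opcodes
# (let alone raw 1s and 0s) by hand.
dis.dis(greet)
```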
- That
Apple is trying to eradicate free software. (Not even Microsoft is trying
to do this anymore.)
It’s not a secret
that Richard Stallman has never had much love for Apple, or Steve Jobs
specifically. His response to Jobs’ death was more befitting a hated dictator
than the tech industry’s Number 1 Beloved Asshole, but the FSF’s vendetta
against Apple (partially, but ostensibly totally, about look-and-feel lawsuits)
was at its height during John Sculley’s era. I have this whole spiel about what
I think Stallman’s real problem with Apple is, but it’s off-topic for this
question (tl;dw Jobs and Stallman were both major supporters of democratization
of computing power, but had wildly different ideas about what that means; Jobs
was a pragmatist, Stallman is a CS purist).
This brings us to
the free software community’s slightly demented response to the LLVM project.
The FSF’s de facto flagship project for years has been the GNU Compiler
Collection, a suite of tools for building programs from source code and for a
long time the standard for Apple’s platforms, licensed under the GNU General Public
License (short form: do what you wish with the code, but if you distribute your
changed program, you have to distribute your changes to all comers as well).
Version 2 of the license, which the Linux kernel still uses, has been found
mostly acceptable to industry, but version 3 introduced some significant
changes (most surrounding “Tivoization”, or the use of code in a hardware
product where it can’t be changed by the user, although there was apparently
issues over the extremely baroque dialect of legalese that it was * in as well)
that sent many corporate GNU contributors screaming for the doors. Apple’s
history with GNU went back to Steve Jobs’ days at NeXT, where Jobs’ team,
having recently taken over the Objective-C language from its creator Brad Cox,
had asked if it could make its version of Objective-C, based on GCC,
closed-source, and the FSF’s lawyers (but not Stallman himself) said probably
not; although Stallman and fans seem to think this was a particularly
acrimonious transaction and the start of an alleged grudge on Jobs’ part, it
seems to have been a relatively drama-free exchange based on
what little I know about it, and NeXT and later Apple became a major
contributor to GCC development. Until GPLv3.
What wound up
happening is that Apple’s fork of GCC, based on the last GPLv2 version, started
falling behind the original. Apple bailed on GCC development when the license
changed, and started looking for alternatives; they found Chris Lattner’s LLVM
project, hired Lattner, and started pumping big money into it, the end result
being the Clang compiler used by Apple, Google, FreeBSD, and a number of
others, as well as the Swift and Rust languages built on the LLVM back end.
It’s also licensed under the Apache license. This is the sticking point. While
Apache is “free” under Stallman’s definition, it doesn’t require modified
versions to contribute their changes back to the user pool. For quite some
time, Stallman had been using the GPL as leverage over the free/open source
software world in general, based on the assumption that as long as it was the
main game in town for FOSS compiler development, no one could create a private
version that left it in the dust. However, GCC from a technical standpoint is a
mess; its code base is a masterpiece of organizational obfuscation, until
recently not allowing plug-ins for fear of proprietary code slipping into the
mix. It changed this in response to LLVM, which was designed from the beginning
to be highly modular, and has rapidly overtaken GCC as a base for much compiler
research.
Long story short, GCC is losing the battle on many fronts. Stallmanites want to
blame Apple for pushing an inferior license as a trojan horse for creeping
proprietariness, but as it turns out, it isn’t in Apple’s or any other
company’s interest to do so. Coders want free/open source tools, and Lattner
himself is committed to keeping LLVM and its satellite projects, at least those
under Apple’s umbrella, free and open source. The ugly truth that Stallman et
al don’t want to face is that GCC is losing not because of the corporate
world’s resistance to FOSS, but because of the FSF’s refusal on licensing
grounds to allow programmers to create the tools they need using GNU software.
(Stallman’s reluctance to allow support for LLDB in GNU Emacs is another
example that left a lot of people shaking their heads.) The changes to GCC’s
code base to allow plug-ins were too little, too late in the eyes of many
users, and not everyone in the FOSS movement approves of the FSF’s extremism to begin with; many prefer a more permissive form of licensing than the GPL. Along
comes LLVM, which is everything compiler designers wanted from GCC, and they’re
pretty indifferent to the GPL vs Apache question; as long as they can get
access to and easily understand and work with the code, it doesn’t matter what
license it’s under.
The worst part of
all this? The FSF could easily create an LLVM fork, relicense it under GPLv3, and use it as the base for GCC 6.x if they wished; the Apache license allows this,
and there’s already a GCC front end for LLVM called DragonEgg. But no one would
use it, and it would suffer a fate similar to OpenOffice.org, which is in a
state of advanced bit rot because most of its developers went with the
LibreOffice fork rather than the Apache Foundation, to which Oracle ultimately gave the code. The economics of gift economies are such that there’s often little
point in duplication of effort; unless project leadership is completely
intractable (as the LibreOffice forkers feared Oracle would be), a fork means
diverting worker power solely for political reasons, meaning neither project
advances as fast as it could had the fork never happened. All these subtleties
are lost on people who see the world in an us vs. them mentality and act as if
compromise means capitulation. It may be that Apple’s relationship to FOSS is the same as the relationship of big-game ranchers to wildlife preservation efforts, but shady or not, progress is progress.
(As a side note, this mentality isn’t limited to Apple.
There are a few within the free software side of the community who consider the
Raspberry Pi organization to be a front group for Broadcom marketing and/or
lying about their commitment to free software, due largely to the binary blob
driver issue and the inclusion of software like Mathematica in the Raspbian
distro. And Miguel de Icaza, creator of Gnome and Mono, who is in extreme
disfavor with Stallman et al for his work with Microsoft technologies, is
probably directly responsible for one of the biggest victories for free/open
source software ever, the open sourcing of large parts of .NET and Microsoft’s
compiler technology. The free software side of the community is so blighted by
pessimism and paranoia (cough*royschestowitz*cough*bradleykuhn*cough) that they
can’t tell the difference between victory and defeat.)