Hardware

Photo via Wikipedia
Lesson Outline

✅ What we’ll cover


After completing this lesson, you’ll be able to:

  • Define essential terms related to computer hardware
  • Describe in broad strokes what different pieces of computer hardware do and how they do it
  • Construct a mental model of how a computer stores data

We’re going to start our discussion of the building blocks of new media with hardware for one simple reason: hardware is tangible. Unlike so many things we’ll discuss in this course, you can see and touch hardware.

Of course, hardware without software doesn’t make much sense, so we’ll occasionally have to peek around the corner at the next lesson and talk about some basic software concepts in this lesson. If anything we talk about with software here doesn’t make complete sense yet, do your best to trust that things will get disambiguated in the next lesson.

To keep things simple, we’ll talk about the hardware that makes computers tick by sorting things into three broad categories: data, processing, and inputs and outputs.

Let’s dive right in.

Data


What is data?


We’re going to start with data because, as we’ll see, it’s really the whole point of everything else in a computer.

📖 Stanford CS 101 on Bits and Bytes

(674 words / 4-6 minutes)

Digital data is what we call information that computers can understand. It seems both obvious and crazy to say it this plainly, but computers are just clever groupings of billions of very simple, very tiny machines that can perform operations really, really quickly. That speed is what makes them so powerful, but for us to make any use of that power, we need to get our information (data) into a format that computers (machines) can work with.

The basic unit of digital data is the bit, or binary digit1—a switch that is either flipped on or off, represented as a 0 or a 1. (In computer science, you start counting at 0. If you asked a computer to count three apples, it would count “0, 1, 2.” It understands that there are three things; it just starts at 0.)

How do computers physically store data?


There are a bunch of different ways you can store these bits: we used to use punch cards (there is / isn’t a hole). Then, we moved on to magnetic storage (magnet’s polarity is north / south). This was the basis for most computer storage for a long time, including mechanical hard disk drives (HDDs), sometimes called spinning disks because the platters inside them spun around very quickly while the data on them was read by a floating head, not unlike a record player.2

Thankfully, we’ve mostly moved on to solid state drives (SSDs, or flash memory), which use integrated circuits to store data electrically, making them much faster, quieter, and more durable than traditional HDDs, though they’re still a fair bit more expensive3.

How do we refer to quantities of data?


Anyway, regardless of the medium in which you store it, digital data at the end of the day is just a bunch of 0s and 1s. However, it turns out that a single bit all alone isn’t really all that useful. For a variety of historical reasons, we typically group eight bits together into a byte. Let’s pause for a bit4 before we go on and use this new knowledge to resolve that pesky old “megabits / megabytes” confusion:

1 Byte = 8 bits
Kilobyte = KB = 1,000 bytes (8,000 bits)
Megabyte = MB = 1,000,000 bytes (8,000,000 bits)
Gigabyte = GB = 1,000,000,000 bytes (8,000,000,000 bits)
Terabyte = TB = 1,000,000,000,000 bytes (8,000,000,000,000 bits)

Got it? If not, you can play around more here or here.
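
If it helps to see that arithmetic spelled out, here’s a small sketch (in Python; not part of the required readings) of the byte-to-bit conversions above, and of why a “100 megabit” internet connection doesn’t download 100 megabytes per second:

    # Bytes vs. bits: file sizes are usually quoted in bytes, network speeds in bits.
    BITS_PER_BYTE = 8

    def megabytes_to_megabits(megabytes):
        return megabytes * BITS_PER_BYTE

    def download_time_seconds(file_size_mb, connection_mbps):
        # File size in megaBYTES, connection speed in megaBITS per second.
        return megabytes_to_megabits(file_size_mb) / connection_mbps

    print(megabytes_to_megabits(1))         # 8   -> 1 megabyte = 8 megabits
    print(download_time_seconds(100, 100))  # 8.0 -> a 100 MB file takes ~8 seconds on a "100 Mbps" line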

How do you store information in binary data?


How do you take groups of eight bits that are either 0s or 1s and turn them into meaningful information? You make up some rules!5 These rules, made up by human beings just like you and me, are called formats or standards.

Here’s an example6:

Say your name is Scott, and you wanted to store your name in a computer. You could use the ASCII (American Standard Code for Information Interchange) standard to do so:

Scott
 S = 83, c = 99, o = 111, t = 116
 83 99 111 116 116
 01010011:01100011:01101111:01110100:01110100
 5 bytes = 40 bits

Those 40 bits would be stored as a series of positive or negative magnetic charges (on a traditional HDD) or electrical charges (on an SSD) and would take up 5 bytes of space on your disk.
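
If you’d like to see a machine do that mapping for you, here’s a tiny Python sketch (mine, not part of the original example) that reproduces the table above using the built-in ord() function, which returns a character’s ASCII / Unicode code:

    name = "Scott"
    codes = [ord(ch) for ch in name]                 # the ASCII number for each character
    bits = [format(code, "08b") for code in codes]   # each number written out as an 8-bit byte

    print(codes)  # [83, 99, 111, 116, 116]
    print(bits)   # ['01010011', '01100011', '01101111', '01110100', '01110100']
    print(len(name), "bytes =", len(name) * 8, "bits")  # 5 bytes = 40 bits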

With just a little bit of imagination, you can see how you could store any other type of information in a computer. Want to store a picture? Create a standard to define a grid of a given size, and then a way to define the combination of red, green, and blue color values in each square on that grid, and congratulations—you’ve invented a bitmap file (or a JPEG, or a GIF, or whatever—all twists on the same idea)!
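
As a rough illustration (not any particular real file format), here’s what that “grid of color values” idea might look like for a tiny 2 x 2 image in Python, with each color channel stored as one byte from 0 to 255:

    # A made-up 2 x 2 "image": each pixel is a (red, green, blue) triple of byte values.
    image = [
        [(255, 0, 0), (0, 255, 0)],      # top row: one red pixel, one green pixel
        [(0, 0, 255), (255, 255, 255)],  # bottom row: one blue pixel, one white pixel
    ]

    pixel_count = sum(len(row) for row in image)
    print(pixel_count, "pixels x 3 bytes each =", pixel_count * 3, "bytes")  # 4 pixels -> 12 bytes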

Of course, it’s all a bit more complicated than that, but hopefully you’re now starting to conceptualize the general principle of how to make data machine readable.

A note on “Required reading just the first part” readings


The links in the following sections (CPU, GPU, etc.) are a slightly different type of required reading. First, note that they’re formatted a little differently than normal Required readings, which you are required to read from start to finish, and which look like this:

📖 Reading Title, Author names

(XXX words / XX-XX minutes)

“Required reading just the first part” readings, by contrast, look like this:

📄 Intro only

📄 Example “required reading just the first part” title

(Word count / estimated reading time)

Note the following two differences:

  • “📄 Intro only” pre-heading
  • Single-page emoji (📄) vs. book emoji (📖)

For these readings, you’re still required to click through to the linked pages, but you’re only required to read the main description / introduction (or the noted sections) for any of them.7

If you’re a moderately curious person, you’ll probably find yourself scanning the rest of each linked page to get a general sense of the structure of the topic, and you’re of course welcome (and encouraged) to read some or all of the rest of it. But, if you’re pressed for time or simply not that interested in the topic at hand, again, all that you’re required to read is the main description / introduction.

Also, now’s probably the time to get used to reading this course with multiple tabs open at the same time. I’d suggest leaving this tab open the whole time you’re reading this lesson, opening a new tab as you get to each section, and then closing each newly-opened tab as you’re done with it.8

Processing


Okay, so once you have all that wonderful digital data, you want to be able to do stuff with it—edit your pictures, input new data to your spreadsheets, etc. Of course, all of this requires software, too, but for now, let’s focus on the hardware components that process data.

What is a CPU?


📄 Intro only

📄 Central Processing Unit, Wikipedia

(276 words / 2-4 minutes)

To use an imprecise-but-still-helpful metaphor, the Central Processing Unit, or CPU, is the brain of the computer—it’s what does most of the general-purpose computing in a computer. Read this excerpt from “What is Code?”9:

A computer is a clock with benefits. They all work the same, doing second-grade math, one step at a time: Tick, take a number and put it in box one. Tick, take another number, put it in box two. Tick, operate (an operation might be addition or subtraction) on those two numbers and put the resulting number in box one. Tick, check if the result is zero, and if it is, go to some other box and follow a new set of instructions.

You, using a pen and paper, can do anything a computer can; you just can’t do those things billions of times per second. And those billions of tiny operations add up. They can cause a phone to boop, elevate an elevator, or redirect a missile. That raw speed makes it possible to pull off not one but multiple sleights of hand, card tricks on top of card tricks.

In the above example, the CPU is the clock. That’s why you’ll sometimes hear people talk about “clock speed” when they talk about CPUs. CPU speed is currently measured in gigahertz (GHz), meaning that modern CPUs can perform billions of operations every second.
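
To make the “clock with benefits” idea a bit more concrete, here’s a toy Python sketch (mine, and wildly simplified compared to a real CPU) of a machine that works through a list of instructions one tick at a time, using two “boxes” to hold numbers:

    box1, box2 = 0, 0

    program = [
        ("put_in_box1", 7),  # tick: take a number and put it in box one
        ("put_in_box2", 5),  # tick: take another number and put it in box two
        ("subtract",),       # tick: operate on the two numbers; the result goes in box one
        ("check_zero",),     # tick: check whether the result is zero
    ]

    for instruction in program:
        op = instruction[0]
        if op == "put_in_box1":
            box1 = instruction[1]
        elif op == "put_in_box2":
            box2 = instruction[1]
        elif op == "subtract":
            box1 = box1 - box2
        elif op == "check_zero":
            print("box one is zero" if box1 == 0 else f"box one holds {box1}")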

What is a CPU core, and why are most modern CPUs multi-core?


Not only that, but most modern CPUs are multi-core, meaning that each CPU is actually two—or three, or four, or six or eight, or more—CPUs, or cores, on a single chip. Why? The actual answers are pretty complex, but there are two main ones.

First, as CPUs run faster and faster, they create more and more heat. Most modern CPUs top out somewhere around 3GHz in speed even though we’ve had the technology to make much faster chips for a while now. It just turns out that doing so turns your laptop (or tablet, or phone, or whatever) into a small griddle, and that’s not terribly pleasant.10

Second, we’ve figured out how to manufacture transistors at smaller and smaller scales, now measured in nanometers. This has allowed us to actually fit those multiple cores on a single chip, and it turns out that doing so leads to two major benefits. First, we can continue to increase computing power with much lower increases in heat output. Second, these increases in computing power also come at a lower power consumption cost, which is especially relevant in the age of battery-powered devices like phones and laptops.11, 12
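
To get a feel for what extra cores buy you, here’s a minimal Python sketch using the standard multiprocessing module. The function and data are placeholders I made up, but the pattern (splitting work across however many cores the machine reports) is the general idea behind multi-core speedups:

    from multiprocessing import Pool, cpu_count

    def count_vowels(text):
        # A stand-in for any CPU-heavy chunk of work.
        return sum(ch in "aeiou" for ch in text.lower())

    if __name__ == "__main__":
        chunks = ["Some placeholder text to process"] * 8   # pretend these are big jobs
        with Pool(processes=cpu_count()) as pool:            # one worker process per core
            results = pool.map(count_vowels, chunks)         # chunks are processed in parallel
        print(cpu_count(), "cores available; results:", results)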

There’s a lot more we could say about CPUs, but we’ll leave it at that for now.

What is a GPU?


📄 Intro only

📄 Graphics Processing Unit, Wikipedia

(417 words / 3-5 minutes)

Graphics Processing Units, or GPUs, were originally designed just for improving computer systems’ graphics (especially 3D) performance. It turns out that the kind of math you need to do to create 3D graphics is really tough on CPUs and can be handled much more efficiently by very parallel, purpose-built tools. Hence, GPUs.

For a while, the only folks who cared about GPUs were 3D artists and gamers. But, as we’ve gotten better at designing software that can take advantage of GPUs’ parallel computing powers (and as 3D effects have found their way into more everyday computing), we’ve figured out how to leverage GPUs for non-graphics-related tasks13, leading to the rise of GPGPU, or general purpose computing on graphics processing units.
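
Here’s a rough sketch of why that parallelism matters: graphics (and GPGPU) workloads mostly apply the same simple operation to millions of numbers at once. The Python example below uses NumPy on the CPU just to illustrate the data-parallel shape of the work; an actual GPU library (CUDA, Metal, and so on) would spread the same kind of operation across thousands of GPU cores:

    import numpy as np

    # One brightness value per pixel of a 1080p frame: about 2 million numbers.
    brightness = np.random.rand(1920 * 1080)

    # A single, simple operation applied to every value at once: exactly the kind of
    # "same math, huge amounts of data" workload that GPUs are built to parallelize.
    dimmed = brightness * 0.5

    print(dimmed.size, "pixel values processed")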

What is an SoC?

📄 Intro only

📄 System on a Chip, Wikipedia

(340 words / 2-4 minutes)

The rise of Systems on Chips, or SoCs, has coincided with the rise of mobile electronics. As the devices we use have gotten smaller and smaller, integrating entire computing systems on a single chip has enabled the necessary reductions in size and has also led to efficiency gains.

What is RAM?


📄 Intro only

📄 Random-access memory, Wikipedia

(262 words / 2-4 minutes)

If a CPU is a computer’s brain, Random-access memory, or RAM, is its working memory. RAM stores information only while a device is powered on. It’s the fastest possible tool a computer has at its disposal for reading and writing data. The amount of RAM in a device affects its ability to multi-task, as well as a few other performance metrics.

Input devices


We now know about data, and we know, more or less, how computers work with that data. Now let’s talk about how we interact with computers—how we tell them to do things, how we see what they’re doing, and how we connect things to them.

Keyboards, mice, and touch—those are pretty easy, right?


At least conceptually, yes! In fact, you don’t even need to read anything about these topics! (Though if you want to, there’s plenty to say—each of these input methods has its own fascinating history and design considerations.)

Instead, I want to just share one more sneak peek at “What is Code?” to help you think about how keyboards (and, with a little imagination on your part, other input devices) interact with the software that runs a computer:

Consider what happens when you strike a key on your keyboard. Say a lowercase “a.” The keyboard is waiting for you to press a key, or release one; it’s constantly scanning to see what keys are pressed down. Hitting the key sends a scancode.
… Every key makes a code. The computer interprets these codes. There are many steps between pressing the “a” key and seeing an “a” on the screen.
Just as the keyboard is waiting for a key to be pressed, the computer is waiting for a signal from the keyboard. When one comes down the pike, the computer interprets it and passes it farther into its own interior. “Here’s what the keyboard just received—do with this what you will.”
It’s simple now, right? The computer just goes to some table, figures out that the signal corresponds to the letter “a,” and puts it on screen. Of course not—too easy. Computers are machines. They don’t know what a screen or an “a” are. To put the “a” on the screen, your computer has to pull the image of the “a” out of its memory as part of a font, an “a” made up of lines and circles. It has to take these lines and circles and render them in a little box of pixels in the part of its memory that manages the screen. So far we have at least three representations of one letter: the signal from the keyboard; the version in memory; and the lines-and-circles version sketched on the screen. We haven’t even considered how to store it, or what happens to the letters to the left and the right when you insert an “a” in the middle of a sentence. Or what “lines and circles” mean when reduced to binary data. There are surprisingly many ways to represent a simple “a.” It’s amazing any of it works at all.
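
Here’s a deliberately tiny Python sketch of the “computer just goes to some table” step described above. The scancode values are made up for illustration; real keyboards and operating systems use larger, messier tables (and many more steps):

    # Hypothetical scancode-to-character table (real scancodes differ by keyboard and OS).
    SCANCODE_TO_CHAR = {30: "a", 31: "s", 32: "d", 33: "f"}

    def handle_key(scancode):
        char = SCANCODE_TO_CHAR.get(scancode, "?")
        print(f"keyboard sent scancode {scancode} -> software looks up the letter '{char}'")

    handle_key(30)  # keyboard sent scancode 30 -> software looks up the letter 'a'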

Display technologies


What is a pixel?


📄 Intro only

📄 Pixel, Wikipedia

(245 words / 2-3 minutes)

A pixel, or picture element, is the smallest addressable part of a digital display or digital raster image.

What is a Retina Display and where does the term come from?


📄 Intro only

📄 Retina Display, Wikipedia

(488 words / 3-4 minutes)

Retina Display is Apple’s marketing term for a high-PPI (pixels per inch) display with a pixel density high enough that the human eye can’t distinguish individual pixels at a typical viewing distance.
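
Pixels per inch is just the number of pixels along the screen’s diagonal divided by the diagonal’s length in inches. Here’s a quick Python sketch of that arithmetic; the example screen sizes are illustrative rather than specific products:

    import math

    def ppi(width_px, height_px, diagonal_inches):
        diagonal_px = math.sqrt(width_px ** 2 + height_px ** 2)
        return diagonal_px / diagonal_inches

    print(round(ppi(2532, 1170, 6.1)))  # a phone-sized screen: roughly 457 PPI
    print(round(ppi(1920, 1080, 24)))   # a 24-inch desktop monitor: roughly 92 PPI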

What is a CRT screen?


📄 Intro only

📄 Cathode ray tube, Wikipedia

(549 words / 3-5 minutes)

Cathode ray tube displays are the old-style displays in televisions and computer monitors that preceded modern flat-panel displays. They are much deeper and heavier than corresponding flat-panel displays. Read the linked article for an explanation of the truly wild way CRT displays work.

What is an LCD screen, and how does one work?


📄 Intro only

📄 Liquid-crystal display, Wikipedia

(317 words / 2-3 minutes)

Liquid-crystal displays are the dominant display technology of our time, present in TVs, laptops, smart phones, and more. LCDs use electricity along with the “light-modulating properties of liquid crystals combined with polarizers” to hide or show monochrome light.

What is an LED?


📄 Intro only

📄 LED, Wikipedia

(382 words / 2-3 minutes)

A light-emitting diode “is a semiconductor device that emits light when current flows through it.” LEDs can function as individual status lights, as light sources such as light bulbs, or, most importantly in the context of display technologies, as backlights for LCD displays.

What is an OLED screen?


📄 Intro only

📄 OLED, Wikipedia

(475 words / 3-5 minutes)

An organic light-emitting diode “is a light-emitting diode (LED) in which the emissive electroluminescent layer is a film of organic compound that emits light in response to an electric current… An OLED display works without a backlight because it emits its own visible light. Thus, it can display deep black levels and can be thinner and lighter than a liquid crystal display (LCD).”

The display advantages of OLED (deeper blacks and thus higher apparent contrast ratio, which means the image “pops” more, and ability to be thinner and lighter) are offset by the higher price of OLED and increased risk of burn-in on the display.

OLEDs are becoming the display technology of choice for high-end smartphones, TVs, and smart watches, among other devices.

Video + audio connectors


What is HDMI?

📄 Intro only

📄 HDMI, Wikipedia

(274 words / 2-4 minutes)

High-Definition Multimedia Interface is a display standard (and connector) for transmitting audio and video from one device to another over a single cable.

What’s 1080p?


1080p video, also known as Full HD (high-definition) video, consists of images of 1,920 pixels of horizontal resolution by 1,080 pixels of vertical resolution.

How about 4K?


Though 4K resolution can actually refer to a few different specific formats, in general, 4K video has the same resolution as a 2 x 2 grid of four 1080p HD images, or about 3840 x 2160. It’s called 4K video because each image has roughly 4,000 pixels of horizontal resolution.
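
The arithmetic behind the “four 1080p images” claim is easy to check:

    full_hd = 1920 * 1080  # pixels in one 1080p frame
    uhd_4k = 3840 * 2160   # pixels in one common 4K frame

    print(full_hd)           # 2073600 (about 2 megapixels)
    print(uhd_4k)            # 8294400 (about 8 megapixels)
    print(uhd_4k / full_hd)  # 4.0: exactly four 1080p frames' worth of pixels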

What about headphone jacks?


📄 Intro only

📄 Phone connector (audio), Wikipedia

(162 words / 1-2 minutes)

Thankfully, they’re still a thing, though largely on laptops and desktops and in dedicated audio production equipment. One of the few analog (vs. digital) interfaces we still use today!

Network interfaces


These are super-important, but we’re going to save the details for our lesson on networks. For now, just think about the fact that computers really become interesting, fun, and powerful if you have a way to connect them to each other. If you’re really curious, you can read articles on Ethernet, Wifi, cellular data connections like LTE (4G) and 5G, and the differences between a router and a modem.

Data transfer


Boy, USB sure is confusing, huh?


📄 Intro, Overview – Connector type quick reference, and History only

📄 USB, Wikipedia

(1805 words / 10-12 minutes)

It is! And ironically, it was designed to make things simpler! Thankfully, things are getting better, but we’re not out of the woods yet.

What about the different shapes of USB?


Look through the chart in the above reading to understand the various shapes, but in short:

  • USB A: The dumb old one that you could only plug in one way.
  • Mini USB: The really dumb small one that you could only plug in one way; it was only popular for a while.
  • Micro USB: The slightly less dumb small one (still can only plug in one way!) that somehow is still on modern devices.
  • USB B: The weird one you mostly plug into printers and scanners.
  • USB C: The good one! Reversible and small. Good job, USB people!

And the different speeds of USB?


  • USB 1: Original flavor. Sooooo slow, but pretty good for when it was introduced.
  • USB 2: Kinda better! Still not good, but you can survive here.
  • USB 3: The blue one! Most USB stuff today is USB 3.
  • USB 4: The future! Based on Thunderbolt (more on that in a minute), but more expensive than USB 3 to manufacture, so still not widespread.

Wait, USB’s used for power now, too?


Yup! A lot of things in USB PD (USB power delivery) are more confusing than they should be, but things are getting better here, too.

And now Thunderbolt’s USB-C, too? (And what’s Thunderbolt?)


📄 Intro only

📄 Thunderbolt, Wikipedia

(109 words / 1-2 minutes)

Thunderbolt is a high-speed data interface that never really took off outside of folks with high-end data needs, but the technology from its latest versions (Thunderbolt 3 and 4) is being rolled into USB 4, so hopefully we’ll see wider adoption.

What is Lightning?


📄 Intro only

📄 Lightning, Wikipedia

(119 words / 1-2 minutes)

Lightning is Apple’s proprietary cable for iPhones, iPads, and some accessories (mice, keyboards, etc.). It’s a well-designed cable (small, reversible) and was introduced before USB-C was, but since the emergence and widespread adoption of USB-C, Apple’s continued use of Lightning has become less defensible.

The European Union has mandated that all consumer electronics sold starting in 2024 must support USB-C, and it’s widely rumored that the 2023 iPhones (iPhone 15 and iPhone 15 Pro) will adopt USB-C, as many iPads already have.

What is Bluetooth?


📄 Intro only

📄 Bluetooth, Wikipedia

(203 words / 2-3 minutes)

Bluetooth is a low-power, low-distance (typically less than 30 feet in real-world usage) wireless protocol used for wireless headphones, smart watches, smart home devices, and more.

Three rules of hardware


Over time, computer hardware becomes:

  1. Smaller and smaller, to the point where a device’s size is dictated by human factors (the size of our hands, etc.), not hardware requirements
  2. Mechanically simpler, containing fewer moving parts. The fewer things that move, the more reliable the devices.
  3. Faster and/or more efficient. Though it theoretically has a maximum limit, this rule of hardware has been described as Moore’s law (named after a co-founder of Intel), which specifically says that devices’ computing power will double every 18-24 months. (A quick sketch of that math follows this list.)
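
To put some rough numbers on rule 3, here’s a back-of-the-envelope Python sketch of how quickly capability grows if it doubles every 18-24 months (illustrative math only, not a prediction):

    def growth_factor(years, doubling_period_years):
        return 2 ** (years / doubling_period_years)

    print(round(growth_factor(10, 2.0)))  # 32  -> ~32x growth over a decade with a 2-year doubling period
    print(round(growth_factor(10, 1.5)))  # 102 -> ~102x growth if things double every 18 months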

A few comments on the three rules of hardware

Discussion questions


  • What are some of your earliest memories of electronics hardware? Devices / things you loved? Hated?
  • Does your current computer use a traditional HDD (spinning disk) or SSD?
  • Real talk: does the concept of digital data really, truly make sense to you? If not (and that’s okay—this is complicated stuff!), talk about what you’re confused about, and see if anyone in your discussion group can help clear things up. (If you’re all still confused, just bring me into the conversation with a mention! (You can do this any time, by the way—not just on this question / topic!))
  • Did you read past the first part on any of the “Required reading just the first part” readings? If so, what did you learn?
  • Do you remember the speed of the CPU in your first computer? If so, how does that compare to your current computer?
  • How many cores does your current computer have?
  • One of our excerpts from “What is code?” concludes with “It’s amazing any of it works at all.” After reading this lesson, do you agree?

Words on / reading time for this page: 2,952 words / 15-20 minutes

Words in / reading time for required readings: 5,917 words / 37-59 minutes

Total words in / reading time for this lesson: 8,631 words / 52-79 minutes


  1. It can be helpful to remember this when distinguishing bits from bytes, which we’ll get to in a second.

  2. You have one of these if your computer makes noise when it launches an application or opens a file. If you’re really old like me, you actually used to get concerned when your computer stopped making noise when it was on.

  3. Though that price gap is shrinking rapidly

  4. (Ha!)

  5. Seriously!

  6. Cribbed from the NMI’s founder and the instructor for the previous version of this course, Dr. Scott Shamp

  7. You’ll also probably want to at least look at the pictures to understand what these things look like, too.

  8. Admittedly, this is easier done on a laptop/desktop/tablet than on mobile, but even on mobile, it works pretty well.

  9. With which we’ll be spending much more time next lesson

  10. It’s for this reason that high-end desktop gaming computers, which want to use the fastest possible chips no matter what, are sometimes water-cooled. It’s also why data centers are often built in cooler climates near water sources.

  11. PPW—performance per watt—is now an important metric.

  12. Some mobile CPUs are even designed to have both high-powered cores that can be switched on for processor-intensive tasks (and switched off when they’re finished) and low-powered cores that take care of simpler / background operations.

  13. Like machine learning, for example