Blog Post - Everything Is Abstract!

Intro

I have been interested in understanding how computers work since I was a child, so I have spent a lot of time researching the topic.

At first, it was writing simple Processing applications in middle school (check it out if you want a beginner-friendly tool for writing code that does visual stuff; it’s great). This gave me a deeper understanding of how computers work. If you want to see something on the screen, you have to write an algorithm that decides what to show you, and where to show it. To draw a line, you call a function that takes two points and, optionally, a colour. The algorithm iterates over the pixels between the two points and changes the colour of each one to the newly supplied colour. With this simple tool, you can draw any (approximate) shape you want, and with some added logic, you can make the shapes move, interact with one another, or do anything, really.
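To make the idea concrete, here is a naive sketch of that line-drawing algorithm in Python (my own illustration, not Processing's actual implementation, which uses smarter rasterisation): interpolate between the two points and colour each pixel we land on.

```python
def draw_line(canvas, x0, y0, x1, y1, colour):
    # Take enough steps to touch every pixel between the two points.
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    for i in range(steps + 1):
        t = i / steps
        x = round(x0 + (x1 - x0) * t)  # linear interpolation along x
        y = round(y0 + (y1 - y0) * t)  # ...and along y
        canvas[y][x] = colour          # change that pixel's colour

# A tiny 5x5 "screen" of background pixels.
canvas = [['.' for _ in range(5)] for _ in range(5)]
draw_line(canvas, 0, 0, 4, 4, '#')   # diagonal from corner to corner
print('\n'.join(''.join(row) for row in canvas))
```

Running this prints a diagonal of `#` characters: the "line" is just a handful of individually recoloured pixels, which is exactly the kind of detail abstraction lets you forget about.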

This was mind-blowing to 12-year-old me. Everything I did on my computer had someone writing this sort of complex logic behind it? Every time I clicked a button in Internet Explorer, in a game, or anywhere else, something this complex was happening under the hood, and it was so seamless that I never had to think about it.

This is the concept of “abstraction”. Abstraction builds models that generalise other concepts, reducing the cognitive load required to interact with them. We can therefore “abstract” some ideas to make them more digestible and more easily understood. In the context of an application that lets users drag a box across the screen, “abstraction” means thinking about dragging the box, instead of thinking about continuously redrawing that box, every frame, at the location of the cursor.
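The two levels can be sketched in a few lines of Python (all names here are hypothetical, not any real GUI toolkit): the high level thinks "drag the box along the cursor's path", while the low level does the per-frame repainting.

```python
class Box:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.redraws = 0  # count how often the low level repaints

    def redraw_at(self, x, y):
        # Low level: erase the box at its old position and repaint it
        # at the cursor's position, once per frame.
        self.x, self.y = x, y
        self.redraws += 1

def drag(box, cursor_positions):
    # High level: "drag the box"; the redrawing is hidden underneath.
    for (x, y) in cursor_positions:
        box.redraw_at(x, y)

box = Box(0, 0)
drag(box, [(1, 1), (2, 3), (5, 8)])  # three frames of a drag gesture
print(box.x, box.y, box.redraws)     # ends at (5, 8) after 3 redraws
```

Whoever calls `drag` never has to think about frames or repainting; that is the whole point.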

Little did I know at the time, but even now, after a Master’s degree in Computer Science, I still feel like I’m only barely starting to understand my computer.

This blog post ties in my long-term (probably impossible if done strictly) project of having a usable computer entirely running software written by me, including:

  1. its BIOS/UEFI program that initialises the hardware
  2. a kernel, which handles memory, processes, filesystems, devices, and networking
  3. a (non-optimising) compiler collection (probably for C), encompassing an assembler, a linker, and a compiler, which together let a programmer write high-level code that gets translated into an executable binary
  4. a mostly-POSIX shell, which lets people easily interact with the operating system
  5. an interpreter for a very high-level language such as Python

Writing a hello world program in Python and having the entire toolchain be written by me is a life-long dream. I don’t know how realistic this project is, considering the unbelievable complexity of each component. But hey, some people have managed to do basically what I want to do, so it doesn’t hurt to try.

Because of this, I’d like to share and appreciate how “abstract” and high-level our computer life is. I will do this by quickly analysing different levels of “abstraction”: where they came from and how they relate to one another inside an average computer. Obviously, this blog post is tiny and can barely begin to summarise a minuscule fraction of the knowledge we have accumulated over the millennia, so forgive me if it’s too “abstract” (lol) and doesn’t go into detail (and also forgive the possible inaccuracies).

Layer 0: Wires

Everyone knows that computers are basically just machines that move 1s and 0s around, right? That’s not wrong, but it is ALREADY an abstraction over what is actually going on inside a computer: it’s just a bunch of wires that sometimes send current to one another. So the lowest layer considered here is the wire. Technically, we could see electrical currents as an abstraction over classical electrodynamics, itself an abstraction over quantum mechanics (it always comes down to quantum mechanics!), but since I’m a computer scientist, not a physicist, I’ll let them write their own blog posts about how everything comes down to them at the end of the day (and mathematicians will wait for their turn to describe maths, etc.).

So computers represent information with wires, but how do they do that? Mathematicians and philosophers agree that the single most basic unit of information is a “truth value”: a value that can be one of exactly two things, true or false. Thanks to classical logic, we know that we can represent any information with truth values.

We therefore choose to represent those “truth values” as 1s and 0s, which we encode into the wires by sending a “high” signal when a value is true and a “low” signal when it’s false.
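As a toy illustration of how "any information" reduces to truth values, here is a Python sketch (my own, simplified model, not how real hardware is described) that encodes a small number onto a bundle of "wires", each carrying True (high signal) or False (low signal), and reads it back:

```python
def to_wires(n, width):
    # Wire i carries bit (width - 1 - i) of n, most significant first.
    return [bool((n >> (width - 1 - i)) & 1) for i in range(width)]

def from_wires(wires):
    # Read the high/low signals back into a number.
    value = 0
    for signal in wires:
        value = (value << 1) | int(signal)
    return value

wires = to_wires(6, 4)
print(wires)              # [False, True, True, False] — the signals for 0110
print(from_wires(wires))  # 6
```

Four wires are enough for any number from 0 to 15; more wires, and suitable conventions, get you text, images, and everything else.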

There’s already so much complexity, and we’ve barely even started! Strap in, as this is only the beginning…

Layer 1: Logic Gates and Mathematical Logic

(to be continued when I have more time)