Starting February 22nd, Carrie Anne Philbin will be hosting Crash Course Computer Science! In this series, we're going to trace the origins of our modern computers, take a closer look at the ideas that gave us our current hardware and software, discuss how and why our smart devices just keep getting smarter, and even look towards the future! Computers play a crucial role in the functioning of our society, and it's our hope that over the course of this series you will gain a better understanding of how far computers have taken us and how far they may carry us into the future.
Hello, world! Welcome to Crash Course Computer Science! So today, we’re going to take a look at computing’s origins, because even though our digital computers are relatively new, the need for computation is not. Since the start of civilization itself, humans have had an increasing need for special devices to help manage laborious tasks, and as the scale of society continued to grow, these computational devices began to play a crucial role in amplifying our mental abilities. From the abacus and astrolabe to the difference engine and tabulating machine, we’ve come a long way toward satisfying this increasing need, and in the process completely transformed commerce, government, and daily life.
So we ended last episode at the start of the 20th century with special purpose computing devices such as Herman Hollerith’s tabulating machines. But as the scale of human civilization continued to grow, so did the demand for more sophisticated and powerful devices. Soon these cabinet-sized electro-mechanical computers would grow into room-sized behemoths that were prone to errors. But it was these computers that would help usher in a new era of computation - electronic computing.
Today, Carrie Anne is going to take a look at how those transistors we talked about last episode can be used to perform complex actions. With just two states, on and off, the flow of electricity can be used to perform a number of logical operations, which are guided by a branch of mathematics called Boolean Algebra. We’re going to focus on three fundamental operations - NOT, AND, and OR - and show how they can be built into a series of really useful circuits. And it’s these simple electrical circuits that lay the groundwork for our much more complex machines.
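If you want to play with these operations yourself, here’s a minimal sketch in Python - the function names are ours, purely for illustration, not anything from the episode:

    # A minimal sketch of the three fundamental Boolean operations.
    def NOT(a):
        return not a

    def AND(a, b):
        return a and b

    def OR(a, b):
        return a or b

    # An XOR combined only from NOT, AND, and OR, the way a circuit combines gates:
    def XOR(a, b):
        return AND(OR(a, b), NOT(AND(a, b)))

    print(XOR(True, False))  # True
    print(XOR(True, True))   # False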
Today, we’re going to take a look at how computers use a stream of 1s and 0s to represent all of our data - from our text messages and photos to music and webpages. We’re going to focus on how these binary values are used to represent numbers and letters, and discuss how our need to perform operations on larger and more complex values brought us from our 8-bit video games to beautiful Instagram photos, and from unreadable garbled text in our emails to a universal language encoding scheme.
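As a rough taste of this (using Python’s built-ins, not anything specific to the episode), here’s how a number and a letter look once they’re reduced to binary:

    # Numbers and letters are both stored as binary values.
    number = 42
    print(bin(number))        # '0b101010' - 42 in binary

    letter = 'A'
    print(ord(letter))        # 65 - the ASCII/Unicode code point for 'A'
    print(bin(ord(letter)))   # '0b1000001' - that code point in binary

    # An 8-bit value can only hold 0 through 255, which is why bigger numbers need more bits:
    print(2 ** 8 - 1)         # 255, the largest unsigned 8-bit number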
Today we're going to talk about a fundamental part of all modern computers. The thing that basically everything else uses - the Arithmetic and Logic Unit (or the ALU). The ALU may not have the most exciting name, but it is the mathematical brain of a computer and is responsible for all the calculations your computer does! And it's actually not that complicated. So today we're going to use the binary and logic gates we learned in previous episodes to build one from scratch, and then we'll use our newly minted ALU when we construct the heart of a computer, the CPU, in episode 7.
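For a preview of the kind of circuit the ALU is built from, here’s a hedged sketch of a half adder and full adder in Python - the names and layout are ours, not a circuit diagram from the episode:

    # A half adder: the smallest building block of binary addition.
    # It adds two single bits and produces a sum bit and a carry bit.
    def half_adder(a, b):
        return a ^ b, a & b    # XOR gives the sum, AND gives the carry

    # A full adder chains in the carry from the previous column:
    def full_adder(a, b, carry_in):
        s1, c1 = half_adder(a, b)
        s2, c2 = half_adder(s1, carry_in)
        return s2, c1 | c2

    print(full_adder(1, 1, 1))  # (1, 1) -> 1 + 1 + 1 = binary 11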
Today we’re going to create memory! Using the basic logic gates we discussed in episode 3 we can build a circuit that stores a single bit of information, and then through some clever scaling (and of course many new levels of abstraction) we’ll show you how we can construct the modern random-access memory, or RAM, found in our computers today. RAM is the working memory of a computer. It holds the information the computer is actively working on, and as such is a crucial component for a computer to operate. Next week we’ll use this RAM, and the ALU we made last episode, to help us construct our CPU - the heart of a computer.
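To get a feel for how a circuit can "remember", here’s a tiny simulation of an AND-OR latch in Python - a sketch of the idea, not a circuit description:

    # An AND-OR latch: set it to store a 1, reset it to store a 0.
    class Latch:
        def __init__(self):
            self.output = 0

        def update(self, set_bit, reset_bit):
            # OR with the set line, then AND with NOT reset - the feedback loop
            self.output = (self.output | set_bit) & (1 - reset_bit)
            return self.output

    bit = Latch()
    bit.update(set_bit=1, reset_bit=0)   # store a 1
    print(bit.update(0, 0))              # still 1 - the latch remembers
    bit.update(set_bit=0, reset_bit=1)   # reset
    print(bit.update(0, 0))              # 0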
Today we’re going to build the ticking heart of every computer - the Central Processing Unit or CPU. The CPU’s job is to execute the programs we know and love - you know, like GTA V, Slack... and PowerPoint. To make our CPU we’ll bring in the ALU and RAM we made in the previous two episodes and then, with the help of Carrie Anne’s wonderful dictation, (slowly) step through some clock cycles. WARNING: this is probably the most complicated episode in this series. We watched it a few times over ourselves, but don't worry - at about 0.03Hz we think you can keep up.
Today we’re going to take our first baby steps from hardware into software! Using that CPU we built last episode, we’re going to run some instructions and walk you through how a program operates at the machine level. We'll show you how different programs can be used to perform different tasks, and how software can unlock new capabilities that aren't built into the hardware. This episode, like the last, is pretty complicated, but don’t worry - as we move forward into programming, the ideas of opcodes, addresses, and registers at this machine level will be abstracted away like many of the concepts in this series.
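As a loose illustration of what "running instructions" means at this level, here’s a toy fetch-decode-execute loop in Python - the opcodes and memory layout are invented for the sketch and aren’t the episode’s instruction set:

    # A toy machine: each instruction is (opcode, operand).
    memory = [
        ("LOAD", 7),    # put 7 in the register
        ("ADD", 5),     # add 5 to it
        ("PRINT", 0),   # output the register
        ("HALT", 0),
    ]

    register = 0
    program_counter = 0

    while True:
        opcode, operand = memory[program_counter]   # fetch and decode
        program_counter += 1
        if opcode == "LOAD":                        # execute
            register = operand
        elif opcode == "ADD":
            register += operand
        elif opcode == "PRINT":
            print(register)                         # prints 12
        elif opcode == "HALT":
            break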
In which Carrie Anne presents a new sing-along format and faces her greatest challenge yet - signing off an episode.
So now that we’ve built and programmed our very own CPU, we’re going to take a step back and look at how CPU speeds have rapidly increased from just a few cycles per second to gigahertz! Some of that improvement, of course, has come from faster and more efficient transistors, but a number of hardware designs have also been implemented to boost performance. And you’ve probably heard or read about a lot of these - they’re the buzzwords attached to just about every new CPU release - terms like instruction pipelining, cache, FLOPS, superscalar, branch prediction, multi-core processors, and even supercomputers! These designs are pretty complicated, but the fundamental concepts behind them are not. So bear with us as we introduce a lot of new terminology, including what might just be the best computer science term of all time: the dirty bit. Let us explain.
Since Joseph Marie Jacquard’s textile loom in 1801, there has been a demonstrated need to give our machines instructions. In the last few episodes, our instructions were already in our computer’s memory, but we need to talk about how they got there - this is the heart of programming. Today, we’re going to look at the history of programming and the innovations that brought us from punch cards and punched paper tape to plugboards and consoles of switches. These technologies will bring us to the mid-1970s and the start of home computing, but they had limitations, and what was really needed was an easier and more accessible way to write programs - programming languages, which we’ll get to next week.
So we ended last episode with programming at the hardware level with things like plugboards and huge panels of switches, but what was really needed was a more versatile way to program computers - software! For much of this series we’ve been talking about machine code, or the 1’s and 0’s our computers read to perform operations, but giving our computers instructions in 1’s and 0’s is incredibly inefficient, and a “higher-level” language was needed. This led to the development of assembly code and assemblers that allow us to use mnemonics and operands to more easily write programs, but assembly language is still tied to the underlying hardware. So by 1952, Navy officer Grace Hopper had helped create the first high-level programming language, A-0, and a compiler to translate that code for our machines. This would eventually lead to IBM’s Fortran and then a golden age of programming languages over the coming decades. Most importantly, these new languages used new abstractions to make programming easier and more powerful, giving more and more people the ability to create new and amazing things.
Today, Carrie Anne is going to start our overview of the fundamental building blocks of programming languages. We’ll start by creating small programs for our very own video game to show how statements and functions work. We aren’t going to code in a specific language, but we’ll show you how conditional statements like IF and ELSE, along with WHILE loops and FOR loops, control the flow of programs in nearly all languages, and then we’ll finish by packaging up these instructions into functions that can be called by our game to perform more and more complex actions.
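To make that concrete in one possible language (the episode itself stays language-agnostic), here’s a tiny game-flavored sketch in Python - the names and numbers are made up for illustration:

    # A function: a reusable, named bundle of instructions.
    def take_damage(health, amount):
        health = health - amount
        if health < 0:        # conditional statement
            health = 0
        return health

    player_health = 10
    while player_health > 0:             # WHILE loop: repeat until defeated
        player_health = take_damage(player_health, 3)
        print("Health is now", player_health)

    for level in range(1, 4):            # FOR loop: runs a fixed number of times
        print("Loading level", level)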
Algorithms are the sets of steps necessary to complete a computation - they are at the heart of what our devices actually do. And this isn’t a new concept. Since the development of math itself, algorithms have been needed to help us complete tasks more efficiently, but today we’re going to take a look at a couple of modern computing problems, like sorting and graph search, and show how we’ve made them more efficient so you can more easily find cheap airfare or map directions to Winterfell... or like a restaurant or something.
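For the curious, here’s one of the simpler sorting approaches, selection sort, written out in Python purely as an illustration - the episode goes on to cover why faster algorithms matter:

    # Selection sort: repeatedly find the smallest remaining item and move it forward.
    # Easy to follow, but O(n^2) - fine for short lists, slow for big ones.
    def selection_sort(items):
        items = list(items)                    # work on a copy
        for i in range(len(items)):
            smallest = i
            for j in range(i + 1, len(items)):
                if items[j] < items[smallest]:
                    smallest = j
            items[i], items[smallest] = items[smallest], items[i]
        return items

    print(selection_sort([310, 80, 120, 45]))  # [45, 80, 120, 310] - airfares, maybe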
Today we’re going to talk about how we organize the data we use on our devices. You might remember last episode we walked through some sorting algorithms, but skipped over how the information actually got there in the first place! And it is this ability to store and access information in a structured and meaningful way that is crucial to programming. From strings, pointers, and nodes, to heaps, trees, and stacks, get ready for an ARRAY of new terminology and concepts.
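As one small, hedged illustration of "nodes and pointers", here’s a linked-list-style structure in Python - Python hides the raw pointers, but the idea carries over:

    # A node holds a value plus a reference ("pointer") to the next node.
    class Node:
        def __init__(self, value):
            self.value = value
            self.next = None      # pointer to the next node, empty for now

    # Link three nodes into a list: 1 -> 2 -> 3
    head = Node(1)
    head.next = Node(2)
    head.next.next = Node(3)

    # Walk the list by following the pointers
    current = head
    while current is not None:
        print(current.value)
        current = current.next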
Today we’re going to take a step back from programming and discuss the person who formulated many of the theoretical concepts that underlie modern computation - the father of computer science himself: Alan Turing. Now, normally we try to avoid “Great Man” history in Crash Course, because truthfully all milestones in humanity are far more complex than the story of a single individual seen through a single lens - but for Turing we are going to make an exception. From his theoretical Turing Machine and work on the Bombe to break Nazi Enigma codes during World War II, to his contributions in the field of Artificial Intelligence (before it was even called that), Alan Turing helped inspire the first generation of computer scientists - despite a life tragically cut short.
Today, we’re going to talk about how HUGE programs with millions of lines of code, like Microsoft Office, are built. Programs like these are way too complicated for a single person; instead they require teams of programmers using the tools and best practices that form the discipline of Software Engineering. We'll talk about how large programs are typically broken up into functional units that are nested into objects - an approach known as Object Oriented Programming - as well as how programmers write and debug their code efficiently, document and share their code with others, and how code repositories are used to allow programmers to make changes while mitigating risk.
So you may have heard of Moore's Law, and while it isn't truly a law, it has pretty closely tracked a trend we've seen in the advancement of computing technologies. Moore's Law states that we'll see approximately a 2x increase in transistors in the same space every two years, and while this may not hold true for much longer, it has dictated the advancements we've seen since the introduction of transistors in the mid 1950s. So today we're going to talk about the improvements in hardware that made this possible - starting with the third generation of computing and integrated circuits (or ICs) and printed circuit boards (or PCBs). But as these technologies advanced, a newer manufacturing process would bring us to the nanoscale manufacturing we have today - photolithography.
So as you may have noticed from last episode, computers keep getting faster and faster, and by the start of the 1950s they had gotten so fast that it often took longer to manually load programs via punch cards than to actually run them! The solution was the operating system (or OS), which is just a program with special privileges that allows it to run and manage other programs. So today, we’re going to trace the development of operating systems from Multics and the Atlas Supervisor to Unix and MS-DOS, and take a look at how these systems heavily influenced popular OSes like Linux, Windows, MacOS, and Android that we use today.
So we’ve talked about computer memory a couple of times in this series, but what we haven’t talked about is storage. Data written to storage, like your hard drive, is a little different, because it will still be there even if the power goes out - this is known as non-volatile memory. Today we’re going to trace the history of these storage technologies from punch cards, delay line memory, core memory, magnetic tape, and magnetic drums, to floppy disks, hard disk drives, CDs, and solid state drives. Initially, volatile memory, like RAM, was much faster than these non-volatile storage technologies, but that distinction is becoming less and less true today.
Today we’re going to look at how our computers read and interpret computer files. We’ll talk about how some popular file formats like txt, wave, and bitmap are encoded and decoded, giving us pretty pictures and lifelike recordings from just strings of 1’s and 0’s, and we’ll discuss how our computers are able to keep all this data organized and readily accessible to users. You’ll notice in this episode that we’re starting to talk more about computer users, not programmers, foreshadowing where the series will be going in a few episodes.
In which lines will be flubbed and punches thrown - but don't worry, we'll be back to your regularly scheduled Comp Sci next week!
So last episode we talked about some basic file formats, but what we didn’t talk about is compression. Often files are way too large to be easily stored on hard drives or transferred over the Internet - the solution, unsurprisingly, is to make them smaller. Today, we’re going to talk about lossless compression, which gives you back exactly the original when reassembled, as well as lossy compression, which uses the limitations of human perception to remove less important data. From listening to music and sharing photos, to talking on the phone and even streaming this video right now, the ways we use the Internet and our computing devices just wouldn’t be possible without the help of compression.
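For a taste of lossless compression, here’s a run-length encoder in Python - a deliberately simple stand-in for the techniques in the episode, not the codecs your music or photos actually use:

    # Run-length encoding: replace repeated values with (value, count) pairs.
    # Lossless - decoding returns exactly the original data.
    def rle_encode(data):
        encoded = []
        i = 0
        while i < len(data):
            count = 1
            while i + count < len(data) and data[i + count] == data[i]:
                count += 1
            encoded.append((data[i], count))
            i += count
        return encoded

    def rle_decode(pairs):
        return [value for value, count in pairs for _ in range(count)]

    pixels = [255, 255, 255, 255, 0, 0, 255]
    packed = rle_encode(pixels)
    print(packed)                          # [(255, 4), (0, 2), (255, 1)]
    print(rle_decode(packed) == pixels)    # True - nothing was lost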
Today, we are going to start our discussion on user experience. We've talked a lot in this series about how computers move data around within the computer, but not so much about our role in the process. So today, we're going to look at our earliest form of interaction, through keyboards. We'll talk about how the keyboard got its QWERTY layout, and then we'll track its evolution in electronic typewriters, and eventually terminals with screens. We are going to focus specifically on text interaction through command line interfaces, and next week we'll take a look at graphics.
Today we begin our discussion of computer graphics. So we ended last episode with the proliferation of command line (or text) interfaces, which sometimes used screens, but more typically printed output onto paper via electronic typewriters or teletypes. But by the early 1960s, a number of technologies were introduced to make screens much more useful, from cathode ray tubes and graphics cards to ASCII art and light pens. This era would mark a turning point in computing - computers were no longer just number crunching machines, but potential assistants interactively augmenting human tasks. This was the dawn of graphical user interfaces, which we’ll cover more in a few episodes.
Today we’re going to step back from hardware and software, and take a closer look at how the backdrop of the Cold War and the space race, along with the rise of consumerism and globalization, brought us from huge, expensive codebreaking machines in the 1940s to affordable handhelds and personal computers in the 1970s. This is an era that saw huge government-funded projects - like the race to the Moon - and afterward a shift towards the individual consumer, the commoditization of components, and the rise of the Japanese electronics industry.
Today we're going to talk about the birth of personal computing. Up until the early 1970s components were just too expensive, or too underpowered, to make a useful computer for an individual, but this would begin to change with the introduction of the Altair 8800 in 1975. In the years that followed, we'd see the founding of Microsoft and Apple and the creation of the 1977 Trinity: the Apple II, Tandy TRS-80, and Commodore PET 2001. These new consumer-oriented computers would become a huge hit, but arguably the biggest success of the era came with the release of the IBM PC in 1981. IBM completely changed the industry as its "IBM compatible" open architecture consolidated most of the industry - except for, notably, Apple. Apple chose a closed architecture, forming the basis of the Mac vs. PC debate that rages on today. But in 1984, when Apple was losing market share fast, it looked for a way to offer a new user experience like none other - which we'll discuss next week.
Today, we're going to discuss the critical role graphical user interfaces, or GUIs, played in the adoption of computers. Before the mid-1980s, the most common way people could interact with their devices was through command line interfaces, which, though efficient, aren't really designed for casual users. This all changed with the introduction of the Macintosh by Apple in 1984. It was the first mainstream computer to use a GUI, standing on the shoulders of nearly two decades of innovation, including work from the father of the GUI himself, Douglas Engelbart, and some amazing breakthroughs at Xerox PARC.
Today we’re going to discuss how 3D graphics are created and then rendered for a 2D screen. From polygon count and meshes, to lighting and texturing, there are a lot of considerations in building the 3D objects we see in our movies and video games, but displaying these 3D objects on a 2D surface adds a number of additional challenges. So we’ll talk about some of the reasons you see occasional glitches in your video games, as well as the reason a dedicated graphics processing unit, or GPU, was needed to meet the increasing demand for more and more complex graphics.
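For a feel of the 3D-to-2D step, here’s a bare-bones perspective projection in Python - real graphics pipelines do far more (matrices, clipping, rasterization), so treat this as a sketch with a made-up focal length:

    # Perspective projection: points farther from the camera (bigger z) land closer to the centre.
    def project(x, y, z, focal_length=1.0):
        screen_x = focal_length * x / z
        screen_y = focal_length * y / z
        return screen_x, screen_y

    # Two corners of a cube, one twice as far from the camera as the other:
    print(project(1.0, 1.0, 2.0))   # (0.5, 0.5)
    print(project(1.0, 1.0, 4.0))   # (0.25, 0.25) - farther away, so smaller on screen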
Today we start a three-episode arc on the rise of a global telecommunications network that changed the world forever. We’re going to begin with computer networks, and how they grew from small groups of connected computers on local area networks (LANs) to larger worldwide networks like the ARPANET and eventually the Internet we know today. We'll also discuss how technologies like Ethernet, MAC addresses, IP addresses, packet switching, network switches, and TCP/IP were developed to solve new problems as our computers became ever more connected. Next week we’ll talk about the Internet, and the week after, the World Wide Web!
Today, we're going to talk about how the Internet works. Specifically, how that stream of characters you punch into your browser's address bar, like "youtube.com", returns this very website. Just to clarify, we're talking in a broader sense about that massive network of networks connecting millions of computers together, not just the World Wide Web, which is a portion of the Internet and our topic for next week. Today, we're going to focus on how data is passed back and forth - how a domain name is looked up via the Domain Name System, and of course how the data requested or sent gets to the right person in little packets following standard Internet Protocol, or IP. We'll also discuss two different approaches to transferring this data: Transmission Control Protocol, or TCP, when we need to be certain no information is lost, and User Datagram Protocol, or UDP, for those time-sensitive applications - because nobody wants an email with missing text, but they also don't want to get lag-fragged in their favorite first person shooter.
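If you're curious what "using TCP" looks like from a program's point of view, here’s a minimal sketch with Python’s standard socket module - example.com is just a stand-in host, and real browsers layer much more on top of this:

    # Open a TCP connection (reliable, ordered delivery) and make a bare HTTP request.
    import socket

    with socket.create_connection(("example.com", 80)) as conn:
        conn.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        response = b""
        while True:
            chunk = conn.recv(4096)    # read until the server closes the connection
            if not chunk:
                break
            response += chunk

    print(response.split(b"\r\n")[0])  # the status line, e.g. b'HTTP/1.1 200 OK'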
Today we’re going to discuss the World Wide Web - not to be confused with the Internet, which is the underlying plumbing for the web as well as other networks. The World Wide Web is built on the foundation of simply linking pages to other pages with hyperlinks, but it is this massive interconnectedness that makes it so powerful. But before the web could become a thing, Tim Berners-Lee would need to invent the web browser at CERN, and search engines would need to be created to navigate these massive directories of information. By the mid-1990s we’d see the rise of Yahoo and Google and monolithic websites like eBay and Amazon, forming the web we know today. But before we end our unit on the Internet, we want to take a moment to discuss the implications of Net Neutrality and its potential to shape the Internet's future.
Cybersecurity is a set of techniques to protect the secrecy, integrity, and availability of computer systems and data against threats. In today’s episode, we’re going to unpack these three goals and talk through some strategies we use - like passwords, biometrics, and access privileges - to keep our information as secure, but also as accessible, as possible. From massive Distributed Denial of Service, or DDoS, attacks, to malware and brute force password cracking, there are a lot of ways for hackers to gain access to your data, so we’ll also discuss some strategies, like creating strong passwords and using two-factor authentication, to keep your information safe.
Today we're going to talk about hackers and their strategies for breaking into computer systems. Now, not all hackers are malicious cybercriminals intent on stealing your data (those people are known as Black Hats). There are also White Hats, who hunt for bugs, close security holes, and perform security evaluations for companies. And there are a lot of different motivations for hackers - sometimes just amusement or curiosity, sometimes money, and sometimes the promotion of social or political goals. Regardless, we're not going to teach you how to become a hacker in this episode, but we are going to walk you through some of the strategies hackers use to gain access to your devices, so you can be better prepared to keep your data safe.
Today we’re going to talk about how to keep information secret, and this isn’t a new goal. From as early as Julius Caesar’s Caesar cipher to Mary, Queen of Scots’ encrypted plot to kill Queen Elizabeth in 1587, there has long been a need to encrypt and decrypt private correspondence. This proved especially critical during World War II as Alan Turing and his team at Bletchley Park attempted to decrypt messages from Nazi Enigma machines, and this need has only grown as more and more sensitive tasks are completed on our computers. So today, we’re going to walk you through some common encryption techniques such as the Advanced Encryption Standard (AES), Diffie-Hellman Key Exchange, and RSA, which are employed to keep your information safe, private, and secure. Note: In October of 2017, researchers released a viable hack against WPA2, known as the KRACK Attack. WPA2 uses AES to ensure secure communication between computers and network routers. The problem isn't with AES, which is still considered secure, but with the communication protocol between router and computer. In order to set up secure communication, the computer and router have to agree through what's called a "handshake". If this handshake is interrupted in just the right way, an attacker can cause the handshake to fall back to an insecure state and reveal critical information, which makes the connection insecure. As is often the case with these situations, the problem is with an implementation, not the algorithm itself. Our friends over at Computerphile have a great video on the topic: https://www.youtube.com/watch?v=mYtvjijATa4
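Since the Caesar cipher comes up, here’s what it looks like in Python - shown purely to illustrate the idea of a cipher and a key; it’s trivially breakable and nothing like AES or RSA:

    # Caesar cipher: shift each letter a fixed number of places down the alphabet.
    def caesar(text, shift):
        result = ""
        for ch in text:
            if ch.isalpha():
                base = ord('A') if ch.isupper() else ord('a')
                result += chr((ord(ch) - base + shift) % 26 + base)
            else:
                result += ch       # leave spaces and punctuation alone
        return result

    secret = caesar("ATTACK AT DAWN", 3)
    print(secret)               # 'DWWDFN DW GDZQ'
    print(caesar(secret, -3))   # 'ATTACK AT DAWN' - decrypt by shifting back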
So we've talked a lot in this series about how computers fetch and display data, but how do they make decisions based on this data? From spam filters and self-driving cars, to cutting-edge medical diagnosis and real-time language translation, there has been an increasing need for our computers to learn from data and apply that knowledge to make predictions and decisions. This is the heart of machine learning, which sits inside the more ambitious goal of artificial intelligence. We may be a long way from self-aware computers that think just like us, but with advancements in deep learning and artificial neural networks our computers are becoming more powerful than ever.
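As one tiny, hedged example of "learning from data", here’s a nearest-neighbour classifier in plain Python - the labelled points are invented for the sketch, and real machine learning systems are vastly more sophisticated:

    # Nearest-neighbour classification: label a new point like its closest labelled example.
    import math

    labelled = [
        ((1.0, 1.0), "spam"),
        ((1.2, 0.8), "spam"),
        ((5.0, 5.0), "not spam"),
        ((4.8, 5.2), "not spam"),
    ]

    def classify(point):
        def distance(example):
            coords, _ = example
            return math.dist(point, coords)
        _, label = min(labelled, key=distance)   # the closest labelled example wins
        return label

    print(classify((1.1, 0.9)))   # 'spam'
    print(classify((5.1, 4.9)))   # 'not spam'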
Today we’re going to talk about how computers see. We’ve long known that our digital cameras and smartphones can take incredibly detailed images, but capturing a picture is not quite the same as understanding it. For the past half-century, computer scientists have been working to help our computing devices understand the imagery they capture, leading to advancements everywhere - from tracking hands and whole bodies, to biometrics that unlock our phones, to eventually giving autonomous cars the ability to understand their surroundings.
Today we’re going to talk about how computers understand speech and speak themselves. As computers play an increasing role in our daily lives, there has been a growing demand for voice user interfaces, but speech is also terribly complicated. Vocabularies are diverse, sentence structures can often dictate the meaning of certain words, and computers also have to deal with accents, mispronunciations, and many common linguistic faux pas. The field of Natural Language Processing, or NLP, attempts to solve these problems with a number of techniques we’ll discuss today. And even though our virtual assistants like Siri, Alexa, Google Home, Bixby, and Cortana have come a long way from the first speech processing and synthesis models, there is still much room for improvement.
Today we're going to talk about robots! Robots are often thought of as a technology of the future, but they're already here by the millions - in the workplace, in our homes, and pretty soon on the roads. We'll trace robotics from its origins to its proliferation, and even look at some common control designs that were developed to make robots more useful in the workplace. Robots are often seen as a menace or danger to society, and although there is definitely potential for malicious uses, robots also have the potential to drastically improve the world.
We’ve spent most of this series talking about computers. Which makes sense - this is Crash Course COMPUTER SCIENCE after all. But at their core, computers are tools employed by humans, and humans are pretty complicated. So today, we’re going to discuss some psychological considerations in building computers, like how to make them easier for humans to use, the uncanny valley problem as humanoid robots get more and more humanlike, and strategies to make our devices work better with us by incorporating our emotions and even altering our gaze. Oh, and we'll talk about Carrie Anne's all-time favorite user interface design principle - knurling.
Today we’re going to go a little meta and talk about how computer science can support learning with educational technology. We here at Crash Course are big fans of interactive in-class learning and hands-on experiences, but we also believe in the additive power of educational technology inside and outside the classroom - from the Internet itself and Massive Open Online Courses, or MOOCs, to AI-driven intelligent tutoring systems and virtual reality.
In our SERIES FINALE of Crash Course Computer Science, we take a look towards the future! In the past 70 years, electronic computing has fundamentally changed how we live our lives, and we believe it’s just getting started. From ubiquitous computing, artificial intelligence, and self-driving cars to brain-computer interfaces, wearable computers, and maybe even the singularity, there is so much amazing potential on the horizon. Of course, there is also room for peril, from the rise of artificial intelligence to the more immediate displacement of much of the workforce through automation. It’s tough to predict how it will all shake out, but it’s our hope that this series has inspired you to take part in shaping that future. Thank you so much for watching.