TechStuff Tidbits: What does CPU architecture actually mean?

Jun 21, 2023 · 18 min

Episode description

What are the components that make up a CPU? What does it mean if a manufacturer used a 5 nm process to fabricate a chip? And is a multi-core processor always better than a single-core processor? 

See omnystudio.com/listener for privacy information.

Transcript

Speaker 1

Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host Jonathan Strickland. I'm an executive producer with iHeartRadio. And how the tech are you? You know, over the last few years, there's been a lot of conversation around microchips in general, and CPUs and GPUs in particular. The pandemic led to bottlenecks in the supply chain, manufacturing facilities had to shut down multiple times, particularly in China, and the initial skyrocketing value

of cryptocurrencies all had an effect on microchip supply. Meanwhile, multiple countries, including the United States, started looking into ways to shift away from depending so heavily on China for chip fabrication. And when we talk about chips like CPUs, we often focus on two major factors. First is the process used to actually fabricate discrete components on

the chip. We typically reference this in terms of a nanometer process, and fewer nanometers indicate a more advanced process, so you're working backward in numbers. Second is the chip's architecture, and that's really what we're going to focus on in this episode. But in order to do that, we also have to talk about the other stuff. So a quick word on the fabrication process part. You might hear that a company used a seven nanometer process or a five or

even a three nanometer process to make the chip. And you may know that a nanometer is one billionth of a meter. It's one scale up from the atomic scale. So a typical human hair measures between eighty thousand and one hundred thousand nanometers thick, as in, if you measured the diameter of the hair, that's the range you would

be at. So when you're talking about seven or five or even three nanometers, that's super duper small, right? Well, it would be if the nanometer designation still referred to component size. Once upon a time, the scale referenced by a process actually did correspond with at least some component size on the chip itself, but that has not been the case for several generations now. Part of the reason for that comes down to the limitations of physics.
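Just to put those scales side by side, here's a quick back-of-the-envelope calculation. The hair figures are the ones from the episode; the arithmetic itself is only an illustration:

```python
# Rough comparison: how many times thinner a "process node" number is
# than a typical human hair (about 80,000 to 100,000 nanometers across).
hair_low_nm, hair_high_nm = 80_000, 100_000

for process_nm in (7, 5, 3):
    low = hair_low_nm // process_nm
    high = hair_high_nm // process_nm
    print(f"{process_nm} nm is roughly {low:,} to {high:,} times thinner than a hair")
```

Keep in mind, as the episode explains, the node name no longer measures any real feature on the chip, so this is only a sense of scale for the label itself.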

As you shrink down to the bottom end of the nanoscale and into the atomic scale, you start to have to contend with quantum mechanics. Now, we don't encounter quantum mechanical effects on our normal scale, like in our everyday lives. But at that tiny scale, things start to behave in a really wonky way, and relying on physical structures to rein in quantum silliness becomes a big challenge. I've done

full episodes kind of about this, so we're not going to dive too deeply into it. Instead, the scale really is more of a marketing strategy. When you hear it's a five nanometer process, it doesn't mean that anything on that chip actually measures five nanometers in size. It's a way of indicating this process is more advanced than the previous seven nanometer process. So it really means that when you get down to it, the process and the architecture

start to converge on essentially the same meaning. So let's talk about that architecture. What does chip architecture actually mean? Well, we're going to stick with CPUs, also known as central processing units, and we can think of a CPU as having three major components. These are the registers, the arithmetic logic unit or ALU, and the control unit. So registers act kind of like memory, in that they hold information

that the CPU needs in order to complete operations. Logic gates make up the quote unquote memory of registers, and a logic gate follows a specific rule: it creates an output based upon the input coming into the logic gate. I'll do a full episode just about logic gates in the future to kind of expand on that and explain how these logic gates work and how, by combining logic gates, you can create different outcomes. So registers operate faster than RAM, or random access memory, which I often at

least will compare to short term memory in humans. RAM, in turn, operates faster than a solid state drive or a hard drive, which I compare to long term memory in humans. So you've got registers, which are the fastest memory to access, but they hold very little information, just tiny, tiny bits. Then you have RAM. Then you've got solid state drives or hard drives. In registers, we actually have five basic types, so let's list

them off, shall we? The instruction register stores the address in random access memory of the instruction to be used in a given operation. So that instruction could be some basic arithmetic function, for example like ADD. Next, you've got the memory address register. This stores the address within RAM of the data that is to be processed. So this is the data that's going to be transformed by that instruction in some way. Your instruction register has the info

on what operation to use. The memory address register has the info on which data is going to undergo that operation. Then you've got the memory data register. This stores the data that the CPU is actually processing at any given time. So while the other two registers are kind of like looking into the future like the next step, the memory data register is concerned with what's going on right now,

gosh darn it. Then you've got the program counter. This stores the address in RAM of the next instruction coming up, so the next one down the line. Finally, you've got the accumulator, which stores the results of the calculations that were just performed. So the registers are one part of CPU architecture. Now let's talk about the ALU, or arithmetic logic unit. The ALU is the brains of the CPU. Within the ALU are logic circuits which actually carry out

the operations on data. These operations span a wide range, from arithmetic tasks like addition and subtraction to things like incrementation and comparison. So, for example, you might have a pair of operations that each produce a result, and the ALU has to compare these results with one another to determine if they are the same or different. That's the kind of basic task the ALU handles, and it does this super fast. Finally, you have the control unit, which,

as the name suggests, controls the process. The control unit receives instructions, decodes those to get to the meaning of the instructions, sends commands to the other components to carry out those instructions, et cetera. The control unit is kind of like a floor manager. It makes sure all the departments are responding appropriately given the program that's running at any given time. The control unit also has a clock, but that clock isn't meant to keep your computer's time

accurate to local time. This clock oscillates a certain number of times per second, and we measure this in hertz. So one oscillation per second would be one hertz. Typically, with processors today, we're talking about the gigahertz range. A gigahertz would be a billion oscillations per second. So a three point two gigahertz CPU has a clock in the control unit that oscillates three point two billion times every single second. Now the clock speed relates to how

quickly the processor can actually complete these operations. Some operations require multiple oscillations, but that clock speed, or frequency if you prefer, gives you an idea of how fast or powerful your computer is. Now, other factors also play into this. It's not just clock speed, but that is one big component of it. If you're familiar with the term overclocking, then all of the stuff I'm talking to

you about is old news to you, right? Overclocking is the practice of increasing that clock oscillation speed in the control unit beyond its default settings, which the manufacturer typically sets. They say this processor is rated at this particular clock speed, and going beyond that could potentially reduce the useful lifespan of the processor or

cause it to overheat, et cetera. So elite gamers typically will use programs to boost the clock speed on CPUs to get past these limitations and to push them faster than what they were rated at, in order to milk out higher performance in their gaming rigs. Doing this does come with some trade offs. I mean, it does mean that you might be burning through your CPU faster than

you usually would. It also typically means that the computer's going to generate a lot more heat, so you need to have a good heat dispersal system in place to carry that heat away from the processor because, as we know, heat and electronics are not super friendly with one another. Connecting all these different components are wires called buses, So a bus might carry instructions, another bus might carry data. The capacity of buses also plays a part in how

powerful a computer is. I'll have to do another episode to explain things like what a thirty two bit machine is versus a sixty four bit machine, or, with the old game consoles, even an eight bit machine. We'll talk about bit width and that kind of stuff, but that kind of plays into things like buses. You can think of it sort of like roads: how wide is the road, and so how many vehicles can pass side by side at

the same time. And one other thing that we will mention will be cores, and I'm going to get to that after we take this quick break. Okay, before the break, I teased that we're going to talk about cores. A CPU core is the smallest unit that can carry out all the jobs that a CPU does. So if you hear of a multicore CPU, that means each of those cores can do the job of a CPU, and they

can have multiple cores. You'll hear things like dual core, which means there are two of them, or quad core, meaning there are four, and beyond. Each core can carry out the duties of a CPU. So does that mean a dual core or quad core processor is automatically better than a single core processor? Not necessarily. For some types of computational problems, you can actually divide up the

problem into smaller tasks that could be completed simultaneously. So these are the types of problems that multicore processors are great at tackling because each core can tackle a different set of tasks and thus collectively they'll get to the answer faster. But if the problem cannot be broken down into smaller pieces, a very powerful single core processor might be better than a decently powerful multicore processor. And I

use this analogy all the time. Longtime listeners are probably tired of it, and they've anticipated it, and yes, it's okay to skip ahead a little bit if you are one of those people. But I like to describe multicore processors versus a single core processor by talking about an advanced math class, And in this version of it, I'm going to say there are five students in this advanced math class. Now imagine four of those five students are

all really good at math, right, they're gifted students. However, the fifth student is a genuine math genius. And the genius always completes any given problem faster than the other four students can. And one day the teacher presents a challenge to the class. It's a pop quiz that has

four questions on it. The genius has to try to complete all four problems, but the other four students can actually divide up the quiz, and each student can tackle a single problem, so collectively they can

solve the quiz together. So who is going to finish first? Well, if we assume that each problem is discrete and independent of the outcomes of the other problems, the four students are likely to finish their quiz collectively before the genius, because each one's just doing one question. The genius is still faster than any of the individuals, but they have to do all four questions, whereas each smart student just

has to do one. The multi-core processor wins in that scenario. But let's say you find out that problem two on the quiz actually depends upon the outcome of problem one, and that problem three depends upon the outcome of problem two, and problem four depends on the outcome of problem three. Well, now you can't just divide up the problems between the four students, because the student working on problem two has to wait to find out what the answer to problem one is

before they can get started. The genius in that case is going to win the race, right, because they're still faster than any individual. So for certain types of computational problems and processes, multi-core is the way to go, but not in every case, just in a lot of them. For a lot of computer users, it's more important to go multi-core, because the typical uses that they rely upon with computers fall into that multi-core set of problems.
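The quiz analogy maps pretty directly onto code. Here's a minimal Python sketch (my own illustration, not anything from the episode) that runs four independent "quiz problems" across a pool of worker processes, then runs a chained version where each problem needs the previous answer and therefore can't be parallelized:

```python
import time
from concurrent.futures import ProcessPoolExecutor

def quiz_problem(x: int) -> int:
    """One 'quiz problem': busy-work standing in for a real computation."""
    total = 0
    for i in range(1_000_000):
        total += (x * i) % 7
    return total

if __name__ == "__main__":
    problems = [1, 2, 3, 4]

    # Independent problems: four "students" (worker processes) take one each.
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(quiz_problem, problems))
    print(f"parallel:   {time.perf_counter() - start:.2f}s")

    # Dependent problems: each one needs the previous answer, so they must
    # run one after another, and only a faster single core would help.
    start = time.perf_counter()
    answer = 0
    for p in problems:
        answer = quiz_problem((answer + p) % 10)
    print(f"sequential: {time.perf_counter() - start:.2f}s")
```

On a machine with at least four free cores, the parallel version should finish in roughly a quarter of the sequential time; on a single-core machine, the two would take about the same time, which is the whole point of the analogy.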

This includes gamers. So a multi-core processor matched with a really good graphics processing unit, that's more important than having just a single-core, super fast processor. But again, it all depends on how you can thread the computational problems. And that's the general description of what chip architecture means. The actual design and layout of these components is what

sets one chip apart from another chip. Since it is increasingly challenging to shrink components down without getting into quantum effects or generating too much heat in a very small space, finding the best possible layout and orientation of components is critical. You know you're not going to be able to cram a whole lot more on, but you might be able to find an orientation that gets a little better performance

out of the components you have now. Back in the day, Intel, which is one of the two major companies behind the processors used in most computers these days, used a development approach to chip design that the company referred to as the tick-tock method. So you can think of the tick part of tick-tock as taking the same chip layout design from the previous generation, but then shrinking everything down a little bit, which allows you to cram more

components on the chip. So you're following the same architectural plan as the previous generation, but now all the components are slightly smaller, so you can have more of them there. The tock sequence would involve creating a new architecture that better takes advantage of these smaller components, and then it would repeat: tick, tock, tick, tock. So with tick you shrink stuff down, but you follow the same game plan as before. With tock, you create a new game plan,

and then you do tick again. And so each generation of Intel chips fell into one of those two design principles, and in this way Intel would iterate its chip designs. Each generation would improve upon the last, at least that was the idea, either by adding more capability in the form of more components on the chip, or by finding a new way to arrange those components that improved performance.

And by improved performance, I mean not just being faster or more capable, but also more power efficient or creating less heat, because these things do matter quite a bit. And that's our overview of chip architecture. I'll do more episodes about the basics of CPUs soon. Maybe i'll talk a bit about what makes an Intel chip different from say,

an AMD chip. And you may know, if you've ever built a computer, the type of processor you want ends up mattering a great deal, because it will tell you what kind of motherboard you can use, for example, because a motherboard designed to work with an Intel chip is

not going to work with an AMD chip. So we'll do another episode to talk a bit more about this in the future, and keep it nice and short and simple so that folks can listen, get a good understanding, and then know what to look for if they ever decide to build their own computer. And I think we'll also, like I said, do an episode about things like logic gates to kind of understand, at a very very very basic level, what is going

on when a computer is processing information. That's it for this Tech Stuff Tidbits episode. I hope you are all well. Just a reminder next week, I am on vacation and I will be back the following week, so we will likely have some reruns playing next week, but I will be back and I'll talk to you again really soon. Tech Stuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.
