Hi Lazaros,
great, perfect. Appreciate your help.
Hello, everyone.
I have a few questions regarding classic IOS. Does the term monolithic just mean that all processes share the RAM and CPU and that there is no isolation between them? So in other words, if one process requires more CPU and memory, it could end up taking way too much of it, so other processes would be starved? Since it's "shared". Another resource also says that "each process yields the CPU to allow others to execute", which is basically what Rene says as well.
I am referring to Rene's example:
For example, the "logging" process could require so much memory and so many CPU cycles that BGP is unable to perform some of its tasks. It's also possible that when a single process crashes, it takes down the entire system. This is unacceptable in networking nowadays.
How does this work? Is there no prioritization when it comes to these processes? Wouldn't BGP be considered as important as logging, for example? After all, both BGP and logging are very important processes. A low-priority process can't just take away resources from a process like BGP, can it?
Also, how does a single process crashing bring down an entire system? If a process fails or crashes, I thought its resources would be freed and given to other processes; I didn't expect the possibility of it bringing down an entire system.
Thank you.
David
Hello David
In essence, yes. In classic Cisco IOS, "monolithic" means that all processes run in a single, flat memory address space with no memory protection or isolation between them. Specifically, all IOS processes (BGP, OSPF, logging, CLI, etc.) share the same memory pool. All processes are scheduled by IOS's internal scheduler and share the same CPU resources. And there is no memory protection, unlike modern operating systems (Linux, Windows, IOS XE), where each process has its own protected memory space. In classic IOS, any process can theoretically access or corrupt another process's memory.
So in classic IOS, you have one large program running directly on the hardware, not separate OS processes with individual memory address spaces.
Well, there is some prioritization, but it's not as robust as in modern operating systems. Classic IOS uses cooperative multitasking, not preemptive multitasking.
With preemptive multitasking in modern OSes, the operating system acts as a strict enforcer. It gives each process a fixed time slice (e.g., 10 milliseconds). When time's up, the OS forcibly interrupts the process and switches to another one, regardless of whether the first process is finished. This guarantees fairness.
With cooperative multitasking in classic IOS, each process runs until it voluntarily yields the CPU (e.g., when it completes a task, waits for I/O, or calls a yield function). The scheduler then picks the next process based on priority.
For the specific example, while IOS does have internal process priorities (Critical, High, Medium, Low), and BGP typically runs at high priority, the scheduler can only choose which process runs next when the CPU is available. If even a low-priority process contains a bug or enters a condition where it loops without yielding, or takes a long time before yielding due to its nature, then it can monopolize the CPU, resulting in inefficiencies.
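To make the cooperative, run-to-completion behavior concrete, here is a minimal Python sketch. The process names, priorities, and yield counts are purely illustrative, not how IOS is actually implemented:

```python
# Toy cooperative (run-to-completion) scheduler, loosely modeling the
# classic IOS behavior described above. Names and priorities are made up.
import heapq

class Process:
    def __init__(self, name, priority, work):
        self.name = name
        self.priority = priority  # lower number = higher priority
        self.work = work          # generator; each yield = "I give up the CPU"

def run(processes, max_steps=10):
    """The highest-priority ready process runs until it voluntarily yields;
    the scheduler has no way to preempt it in between."""
    ready = [(p.priority, i, p) for i, p in enumerate(processes)]
    heapq.heapify(ready)
    log = []
    for _ in range(max_steps):
        if not ready:
            break
        prio, i, proc = heapq.heappop(ready)
        try:
            next(proc.work)                 # run one burst, up to the yield
            log.append(proc.name)
            heapq.heappush(ready, (prio, i, proc))
        except StopIteration:               # process finished; don't requeue
            log.append(proc.name + ":done")
    return log

def polite(n):
    """A well-behaved process: does n units of work, yielding after each."""
    for _ in range(n):
        yield

# Note: a buggy process whose generator never returned from next() would
# hang run() forever, which is exactly the CPU monopolization scenario.
```

Running `run([Process("logging", 1, polite(2)), Process("bgp", 0, polite(2))])` shows the higher-priority bgp process running to completion before logging gets any CPU time at all, and a process that never yields would starve everyone.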
Because all processes share the same memory space with no protection boundaries, if a process crashes, it may write to an invalid memory address, and it may corrupt memory used by other processes or the kernel. It may overwrite vital BGP routing tables, core IOS scheduling data or other critical system information, resulting in corrupted data. The CPU later tries to use that corrupted data and encounters invalid instructions or garbage.
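The "no protection boundaries" point can be illustrated with a small Python simulation of a flat address space. Everything here (the layout, the region names, the data) is invented for illustration; there is no MMU in this model, just as there is none protecting classic IOS processes from each other:

```python
# Toy model of a flat, unprotected address space shared by all "processes".
# The region layout and data below are invented for illustration only.
flat_memory = bytearray(32)
REGIONS = {"logging": (0, 16), "bgp": (16, 32)}  # boundaries exist by convention only

def write(process, offset, data):
    """Write relative to the process's own region. Nothing enforces that
    offset + len(data) stays inside that region."""
    start, _end = REGIONS[process]
    flat_memory[start + offset : start + offset + len(data)] = data

# BGP stores a (toy) routing entry in its own region.
write("bgp", 0, b"route:10.0.0.0/8")

# A buggy logging process writes past its 16-byte region...
write("logging", 14, b"LOGLOGLOG")

# ...and silently corrupts the start of BGP's data. When "BGP" later reads
# its region, it finds garbage, which is how one bug cascades system-wide.
corrupted = bytes(flat_memory[16:32])
```

In a protected OS, the out-of-bounds write would fault and kill only the offending process; here it succeeds silently and the damage surfaces later, somewhere else.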
Resources aren't automatically freed because all processes run with unrestricted access to all system resources. A crash doesn't cleanly free resources; it leaves the system in an inconsistent, unsafe state with haphazardly allocated memory, often requiring a reboot. Does that make sense?
I hope this has been helpful!
Laz
Hello Laz
This was very helpful, thank you. As always, I have some further things to discuss!
This scheduler that you mention is called run-to-completion. Each process runs until it finishes, or until the kernel decides that, for whatever reason, it should no longer run and another process is picked.
Maybe I am not quite following but how many processes can be scheduled to run at the same time? Considering that IOS is a multitasking OS, it should be able to run more than one process but this makes me assume that only one runs at a time, until another one is scheduled? My book also says:
"Run to completion schedulers are CPU efficient because the system does not need to perform a context switch. A context switch is the capability for a single CPU to multitask between multiple processes."
Do you know how exactly it is? Are they trying to say that it's not multitasking, or how is it?
Technically… there are multiple processes running, but it's always one at a time? So multitasking just refers to the ability of the OS to quickly switch between processes to give the impression that the different applications are executing simultaneously?
Thank you.
David
Hello David
Wow, this brings back memories! I remember when multitasking was a topic that was really pushed by the marketing departments of companies selling operating systems for PCs. (This was over 25 years ago!!) They were talking about multitasking, but really, it is the illusion of multitasking that they were able to achieve.
By definition, multitasking is the ability of the OS to rapidly switch between processes to create the "illusion" that multiple applications are executing simultaneously. On a single CPU core, only ONE process actually executes at any given instant. And traditional monolithic IOS is designed to run on a single processing thread, so even if the underlying hardware has CPUs with multiple cores, only one core is leveraged.
So in such an arrangement, with the run to completion approach, each process runs until it finishes before the next scheduled process begins.
Hmm, this statement, although in essence correct, is a bit oversimplified and can be misunderstood. Let me elaborate a bit:
The point here is that run-to-completion minimizes context switching overhead. Context switches still happen whenever one process yields and another is scheduled, but they occur less frequently and at natural boundaries, rather than at every fixed time slice.
So the more accurate statement would be: "Run-to-completion schedulers are CPU efficient because they perform fewer and less expensive context switches than preemptive time-sliced schedulers."
Yes, that's right. This is true of both the older monolithic IOS and the newer IOS-XE, especially if you have CPUs with only one core. If you have multi-core CPUs, then multiple processes can genuinely run simultaneously, one on each core, but the OS must support this; IOS-XE does, while classic IOS typically does not.
I hope this has been helpful!
Laz
Hello Laz.
Thanks again! I have some questions regarding XE now. I didn't want to include them in a single post, since answering it all would take a week!
First of all, XE runs on a Linux Kernel. Resources often say that in classic IOS, processes have direct access to hardware.
Question 1
This means that any process can directly read, access, and manipulate memory for itself or for other processes in traditional IOS. Of course, correct code does not do this, which is why it asks the operating system to allocate or return memory, but it only does so because the OS acts as a coordinator that keeps track of free and allocated memory to prevent collisions. However, the OS cannot prevent code with direct access to the hardware from accessing memory that has not been allocated to it.
With XE, my book says that only the Linux kernel and its components can directly access the hardware. I assume that this is because IOSd runs as a separate process on top of Linux. My question is, does IOSd run in a separate memory space since it cannot directly access the hardware? So the kernel has its own space while IOSd has its own space? Are there any layers of abstraction?
Question 2
INE says that XE separates the control and data plane. This part confuses me the most. What does this mean? They say that the c/d planes were tightly coupled in classic IOS and that in XE, they are separated?
Iâm not quite sure how to understand this. If the control plane fails, the data plane keeps going, or?
But if that happens, the data plane simply cannot run without technologies such as NSF/SSO or redundant supervisors.
Thank you!
David
Hello David
I believe that you have a good understanding of how it all works! Let me elaborate and clarify:
When we say that classic IOS processes have "direct access to hardware," it means that IOS is the OS layer directly controlling hardware, with no protective intermediary.
IOS XE fundamentally changes this model. The Linux kernel runs in privileged kernel space and it has exclusive control over physical hardware: CPU, memory controller, network interfaces, buses, etc. The OS itself controls and manages process scheduling, memory management, and device drivers.
The IOS daemon (IOSd) runs in a separate, protected memory space from the Linux kernel. The Linux kernel has its own kernel address space, and each user-space process (including IOSd) has its own virtual memory space. The kernel enforces strict separation where one process cannot arbitrarily read or write another process's memory (or kernel memory) without proper permissions and interfaces.
Hardware access is also mediated through kernel drivers and inter-process communication, not by the IOSd directly manipulating hardware registers. That means that if IOSd crashes, the Linux kernel and other processes can continue running, and the Linux kernel can restart IOSd without a full system reboot.
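This restart behavior can be sketched in ordinary Python using OS processes, where a supervisor (standing in for the role the kernel's process manager plays) survives a worker crash and simply starts a new worker. This is only an analogy; real IOS-XE process restart is far more involved:

```python
# Sketch of fault isolation: a crashing worker process dies alone, in its
# own address space, and a supervisor restarts it. Only an analogy for
# kernel-managed IOSd restart; names and messages are invented.
import subprocess
import sys

def run_worker(crash):
    """Run a 'worker' in its own OS process, i.e., its own address space."""
    code = "import sys; sys.exit(1)" if crash else "print('iosd running')"
    return subprocess.run([sys.executable, "-c", code],
                          capture_output=True, text=True)

def supervise():
    first = run_worker(crash=True)    # the worker dies; we are unaffected
    second = run_worker(crash=False)  # restart it; no full "reboot" needed
    return first.returncode, second.stdout.strip()
```

The crash is fully contained in the first child process: the supervisor sees only a nonzero exit code, and the second attempt runs cleanly, just as the Linux kernel can restart IOSd without rebooting the whole box.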
As for abstraction, yes, there are several layers of abstraction in IOS-XE, where the Linux kernel sits between the hardware and the platform/forwarding processes, which are Cisco-specific daemons that perform the device functions.
What is meant by this is that where the monolithic IOS image implements both the control plane and the data plane using the same image running on the same CPU, the IOS-XE decouples these functions into separate and distinct processes. This does have some specific advantages.
The control plane runs as IOSd in the Linux user space, while the data plane is implemented in dedicated hardware forwarding engines or software forwarding processes, depending on the platform. These are managed by separate platform processes. The purpose of this separation is not so much to allow the control plane to continue to function if the data plane fails or vice versa. As you suggest, this is not useful. The point is that there is performance isolation, fault isolation, and better process management. If the control plane does indeed fail, it can be restarted with limited disruption to the data plane, for example.
I hope this has been helpful!
Laz
Hello Laz.
Great, so happy we managed to discuss all this.
So here comes the final one, XR.
Thank you
David
Hello David
No, not in the traditional IOS/IOS-XE sense. IOS XR uses a fundamentally different, commit-based configuration model. In the traditional IOS/IOS-XE approach, we save the running configuration with copy run start or write memory. But in IOS XR, configuration changes only take effect, and are recorded in the configuration database, when you issue commit. On reboot, IOS XR loads the last committed configuration from this database. So the committed configuration serves as both your running config and your "startup" config… they're always synchronized.
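As a quick illustration, a typical IOS XR session looks something like this (the prompt and interface name are examples; exact commands and output vary by platform and software version):

```
RP/0/RP0/CPU0:router# configure
RP/0/RP0/CPU0:router(config)# interface GigabitEthernet0/0/0/0
RP/0/RP0/CPU0:router(config-if)# no shutdown
RP/0/RP0/CPU0:router(config-if)# commit
RP/0/RP0/CPU0:router(config-if)# end
RP/0/RP0/CPU0:router# show configuration commit list
RP/0/RP0/CPU0:router# rollback configuration last 1
```

Until commit is issued, the changes sit only in the candidate (target) configuration and have no effect on the running device; rollback configuration last 1 undoes the most recent commit.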
Absolutely! It's the dominant OS in Service Provider networks worldwide. That's why you haven't seen it until you started working with SPs. You won't see XR in enterprise branch offices, but in SP cores and large-scale routing environments, IOS XR is the standard. For this reason, you typically won't see very much of it in CCNA or CCNP certifications or labs either.
However, IOS XR is deployed extensively on major platforms like the ASR 9000 Series, which is arguably the most popular SP router globally! It also runs on a range of other SP-class devices.
The four-part naming reflects the rack/slot/module/port hierarchy in modular SP platforms. This is generally standard for SP deployments. For example:
GigabitEthernet0/2/0/5
So GigabitEthernet0/2/0/5 means: Rack 0 (standalone), Slot 2 (line card), Module 0 (first module), Port 5.
No, IOS XR runs on modular, fixed, and virtual platforms. While IOS XR was originally designed for large modular chassis, itâs now available across multiple form factors.
This means you can use the same IOS XR operational practices (commit/rollback, interface naming conventions, configuration syntax) across massive core chassis, compact edge routers, and virtual lab environments.
I hope this has been helpful!
Laz
Hello Laz.
Perfect, thank you!
On reboot, IOS XR loads the last committed configuration from this database.
About this part here. Is this database the SysDB?
My question is, what is meant by "IOS XR loads the last committed configuration"?
If we take a look at the last config that I committed
This configuration only enables the two interfaces; the rest of the configuration is contained within the other commits that I've made.
Does XR load all of these, or what is meant by the fact that it loads the last committed configuration?
Also, is it okay for me to provide configuration in screenshots this time? It saves me some time, and no one will copy it anyway since it's just show command output.
Thank you
David
Hello David,
Great question! Let's dig a bit deeper into IOS XR's configuration model. When we say "last committed configuration," it's a bit more involved than just that simple statement.
Each commit command creates a separate commit entry in the commit list. This is done to keep a better record of changes that are made to the OS. However, the "last committed configuration" refers to the complete, cumulative running configuration as it exists after all commits have been applied, not just the changes from your most recent commit. Keep in mind that each entry is not a separate file, but simply a record of the commit.
In your case, commit 1000000016 includes the two interfaces you enabled in that commit. But the complete configuration contains the results of that commit, PLUS all configurations from earlier commits all the way back to 1000000001, even where later commits modified what earlier ones had configured.
Like I said before, each commit entry is not a separate file. The running config is a single file which is the result of all the recorded commits. So the complete configuration you'd see with show running-config is what gets loaded on reboot, not just the two interface commands from the last commit.
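One way to picture it: each commit stores a delta, and the effective configuration is what you get by replaying every commit in order. Here is a tiny Python model; the commit IDs echo yours, but the keys and values are invented for illustration:

```python
# Toy model of a commit-based configuration database: each commit records
# only a delta, but the effective config is the cumulative result of
# replaying all commits in order. Keys and values are illustrative only.
def running_config(commits):
    config = {}
    for commit_id, delta in commits:   # oldest commit first
        config.update(delta)           # later commits override earlier ones
    return config

commits = [
    ("1000000001", {"hostname": "XR1"}),
    ("1000000002", {"Gi0/0/0/0": "shutdown"}),
    ("1000000016", {"Gi0/0/0/0": "no shutdown", "Gi0/0/0/1": "no shutdown"}),
]
```

The final config contains the hostname from the first commit plus the interface state from the last one, with the earlier "shutdown" overridden. That cumulative result is what "loading the last committed configuration" refers to.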
Yes of course, thatâs fine, thanks for asking!
I hope this has been helpful!
Laz