- cross-posted to:
- technology@beehaw.org
- technology@lemmy.ml
- programming@programming.dev
- preps for bed
- starts closing apps
- refreshes lemmy…
Windows NT vs. Unix: A design comparison
- *sigh*
edit: back. a really enjoyable read. loved the POV as unix guy poking at NT. g’night for real now, lemmy.
Right?
*sigh*
*unzips*
*untars*
*unlhas*
Nice to see a pro-NT article for a change, but there are some details wrong
“It’s true that Unix has attempted to shoehorn other types of non-file objects into the file system”
‘Everything is a file’ was Unix’s design principle from the very start. It wasn’t shoehorned in. It is IMO superior to NT’s object system in that everything is exposed to the user as the file system rather than hidden behind programming APIs.
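To make the “exposed as the file system” point concrete, here’s a minimal sketch (assuming a Linux-style procfs; `/proc/self/status` is just one example) that reads kernel-maintained process state with nothing but ordinary file I/O:

```c
/* Minimal sketch: kernel state read with plain file I/O (assumes Linux procfs). */
#include <stdio.h>

int main(void) {
    /* /proc/self/status is a kernel-generated "file" describing this process. */
    FILE *f = fopen("/proc/self/status", "r");
    if (!f) {
        perror("fopen");
        return 1;
    }

    char line[256];
    while (fgets(line, sizeof line, f)) {
        fputs(line, stdout);   /* no special API: just read it like any text file */
    }

    fclose(f);
    return 0;
}
```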
I always thought it was that everything was a file but that everything could be interacted with as if it was a file.
I’m not a kernel dev, but I’ve read often enough that there are some places where “everything is a file” somewhat breaks down on Unix. (I think /proc and some /dev)
For an “absolutely everything is a file” system, have a look at plan9: it was the intended successor to Unix, but Unix got popular while plan9 stayed a research project.
I know about 3 people on earth that ever ran it in anything approaching production. Two of them still found a way to use the acme editor til LSPs took over, one is still at it.
It remains a pretty cool project, and you can still find people maintaining the bones of it. I think the core utils are ported and in the Arch repo.
Moving down the stack, Unix systems have never been big on supporting arbitrary drivers: remember that Unix systems were typically coupled to specific machines and vendors. NT, on the other hand, intended to be an OS for “any” machine and was sold by a software company, so supporting drivers written by others was critical. As a result, NT came with the Network Driver Interface Specification (NDIS), an abstraction to support network card drivers with ease. To this day, manufacturer-supplied drivers are just not a thing on Linux, which leads to interesting contraptions like the ndiswrapper, a very popular shim in the early 2000s to be able to reuse Windows drivers for WiFi cards on Linux.
Nvidia:
It’s a wonder that someone hasn’t implemented a similar wrapper for WDDM. I suppose they’d rather force the vendors to play nicely.
Also ndisgen under FreeBSD. MS could have been nice for a change and not broken compatibility.
Awesome read, I’m switching to NT 3.1
The big issue that the author kind of mentions is that while the kernel has all these neat features, the OS layered on top often uses them in ways that make them ineffective. XP before SP1 was a security nightmare and we got lucky that Blaster was not working correctly. A secure token for the processes in your session? It doesn’t really help if every process you spawn gets this token with the user being the administrator (I know this is kind of different nowadays with UAC). A very cool architecture that allows easy porting? Let’s only use it on x86. Even today, Windows running on ARM is big news, while the not-by-design-portable Unices have been doing it for years.
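For what it’s worth, the token model mentioned above is queryable from user space; here’s a minimal sketch (assuming Vista-or-later, a Windows C toolchain, and linking against advapi32) that asks whether the current process token is elevated under UAC:

```c
/* Sketch: check whether the current process token is elevated (UAC-era Windows).
   Uses documented Win32 calls; link against advapi32. */
#include <windows.h>
#include <stdio.h>

int main(void) {
    HANDLE token = NULL;
    if (!OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &token)) {
        fprintf(stderr, "OpenProcessToken failed: %lu\n", GetLastError());
        return 1;
    }

    TOKEN_ELEVATION elevation = {0};
    DWORD size = 0;
    if (GetTokenInformation(token, TokenElevation, &elevation,
                            sizeof elevation, &size)) {
        printf("Token is %selevated\n", elevation.TokenIsElevated ? "" : "not ");
    } else {
        fprintf(stderr, "GetTokenInformation failed: %lu\n", GetLastError());
    }

    CloseHandle(token);
    return 0;
}
```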
Maybe if Microsoft had allowed the kernel to be used in other operating systems (not expecting a copyleft license), things would look different; as it stands, the current view is that Windows Is Bad, and the NT kernel is an inseparable part of Windows. And hell, even Windows CE, which did run on other devices and architectures, doesn’t use the NT kernel.
So while the design and maybe even large parts of its implementation may be good and clean, it’s Microsoft’s fault that the public perception of the NT kernel is what it is.
XP before SP1 was a security nightmare
To be fair, Linux was a security nightmare before 2000 too. Linux didn’t have ACLs until 2002.
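For anyone who hasn’t poked at them, here’s a minimal sketch of querying those Linux ACLs (assuming libacl is installed, the filesystem has ACL support, and you link with -lacl), doing roughly what getfacl does:

```c
/* Sketch: dump a file's POSIX access ACL as text (assumes Linux + libacl). */
#include <sys/acl.h>
#include <stdio.h>

int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <path>\n", argv[0]);
        return 1;
    }

    acl_t acl = acl_get_file(argv[1], ACL_TYPE_ACCESS);  /* the file's access ACL */
    if (!acl) {
        perror("acl_get_file");
        return 1;
    }

    char *text = acl_to_text(acl, NULL);  /* human-readable form, like getfacl output */
    if (text) {
        printf("%s", text);
        acl_free(text);
    }
    acl_free(acl);
    return 0;
}
```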
with the user being the administrator
No one ran as administrator by default in a corporation, nor at home if you knew anything about computers. NT even suggested creating non-privileged user accounts during setup.
Let’s only use it on x86.
It’s not like they didn’t try. When NT came out it was running on MIPS and Alpha, with PowerPC and later Itanium added along the way. It wasn’t MS’s fault everything but x86 died. They tried more than anyone to support x86 alternatives. Now that ARM is capable of more than a PocketPC, they are on ARM.
Windows CE which did run on other devices and architectures, doesn’t use the NT kernel.
CE had extremely different requirements. The OS and Apps had to run in 2MB of RAM. NT shipped on many different CPUs.
XP before SP1 was a security nightmare
To be fair, Linux was a security nightmare before 2000 too. Linux didn’t have ACLs until 2002.
yes, but XP at any SP is an unfixable mess compared to virtually any OS in the past 20 years (TempleOS excluded?), ACLs or not
not suggesting that you intimated otherwise, but it’s important to remind myself just how bad every XP instance really was.
It really wasn’t. Turn off services you don’t use, don’t run as admin, and it was fine. Yes, people would get viruses from running executables, but that’s because Windows viruses were distributed widely because of market share. Linux wasn’t inherently more secure.
gotta disagree. microsoft’s vaunted API/ABI compatibility combined with often broken process isolation made it an absolute mess. security features that should have protected users and systems were routinely turned off to allow user space programs to function (DEP anyone?).
SP2/3 taught users one thing only - if a program breaks, start rolling back system hardening. I cannot think of one XP machine outside of some tightly regulated environments (and a limited smattering of people that 1. knew better and 2. put up with the pain) that did not run their users as a local administrative equiv. to “avoid issues”.
if user space is allowed to make kernel space that vulnerable, then the system is broken.
security features that should have protected users and systems were routinely turned off to allow user space programs to function
So you blame Microsoft for allowing users to disable security features but don’t blame Linux for allowing it also?
if user space is allowed to make kernel space that vulnerable, then the system is broken.
Ssh has had bugs that give root on Linux. Does that mean Linux is broken too?
https://www.schneier.com/blog/archives/2024/07/new-open-ssh-vulnerability.html
So you blame Microsoft for allowing users to disable security features but don’t blame Linux for allowing it also?
I am saying that I have far fewer privilege escalation issues/requirements on a typical linux distro - almost as if a reasonable security framework was in place early on and mature enough to matter to applications and users.
we can get into the various unix-ish SNAFUs like root X, but running systems with non-monolithic desktops/interfaces (I had deep core software and version choices) helped to blunt exposures in ways that were just not possible on XP.
we are talking about XP here, a chimeric release that only a DOS/Win combo beats for hackery. XP was basically the worst possible expression of the NT ethos, and none of NT’s underlying security features were of practical value when faced with the production demands of the OS and the inability of MS to manage a technology transition more responsibly.
now, if you ask me what I think of current windows… well, I still don’t personally use it, but for a multitude of reasons that are not “security absolutely blows”.
apologies for the wall-o-text, apparently I have freshly unearthed XP trauma to unload. :-/
so, hows your day going? got some good family / self time lined up for the weekend?
running systems with non-monolithic desktops/interfaces
That’s security through obscurity. It’s not that Linux has better security, only that its already-tiny desktop market share around 2003 was spread even thinner across different variations.
MS to manage a technology transition more responsibly.
That’s again blaming the Microsoft user for not understanding computers but not blaming the Linux user for running as root.
I have freshly unearthed XP trauma to unload.
Were you tech support at a company?
Unix? Come on, really?
What is Unix in 2024?
Unix is literally the most important operating system (specification) family on the planet. Even bigger than M$ Windows. You’ve got all the Android phones, all the Apple iPhones, macOS, FreeBSD and all the GNU/Linux distributions. Unix-like installed base is by far the largest of any on the planet.
Don’t forget… The internet basically runs on it too :D
…and Netflix… And… And… and…
I think it’s any modern Unix-like operating system: BSD-based, Linux-based, Haiku, Minix, and hundreds of other branches.
don’t forget Darwin, aka macOS
And iOS, making Unix one of the largest operating systems on the planet
Basically macOS and iOS are actually the same thing: both use the Darwin kernel and the same graphical stack.
That would be Minix 3, I think, because it runs in (yes, in) all modern Intel CPUs.
Oh yeah, I forgot about management engine
in
and with one word, the conversation becomes deeply political.
This was most likely posted by a kid who just thinks Unix is “old” Linux and doesn’t understand the roots of what it actually means in terms of computing.
I haven’t yet read the article, but it may well be a comparison for which Linux, FreeBSD, and Solaris can be united under the Unix umbrella as systems with monolithic kernels and similar conventions. Of course FreeBSD is much cooler than Linux and Solaris is much cooler than FreeBSD, but we get what we get.