Having tested Windows 11 for only four days, I cannot honestly say that I have explored every last bit of it thoroughly (but that had not been the purpose of this trip in the first place) — and I most certainly did not manage to wrap my head around everything I actually did explore. There may be a number of reasons for that.

That I didn’t spend more time testing it is certainly one of them; that I’m not a typical member of Microsoft’s target group, and therefore lack many years of gradual adjustment, is arguably another. Notwithstanding these limitations, a set of (to me) most obvious questions haunted me for the best part of the past weekend.

All right, here I sit and admit it: The previous sentence was a poor attempt at placing clickbait just above the fold — sorry about that. The questions did not actually haunt me; they just kept popping up at the least expected moments.

Concept and Execution

How is it possible that a global player successful enough to make its founder the wealthiest man on this planet (for quite a while) cannot seem to get its act together, while private individuals across the same planet (in the same era) manage to collaborate to create, develop, and maintain a vast variety of operating systems and relevant applications in their spare time — and distribute their produce, more often than not, in a timely fashion and free of charge?

It has been my understanding that Windows is supposed to be a general–purpose operating system for the average user. Naturally, when it comes to Microsoft’s marketing strategy I have nothing to offer but poorly informed guesswork, but I can rather confidently state that their definitions of “general–purpose” and “average” have little in common with mine.

In recent decades, the most frequently used argument against Linux–based operating systems (and consequently in favour of Windows) has been that the average user has no (or, at best, little) technical understanding. “You have to have a strong grasp of the command line”, some say. “You will have a hard time getting everything to run properly”, others pretend to know. Umm, right …

Here’s what I have to contribute to this discussion: During the past decade, I have not encountered a single peripheral device — all of them allegedly compatible with (the latest version of) Windows at the time of their individual release — that required more effort to make it work properly under any of the Linux distros I happened to test than it did under Windows.

What about the need to be proficient in the use of the command line, then? Granted, I do make heavy use of Bash (at times, for a variety of tasks). Yet the reason for that is (in most cases) not a purely technical requirement; it’s about speed and efficiency — the “average” user can easily get by without ever launching it. What few routines actually do require the command line under Linux also require it under Windows. The actual difference between these “unequal cousins” is that the Linux command line works as its “average” user expects.

(Just to provide a random example: the Windows command interpreter asks twice whether or not one wants to “Terminate [the] batch job” in response to Ctrl+C, which is supposed to terminate a running routine. What the heck!? Whatever the correct answer in any one case — be it “Yes” or “No” — there is no justification for delaying the process unduly by asking the same question twice. It is relatively safe to assume that the users in question know what they want, and why.)
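To illustrate the expectation, here is a minimal sketch in Python (used purely for illustration, not tied to any shell) of a program that honours a single Ctrl+C cleanly, with no confirmation prompt at all:

```python
import os
import signal

interrupted = False

def on_sigint(signum, frame):
    # One handler, one reaction: note the interrupt and tidy up.
    # No "are you sure?" prompt, and certainly not two of them.
    global interrupted
    interrupted = True

signal.signal(signal.SIGINT, on_sigint)
os.kill(os.getpid(), signal.SIGINT)  # simulate the user pressing Ctrl+C
print("clean shutdown:", interrupted)
```

The handler fires exactly once, the program shuts down in an orderly fashion, and nobody is asked the same question twice.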

Microsoft Store

Try as I may, I cannot see the advantage of having the “Microsoft Store” desktop app on the hard drive by default (or at all). I’m not addressing its quality or usefulness to the average user here (yet), but merely the amount of space it hogs, without serving an immediate purpose.

Every piece of software I happened to find in the Microsoft Store may also be easily obtained from outside this app. Consequently, an online store (accessed via the web browser) would serve the same purpose. One could download whatever software is wanted or needed from a Microsoft server (or strategically placed mirrors) on demand — a repository, if you will (more about that later).

Downloading software from a remote server — regardless of the actual source or method — requires a working internet connection; a desktop app does not make any difference in this respect. Having the Microsoft Store app sitting on the end user’s hard drive by default (possibly without ever being used) is a waste of said user’s time and storage space — which brings us straight to the following two issues.

Offline Tutorial

The roughly 50 MB the Store app occupies would be much better invested in a PDF document that ships with the operating system, providing the average (as well as the more experienced) user with sufficient information on how everything works. At a rough estimate, such an e–book could run to 5000 pages of text — more than enough to produce better informed “average users” — and if it ran to only one tenth of that page count, there would be plenty of space left for detailed images.
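For the curious, the back–of–the–envelope arithmetic behind that estimate, assuming a (generous) 10 KB per formatted page:

```python
size_bytes = 50 * 1024 * 1024   # the roughly 50 MB the Store app occupies
bytes_per_page = 10 * 1024      # assumption: ~10 KB per formatted page
pages = size_bytes // bytes_per_page
print(pages)  # 5120 -- in the ballpark of the 5000 pages estimated above
```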

Repository

Wouldn’t it be considerably more efficient (and arguably also more secure) to provide Microsoft software repositories tailored to the respective versions of the operating system (which is not what the Microsoft Store offers)?

This way, every package could be thoroughly tested and reviewed (by Microsoft developers or certified partners), and only then released (to this particular repository rather than the general public).

Besides the obvious advantage that users would only install software that actually is compatible with their version of Windows (rather than risking conflicts or dependency issues), they could also decide whether or not they need or want an individual app. After all, there is no point in having apps installed by default just because they happen to be part of the “Microsoft Family”.

(The quality of individual repositories is, by the way, quite often the reason for users preferring certain Linux distros over others. Depending on the individual desktop environment, users have the option to access these repositories via a desktop app — quite similar to the Microsoft Store, yet considerably better organised — a traditional software navigator, or the command line.)
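To make the idea concrete, here is a purely illustrative sketch in Python; the package names, version strings, and lookup function are all invented for the occasion, not a real Microsoft API:

```python
# One package index per OS release: an install request can only ever
# return a build that was vetted for exactly that release.
REPOSITORY = {
    "windows-11-23H2": {"photo-editor": "2.1", "pdf-reader": "5.0"},
    "windows-11-22H2": {"photo-editor": "1.9", "pdf-reader": "4.7"},
}

def resolve(os_version, package):
    """Return the package version vetted for this OS release."""
    index = REPOSITORY.get(os_version, {})
    if package not in index:
        raise LookupError(f"no vetted build of {package!r} for {os_version}")
    return index[package]

print(resolve("windows-11-23H2", "photo-editor"))  # 2.1
```

The point of the design is that the resolution step can never hand a user a package that was not tested against their exact release.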

It is a bloody disgrace (if you’ll pardon my French) that one still has to hunt down third–party tools on the internet, tools that, as often as not, accomplish what Microsoft itself should know how to do better.

Windows Ink Workspace

One of the features in Windows 11 I was most excited to test was the allegedly greatly improved Windows Ink Workspace. Well, I did already say that the graphics tablet worked better than it used to in previous versions (thanks to the software its manufacturer provided), so being able to expand its range of usability even further was definitely something to look forward to.

The first time I realised that I could use the pen to scroll up and down a website as though I were using a touch screen, I thought, “Now you are talking my language, Microsoft”, and a contented smile ran across my face. However, it quickly vanished when I realised that I could not select any text with the pen, because every horizontal movement was also interpreted as a swipe. What the heck!? I had gained one useful feature, only to lose another that is even more useful?

As if that were not bad enough, I could not find this issue even mentioned in the official online documentation — let alone a workaround. It was pure coincidence that I happened upon a forum thread where exactly this problem was addressed. The proposed “solution” was to disable Windows Ink Workspace. This approach did actually work, but was a right bummer.

Interestingly, Michael Łeptuch offers a tool called “Ink Workspace” in his GitHub repository. It provides everything Windows Ink Workspace does (except for the swipe feature), and a bit more — only in a more comfortable fashion.

Yes, Mr Łeptuch’s software is also available in the Microsoft Store, but that’s not exactly the point here. The question is, why does it take two different third–party applications to accomplish tasks that should be as common as pea soup by now? I mean, we are not talking about some exotic gadget, but a device manufactured by the world leader in this sector and a project Microsoft has worked on for quite a while.

Office and Its Components

For the neutral observer, it is difficult to miss that Microsoft is set on developing Outlook into a centrepiece of sorts of the average user’s daily workflow, the indispensable sidekick of the likes of Word and Excel and Teams, if you will — “ay, there’s the logical rub”, to paraphrase old Willie Shakespeare.

Quite recently, I had the opportunity of a sneak preview of the next (projected) version of Outlook — and I laughed so hard, I nearly fell over. Yes, of course, users could be more efficient, if they had all tools necessary to accomplish their daily tasks in one place — but why, for the love of Dog, would Outlook, of all available applications, be the centre of action?

Granted, there may be a field of work that relies on the email client more than anything else — I cannot seriously be expected to know each and every work environment there may be — yet I daresay this particular field is not representative of the vast majority of users out there.

Yes, of course, the likes of Thunderbird have, more or less successfully, also tried to establish themselves as one–stop information management tools, featuring all sorts of components alongside the email client, and this may — at least in parts — have informed Microsoft’s intention to expand Outlook’s range of use even further.

Yet — and, precious reader, one really doesn’t have to be a rocket scientist to realise this — Microsoft has one significant advantage over any of its contenders: Windows and Microsoft Office (of which Outlook is a part) are family, while the other operating systems and applications are developed independently of one another.

That is, no one can keep Microsoft from seamlessly integrating any or all parts of their Office Suite into the operating system. None of the applications the Office Suite comprises has to be granted permission to immediately interact with the operating system. Not quite clear yet what I mean? Well, here goes:

As soon as the installation of Windows 11 was complete, I clicked the “date and time” display (right–hand end of the taskbar), fully expecting it to open to let me select any one day of the displayed calendar, add an event or task, and set a reminder (which would then show in the notification area, once due), seeing that Microsoft 365 (and therefore also Outlook) was present — but nothing happened.

I was able to select a day, but this action didn’t trigger anything. No window to enter an event or task or note, let alone set a reminder, opened. What a disappointment! I was simply given the current weekday and date (information that may also be gathered from one’s wristwatch or mobile phone, but faster).

So I launched Outlook, and entered an event in the calendar tool to see whether this event would at least be acknowledged in the notification area, by way of marking that day in a different colour or something — but still nothing. Apparently, these two applications (the calendar in Outlook and the date–time display of the taskbar) don’t have access to a mutual database (and consequently no way of exchanging information).

The obvious question is: why not? Why forgo this decisive advantage? It is an approach that could actually speed up the user’s workflow considerably. There would be no more need to launch Outlook every time one wants to add a task or event (or be reminded that a deadline is coming up).
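What such a mutual database might look like, sketched in a few lines of Python (entirely imaginary; Windows exposes no such shared store):

```python
import datetime
from dataclasses import dataclass, field

@dataclass
class Event:
    date: datetime.date
    title: str

@dataclass
class EventStore:
    # The single source of truth that both "views" would read and write.
    events: list = field(default_factory=list)

    def add(self, date, title):
        self.events.append(Event(date, title))

    def on(self, date):
        return [e.title for e in self.events if e.date == date]

store = EventStore()
# The full client (think: Outlook's calendar) writes an entry ...
store.add(datetime.date(2024, 5, 17), "Jane's birthday")
# ... and the taskbar flyout, querying the same store, sees it at once.
print(store.on(datetime.date(2024, 5, 17)))
```

The design choice is the obvious one: two front ends, one store, so neither component ever has to ask the other for permission to stay in sync.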

If a user prefers to use, say, Thunderbird and Open Office instead of Microsoft’s own, fine, but then they’d get just a date–time display in the taskbar (just like everyone else, at the time of writing), and lack the advantage of being able to quickly access certain features (and information).

However, to this end, Microsoft would (probably) have to go in a different direction altogether. Rather than developing a user interface to integrate certain components with others, they would have to disintegrate them. That is, develop a first–rate email client and a calendar (“Microsoft To Do” does already exist, and is, in my humble opinion, considerably more useful, anyway) whose relevant content may be accessed without having to launch the parent application.

Here’s how this could work out (at least in my imagination): You turn on your computer, the notification area informs you that it’s Jane’s birthday, that you are supposed to attend a meeting at 10.00 and another at 14.30. The reminder of Jane’s birthday also displays a list of options: “send her an email”, “give her a phone call”, or “dismiss the reminder”. You click the email option, and a small textbox opens for you to enter your best wishes. You hit “send”, and click to dismiss the notification. There is no time to check your emails right now, because you are running late for the ten–o’clock meeting already.

A number of things went sideways that day and you never got around to even launching Outlook until late in the evening, but at least you didn’t miss the opportunity to send Jane some digital love on her birthday.

Settings vs. Control Panel vs. Registry

To put it mildly, having to turn to the “Control Panel” or the “Registry” to make even minor adjustments is a royal pain in the neck. Only very few changes genuinely need to be applied while the system boots. So why make such a fuss? (Quite honestly, I cannot even remember when I last had to reboot a computer for any one change to take effect — most likely when I upgraded from Debian 10 to 11.)

I had to resort to the Registry Editor twice and the Control Panel once (and reboot the system each time) inside of the first two days, even though the changes I desired were by no means drastic. Even on day four, I still haven’t found a reliable way to set a global viewing mode for items in File Explorer, or keyboard shortcuts to launch certain applications.

For no obvious reason, applications are not responding to a shortcut I have already set several times. Checking these applications’ properties (which happens to be a scavenger hunt of sorts), I find the respective entry to be gone again. Is this merely a bug or an incentive of sorts for the user to keep pinning a launcher of every thinkable application to the Desktop, Taskbar, or Start Menu?

Seriously, if you still employ this abominable tactic “to gain quick access”, you have no idea how computers work. Without knowing you in person, precious reader, I can tell you this: The more launchers you have pinned to the Desktop or Taskbar, the less efficient your workflow.

Obviously, every adjustment in “Settings” (or the “Control Panel”, for that matter) triggers a new (or altered) entry in the “Registry”, anyway. So why should the average user even have to bother with the “Registry Editor”? And this brings us to the next question …
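The point can be sketched in a few lines of Python (the key path below is made up for illustration; real Registry paths differ):

```python
# A Settings toggle is, at bottom, just a friendly writer of Registry
# values: flipping the switch and editing the key are one operation.
registry = {}

def settings_toggle(name, enabled):
    # What the GUI ultimately performs on the user's behalf.
    # The path is purely illustrative, not a real Windows key.
    key = "HKCU\\Software\\Example\\" + name
    registry[key] = 1 if enabled else 0

settings_toggle("ShowFileExtensions", True)
print(registry)
```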

Permissions

Is it really a great idea to give the “original” user (the account “owner”) administrator rights by default? On the one hand, the “average” user is not considered informed enough to fully customise “their” instance of Windows, yet, on the other, they are free to initiate actions the scope of which they have no reasonable way of anticipating? What kind of logic is that?

Interestingly, Windows does distinguish tasks that need administrator rights. Upon initiating one of these, the user is informed that administrator rights are necessary to go ahead. Yet all it really takes is to acknowledge this statement. So why delay the process in the first place? Do you really think having to hit a button will stop the “average” user from doing something foolish?

OneDrive a.k.a. the Cloud

Like, what? Making the bold decision to synchronise directories other than those already available by default with the cloud? Oh, wait! That’s not even possible — “superuser” rights or no.

Why, actually? I could see why third–party cloud services, such as Dropbox, get their own local directory if Windows were a particularly security–conscious system (which it isn’t, as we all know), but OneDrive is a component of Microsoft 365. So why is it not possible to synchronise just about any directory (or file, for that matter) directly with the cloud, rather than taking the detour through another local directory, which poses the risk that some files will never be backed up to the cloud at all (simply because some users, say, tend to be not quite as careful as one would wish)?

Unfortunately, it doesn’t even seem to take a sloppy user to “accidentally” get rid of documents that were meant to be backed up …

I am fortunately one of those “3–2–1 backup routine nerds”, otherwise I might have lost hundreds of files, testing this cloud nightmare — and this folly would have definitely been discussed in Part 1, Things I Didn’t Like (at All).

(Note for the benefit of the unsuspecting reader: “3–2–1 backup” refers to the method of having 3 copies of a document, stored on 2 different media and 1 of those copies deposited outside one’s own environment. Like, one copy on your hard drive, one in a cloud, and one on a USB drive with a trusted friend or some other safe place. The idea behind this approach is that even if your house should burn to the ground, and provided you survive this disaster, you will still have a working copy of all your relevant documents.)

What’s the story? Honestly, I cannot even begin to venture a guess, as I have never seen anything similar to that. I copied some directories to “One Drive – Personal” and Windows informed me that the process was complete and all directories had been synchronised with the cloud. Nothing to worry about.

Well, one would have to be born yesterday to believe such promises. Of course, I did check the cloud. Everything looked fine — at a first, random glance. Yet when I checked the larger directories, I almost lost it. Directories on the first and second level contained other directories and files, but directories on the third level and below contained only empty directories — not a single file.

Good thing, I still had the originals on the hard drive, right? Right — except I hadn’t. Apparently, there is some mechanism at work that synchronises in both directions. Instead of refilling (if they had ever been filled in the first place) the empty directories in the cloud with copies from the hard drive, the directories on the hard drive were also emptied to match those in the cloud.
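The behaviour I ran into can be modelled in a few lines of Python; this is purely illustrative and emphatically not OneDrive’s actual algorithm, but it shows why a naive two–way “match both sides” sync can destroy data that a one–way backup would preserve:

```python
local = {"report.docx", "notes.txt", "photo.jpg"}
cloud = {"report.docx"}   # the upload silently dropped two files

def backup(source, destination):
    """One-way: copy anything missing to the destination; never delete."""
    return source, destination | source

def mirror(side_a, side_b):
    """Naive two-way sync: each side deletes what the other lacks,
    so both end up with only the intersection."""
    common = side_a & side_b
    return common, common

local, cloud = mirror(local, cloud)
print(sorted(local))  # ['report.docx'] -- the local originals are gone
```

Had the mechanism been a one–way backup, the worst case would have been an incomplete cloud copy; with two–way matching, the incomplete side wins.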

Even while typing this, having pondered the matter for two days already, I still fail to wrap my head around this absurdity. Why would anyone even implement such a ludicrous mechanism, one that utterly contradicts the general concept?

And no, this is not some general limitation of cloud storage that permits no more than a two–level file structure: all the cloud systems I have used thus far allow file structures of any depth — and not one file has gone missing to date.

Security

Since I dropped the keyword already, let’s talk about security. Why does Windows 11 still ship with a trial version of a third–party security suite? That seems to indicate that Microsoft does not put an awful lot of trust in their own “solution”. That, in turn, triggers a number of interesting questions. To wit:

How is it that third–party tools are considered more reliable than anything Microsoft’s own developers manage to come up with? Or, to put it even more pointedly: does McAfee (the company whose security suite is included in Windows 11 by default) know Windows better than Microsoft does? And why does Windows 11 even ship with its own security suite in addition to McAfee’s software (which basically renders Windows Security bloatware)?
