Legal Practice Technology Blog

HoudiniEsq Legal Practice Management
Windows 11 Danger

If you use Windows 11, your Law Firm is probably compromised

Cybersecurity experts have found that Windows 11 poses a serious threat to its users. Not just from cyber threats but from Microsoft directly.

Serious cyber threats were discovered in 2024 by cybersecurity experts, including Microsoft’s own security engineers, but Microsoft refused to fix a very dangerous exploit simply because it would disrupt Microsoft’s update and patch schedules.

Windows Defender can’t detect this threat.

September 2024, security researchers discovered something terrifying. A zero-day vulnerability in Windows 11 that’s been actively exploited for nearly a year, and Microsoft knowingly ignored it. This threat is not some obscure feature nobody uses. It was an exploit in the Windows Security Center itself. A component that’s supposed to protect your computer, is in fact the entry point for attackers and malware.

Since January 2024, hackers have been silently compromising Windows 11 computers worldwide while Microsoft’s Windows Defender security software failed to detect it. The estimate is that 4 million or more Windows 11 computers have been infected.

This malware is very sophisticated; it hides from your antivirus software, it doesn’t appear in the Task Manager, in fact, it runs silently in the background, stealing passwords, banking credentials, cryptocurrency wallets, corporate data, discovery materials, documents, private messages, and more. The alarming part of this story is that Microsoft knew about the vulnerability. They were notified of this zero-day exploit in March of 2024. A security researcher reported it through Microsoft’s Bug Bounty program.

Microsoft classified the exploit as low severity and didn’t prioritize a fix for immediate release. Researchers are furious, and they have the receipts showing Microsoft ignored warnings for months while millions and millions of its users were exposed to a significant threat. 

Thousands of times daily across millions of computers, the software that is supposed to protect you in Windows 11 was seriously flawed. Microsoft’s security software detects malware and uploads this information to Microsoft servers and their cyber threat team. This submission process requires elevated privileges to access files on your system. Without elevated privileges, the software cannot scan your system for malware. Security software needs high privileges to examine potential threats; this is normal behavior. The vulnerability is in how Windows security failed to validate the URL where the data is to be sent.

It is supposed to send samples to Microsoft’s official threat servers. But the validation check is a joke. Attackers discovered that they can redirect the submissions of this critical data to their own servers by manipulating local DNS entries on your computer.

Windows security doesn’t properly verify the server’s SSL certificate. It doesn’t confirm that the server it is communicating with is in fact a Microsoft server. It just sends the file to whatever the URL resolves to via your local DNS table. Here’s how the attack works. The malware modifies your computer’s DNS settings, points the DNS entry for Microsoft security-upload.com to the attacker’s own server.

The very next time Windows Defender tries to upload a suspicious file, it connects to this fake server instead. But that isn’t the worst of it; this is a bi-directional connection, data flows both ways. The attacker’s server responds with a malicious payload disguised as analysis results.

Windows security accepts it because it’s coming from what it thinks is a trusted Microsoft server. That payload now has system-level privileges. Complete computer control, all through the security software that’s supposed to protect you.

The genius of this exploit is its stealth. It doesn’t disable Windows Defender, it doesn’t trigger any security alerts. Windows security continues running normally, showing a green checkmark indicating that your computer is protected.

Users have no idea that they have been compromised. The malware runs silently, logging every keystroke, every password entered, monitoring every banking session, copying every document, and uploading everything to the attacker’s servers. This has been going on for months without the users’ knowledge.

A Taiwanese security researcher, David Chen, found this vulnerability in February 2024 while analyzing Windows 11’s network traffic and noticed that DNS entries could be exploited. He spent weeks developing a proof of concept and eventually successfully compromised a test Windows 11 machine through the Security Center flaw. As most security researchers do, he responsibly disclosed the exploit to Microsoft through their bug bounty program on March 8th, 2024. Microsoft’s response confirmed the vulnerability. They classified it as moderate severity and said they would include a fix in a future update but did not provide a timeline for the fix. Microsoft classified the exploit as non-urgent? Meanwhile, attackers have compromised millions of computers.

Microsoft’s bug bounty program is worthless. They don’t take cybersecurity researchers’ reports seriously. And as a result, users are paying a hefty price for Microsoft’s negligence.

The malware campaign nobody detected.

Microsoft has declined to release a security patch for this zero-day vulnerability that is currently being exploited by at least 11 hacking groups linked to countries like North Korea, Russia, and China. The malware campaign using this exploit has been running since January 2024. Infected users are estimated to be well over 3.7 million globally as of this writing. 

The exploit is clever. After initial infection via the Windows security exploit, it installs multiple mechanisms to ensure its persistence. It makes registry modifications, schedules tasks, and adds event subscriptions. Even if users find and remove one of these mechanisms, others help keep it alive. It hides from virus and security software by running in memory. It never writes files to disk. It uses a hollow process technique, which makes it invisible to Task Manager. Users literally cannot see it running even if they know what to look for. The malware’s capabilities are exhaustive.

The key logging captures every password, credit card number, and message typed. Screenshots are captured when you visit a banking site. Cryptocurrency wallet monitoring detects and steals Bitcoin, Ethereum, and other digital assets. Browser credential theft extracts saved passwords from Chrome, Edge, Firefox, etc. Email is monitored and copied. Messages in Outlook and Gmail are uploaded. The malware performs comprehensive data theft.

So who’s behind this? Analysis suggests it is a financially motivated cybercrime syndicate based in Eastern Europe. The command-and-control infrastructure being used in this exploit has hosting providers in Russia, Ukraine, and Romania. The protocols used in the exploit match those used by known cybercrime organizations. This isn’t state-sponsored espionage. It’s criminals stealing banking credentials and cryptocurrency for profit, and they’ve been doing it successfully for months while Microsoft’s security software completely failed to detect them.

The fallout has just begun, but we do know that by mid-September 2024, companies were discovering they’ve been compromised. A mid-sized accounting firm in Ohio discovered 200 of their computers were compromised, stealing client financial data, tax returns, banking information, and corporate records. Everything that went through those computers was compromised. The firm must notify thousands of clients that their data was potentially stolen.

Law firms aren’t exempt. A law firm in Manhattan found 67 attorney computers were compromised, with attorney-client-privileged communications, legal strategies, settlement negotiations, and trade secrets all potentially stolen.

The Bar association rules require immediate disclosure to clients. The firm sent notification letters to 3,400 clients in September. Their clients are furious and rightfully so. Many of their clients have moved to different firms. The reputational damage to a law firm can be catastrophic.

A healthcare provider in Florida discovered infections on over 300 computers. Medical records, patient information, and HIPAA-protected data. All accessed and potentially stolen. Federal regulations require breach notification to every affected patient. The provider must notify 89,000 patients. They are now subject to fines and lawsuits.

You can’t blame the malware in many instances. Law firms and healthcare professionals are responsible for protecting client and patient data. The fact that Microsoft’s security software failed doesn’t excuse the breach.

These are not isolated cases either. Hundreds of companies are discovering they have been compromised, each now facing notification requirements, regulatory investigations, lawsuits, and the destruction of their reputations.

Why didn’t Microsoft patch this when they knew about it?

A day late and a dollar short.

Microsoft finally released an emergency patch on September 10th, 2024. Six months after Chen’s bug report but only nine days after he went public with this information. 

The patch fixes the DNS validation flaw in Windows 11, which prevents the exploit from working. But it doesn’t remove any existing infections. Computers already compromised stay compromised. Users must manually detect and clean their systems themselves. A serious problem because most users don’t know they’re infected.

It’s just corporate spin.

Microsoft’s security bulletin is supposed to help users and cybersecurity experts, but all it really is is nothing more than corporate damage control. They acknowledge the vulnerability, thanked Chen for responsible disclosure, and moved on. They claimed that they worked as quickly as possible to develop and test a patch but failed to mention that they ignored the report for six months. In the meantime, millions of computers were compromised. They refuse to accept responsibility for this breach.

The cybersecurity community’s response is brutal. Bruce Schneier’s blog post sums it up: “The Windows security breach reveals systemic problems in Microsoft’s approach to security. They prioritize feature development over fixing vulnerabilities. They underestimate severity. They delay patches. Users pay the price. This needs to change fundamentally.”

Even Microsoft’s own security team is reportedly furious. Leaked internal messages show security engineers argued for an emergency patch, but management overruled them. They decided the risk didn’t justify disrupting update schedules. 

In a nutshell, a researcher reported a vulnerability. Microsoft refused to fix it. They ignored it. That’s not a technology problem. That’s a culture problem. It means Microsoft’s security processes are a failure. The company doesn’t take researcher reports seriously. It doesn’t prioritize fixes and it doesn’t move fast enough when vulnerabilities are actively being exploited in their own operating system.

Microsoft is complacent and forgets that Windows’ security reputation took over a decade to recover from their XP-era security issues, and now here we are once again. This breach has undone all that progress.

Class acton lawsuits are certain, and the legal theory is pretty straightforward. Microsoft knew about the vulnerability. Chose not to fix it promptly, and users were harmed by that choice. That’s negligence. Microsoft could face thousands of lawsuits and faces billions in potential damages. Unlike past security vulnerabilities where Microsoft could claim they didn’t know, Chen’s documented reports to Microsoft’s Bug Bounty program prove they knew and chose to delay. That makes the liability arguments stronger.

Class action lawsuits have already been filed in late September 2024. Three class action lawsuits I’m aware of are one in California, one in New York, and one in Texas. All with similar claims. Microsoft was informed of a critical security vulnerability. Failed to patch it promptly, and millions of users were compromised as a result. Microsoft’s negligence caused enormous damages to all affected.

Windows 11 was marketed as Microsoft’s most secure operating system ever, with enhanced security features, built-in protection, and touted to be safe by default.

The fallout will be devastating for Microsoft, no doubt, but what is catastrophic is all the corporations who relied on Microsoft are going to pay a bigger price, and many may never recover.

 

Frank A Rivera CEO HoudiniEsq

Best Legal Cloud Software HoudiniEsq

Frank A. Rivera
Software architect and Sun Microsystems Alumni. Frank is responsible for the development of key technologies across several sectors such as banking, intelligence, national security, and practice management. Frank architected and developed the Multi-Level-Gateway of the Trusted Solaris operating system after 9/11 allowing our intelligence agencies to securely share information without exposing credentials to the other agency. Frank also developed a streaming asymmetric block cipher that uses varying block sizes and a 768bit key providing for very strong encryption. Frank architected and developed the first cloud-based legal practice management product for the legal industry. Four years before the term Cloud Computing entered our lexicon. The product was acquired by LexisNexis in 2004.

Frank never received a degree in computer science but instead started his career with tech in the U.S. Military 18th Airborne Corp., Special Operations, 525th Expeditionary Intelligence Brigade, Fort Bragg North Carolina.

Windows11BlogPost

Is Windows 11 a risk to your Law Firm?

AI is great. Trust me, bro.

AI is being implemented into everything these days. Predictive analytics for business forecasting, automated fraud detection in finance, AI-driven diagnostics in healthcare, and of course, chatbots. These are just a few examples. A law practice is no exception. AI is transforming the industry by automating time-consuming tasks, significantly increasing efficiency in document drafting, legal research, and due diligence. AI tools help law firms stay competitive, reduce human error, and deliver faster, more cost-effective services to clients. However, AI has fallacies.

Common issues that arise in all AI systems include poor or outdated data. Garbage in results in garbage out. AI is only as good as the data it is trained on. Poor-quality, incomplete, or biased datasets lead to inaccurate responses. Inherited bias is another. Large Language Models (LLM) inherit and amplify societal biases in their data. Lack of adaptability. AI systems excel only within the narrow parameters they were trained on. For example, Elon Musk’s Tesla Full Self Driving AI can’t play chess. AI systems struggle to apply knowledge from one domain to another. AI is inherently rigid. AI cannot adapt to novel, rapidly changing, unpredictable environments that fall outside its pre-defined rulesets. AI lacks understanding of context. Pattern recognition is not cognition. AI processes data patterns; it does not think, it does not understand underlying meaning or subtle context. It lacks common sense and human reasoning. Many AI systems live in the dark. Nearly all deep learning models operate as “black boxes,” making it difficult to understand how or why the AI reached a specific conclusion. This erodes trust. All AI systems degrade over time. We live in the information age. Things change rapidly. Models become outdated quickly and become less accurate. Models need to be continuously retrained with new relevant data. AI often fails to understand subtle nuances, leading to errors in complex fields like legal analysis or medical diagnostics.

It is fair to say that the list of AI benefits is long, and some may say that the benefits far outweigh the risks, but only in number, not in the cause and effect. Most AI systems are glorified search engines. The AI systems most consumers use today are not intelligent. You just need to peer below the surface to see why.

AI systems are easily manipulated using cleverly crafted inputs that are designed to mislead the AI system into disclosing sensitive information, documents, client names, or worse.

Large Language Models require inputs. These inputs are generally via a chat window called a context window where you enter a prompt and the “AI” analyzes your prompt and responds. What is occurring is the AI is guessing what the first word should be in the response. After it has determined what the most logical first word should be in the response, the AI adds the first word of the response to the end of your initial prompt and the AI processes the entire prompt again to find the second word and so on and so forth until its work is done. This raises a serious security risk because I or an AI Agent could intercept your prompt and inject commands into the prompt and have the AI return sensitive information that has nothing to do with your original request and you would be none the wiser. This is called AI Prompt Injection. It is one of many risks but an important one to mention.

AI prompt injection is a critical security vulnerability where attackers manipulate LLMs by crafting malicious inputs that override and bypass your instructions, essentially hijacking the commands sent to the LLM model, ignoring the intended instructions and any safety guardrails. This issue is recognized as a top AI security risk; it can lead to data exfiltration, unauthorized code execution, and critical business process hijacking.

The issue arises from the fact that LLMs process both developer instructions called system prompts and user-provided inputs in the same context. One large string of commands. The model cannot reliably distinguish between trusted instructions and untrusted, adversarial input, allowing the malicious inputs to take total control of the AI’s response. Add the fact that AI agents are all the rage these days, the security vulnerability is compounded since AI agents are essentially black boxes that execute commands on the user’s behalf. An AI agent that can read documents on your drive is just one example.

I don’t want to go too deep into the woods here since this blog is about the dangers of using Windows 11 in your practice, but it is important to mention that these security flaws are currently present in all AI systems.

You may be asking yourself, why is the industry not doing anything about this? It should be an easy problem to solve. The answer is because it is too soon. All the vulnerabilities of AI have yet to be discovered, and solutions tend to open a new can of worms. To be fair, it is a new industry; things will get better, but this takes time. It doesn’t help matters when the companies that provide the majority of AI tools today are large corporations like Microsoft that have so much at stake that they really need everyone using AI to justify their huge investments in this technology.

There is no doubt that AI will make all our lives easier, but the flip side of the coin is that integration of AI into any critical system poses risks, and one of those risks is embedding AI in our operating systems.

The Windows 11 debacle. 

On September 26th, 2023, Microsoft released the first major Windows 11 update featuring integrated AI called Microsoft Copilot. This Microsoft Windows 11 update was released as an optional, non-security update for Windows 11, version 22H2. The operative word here is optional. I will explain, but first, what is Copilot?

According to Microsoft, Copilot is an AI-powered conversational assistant developed by Microsoft. This is misleading. Copilot relies on OpenAI’s LLMs, specifically GPT-4 and GPT-5.

A LLM is a Large Language Model and is technology designed to understand, process, and generate human-like text by analyzing large datasets. They are a subset of machine learning known as deep learning, relying on neural networks to identify complex language patterns, grammar, and context.

Microsoft Copilot was designed to enhance productivity by automating tasks, drafting content, analyzing data, and generating images. It operates across nearly every Windows application in Windows 11, its Edge web browser, Microsoft mobile apps, and Microsoft 365 apps such as Word, Excel, Outlook, and Teams.

In late 2025, Microsoft was reported to have spent a total of $13.75 billion on OpenAI, the majority attributed to licensing and partnership frameworks for the development of Microsoft’s Copilot. In addition, Microsoft has spent massive amounts on AI infrastructure, roughly $88 billion in just fiscal years 2023 through 2024. That is a huge investment and a factor in why and how Microsoft’s AI is putting your law firm at risk.

Back to Microsoft’s claim that the update was optional. This Windows update was far from optional. It was installed silently. The update installed AI into every aspect of the Windows 11 operating system without the user’s knowledge or opt-in. This update was so bad that it also added ads in toolbars and menus without the user’s permission. This was not well received and an obvious push to recoup the billions and billions in their AI investment.

In an effort to allow Microsoft’s AI agents to “see what you see”, they gave their AI and potential third parties access to everything you do as well as the ability to alter settings, read the contents of your screen, and to act on your behalf without your knowledge. This is potentially catastrophic.

The experts are concerned.

The aggressive integration of AI into Windows 11, in particular Copilot and a feature called Recall, has faced significant backlash by cybersecurity experts.

Microsoft’s Recall feature is an AI-powered feature for Windows 11 that acts as photographic memory of all your computer activity. It periodically takes snapshots of your screen every 3 seconds, allowing you to search through your computing history such as apps you have used, websites you have visited, and documents you have viewed or created, leveraging OCR and natural language technologies to find and resume your previous work.

Sounds productive, doesn’t it? So why all the fuss? In Microsoft’s urgent push to recoup their $100 billion plus AI investment, they implemented the AI into Windows 11 sloppily. So much so that all those image snapshots of your screens included passwords, banking, and other sensitive information, but that isn’t the worst of it. Microsoft stored all this information on your drive in a SQL-lite database with no password or protection. Given that Copilot has access to your entire system, it adds a level of risk users are just not prepared for.

Cybersecurity experts state that Windows 11 is a security nightmare, and as a result, fixes are delayed by one year. This is outrageous. Privacy advocates and users raised alarms that Recall stores sensitive, unencrypted data that is accessible to malware and AI agents.

When Microsoft expressed plans to evolve Windows into an “agentic OS” where AI acts on behalf of the user, Microsoft received thousands of negative responses from cybersecurity experts, forcing executives to pause some of these plans.

Users have complained about Copilot being forced into essential and widely used apps like Notepad and Paint. The AI features have been reported to increase background CPU usage and cause a lot of memory consumption, impacting the battery life of laptops.

In early January 2026, Microsoft was being called “Microslop”. Microslop is a derogatory nickname for Microsoft after its AI push. The term is derived from the phrase AI Slop, which is slang for low-quality, mass-produced AI content we have all experienced online. The term was given to Microsoft by its users in protest to the company’s aggressive, forced integration of Copilot across all of Microsoft products. The blowback stems from a combination of privacy, performance, and usability concerns, with users labeling the push as “forced bloat”.

Bloat is an understatement and only the tip of the iceberg. The persistent addition of Copilot buttons appearing across the OS, including in the File Explorer and the taskbar, is seen as unnecessary clutter, leading to the creation of third-party tools to remove AI from the system entirely, but you are still stuck with the unwanted ads throughout the Windows 11 OS.

Ads in Windows 11 appear as app suggestions in the Start menu, promotional banners in the Settings app, and in the bottom-right corner of the Notifications app. Microsoft also displays ads within File Explorer, the Widgets panel, and on the lock screen promoting services like OneDrive, Microsoft 365, and Game Pass. Even the coveted search menu has ads.

In my opinion, forced adoption is a loss of control. A major frustration is that these features were forced by default with no clear or easy way to turn them off, making users feel they are guinea pigs in a forced large-scale proof of concept experiment. Users who paid for their OS are now forced to become Microsoft Windows 11 beta testers. It has been reported that after disabling the AI features, they are re-enabled after a Windows update. This is unacceptable.

Users are right to feel that AI is being pushed on them to serve Microsoft’s partnership with OpenAI and satisfy shareholders’ greed, rather than improving their users’ productivity.

Microsoft’s claims that its integrated AI features promise a revolution in productivity. Sure, seamless search and instant document summarization sound like a great productivity booster, but that is just on the surface. When you dig deeper, the promised productivity boost is just an illusion and largely marketing hype. I have found that the output is less than great and often is full of inaccuracies.

For a law firm, the allure of instant document summarization and seamless search is undeniable. However, these tools introduce profound risks to the foundational pillars of a legal practice. Attorney-client privilege, data sovereignty, and compliance with state and federal laws.

How is your law firm affected?

The erosion of the Walled Garden. The hallmark of legal work is the expectation of absolute confidentiality. Traditionally, data remained within isolated silos. Embedded AI in your OS breaks these walls by reading across the entire operating system to provide context.

Data leakage occurs if an AI model uses local data to learn or if it syncs your metadata to the cloud for processing. The firm has inadvertently granted a third-party vendor access to protected work products.

The recall feature in Windows 11 is a huge problem. This feature takes periodic snapshots of the user’s screen to create a permanent, searchable visual record of everything an attorney views—including sensitive discovery material or private communications—creating a massive target for opposing counsel subpoenas and cyber-attacks.

Discovery risks, especially in litigation, are significant. An opposing party could argue that the AI’s logs and snapshots are discoverable evidence, potentially exposing internal strategies and raw thoughts that were never intended for production or disclosure.

Waiver of Attorney-Client Privilege. Legal privilege is fragile; it can be waived if a communication is shared with a third party that does not have a strict legal requirement for confidentiality.

Third-party intermediaries using the OS-level AI to draft a memo or summarize a client meeting could be legally interpreted as “disclosing” that information to Microsoft.

Inaccurate summaries, an AI agent might misinterpret a nuance in a deposition transcript or miss a note in a contract clause, leading to malpractice lawsuits.

Lack of attribution, because AI often pulls from various local and web-accessible sources simultaneously, makes it difficult for an attorney to verify the source of a specific claim, undermining the duty of candor with the courts.

The American Bar Association and state bars emphasize the duty of competence. Integrated AI can be confidently wrong, a phenomenon known as hallucinations. If you’re not careful, AI can destroy an attorney’s career or practice.

While Windows 11 AI offers a competitive edge in speed, it creates a black box environment that is often incompatible with the strict transparency and security required by the law. For a firm to adopt these tools, they must implement rigorous administrative controls disabling features that record screens, opting out of data sharing for model training, and ensuring all AI processing occurs within a HIPAA/SOC2-compliant trust boundary. Without these safeguards, the AI is not just an assistant; it is a liability to your practice.

The good news is that Windows 11 requires a TPM 2.0 chip for Copilot to operate. TPM stands for Trusted Platform Module and is a hardware-based security processor, essential for Windows 11 Copilot. It manages encryption keys to protect data, user credentials, and boot integrity.

If you have older hardware, you may be safe running Windows 11 for now. A potential problem is when you need to upgrade your hardware in the future? Most new PCs have this security chip. Modern CPUs such as Intel’s Core 8th Gen+, AMD Ryzen 2000/3000 series+, often include this as firmware rather than a physical chip.

So what can you do?

For general use of any AI system, try the following. Sanitize your inputs. Treating all external inputs as untrusted and using delimiters to separate instructions from data. Defensible Instructions. Utilize a “sandwich defense”, place user input between two sets of instructions, and fine-tune your models to recognize and reject hijacking attempts. Limit privileges, ensure any AI agents operate with minimal privilege, and that they have no access to external tools.

What if I have Windows 11 and Copilot?

The simplest and best thing to do is to roll back and use Windows 10. You will have to pay for security updates, but it is better than paying a higher price later on. If rolling back to Windows 10 isn’t an option, here is what to do to secure a Windows 11 environment for legal work.

Your IT department must first and foremost use Group Policy Objects called GPOs and Mobile Device Management called MDMs to ensure these features cannot be re-enabled by end-users or by a Windows 11 software update.

Here is a checklist of high-priority changes to help mitigate most of the risks. I have included the GPO paths, e.g. >User Configuration>Administrative Templates>Windows Components>Windows Recall.

1. Disable Windows Recall (Snapshot Privacy)
Microsoft’s Recall feature takes constant screenshots of the desktop. For a law firm, this is the highest risk factor for data breaches and discovery.

Disable Save snapshots for Windows.
>User Configuration>Administrative Templates>Windows Components>Windows Recall
Set Turn off saving snapshots for Windows to Enabled.
This will prevent the OS from recording any visual history of client files or emails.

2. Disable Copilot (General AI Integration)
System-wide Copilot can scan active windows and documents to provide suggestions, which sends metadata/content to the cloud and Microsoft.

Turn off Windows Copilot.
>User Configuration>Administrative Templates>Windows Components>Windows Copilot
Set Turn off Windows Copilot to Enabled.
This removes the Copilot icon from the taskbar and prevents the sidebar from being invoked.

3. Restrict Cloud Search & Data Collection
Windows 11 often sends local search queries to Bing to provide “enhanced” results. This can accidentally leak client names or case numbers to Microsoft’s search index.

Disable Web Search in Taskbar.
>Computer Configuration>Administrative Templates>Windows Components>Search
Settings Set Do not allow web search to Enabled.
Set Don’t search the web or display web results in Search to Enabled.
This ensures the Windows search bar stays local to the machine’s hard drive only.

4. Adjust Diagnostic & Telemetry Data
By default, Windows may send “Optional Diagnostic Data” to Microsoft, which can include snippets of memory or document content if a crash occurs while an AI feature is active.

Limit Diagnostic Data to Required only.
>Computer Configuration>Administrative Templates>Windows Components>Data Collection and Preview Builds
Set Allow Diagnostic Data to Enabled, and select Diagnostic data off (not recommended) or Required diagnostic data from the dropdown.

5. Disable Tailored Experiences
Microsoft uses diagnostic data to offer personalized tips and suggestions, effectively using AI to analyze user behavior.

Turn off Tailored Experiences.
>User Configuration>Administrative Templates>Windows Components>Cloud Content
Set Turn off Microsoft consumer experiences to Enabled.

In short, the blowback has forced Microsoft to move away from forcing AI into every corner of the OS and toward a more targeted, optional approach. However, the loss in trust of its user base has hurt Microsoft’s credibility; and as a result, many users and law firms have or are planning on moving away from Windows to OS X and Linux.

The legal industry is typically risk-averse, but with the promise of quick solutions, many law firms are jumping in headfirst. In my opinion, the waters are way too shallow at this time. A bit of caution and common sense can avert a tragedy.

Frank A Rivera CEO HoudiniEsq

Best Legal Cloud Software HoudiniEsq

Frank A. Rivera
Software architect and Sun Microsystems Alumni. Frank is responsible for the development of key technologies across several sectors such as banking, intelligence, national security, and practice management. Frank architected and developed the Multi-Level-Gateway of the Trusted Solaris operating system after 9/11 allowing our intelligence agencies to securely share information without exposing credentials to the other agency. Frank also developed a streaming asymmetric block cipher that uses varying block sizes and a 768bit key providing for very strong encryption. Frank architected and developed the first cloud-based legal practice management product for the legal industry. Four years before the term Cloud Computing entered our lexicon. The product was acquired by LexisNexis in 2004.

Google put your Law Firm at risk?

Does the reliance on Google put your Law Firm at risk?

We all assume that Google Services will always be available. Search, Gmail, and YouTube. The online storage of images, websites, and documents. But those services are just the tip of the iceberg. Does the reliance on Google put your Law Firm at risk? Google provides a staggering number of services that make our lives easier. The most popular are services such as drive, docs, maps, analytics, surveys, workspaces, calendars, chat, charts, classroom, fiber, and voice. In addition, millions of organizations large and small rely on Google for single-sign-on and two-factor authentication using Google Authenticator. But is Google good for your Law Firm?

To put things into perspective Google controls over a third of the services on the Internet. Google processes, manages and stores over 306 billion emails daily. Over 300 hours of video is uploaded to Google every minute. There are 5.6 billion searches performed every day and the total number of people who use Google to sign in to products and services daily is estimated to be nearly 1.3 billion. Take into account the many 3rd party service providers that rely on Google and it is billions of users that depend on Google each day.

The Crash
On December 14th, 2020 Google and all of its services became unavailable for nearly an hour. It doesn’t seem like much but at the time the world seemingly came to a standstill. Many were unable to work and many were left in the dark and in the cold literally as Google Home products such as Nest no longer worked. The Wall Street Journal being dependent on Google services had to resort to telephones to collaborate causing productivity to drop tenfold. Many schools that rely on Google Meet had to close for the day. Hospitals couldn’t access physician schedules. Law Firms couldn’t access calendars or email and in many cases access to critical systems was impossible.

In a statement, Google told an India Today Technologist that its services experienced an “authentication system outage” for about 45 minutes due to an internal storage quota issue. The interesting thing about this outage was that it occurred very shortly after the recent Solarwinds hack was reported.

Solarwinds is a Network Performance Monitor that was found to have been compromised by Russian hackers in March 2020 but only detected in mid-December 2020. The Solarwinds hack affected the Pentagon, many intelligence agencies, DOJ, IRS, NASA, several nuclear labs, nearly every telecom company, and many Fortune 500 companies that use the Solarwinds software. Approximately 1,800 clients.

Was the recent outage which coincided with the Solarwinds hack the real cause? More on that in a minute.

As is the case with each outage once the system is back up and running everyone goes on about their business and the outage is soon forgotten. No worries right? Well yes. This isn’t the first time and it certainly won’t be the last.

On November 11th, 2020, another outage occurred affecting streaming services. The outage started at roughly 12:20 UTC and was restored at 04:13 UTC. Nearly four hours. Prior to this outage on August 20th, 2020, over approximately six hours, a global outage abruptly disrupted Google’s services, including Gmail, Drive, Docs, Meet, and Voice.

That is three major outages in a single year. So much for the five-nines of high availability. Five-nines refers to 99.999% of the high availability of services. To achieve five-nines the service must be down no less than 5.26 minutes per year. Four nines 99.99% is considered excellent but means only 52.32 minutes of downtime per year. In one year Google has been down approximately 11 hours. That is roughly 99.8% or so of availability or two-nines in 2020.

Google’s global outages demonstrate the risk Big Tech poses with consolidated online infrastructure. It’s not just the little guys that are at risk. Some of the biggest companies use Google services. Uber, Netflix, Pinterest. Spotify, Airbnb, and Twitter just to name a few. Millions rely on Google to authenticate to other services for example Salesforce and Dropbox. Are we too dependent on Google?

What happens if instead of hours Google is down for an entire day or days? No surprise, Google has planned for this. Google implements SRE which is Site Reliability Engineering. SREs are comprised of software that monitors and responds to critical infrastructure and operations problems. It removes the human component from these sorts of tasks. It is more reliable and efficient and can respond to issues within milliseconds. In theory. The problem is that software engineers make mistakes. Case and point, SREs didn’t prevent any of these recent Google outages.

So is Google secure? Somewhat and better than most. No computer system is 100% secure. The only safe computer is unplugged. Any software that requires login credentials is vulnerable because the weakest link in any computer system is always the user.

Is Google safe from hackers? No. Google has stated that it has paid hackers 6.5 million dollars in 2020 to help keep the Internet safe. These weren’t attacks but competitions run by Google to help identify security deficiencies on their platforms. Google has been running these competitions since 2010. Makes one wonder.

The most recent Google outage was just hours after the US government and many of its agencies reported the Solarwinds cyber attack. This hack was so sophisticated and serious that Congress had a national security meeting on the subject.

Was Google being cautious and in their attempt to patch Solarwinds brought down the entire system? It is only speculation but experts did take note of the recent outage’s timeframe. Even if Google doesn’t use Solarwinds directly, some developers and integrators on Google’s platforms do.

What is concerning about all this is that Google has become a single choke point for many businesses across the globe. If its services become unavailable for long periods of time billions of users are impacted.

So what is one to do? Well, it is prudent to set up alternative services for email and access to critical systems. Having a Yahoo account in addition to a Gmail account ensures that you will be able to communicate in case Google was to go down for a long spell. If you use two-factor authentication it is important to create and save two-factor authentication keys so if Google Authenticator becomes unavailable you can still authenticate and log in to critical services outside Google’s platforms.

These outages and the two-nines, 99.8% of availability make a strong argument for Google to be broken up if not for Antitrust reasons but for security reasons. Currently, the U.S. Dept. of Justice filed a lawsuit against Google along with forty states alleging that Google has a search monopoly. Reminiscent of Microsft’s Antitrust troubles with its Web Browser two decades ago. But Search is just one sliver of the services it provides. Some believe that Google has a monopoly on internet-based services as well. One thing is for sure, Google will continue to create and consolidate internet-based services and if you’re in their ecosystem then you are at risk if you don’t set up alternatives just in case because it isn’t a matter of if but when.

Some say that Google is too big, it controls too much, and poses a threat to every business that uses its services and should be broken up. One thing is for sure the latest cyber-attack is a moment of reckoning for every business large and small.

Frank A Rivera CEO HoudiniEsq

Best Legal Cloud Software HoudiniEsq

Frank A. Rivera
Software architect and Sun Microsystems Alumni. Frank is responsible for the development of key technologies across several sectors such as banking, intelligence, national security, and practice management. Frank architected and developed the Multi-Level-Gateway of the Trusted Solaris operating system after 9/11 allowing our intelligence agencies to securely share information without exposing credentials to the other agency. Frank also developed a streaming asymmetric block cipher that uses varying block sizes and a 768bit key providing for very strong encryption. Frank architected and developed the first cloud-based legal practice management product for the legal industry. Four years before the term Cloud Computing entered our lexicon. The product was acquired by LexisNexis in 2004.