Insecure by Default: Why Secure-by-Default Isn’t Industry Default (Yet)

Introduction: The Paradox of “Secure by Default”

It’s 2025, and we still can’t trust products to be secure out of the box. “Secure by default” has been an industry mantra for years, yet ironically, it’s still not the actual default in most cases. The tech community has known about insecure default configurations since essentially forever—yet even modern platforms like Kubernetes have shipped with insecure defaults:. We’ve all encountered routers with admin/admin logins or software that leaves ports wide open on first install. Despite widespread awareness, manufacturers and software vendors continue to prioritize convenience and compatibility over hardened defaults. Why?

In this post, I’ll take a strategic, industry-wide lens on why insecure defaults persist despite our better knowledge. We’ll dig into the economic and usability pressures that tempt vendors to choose the easy (insecure) path, how regulatory and market gaps reinforce the status quo, and real examples ranging from network gear to consumer IoT to enterprise software. I’ll also share a quick personal anecdote from the field that shows how vendors excuse these choices. Finally, we’ll explore some long-term fixes—like smarter procurement requirements, open-source leadership, and maybe a nudge from regulators.

(As someone who’s spent years in the trenches of infrastructure security, I approach this with a field-proven perspective: no academic fluff, just the real talk on why “secure by default” isn’t default yet, and how we can start changing that.)

Convenience Over Security: Why Insecure Defaults Appeal

One hard truth: security adds friction. Vendors know that any extra setup step or restriction might turn off prospective customers. The initial configuration of most devices or software is aimed squarely at usability rather than security. This trade-off is intentional. Vendors worry that if they ship a product locked down by default, some percentage of users will struggle to get it working on the first try, and then return it or flood support lines. It's often "the easy route" to leave default logins and open settings so that things "just work" without effort.

From an economic standpoint, products that are secure by default can face slower adoption. A security engineer might applaud a router that forces every new user to set a unique strong password, but the marketing team sees a potential drop in sales if customers complain the setup is too complicated. There's a sort of natural selection at play: products with stronger default security often have a longer learning curve and more initial friction, so they may lose out in the marketplace to less secure but easier alternatives. As one industry expert put it, security features that are not almost transparent to the user will simply be bypassed or prompt users to choose a competitor without those hurdles. In short, convenience sells, and vendors are acutely aware of this.

Vendor Anecdote: I once confronted a senior product manager about why their company's new IoT camera still used a universal default password. His candid response: "Look, if we force a password change on setup, our support calls double. Customers just want to plug it in and view the feed. If it's too hard, they return it." I've heard variations of this excuse at conferences and during security audits. It's a frustrating insight into vendor calculus: ease of use often wins over best practices. In one notorious case, a major vendor actually tried to enforce password changes back in 2012, and it faced a full-on customer revolt and spiking support costs, to the point where the internal security team nearly got fired for it. This war story makes the rounds in our industry circles as a cautionary tale: push security too hard, and users (and your own org) push back.

The result is that many companies stick to insecure defaults intentionally. It simplifies manufacturing and deployment (no custom per-unit credentials to manage, for example) and avoids scaring off less savvy customers. Everyone assumes someone down the line (the IT admin, the end-user, etc.) will fix it later. Of course, as we know, that assumption often doesn't hold.

The Reinforcing Loop: Customers, Compatibility, and Lack of Pressure

It’s not just lazy vendors—this problem is propped up by a reinforcing loop of customer behavior, backward compatibility concerns, and weak external pressure.

  • Customers Stick to Defaults: Let's face it, users are notoriously bad at changing default settings. Study after study shows most people accept whatever defaults a system presents. Attackers know this, which is why public lists of default passwords and one-size-fits-all exploits are so effective. If an app ships with something open, odds are it'll stay open. Vendors count on this inertia; they often assume a "reasonable" user would change the password or lock things down, but the reality is that rarely happens without force. One classic example: the Mirai botnet in 2016. It rampaged through hundreds of thousands of IoT devices by simply logging in with factory-default credentials like "admin" and "12345". Mirai only underscored what we already knew: defaults matter, because many users never change them.

  • “It Worked in the Lab!” (Compatibility and Legacy): Another reason insecure defaults persist is to ensure things don’t break in real-world deployments. Vendors often justify weak defaults by assuming an ideal environment. Oh, that database will only run internally on a safe network, so disabling authentication by default is fine. These assumptions are dangerous. The MongoDB ransomware fiasco of 2017 is a great case in point. Older versions of MongoDB did not require any password and even listened on all network interfaces by default—because it was assumed admins would deploy it behind a firewall::. Thousands of installs didn’t get the memo, were exposed to the internet, and promptly got hacked and held for ransom. The lesson: if you ship an insecure default, someone will run it in an insecure way. Yet vendors weigh backward compatibility heavily. They worry if they turn off an old protocol or force a new secure mode, some legacy integration will break. For instance, many network devices kept enabling old protocols (like plain HTTP or Telnet for management) by default well into the 2010s, citing customers who still needed them. Backwards compatibility often trumps security in product decisions, especially when big customers complain something “used to work” and now doesn’t. It’s easier for the vendor to leave an insecure option on and say “well, it’s up to the user to disable it if not needed” – thereby passing the buck downstream.

  • No Rules, Little Accountability: In most industries, there's been minimal regulatory or legal pressure to change this default insecurity. If a data breach happens because a customer left the default password in place, the vendor usually isn't held liable for shipping that default – the blame falls on the user or the unlucky IT guy. That dynamic is starting to shift ever so slightly: for example, California's IoT security law, which took effect in 2020, requires that any IoT device sold in the state have a unique preprogrammed password or require the user to set one on first use. The UK similarly turned its consumer IoT Code of Practice into law, mandating no universal default passwords in consumer IoT. These are positive steps, but they're narrow in scope (mostly consumer IoT) and only in certain jurisdictions. Globally, there's no widespread mandate that says "Thou shalt ship secure defaults." Until such standards exist (or until customers start demanding security up front), vendors face little incentive beyond reputation risk to overhaul their defaults. In the enterprise software world, insecure default configurations aren't even treated as vulnerabilities by many tracking systems, which means they often fly under the radar. In short, neither market forces nor regulators have fully cracked down, leaving it largely up to vendor goodwill (or lack thereof).
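
To make the MongoDB example above concrete, here's a minimal sketch of what an exposure check might look like. It isn't anyone's official tooling, just an illustration assuming the pymongo driver; the hostname is a placeholder, and you should only ever point something like this at systems you're authorized to test.

```python
# Minimal sketch: does this MongoDB instance accept unauthenticated access?
# Assumes the pymongo driver; "db.example.internal" is a placeholder host.
# Only run this against infrastructure you are authorized to test.
from pymongo import MongoClient
from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

def is_wide_open(host: str, port: int = 27017) -> bool:
    """Return True if an anonymous client can list the server's databases."""
    client = MongoClient(host, port, serverSelectionTimeoutMS=3000)
    try:
        names = client.list_database_names()  # succeeds with no credentials on insecure defaults
        print(f"{host}:{port} is open; databases visible: {names}")
        return True
    except OperationFailure:
        print(f"{host}:{port} requires authentication (good)")
        return False
    except ServerSelectionTimeoutError:
        print(f"{host}:{port} is unreachable")
        return False
    finally:
        client.close()

if __name__ == "__main__":
    is_wide_open("db.example.internal")
```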

This loop feeds itself. Customers don't demand secure-by-default products – in fact, some actively resist them due to added complexity. Vendors, seeing that, continue the status quo, citing "we're just giving customers what they want." A decade ago in the industrial control systems (ICS) sector, an industry standards group scrapped the idea of secure defaults precisely because owners cared more about avoiding downtime than security hassles. Even today, many vendors privately say the same: users won't accept the trade-offs, so why should we stick our necks out? It's a frustrating equilibrium where everyone agrees insecure defaults are bad in theory, but each party justifies it in practice.

Real-World Examples: Defaults Gone Wrong

To illustrate how this issue crops up across domains, let’s look at a few real examples (and excuses) from network gear, consumer devices, and software platforms:

  • Network Gear & Enterprise Infrastructure: It's astonishing how many high-end network devices still arrive with well-known default logins or services enabled. I've audited corporate networks where expensive switches and firewalls were running with the manufacturer's default SNMP community string "public" – essentially an open invitation to any attacker who knows where to look. Why were they left that way? In part because the vendor's documentation printed "community: public" right on page one, and the busy admins never changed it. Legacy Telnet access is another example: for years, some enterprise gear kept Telnet on by default (with no encryption) alongside SSH. The reason: "some customers still use automated telnet scripts." In one case, a vendor rep admitted to me that they wanted to disable Telnet by default, but a few big clients threatened not to upgrade if their old management scripts broke – so Telnet stayed on. Backward compatibility over security, in a nutshell. The cost is clear: attackers routinely scan for these default openings (a quick audit sketch follows this list). There's even a 1990s-era CVE (CVE-1999-0517) essentially saying "the SNMP community name is 'public'" – a default that's been known and exploitable for decades.

  • Consumer & IoT Devices: The consumer IoT space might be the poster child for insecure defaults. We've all heard the stories of smart cameras, baby monitors, or doorbells getting taken over by creeps – often because the device had a default password or no password at all. The Mirai botnet example earlier was largely DVRs and IP cameras that never had their default creds changed. In another memorable incident, a family's Wi-Fi baby monitor was hacked and the hacker was screaming at their infant – the device had come with a default login the parents weren't even aware of. Why do manufacturers do this? Often, it's sheer negligence combined with wanting to avoid password reset support calls. A lot of cheap IoT gadgets don't even offer a way to change the hardcoded password without a firmware update (which the user will never do). Economics play a big role here: IoT margins are slim, so vendors cut costs on everything, including robust onboarding security. Until laws like California's kicked in, it was common to ship products with a simple shared password across all units. Even now, enforcement is spotty. The IoT world is slowly improving (unique default credentials and QR-code Wi-Fi onboarding are more common now), but it took headline-grabbing attacks to shame some vendors into action.

  • Software & Cloud Platforms: Even the software that runs our businesses isn't immune. Databases, applications, and cloud services have historically had some very insecure out-of-box settings. I already recounted the MongoDB saga – an example where the default configuration was the insecure configuration. Another classic was early Hadoop and other big-data platforms: by default, no authentication on cluster services (because "it's a closed cluster, right?"). Admins had to jump through hoops to turn on Kerberos security, so guess what – many didn't, and some clusters got compromised when exposed. Or consider Kubernetes, which a few years ago infamously left its dashboard and APIs wide open by default if you deployed the vanilla config; many novice users stood up clusters without realizing they needed to manually lock them down, leading to breaches (a rough check for that misconfiguration is sketched below). Why would a modern project do that? The maintainers assumed a kube cluster would sit on a secured network, and they prioritized easy setup for developers over forcing authentication. It's the same old story: assume the best, ship something that "just works," and hope users read the manual. Cloud providers have had their missteps too – early on, some cloud storage services allowed broad access by default, though most have corrected those defaults after embarrassing leaks. The pattern in software is that defaults often trail best practices by a few years. It usually takes a security incident or two to prompt project maintainers to change a default setting to something safer. Until then, the onus is on users to harden things.
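
Circling back to the network-gear example, the quickest way to spot devices still exposing legacy cleartext management is often a plain socket sweep. Here's a rough sketch along those lines, assuming nothing beyond Python's standard library; the addresses are hypothetical, and again, only scan networks you're authorized to audit.

```python
# Minimal sketch: flag hosts that still answer on the cleartext Telnet port (23).
# The address list is a placeholder; scan only networks you are authorized to audit.
import socket

TELNET_PORT = 23

def telnet_open(host: str, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to port 23 succeeds."""
    try:
        with socket.create_connection((host, TELNET_PORT), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    candidates = ["192.0.2.10", "192.0.2.11"]  # hypothetical management addresses
    for host in candidates:
        if telnet_open(host):
            print(f"{host}: Telnet still enabled -- disable it or restrict management to SSH")
        else:
            print(f"{host}: no Telnet listener detected")
```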
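
And for the Kubernetes scenario, here's an equally rough sketch of checking whether an API server will hand cluster data to an anonymous caller. It assumes the requests library, uses a placeholder endpoint, and skips TLS verification purely to keep the illustration short; a properly locked-down cluster should reject the request outright.

```python
# Minimal sketch: does this Kubernetes API server serve cluster data to anonymous callers?
# "k8s.example.internal" is a placeholder; verify=False is for illustration only.
import requests
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

def anonymous_pod_list_allowed(api_server: str) -> bool:
    """Return True if an unauthenticated request can list pods (a wide-open cluster)."""
    url = f"{api_server}/api/v1/namespaces/default/pods"
    resp = requests.get(url, verify=False, timeout=5)  # no bearer token supplied
    if resp.status_code == 200:
        print(f"{api_server}: anonymous pod listing allowed -- lock this down")
        return True
    print(f"{api_server}: anonymous request rejected with HTTP {resp.status_code} (good)")
    return False

if __name__ == "__main__":
    anonymous_pod_list_allowed("https://k8s.example.internal:6443")
```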

Long-Term Fixes: Changing the Default Trajectory

Breaking out of the insecure-by-default trap will require effort from multiple angles. Here are some strategic fixes and shifts that could make “secure by default” truly the default in the long run:

  • Proactive Procurement Policies: Big buyers have power. If governments and enterprises start demanding secure defaults in their RFPs and contracts, vendors will listen. For example, a government agency can require that any purchased device must enforce unique credentials and have all unnecessary services disabled out of the box. By baking these demands into procurement, vendors get a clear message: build it secure or lose the sale. Some forward-thinking organizations are already adding such clauses, essentially creating a market incentive for secure-by-default products. Over time, this could tip the balance, as vendors see security as a must-have feature to win business (not just a nice-to-have optional extra).

  • Industry Standards & Open-Source Leadership: Industry groups and open-source projects can lead by example by adopting secure defaults and sharing best practices. The open-source community in particular, which powers a lot of commercial software, can push the envelope. We’ve seen instances where open-source projects flip a default from insecure to secure (for instance, MongoDB binding only to localhost by default in newer versions, after the outcry). These changes often ripple out to industry norms. Standards bodies and security foundations can also define baselines – e.g., an IoT Security Framework that says “no default passwords, period” or an enterprise software benchmark that rates products on secure default configuration. When such standards gain traction, it gives cover to product teams internally to make the case: “We need to do this to meet compliance XYZ or to get certified.” In short, peer and industry pressure can supplement customer pressure.

  • Regulation and Liability: As much as we technologists sometimes hate to admit it, regulation can spur action where market forces fail. Thoughtful regulation – like the laws requiring unique default passwords – removes the cheapest, laziest path (universal defaults) and forces vendors to implement a solution. We don't want overbearing rules that stifle innovation, but basic security hygiene in defaults is a reasonable expectation. Another angle is exploring vendor liability. If a product ships with egregiously insecure defaults and that leads directly to consumer harm, should the vendor bear some responsibility? It's a tricky area, but even the threat of liability can make executives prioritize security in product design discussions. We're already seeing a shift in tone with governments (via agencies like CISA) publishing guidance that urges manufacturers to take ownership of customer security outcomes and eliminate practices like default passwords. It's not a giant leap from guidance to eventual enforcement.

  • Designing Security and Usability: Ultimately, the tech industry needs to innovate in making security invisible or painless for end users. The more we can deliver secure-by-default systems that don't feel harder to use, the less this whole trade-off will be an issue. This means investing in UX for security features – e.g., onboarding flows that guide users through setting up creds in a friendly way, smart defaults that automatically lock things down based on context, and so on. Some vendors are catching on that security can be a competitive advantage if done right (nobody likes dealing with breaches!). The goal should be products where the most secure configuration is also the path of least resistance. Easier said than done, I know – but product teams that crack this will win in both security and market share. As one security strategist succinctly noted, if security controls are intuitive and nearly transparent, users won't feel the need to bypass them.

None of these fixes alone will flip the situation overnight. But together, they can start to realign incentives and expectations. I’m encouraged to see even historically change-averse sectors like ICS now talking about secure defaults as a desirable goal, whereas 20 years ago it was dismissed outright. The tide is (slowly) turning.

Conclusion & Checklist for Product Teams

Security professionals (myself included) will continue to bang the drum that “secure by default” should be the norm, not the exception. We’re not there yet, but the combination of enlightened self-interest, customer demand, and maybe a nudge of regulation can make a difference. If you’re building products or systems, there are practical steps you can take today to avoid contributing to the insecure-default problem. To wrap up, here’s a short checklist for product teams and engineers:

  • Eliminate Universal Defaults: Never ship a product with a universal default password or credential. If a default login is unavoidable, ensure it's unique per device or instance, and ideally force a change on first use (a minimal provisioning sketch follows this checklist).
  • Secure First Boot/Install Experience: Design the out-of-box setup to enable security features by default (network services off, least privilege access, encryption on). Guide users through any necessary setup in a friendly way, but don’t default to “open” for convenience.
  • Make Security the Path of Least Resistance: Whenever possible, make the secure configuration also the easiest one. For example, provide hassle-free automatic updates, turn on secure protocols by default, and supply sane defaults so the user isn’t tempted to weaken settings.
  • Provide Backward Compatibility via Opt-In: If legacy support is needed, ship it disabled by default. Allow advanced users to enable that old protocol or weak cipher if absolutely necessary, but don’t present it unless someone goes looking. This flips the script: security by default, with compatibility as an opt-in for the edge cases.
  • Document and Educate: Clearly highlight in documentation which defaults are set for security. If you must ship something open (due to a specific use case), warn the user conspicuously. Make it easy to find and change default creds or settings. A well-informed user is more likely to configure things right.
  • Embrace Standards and Pledges: Align your product with emerging security benchmarks (like the CISA Secure by Design/Default guidance). Publicly commit to secure-by-default principles – it holds your team accountable and earns trust with savvy customers.
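
On that first checklist item, here's a hedged sketch of what "unique per device" can look like at provisioning time. The serial numbers and label format are hypothetical; the point is simply that a factory line can stamp a random, per-unit credential and keep only a hash of it on record, instead of burning one shared default into every unit.

```python
# Minimal sketch: generate a unique, random credential per device at provisioning time.
# Serial numbers and the label format are hypothetical; store only a hash server-side.
import hashlib
import secrets

def provision_device(serial: str) -> dict:
    """Create a per-unit setup password and the record the factory should keep."""
    password = secrets.token_urlsafe(12)  # ~96 bits of randomness, printable
    record = {
        "serial": serial,
        "password_sha256": hashlib.sha256(password.encode()).hexdigest(),
    }
    # The cleartext password goes only on the device label / in-box card.
    print(f"Label for {serial}: setup password {password}")
    return record

if __name__ == "__main__":
    for serial in ["CAM-000123", "CAM-000124"]:  # hypothetical serials
        provision_device(serial)
```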

It’s past time for our industry to stop treating security as an optional add-on. Secure by default cannot just be a slogan; it has to become the baseline expectation for products and software. That shift starts with all of us—vendors, buyers, and even us as end-users—demanding better and refusing to accept the old excuses. Let’s make the “easy path” also the safe path, so we can finally retire the ironic truth that gave this post its title.
