Tag Archives: Opinions

A Security Issue in Android That Remains Unfixed – Pull-down Menu On Lock Screen

by Bozhidar Bozhanov
license CC BY

Having your phone lying around when your kids are playing with everything they find is a great security test. They immediately discover new features and ways to go beyond the usual flow.

This is the way I recently discovered a security issue with Android. Apparently, even if the phone is locked, the pull-down menu with quick settings works, and so does volume control. Not everything inside the quick settings menu is fully functional while the phone is locked, but you can disable mobile data and Wi-Fi, turn on the hotspot, and switch to Airplane mode.

While this has been pointed out on Google Pixel forums, on Reddit and on Stack Exchange, it has not been fixed in stock Android. Different manufacturers seem to have acknowledged the issue in their custom ROMs, but that’s not a reliable long-term solution.

Let me explain why this is an issue. First, it breaks the assumption that when the phone is locked nothing works. Breaking user assumptions is bad by itself.

Second, it allows criminals to steal your phone and put it in Airplane mode, thus disabling any ability to track the phone – either through “find my phone” services, or by the police through mobile carriers. They can silence the phone, so that it’s not found with “ring my phone” functionality. It’s true that an attacker can just take out the SIM card, but having the Wi-Fi on still allows tracking via the Wi-Fi networks through which the phone passes.

Third, the hotspot (similar issues apply to Bluetooth). Allowing a connection to the device can be used to attack it. It’s not trivial, but it’s not impossible either. It can also be used to run all sorts of network attacks on other devices connected to the hotspot (e.g. you enable the hotspot, a laptop connects automatically, and you execute an ARP poisoning attack). The hotspot also allows attackers to use the device to commit online crimes and frame the owner – especially if they do not steal the phone, but leave it lying where it originally was, just with the hotspot turned on. Of course, they would need the password for the hotspot, but that can be obtained through social engineering.

The interesting thing is that when you use Google’s Family Link to lock a device that’s given to a child, the pull-down menu doesn’t work. So the basic idea that “once locked, nothing should be accessible” is there, it’s just not implemented in the default use-case.

While the scenarios described above are indeed edge cases and may be far-fetched, I think they should be fixed. The more functionality is available on a locked phone, the larger its attack surface (including for the exploitation of 0-days).


The Lack Of Native MFA For Active Directory Is A Big Sin For Microsoft

by Bozhidar Bozhanov
license CC BY

Active Directory is dominant in the enterprise world (as well as the public sector). From my observation, the majority of organizations rely on Active Directory for their user accounts. While that may be changing in recent years with more advanced cloud IAM and directory solutions, the landscape of the last two decades has been dominated by Microsoft’s Active Directory.

As a result of that dominance, many cyber attacks rely on exploiting some aspect of Active Directory – whether it’s weaknesses of Kerberos, “pass the ticket”, golden tickets, etc. Standard attacks like password spraying, credential stuffing and other brute-forcing also apply, especially if Exchange web access is enabled. Last but not least, simply browsing Active Directory once authenticated with a compromised account provides important information for further exploitation (finding other accounts, finding abandoned but not disabled accounts, finding passwords in description fields, etc.).

Basically, having access to an authentication endpoint that interfaces with Active Directory allows attackers to gain access and then move laterally.

What is the most recommended measure for preventing authentication attacks? Multi-factor authentication. And the sad reality is that Microsoft doesn’t offer native MFA for Active Directory.

Yes, there is Windows Hello for Business, but it can’t be used in web and email contexts – it is tied to the Windows machine. And yes, there are third-party options. But they incur additional cost and are complex to set up and manage. We all know the power of defaults and built-in features in security – MFA should be readily available and simple in order to see wide adoption.

What Microsoft should have done is introduce standard, TOTP-based MFA and enforce it through native second-factor screens in Windows, Exchange web access, Outlook and others. Yes, that would require Kerberos upgrades, but it is completely feasible. Ideally, it should be enabled with a single click, which would prompt users to enroll their smartphone apps (Google Authenticator, Microsoft Authenticator, Authy or others) on their next successful login. Of course, there may be users without smartphones, so the option to not enroll for MFA could be made available to certain less-privileged AD groups.
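
As an illustration of how little is actually required for the TOTP part, here is a minimal verification sketch (RFC 6238) in Java, using only the standard library. To be clear, this is my sketch of the generic technique, not anything Microsoft ships; the 30-second step, 6 digits and HMAC-SHA1 are the defaults that common authenticator apps expect:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;
import java.time.Instant;

public class Totp {
    // Computes a 6-digit TOTP code for the given shared secret and time step (RFC 6238).
    static String code(byte[] secret, long timeStep) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret, "HmacSHA1"));
        byte[] hash = mac.doFinal(ByteBuffer.allocate(8).putLong(timeStep).array());
        int offset = hash[hash.length - 1] & 0x0F; // dynamic truncation (RFC 4226)
        int binary = ((hash[offset] & 0x7F) << 24) | ((hash[offset + 1] & 0xFF) << 16)
                | ((hash[offset + 2] & 0xFF) << 8) | (hash[offset + 3] & 0xFF);
        return String.format("%06d", binary % 1_000_000);
    }

    // Accepts the previous, current and next 30-second window to tolerate clock drift.
    static boolean verify(byte[] secret, String submittedCode) throws Exception {
        long now = Instant.now().getEpochSecond() / 30;
        for (long step = now - 1; step <= now + 1; step++) {
            if (code(secret, step).equals(submittedCode)) return true;
        }
        return false;
    }
}
```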

By not doing that, Microsoft exposes all on-premises AD deployments to all sorts of the authentication attacks mentioned above. And for me that’s a big sin.

Microsoft would say, of course, that their Azure AD supports many MFA options and is great and modern and secure and everything. And that’s true – if you choose to migrate to Azure and use Office 365, and pay for a subscription instead of just the Windows Server license. It’s no secret that Microsoft’s business model is shifting towards cloud subscription services, and there’s nothing wrong with that. But leaving on-prem users with no good option for proper MFA across services, including email, is irresponsible.


Open APIs – Public Infrastructure in the Digital Age

by Bozhidar Bozhanov
license CC BY

When “public infrastructure” is mentioned, typically people think of roads, bridges, rails, dams, power plants, city lights. These are all enablers, publicly funded/owned/managed (not necessarily all of these), which allow the larger public to do business and to cover basic needs. Public infrastructure is sometimes free, but not always (you pay electricity bills and toll fees; and of course someone will rightly point out that nothing is free, because we pay it through taxes, but that’s not the point).

In the digital age, we can think of some additional examples of “public infrastructure”. The most obvious one, which has a physical aspect, is fiber-optic cables. Sometimes they are publicly owned (especially in rural areas), and their goal is to provide internet access, which itself is an enabler for business and day-to-day household activities. More and more countries, municipalities and even smaller communities invest in owning fiber-optic cables in order to make sure there’s equal access to the internet. But cables are still physical infrastructure.

Something entirely digital that is increasingly turning into public infrastructure is open government APIs. They are not yet fully perceived as public infrastructure, and exist as such only in the heads of a handful of policymakers and IT experts, but in essence they are exactly that – government-owned infrastructure that enables business and other activities.

But let me elaborate. Open APIs let the larger public access and/or modify data that is collected, centralized or monitored by government institutions (central or local). Some examples:

  • Electronic health infrastructure – the Bulgarian government is building a centralized health record, as well as centralized e-prescriptions and e-hospitalization. It is all APIs: private companies develop software for hospitals, general practitioners, pharmacies and labs, and other companies may develop apps for citizens that help them improve their health or match them with nutrition and sport advice. All of that is based on open APIs (following the FHIR standard) and allows for fair competition, while managing access to sensitive data, audit logs and, most importantly, collection in a centralized store (a minimal illustration of such an API call follows after this list).
  • Toll system – we have a centralized road toll system, which offers APIs (unfortunately, via an overly complicated model of intermediaries) which supports multiple resellers to sell toll passes (time-based and distance-based). This allows telecoms (through apps), banks (through e-banking), supermarkets, fleet management companies and others to offer better UI and integrated services.
  • Tax systems – businesses will be happy to report their taxes through their ERP automatically, rather than manually exporting and uploading, or manually filling data in complex forms.
  • E-delivery of documents – Bulgaria has a centralized system for electronic delivery of documents to public institutions. That system has an API, which allows third parties to integrate and send documents as part of more complex services, on behalf of citizens and organizations.
  • Car registration – car registers are centralized, but opening up their APIs would allow car (re)sellers to handle all the paperwork on behalf of their customers, online, with a click of a button in their internal systems. Car-parts sellers could fetch data about registered cars per brand and model in order to make sure there are enough spare parts in stock (based on the typical lifecycle of car parts).
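
To make the open-API idea concrete with the health example above: FHIR defines a standard REST interface, so reading a resource is a plain HTTP call. Here is a hedged sketch in Java – the server URL and resource ID are hypothetical placeholders, and a real deployment would of course require authentication:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FhirReadExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Standard FHIR "read" interaction: GET [base]/[resourceType]/[id]
        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("https://fhir.example.gov/fhir/MedicationRequest/123")) // hypothetical server
                .header("Accept", "application/fhir+json")
                // A real government API would require an OAuth2 bearer token or similar here
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // the FHIR resource, as JSON
    }
}
```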

Core systems and central registers with open APIs are digital public infrastructure that would allow a more seamless, integrated state. There are a lot of details to be taken into account – access management and authentication (who has the right to read or write certain data), fees (if a system is heavily used, the owning institution might charge a fee), change management and upgrades, zero downtime, integrity, format, etc.

But the policy that I have always followed and advocated for is clear – mandatory open APIs for all government systems. Bureaucracy and paperwork may become nearly invisible, hidden behind APIs, if this principle is followed.


On Disinformation and Large Online Platforms

by Bozhidar Bozhanov
license CC BY

This week I was invited to be a panelist, together with other digital ministers, at a side event organized by Ukraine in Davos during the World Economic Forum. The topic was disinformation, and I’d like to share my thoughts on it. The video recording is here; what follows is not a transcript but an expanded version.

Bulgaria is seemingly more susceptible to disinformation, for various reasons. A majority of the population has positive sentiments about Russia, for historical reasons, and disinformation campaigns have been around since before the war as well as after it started. The typical narratives pushed every day are about the bad, decadent West; the Slavic, traditional, conservative Russian government; the evil and aggressive NATO; the great and powerful, but peaceful, Russian army; and so on.

These disinformation campaigns are undermining public discourse and even public policy. COVID vaccination rates in Bulgaria are among the lowest in the world (and the mortality rate is therefore among the highest). Propaganda and conspiracy theories took hold in our society and literally killed our relatives and friends. The war is another example – Bulgaria ranks first in the share of people who think the West (EU/NATO) is at fault for the war in Ukraine.

The Kremlin uses the same propaganda techniques developed during the Cold War, but applied to the free internet, much more efficiently. They use the European value of free speech to undermine those same European values.

Their main channels are social networks, which seem to remain blissfully ignorant of local contexts like the one described above.

What we’ve seen, and what has been leaked and discussed for a long time, is that troll factories amplify anonymous websites. They share content, like content, and make it seem noteworthy to the algorithms.

We know how it works. But governments can’t just block a website because they think it carries false information. A government may easily go beyond good intentions and slide into censorship. In four years I won’t be a minister, and the next government may decide I’m spreading “western propaganda” and block my profiles, my blog, my interviews in the media.

I said all of that in front of the Bulgarian parliament last week. I also said that local measures are insufficient, and risky.

That’s why we have to act smart. We need to strike at the mechanisms for weaponizing social networks – the spreading of disinformation to large portions of the population – not block the information itself. Brute force is dangerous, and it helps the Kremlin with their narrative about the bad, hypocritical West that talks about free speech but has the power to shut you down if a bureaucrat says so.

The solution, in my opinion, is to regulate recommendation engines at the European level – to make these algorithms find and demote networks of trolls (they currently fail at that: Facebook claims it found 3 Russian-linked accounts in January).

How to do it? That’s hard to answer without knowing the data and the details of how the engines currently work. Social networks can try to cluster users by IPs, ASes, VPN exit nodes, content similarity, DNS and WHOIS data for websites, photo databases, etc. They can consult national media registers (where they exist), via APIs, to check whether something is an actual media outlet and not an auto-generated website with pre-written false content (which is what actually happens).

The regulation should make it the focus of social media not to moderate everything, but to stop promoting inauthentic behavior.

Europe and its partners must find a way to regulate algorithms without curbing freedom of expression, and I was in Brussels last week to underline that. We can use the Digital Services Act to do exactly that, and we have to do it wisely.

I’ve been criticized: why am I taking on this task when I could be doing just the cool things like eID, eServices and removing bureaucracy? I’m doing those too, of course, without delay.

But we are here as government officials to tackle the systemic risks. The eID I’ll introduce will do no good if we lose the hearts and minds of people to Kremlin propaganda.


Don’t Reinvent Date Formats

by Bozhidar Bozhanov
license CC BY

Microsoft Exchange has a bug that practically stops email. (The public sector primarily uses Exchange, so many of the institutions I’m responsible for as a minister have their email “stuck”.) The bug is described here and, fortunately, has a solution.

But let me say something simple and obvious: don’t reinvent date formats, please. When in doubt, use ISO 8601 or epoch millis (in UTC), or RFC 2822. Nothing else makes sense.

Certainly, treating an int as a date is an abysmal idea (it doesn’t even save that much in resources). 202201010000 is not a date format worth considering.
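
To make both points concrete, here is a short Java illustration (my example – the Exchange bug itself reportedly involved a yyMMddHHmm-shaped version number overflowing a signed 32-bit int):

```java
import java.time.Instant;

public class DateFormats {
    public static void main(String[] args) {
        // A "date as a number" like 2201010001 (yyMMddHHmm) already exceeds
        // Integer.MAX_VALUE (2147483647), so storing it in a 32-bit int breaks:
        long dateAsNumber = 2201010001L;
        System.out.println(dateAsNumber > Integer.MAX_VALUE); // true

        // The sane options: ISO 8601 or epoch millis, both in UTC.
        Instant now = Instant.now();
        String iso = now.toString();           // e.g. 2022-01-06T10:15:30.123Z (ISO 8601)
        long epochMillis = now.toEpochMilli(); // epoch millis

        // Parsing back is unambiguous:
        System.out.println(Instant.parse(iso));
        System.out.println(Instant.ofEpochMilli(epochMillis));
    }
}
```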

(As a side note, another piece of advice – add automated tests for future timestamps. Sometimes they catch odd behavior.)

I’ll finish with Jon Skeet’s talk on dates, strings and numbers.


I Have Been Appointed As E-Governance Minister of Bulgaria

by Bozhidar Bozhanov
license CC BY

Last week the Bulgarian National Assembly appointed the new government. I am one of the appointed ministers – the minister for electronic governance.

The portfolio includes digitizing registers and processes in all government institutions, reducing bureaucracy, electronic identity, cybersecurity, digital skills and more.

Thanks to all my readers for following this blog throughout the years. While I’m minister, I will be sharing some digital policy details here. That may include some technical articles, but they are unlikely to be developer-oriented.

I hope to make some important changes and put forward key ideas for e-governance and digital policy that can be used as an example outside my country (last time I was involved in public policy, I helped pass an “open source law”).

I’ve written a few articles about IT people looking for challenges – not just technical ones. And I think this is a great challenge, one where I’ll have to put all my knowledge and skills to work for the common good.


Simple Things That Are Actually Hard: User Authentication

by Bozhidar Bozhanov
license CC BY

You build a system. User authentication is the component that is always there, regardless of the system’s functionality. And by now it should be simple to implement – just “drag in” some ready-to-use authentication module, or configure one with some basic options (e.g. Spring Security), and you’re done.

Well, no. It’s the most obvious thing, and yet it’s extremely complicated to get right. It’s not just login form -> check username/password -> set cookie. There are a lot of other things to think about:

  • Cookie security – how to make sure a cookie doesn’t leak and can’t be forged? Should you even have a cookie, or use some stateless approach like JWT? SameSite lax or strict?
  • Bind cookie to IP and logout user if IP changes?
  • Password requirements – minimum length, special characters? UI to help with selecting a password?
  • Storing passwords in the database – bcrypt, scrypt, PBKDF2, SHA with multiple iterations?
  • Allow storing the password in the browser? Generally yes, but some applications deliberately hash it before sending it, so that it can’t be stored automatically
  • Email vs username – do you need a username at all? Should change of email be allowed?
  • Rate-limiting authentication attempts – how many failed logins should lock the account, and for how long? Should admins get notifications, or at least logs, for locked accounts? Is the limit per IP, per account, or a combination of the two?
  • Captcha – do you need a captcha at all, which one, and after how many attempts? Is reCAPTCHA an option?
  • Password reset – a password-reset token table in the database, or expiring links with HMAC (see the sketch after this list)? Rate-limit password resets?
  • SSO – should your service support LDAP/Active Directory authentication (probably yes)? Should it support SAML 2.0 or OpenID Connect, and if so, which ones? Or all of them? Should it ONLY support SSO, rather than internal authentication?
  • 2FA – TOTP or something else? Implement the whole 2FA flow, including enabling/disabling and the use of backup codes; add an option to not ask for 2FA on a particular device for a period of time? Configure a subset of AD/LDAP users to authenticate based on certain group memberships?
  • Force 2FA by admin configuration – implement a time window for activating 2FA after the global option is enabled?
  • Login by link – should the option to send a one-time login link by email be supported?
  • XSS protection – make sure no XSS vulnerabilities exist, especially on the login page (but not only there, as XSS can be used to steal cookies)
  • Dedicated authentication log – keep a history of all logins, with time, IP, user agent
  • Force logout – is the ability to log out a logged-in device needed, and how to implement it? With stateless tokens, for example, it’s not trivial.
  • Keeping a mobile device logged in – what should be stored client-side? (certainly not the password)
  • Working behind a proxy – if the client IP matters (it does), make sure the X-Forwarded-For header is parsed correctly
  • Capture login timezone for user and store it in the session to adjust times in the UI?
  • TLS mutual authentication – if we need to support hardware-token authentication with a private key, we should enable mutual TLS. What should be in the truststore? Does the web server support per-page mutual TLS, or should we use a subdomain? If there’s a load balancer / reverse proxy, does it support it, and how do we forward the certificate details?
  • Require account activation, or let the user log in immediately after registration? Require account approval by back-office staff?
  • Initial password setting for accounts created by admins – generate an initial password and force changing it on first login? Or don’t generate a password and start from a password-reset flow?
  • Login anomalies – how to detect them, and should you inform the user? Should you rely on third-party tools (e.g. a SIEM), or have such functionality built in?
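
To pick just one item from the list above – expiring password-reset links – here is a minimal sketch of the HMAC approach in Java (my illustration of the technique, not a drop-in implementation): the server signs the user ID and an expiry timestamp with a secret key, so no database table is needed and the link expires on its own.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.time.Instant;
import java.util.Base64;

public class ResetLinks {
    private static final Base64.Encoder B64 = Base64.getUrlEncoder().withoutPadding();

    // Token format: userId:expiryEpochSeconds:base64url(HMAC-SHA256(userId:expiry)).
    // Assumes user IDs contain no ':'.
    static String issue(String userId, byte[] key) throws Exception {
        long expiry = Instant.now().plusSeconds(3600).getEpochSecond(); // valid for 1 hour
        String payload = userId + ":" + expiry;
        return payload + ":" + B64.encodeToString(hmac(payload, key));
    }

    static boolean verify(String token, byte[] key) throws Exception {
        String[] parts = token.split(":");
        if (parts.length != 3) return false;
        if (Instant.now().getEpochSecond() > Long.parseLong(parts[1])) return false; // expired
        // Constant-time comparison to avoid timing side channels
        return MessageDigest.isEqual(B64.decode(parts[2]), hmac(parts[0] + ":" + parts[1], key));
    }

    private static byte[] hmac(String payload, byte[] key) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
    }
}
```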

And that’s for the most obvious feature, one that every application has. No wonder it has been implemented incorrectly many, many times. The IT world is complex and nothing is simple. Sending email isn’t simple, authentication isn’t simple, logging isn’t simple. Working with strings and dates isn’t simple; sanitizing input and output isn’t simple.

We have done a poor job of building the frameworks and tools to help us with all those things. We can’t really ignore them – we have to think about them actively and make conscious, informed decisions.


Integrity Guarantees of Blockchains In Case of Single Owner Or Colluding Owners

by Bozhidar Bozhanov
license CC BY

The title may sound like a paper title rather than a blog post – it was originally an idea for one, but I’m unlikely to find the time to write a proper paper, so here it is as a blog post.

Blockchain has been touted as the ultimate integrity guarantee – if you “have blockchain”, nobody can tamper with your data. Of course, reality is more complicated, and even in the most distributed of ledgers there are known attacks. But most organizations experimenting with blockchain rely on a private network, sometimes being the sole owner of the infrastructure and sometimes sharing it with just a few partners.

The point of having the technology in the first place is to guarantee that once collected, data cannot be tampered with. So let’s review how that works in practice.

First, we have to define two terms – “tamper-resistant” (sometimes referred to as tamper-free) and “tamper-evident”. “Tamper-resistant” means nobody can ever tamper with the data, and the state of the data structure is always guaranteed to be without any modifications. “Tamper-evident”, on the other hand, means that a data structure can be validated for integrity violations, and it will be known that there have been modifications (alterations, deletions or back-dating of entries). So with tamper-evident structures you can prove that the data is intact; but if it’s not intact, you can’t know the original state. That is still a very important property, as the ability to prove that data is not tampered with is crucial for compliance and legal purposes.

Blockchain is usually built on top of several main cryptographic primitives: cryptographic hashes, hash chains, Merkle trees, cryptographic timestamps and digital signatures. They all play a role in the integrity guarantees, but the most important ones are the Merkle tree (with all its variations, like the Patricia Merkle tree) and the hash chain. The original Bitcoin paper describes a blockchain as a hash chain built from the roots of Merkle trees (one per block). Some blockchains rely on a single, ever-growing Merkle tree, but let’s not get into particular implementation details.
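
As a quick illustration of those two primitives (my sketch, not any particular blockchain’s exact wire format) – a Merkle root summarizes a block’s entries, and a hash chain links each block to the previous one:

```java
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

public class IntegrityPrimitives {
    static byte[] sha256(byte[]... inputs) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        for (byte[] in : inputs) md.update(in);
        return md.digest();
    }

    // Merkle root: hash pairs of leaves repeatedly until a single hash remains
    // (an odd leaf is paired with itself - one common convention).
    static byte[] merkleRoot(List<byte[]> leafHashes) throws Exception {
        List<byte[]> level = new ArrayList<>(leafHashes);
        while (level.size() > 1) {
            List<byte[]> next = new ArrayList<>();
            for (int i = 0; i < level.size(); i += 2) {
                byte[] left = level.get(i);
                byte[] right = (i + 1 < level.size()) ? level.get(i + 1) : left;
                next.add(sha256(left, right));
            }
            level = next;
        }
        return level.get(0);
    }

    // Hash chain: each block hash commits to the previous block hash and the block's
    // Merkle root, so altering any past entry changes every subsequent block hash.
    static byte[] blockHash(byte[] previousBlockHash, byte[] merkleRoot) throws Exception {
        return sha256(previousBlockHash, merkleRoot);
    }
}
```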

In all cases, blockchains are considered tamper-resistant because they are sufficiently distributed that a large enough number of members have a copy of the data. If some node modifies that data, e.g. 5 blocks in the past, it has to prove to everyone else that this is the correct Merkle root for that block. You would need more than 50% of the network’s capacity to do that (and it’s more complicated than just having it), but it’s still possible. In a way, tamper resistance = tamper evidence + distributed data.

But many of the practical applications of blockchain rely on private networks serving one or several entities. They are often based on proof of authority, which means whoever has access to a given set of private keys controls what the network agrees on. So let’s review the two cases:

  • Multiple owners – in the case of multiple node owners, several of them can collude to rewrite the chain. The collusion can be based on mutual business interest (e.g. in a supply chain, several members may team up against the producer to report distorted data), or on a security compromise (e.g. multiple members hacked by the same group). In that case, the remaining node owners may have a backup of the original data, but finding out whether the others were malicious or the changes were a legitimate part of the business logic would require a complicated investigation.
  • Single owner – a single owner can have a nice Merkle tree or hash chain, but an admin with access to the underlying data store can regenerate the whole chain so it looks legitimate, while in reality it has been tampered with. Splitting access between multiple admins is one approach (or giving them access to separate nodes, none of which holds a majority), but they often drink beer together, and collusion is again possible. More importantly, you can’t prove to a third party that your own employees haven’t colluded under orders from management to cover some tracks and present a better picture to a regulator.

In the case of a single owner, you don’t even have a tamper-evident structure – the chain can be fully rewritten and nobody will be able to tell. In the case of multiple owners, it depends on the implementation. There will be a record of the modification at the non-colluding party, but proving which side “cheated” would be next to impossible. Tamper-evidence is only partially achieved, because you can’t prove whose data was modified and whose wasn’t (you only know that one of the copies has tampered data).

The way to achieve a tamper-evident structure in both scenarios is anchoring. Checkpoints of the data need to be anchored externally, so that there is a clear record of the state of the chain at different points in time. Before blockchain, the recommended approach was to print the checkpoint in a newspaper (e.g. as an ad); because a newspaper has a large enough circulation, nobody can collect all the copies and modify the published checkpoint hash. The published hash would be either the root of the Merkle tree or the latest hash in the hash chain. An ever-growing Merkle tree additionally allows consistency and inclusion proofs to be validated.

With electronic distribution of data, we can use public blockchains to regularly anchor our internal ones, in order to achieve proper tamper-evident data. We at LogSentinel, for example, do exactly that – we allow publishing the latest Merkle root and the latest hash-chain head to Ethereum. Then, even if those with access to the underlying datastore manage to modify and regenerate the entire chain/tree, there will be no match with the publicly advertised values.

How to store data on public blockchains is a separate topic. In the case of Ethereum, you can put an arbitrary payload in a transaction, so you can put the hash in low-value transactions between two of your own addresses (or in self-transactions). You can use smart contracts as well, but that’s not necessary. For Bitcoin, you can use OP_RETURN. Other implementations may have different approaches to storing data within transactions.
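
For Ethereum, here is a hedged sketch of such an anchoring self-transaction using the web3j library – the node URL, private key, gas values and the anchored hash are all placeholders, and this is an outline to adapt rather than production code:

```java
import org.web3j.crypto.Credentials;
import org.web3j.crypto.RawTransaction;
import org.web3j.crypto.TransactionEncoder;
import org.web3j.protocol.Web3j;
import org.web3j.protocol.core.DefaultBlockParameterName;
import org.web3j.protocol.http.HttpService;
import org.web3j.utils.Numeric;

import java.math.BigInteger;

public class EthereumAnchor {
    public static void main(String[] args) throws Exception {
        Web3j web3j = Web3j.build(new HttpService("https://your-ethereum-node:8545")); // placeholder node
        Credentials credentials = Credentials.create("<your-private-key-hex>");        // placeholder key

        String anchoredHash = "0x" + "ab".repeat(32); // placeholder for the latest Merkle root / chain head

        BigInteger nonce = web3j.ethGetTransactionCount(
                        credentials.getAddress(), DefaultBlockParameterName.LATEST)
                .send().getTransactionCount();

        // A zero-value transaction to our own address, with the hash as the data payload
        RawTransaction tx = RawTransaction.createTransaction(
                nonce,
                BigInteger.valueOf(20_000_000_000L), // gas price in wei (placeholder)
                BigInteger.valueOf(100_000),         // gas limit (placeholder)
                credentials.getAddress(),            // "to" = ourselves (a self-transaction)
                BigInteger.ZERO,                     // no ether transferred
                anchoredHash);                       // the anchored value

        byte[] signed = TransactionEncoder.signMessage(tx, credentials);
        String txHash = web3j.ethSendRawTransaction(Numeric.toHexString(signed))
                .send().getTransactionHash();
        System.out.println("Anchored at transaction " + txHash);
    }
}
```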

If we want to achieve tamper-resistance, we just need several copies of the data, all subject to tamper-evidence guarantees – just as in a public network. And what a public network gives us is a layer we can trust to provide the necessary piece for achieving local tamper evidence. Of course, moving down to hardware, it’s easier to have write-only storage (WORM – write once, read many). The problem is that it’s expensive and can’t be reused, so it’s not very applicable to use cases with short-lived data that requires tamper-resistance.

So, in summary: in order to have proper integrity guarantees and the ability to prove that the data in a single-owner or multi-owner private blockchain hasn’t been tampered with, we have to publish the latest hash of whatever structure we are using (chain or tree). If we don’t, we are only complicating our lives by integrating a complex piece of technology without getting the real benefit it can bring – proving the integrity of our data.


Hypotheses About What Happened to Facebook

by Bozhidar Bozhanov
license CC BY

Facebook was down. I’d recommend reading Cloudflare’s summary, and then Facebook’s own account of the incident. But let me expand on that. Facebook published announcements and withdrawals for certain BGP prefixes, which led to its DNS servers being removed from “the map of the internet” – they told everyone “the part of our network where our DNS servers live doesn’t exist”. That was a self-inflicted backbone failure, due to a bug in the auditing tool that checks whether executed commands would do harmful things.

Facebook owns a lot of IPs. According to RIPEstat, they are part of 399 prefixes (147 of them IPv4). The DNS servers are located in two of those 399. Facebook uses a.ns.facebook.com, b.ns.facebook.com, c.ns.facebook.com and d.ns.facebook.com, which get queried whenever someone wants to know the IPs of Facebook-owned domains. These four nameservers are served by the same autonomous system from just two prefixes – 129.134.30.0/23 and 185.89.218.0/23. Of course, “4 nameservers” is a logical construct; there are probably many actual servers behind them (using anycast).

I wrote a simple “script” to fetch all the withdrawals and announcements for all Facebook-owned prefixes (from RIPEstat’s great API). Facebook didn’t remove itself from the map entirely; as Cloudflare points out, only some prefixes were affected. It may be just these two, or a few others as well, but it seems only a handful were affected. If we sort the resulting CSV by withdrawals, we’ll notice that 129.134.30.0/23 and 185.89.218.0/23 are pretty high up (alongside the /24s contained in those /23s). That perfectly matches Facebook’s account that their nameservers automatically withdraw themselves if they fail to connect to other parts of the infrastructure. Everything else may have also been down, but the withdrawal logic is present only in the networks that host nameservers.
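
For reference, here is a minimal sketch of such a query – a reconstruction rather than the original script, against RIPEstat’s public Data API (the endpoint and parameter names should be double-checked against its documentation):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BgpUpdates {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // RIPEstat's "bgp-updates" endpoint returns announcements ("A") and
        // withdrawals ("W") for a prefix within a time window.
        String url = "https://stat.ripe.net/data/bgp-updates/data.json"
                + "?resource=129.134.30.0/23"
                + "&starttime=2021-10-04T15:00:00"
                + "&endtime=2021-10-04T23:00:00";
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        // A real script would parse the JSON, count announcements vs withdrawals
        // per prefix, and write the result to a CSV.
        System.out.println(response.body());
    }
}
```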

So first, let me make three general observations that are not as obvious and as universal as they may sound, but they are worth discussing:

  • Use longer DNS TTLs if possible – if Facebook had a 6-hour TTL on its domains, we might not have noticed that their nameservers were down. This is hard to ask of such a complex service that uses DNS for load balancing and geographical distribution, but it’s worth considering. Then again, if they killed their backbone and their entire infrastructure was down anyway, the DNS TTL would not have solved the issue.
  • We need improved caching logic for DNS. It can’t be just “present or not”; DNS caches could keep a “last known good state” in case of SERVFAIL and fall back to that. All of those DNS resolvers that had to ask the authoritative nameserver “where can I find facebook.com” knew where to find facebook.com just a minute earlier. Then they got a failure and suddenly they are wondering “oh, where could Facebook be?”. It’s not that simple, of course, but such a cache improvement is worth considering. And again, if the entire infrastructure was down, this would not have helped.
  • Have 100% test coverage on critical tools, such as the auditing tool that had the bug. 100% test coverage is rarely achievable in a typical project, but for such critical tools it’s a must.

The main explanation is the accidental outage. This is what Facebook engineers explain in the blogpost and other accounts, and that’s what seems to have happened. However, there are alternative hypotheses floating around, so let me briefly discuss all of the options.

  • Accidental outage due to misconfiguration – a very likely scenario. These things can happen to everyone, and Facebook is known for its “break things” mentality, so it’s not unlikely that they simply didn’t have the right safeguards in place and someone ran a buggy update. The scenarios for why and how that may have happened are many, and we can’t know from the outside (even after Facebook’s brief description). This remains the primary explanation, following my favorite, Hanlon’s razor. A bug in the audit tool is absolutely realistic (btw, I’d love Facebook to publish their internal tools).
  • Cyber attack – it cannot be confirmed by the data we have, but this would be a sophisticated attack that gained access to their BGP administration interface, which I would assume is properly protected. Not impossible, but a 6-hour outage of a social network is not something a sophisticated actor (e.g. a nation state) would invest resources in. We can’t rule it out, as this might be “just a drill” for something bigger to follow. If I were an attacker who wanted to take Facebook down, I’d try to kill their DNS servers, or indeed “de-route” them. If we didn’t know that Facebook lets its DNS servers cut themselves off from the network in case of failures, the fact that so few prefixes were updated might be an indicator of a targeted attack, but this seems less and less likely.
  • Deliberate self-sabotage – 1.5 billion records were claimed to have been leaked the day before, and at the same time a Facebook whistleblower was testifying in the US Congress. Both pieces of news are potentially damaging to Facebook’s reputation and shares. If they wanted to drown that news and the respective share-price plunge in a technical story that few people understand but everyone talks about (and then have their share price rebound, because technical issues happen to everyone), then that’s the way to do it – just as a malicious actor would, but without all the hassle of gaining access from outside: de-route the prefixes for the DNS servers and you have a “perfect” outage. These coincidences have led people to assume such a plot, but given the observed outage and Facebook’s explanation of why the DNS prefixes were automatically withdrawn, this sounds unlikely.

Distinguishing between the three options is actually hard. You can mask a deliberate outage as an accident, and a malicious actor can make an attack look like deliberate self-sabotage. That’s why there are speculations. To me, however, based on all the data we have from RIPEstat and the various accounts by Cloudflare, Facebook and other experts, it seems that a chain of mistakes (operational and possibly design ones) led to this.


Digital Transformation and Technological Utopianism

by Bozhidar Bozhanov
license CC BY

Today I read a very interesting article about the prominence of Bulgarian hackers (in the black-hat sense) and virus authors in the 90s, linking it to the focus on technical education in the 80s, led by the Bulgarian Communist Party in an effort to revive communism through technology.

Near the end of the article I was pleasantly surprised to read my name, as a political candidate who advocates for e-government and digital transformation of the public sector. The article then ended with something that I’m in deep disagreement with, but that has merit and is worth discussing (and you can replace “Bulgaria” with probably any country there):

Of course, the belief that all the problems of a corrupt Bulgaria can be solved through the perfect tools is not that different to the Bulgarian Communist Party’s old dream that central planning through electronic brains would create communism. In both cases, the state is to be stripped back to a minimum

My first reaction was to deny ever claiming that the state should be stripped back to a minimum, because it will not be (at the risk of enraging my libertarian readers), or to argue that I’ve never claimed there are “perfect tools” that can solve all problems, nor that digital transformation is the only way to solve them. But what I’ve said or written has little to do with the overall perception of techno-utopianism that IT-people-turned-policy-makers usually struggle with.

So I decided to clearly state what e-government and digital transformation of the public sector is about.

First, it’s just catching up to the efficiency of the private sector. Sadly, there’s nothing visionary about wanting to digitize paper processes and provide services online. It’s something that’s been around for two decades in the private sector and the public sector just has to catch up, relying on all the expertise accumulated in those decades. Nothing grandiose or mind-boggling, just not being horribly inefficient.

When the world grows more complex, legislation and regulation grow more complex, and the government gets more and more functions and more and more details to care about. There are more topics to have a policy about (and many on which to take an informed decision NOT to have a policy). All of that, today, can’t rely on pen and paper and a few proverbial smart, well-intentioned people. The government needs technology to catch up and do its job. It has had the luxury of not having competition, and therefore it lagged behind. When there are no market forces to drive the digital transformation, what’s left is technocratic politicians. This efficiency has nothing to do with ideology, left or right. You can have “small government” and still have it inefficient and incapable of making sense of the world.

Second, technology is an enabler. Yes, it can help solve the problems of corruption, nepotism and lack of accountability. But as a tool, not as the solution itself. Take open data, for example (something I worked on five years ago, when Bulgaria jumped to the top of the EU open data index). Just having the data out there is an important effort, but by itself it doesn’t solve any problem. You need journalists, NGOs, citizens and a general understanding in society of what transparency means. The same goes for accountability – it’s one thing to have every document digitized, every piece of data published and every government official’s action leaving an audit trail; it’s a completely different story to have society act on those things – to have institutions investigate and public pressure turn that into political accountability.

Technology is also a threat – and that goes beyond the typical cybersecurity concerns. It poses the risk of dangerous institutions becoming too efficient, of excessive government surveillance, of entrenched interests carving their way into digital systems to perpetuate a corrupt agenda. I’m by no means ignoring those risks – they are already real. The Nazis, for example, were extremely efficient at finding the Jewish population in the Netherlands because the Dutch were very good at citizen registration. This doesn’t mean you shouldn’t have an efficient citizen registration system. It means that such a system is not good or bad per se.

And that gets us to the question of technological utopianism, of which I’m sometimes accused (though not directly in the quoted article). When you are an IT person, you have a technical hammer and everything may look like a binary nail. That’s why it’s very important to have a glimpse of the humanities side as well. Technology alone will not solve anything. My blockchain skepticism is a hint in that direction – many blockchain enthusiasts claim that blockchain will solve problems in many areas of life. It won’t. At least not just through clever cryptography and consensus algorithms. I once even wrote a sci-fi story about exactly the aforementioned communist dream of a centralized computer brain that solves all social issues while people are left to do what they want – and argued that no matter how perfect it is, it won’t work in a non-utopian human world. In other words, I’m rather critical of techno-utopianism as well.

The communist party, according to the author, saw technology as a tool by which the communist government would achieve its ideological goal.

My idea is quite different. First, technology is necessary for the public sector to catch up; second, I see technology as an enabler. What for – accountability or surveillance, fighting corruption or entrenching it even further – is what we as individuals, as a society, and (in my case) as politicians have to formulate and advocate for. We have to embed our values, after democratic debate, into the digital tools (e.g. by making them privacy-preserving). But if we want good governance and good policy-making in the 21st century, we need digital tools – fully understanding their pitfalls and without putting them on a pedestal.
