> Modern clang and gcc won't compile the LLVM used back then (C++ has changed too much)
Is this due to changing default values for the standard used, and would it be "fixed" by adding "-std=xxx" to the CXXFLAGS?
I've successfully built ~2011-era LLVM with no issues with the compiler itself (after that option change) using gcc last year - there were a couple of bugs in the llvm code, though, that I had to work around (mainly relying on transitive includes from the standard library, or incorrect LLVM code that is detected by the newer compilers)
One of the big pain points I have with c++ is the dogmatic support of "old" code, I'd argue to the current version's detriment. But because of that I've never had an issue with code version backwards compatibility.
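For illustration, the kind of invocation meant here - the exact -std value is a guess, and whether the old LLVM configure picks CXXFLAGS up from the environment this way is an assumption:

  export CXXFLAGS="-std=gnu++98 -fpermissive"   # pin the old language standard, relax newer checks
  ./configure --enable-optimized
  make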
LegionMammal978 12 hours ago [-]
Even -fpermissive is no longer sufficient for some of the things that appear in the old LLVM codebase. It's mostly related to syntax issues that older compilers accepted even though the standard never permitted them.
o11c 12 hours ago [-]
Well, one thing I've noticed about LLVM is that it blatantly and intentionally relies on UB. The particular example I encountered probably isn't what causes the version breakage, but it's certainly a bad indicator.
That said, failures in building old software are very often due to one of:
* transitive headers (as you mentioned)
* typedef changes (`siginfo_t` vs `struct siginfo` comes to mind)
* macros with bad names (I was involved in the zlib `ON` drama)
* changes in library arrangement (the ncurses/tinfo split comes to mind, libcurl3/4 conditional ABI change, abuse of `dlopen`)
Most of these are one-line fixes if you're willing to patch the old code, which significantly increases the range of versions supported and thus reduces the number of artifacts you need to build for bootstrapping all the way to a modern version.
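A sketch of what such a one-line fix typically looks like for the transitive-header case (the file name here is hypothetical, not from any particular project):

  # add back the include the file used to get for free via libstdc++ internals
  sed -i '1i #include <cstddef>' src/some_header.h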
ummonk 8 hours ago [-]
Rather ironic it relies on UB given the extent to which Clang + LLVM insists on interpreting UB in the most creative way possible to optimize code…
viraptor 3 hours ago [-]
> zlib `ON` drama
Could you link to something about it? It's the first time I hear about it.
LegionMammal978 12 hours ago [-]
I've done this project myself, based on Ubuntu 20.04 and a whole lot of patchsets [0]. I got up to the 2014-01-20 snapshot before running into weird LLVM stack issues that I couldn't figure out how to resolve. One big annoyance is that the snapshot file refers to some commit hashes that do not appear to point to any surviving public repo, so it takes a fair bit of effort to reconstruct what those missing commits must have contained.
[0] https://github.com/LegionMammal978/rust-from-ocaml
> the snapshot file refers to some commit hashes that do not appear to point to any surviving public repo
That sounds a bit worrying from a "reflections on trusting trust" perspective. Who's to say that those non-public commits didn't introduce a compiler backdoor? But I guess the more likely explanation is that somebody did some last-minute hotfixes that were later reworked before inclusion in the permanent record.
jasonthorsness 13 hours ago [-]
The difficulty in reproducing builds and steps even from a time as recent as 2011 is somewhat disturbing; will technology stabilize or is this going to get even worse? At what point do we end up with something in-use that we can’t make anymore?
jcranmer 12 hours ago [-]
I'd imagine that it's going to end up both getting somewhat better and somewhat worse.
2011 is around the time that programmers started taking undefined behavior seriously as an actual bug in their code and not in the compiler, especially as we started to see the birth of tools to better diagnose undefined-behavior issues that compilers didn't (yet) take advantage of. There's also a set of major, language-breaking changes to the C and C++ standards that took effect around that time (e.g., C99 introduced inline with different semantics from gcc's extension, which broke a lot of software until gcc finally switched the default from C89 to C11 around 2014). And newer language versions tend to make obsolete the hacky workarounds that end up being more brittle because they're taking advantage of unintentional complexity (e.g., constexpr-if removes the need for a decent chunk of template metaprogramming that relied on SFINAE, a concept which is difficult to explain even to knowledgeable C++ programmers). So in general, newer code is likely to be substantially more compatible with future compilers and future language changes.
But on the other hand, we've also seen a greater trend towards libraries with less-well-defined and less stable APIs, which means future software is probably going to have a rougher time getting all the libraries to play nice with each other if you're trying to work with old versions. Even worse, modern software tends to be a lot more aggressive about dropping compatibility with obsolete systems. For example, accessing the modern web with decade-old software (as mentioned in the blog post) is going to be incredibly difficult.
lmm 9 hours ago [-]
The telephone network was famously thought to be impossible to bootstrap even 50 years ago. We won't ever be able to "black start" our computers unless someone cares enough to put money and effort into it. (Also all technological civilisation is somewhat self-dependent e.g. do you think it would be possible to make microprocessors without running computers?). Possibly reproducible build efforts and things like Guix will make it happen.
endgame 4 hours ago [-]
Last time I tried to build guix without substituters, I got hash mismatches in several downloaded files and openssl-1.1.1l failed to build because the certificates in its test suite have all expired. Bootstrapping is really hard, really valuable, and (it turns out) really unstable.
bee_rider 13 hours ago [-]
I think we must have some software in use for which the compiler or the source code just isn’t around anymore. It probably isn’t a massive problem. There’s just a slow trickle of tech we can’t economically reproduce, but we replace it with better stuff. Or, if it was really crucial, it would become worth paying for, right?
Complete speculation: They might not have had it in the first place or might not have had legal license to modify it themselves. The About Box shown in the article implies Microsoft just licensed MathType from Design Sciences, Inc. DSI got acquired by WIRIS just a few months before that in 2017 which may also have had something to do with it: https://en.wikipedia.org/wiki/MathType
skissane 12 hours ago [-]
I think with advances in AI-assisted decompilation, we may soon end up in a situation where, given a binary, you can produce realistic-looking source (sane variable and function names, even comments) which compiles to the same binary, even though it's non-identical to the original source code.
bee_rider 11 hours ago [-]
Could be, although I don’t think that’ll give them any more HDL to train on (unless they also get access to a whole lot of high end microscopes!)
Is there, or could there be, a simple implementation of a compiler for the latest full Rust language (in C, Python, Scheme/Racket, or anything except Rust) that is greatly simplified because, although it accepts the latest full Rust language as input, it assumes the input is correct?
Could this simple non-checking Rust implementation transliterate the real Rust compiler's code into unchecked C that is good enough for that minimal-steps, sustainable bootstrapping?
This simple non-checking compiler only has to be able to compile one program, and only under controlled conditions, possibly only on hardware with a ton of memory.
No it can't. Not for RISC-V/musl, so I'm sure that must be true for other platforms too.
JoshTriplett 9 hours ago [-]
Once you've compiled it for one platform, you've re-bootstrapped it, at which point you can use the real compiler to cross-compile for another platform.
yjftsjthsd-h 10 hours ago [-]
So.... It can, just not for a particular target platform? Or am I missing your point?
neilv 11 hours ago [-]
`mrustc` might be exactly what I wanted. Thank you.
colonial 6 hours ago [-]
To some extent, sure - but Rust leans heavily on static analysis even for "simple" code. Something as fundamental as File::open is still generic over "types that can be coerced into a &Path" - which is obviously useful, but it probably means you would need to implement a lot of the type system (+ stubbed out borrow/reference semantics?) just to get rustc's parser bootstrapped.
This is actually tenable for C, though - so maybe you could cook up some sort of C -> C++ -> LLVM -> rustc bootstrap.
charcircuit 11 hours ago [-]
Rust can selfbootstrap by compiling the rust code for the compiler.
gregorvand 8 hours ago [-]
Why do I have to use a VPN and pick a US server to access this article?
ycombinatrix 3 hours ago [-]
What do you see otherwise?
gregorvand 2 hours ago [-]
A 403 forbidden message
fcoury 14 hours ago [-]
Not sure why, but I am getting 403 Forbidden, so if you are getting the same here's an archive.is link https://archive.is/UH5fg
CaptainFever 6 hours ago [-]
Same. Usually when this happens I just don't visit the website; there's better things to do than fighting a website's anti-bot (I'm a sentient bot). The Internet is huge and full of alternatives.
In case others can't access the archive link:
Elsewhere I've been asked about the task of replaying the bootstrap process for rust. I figured it would be fairly straightforward, if slow. But as we got into it, there were just enough tricky / non-obvious bits in the process that it's worth making some notes here for posterity.
context
Rust started its life as a compiler written in ocaml, called rustboot. This compiler did not use LLVM, it just emitted 32-bit i386 machine code in 3 object file formats (Linux ELF, macOS Mach-O, and Windows PE).
We then wrote a second compiler in Rust called rustc that did use LLVM as its backend (and which, yes, is the genesis of today's rustc) and ran rustboot on rustc to produce a so-called "stage0 rustc". Then stage0 rustc was fed the sources of rustc again, producing a stage1 rustc. Successfully executing this stage0 -> stage1 step (rather than just crashing mid-compilation) is what we're going to call "bootstrapping". There's also a third step: running stage1 rustc on rustc's sources again to get a stage2 rustc and checking that it is bit-identical to the stage1 rustc. Successfully doing that we're going to call "fixpoint".
Shortly after we reached the fixpoint we discarded rustboot. We stored stage1 rustc binaries as snapshots on a shared download server and all subsequent rust builds were based on downloading and running that. Any time there was an incompatible language change made, we'd add support and re-snapshot the resulting stage1, gradually growing a long list of snapshots marking the progress of rust over time.
time travel and bit rot
Each snapshot can typically only compile rust code in the rust repository written between its birth and the next snapshot. This makes replaying the entire history awkward. We're not going to do that here. This post is just about replaying the initial bootstrap and fixpoint, which happened back in April 2011, 14 years ago.
Unfortunately all the tools involved -- from the host OS and system libraries involved to compilers and compiler-components -- were and are moving targets. Everything bitrots. Some examples discovered along the way:
Modern clang and gcc won't compile the LLVM used back then (C++ has changed too much)
Modern gcc won't even compile the gcc used back then (apparently C as well!)
Modern ocaml won't compile rustboot (ditto)
14-year-old git won't even connect to modern github (ssh and ssl have changed too much)
debian
We're in a certain amount of luck though:
Debian has maintained both EOL'ed docker images and still-functioning fetchable package archives at the same URLs as 14 years ago. So we can time-travel using that. A VM image would also do, and if you have old install media you could presumably build one up again if you are patient.
It is easier to use i386 since that's all rustboot emitted. There's some indication in the Makefile of support for multilib-based builds from x86-64 (I honestly don't remember if my desktop was 64 bit at the time) but 32bit is much more straightforward.
So: docker pull --platform linux/386 debian/eol:squeeze gets you an environment that works.
You'll need to install rust's prerequisites also: g++, make, ocaml, ocaml-native-compilers, python.
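Put together, the setup might look roughly like this; the container name is made up, and depending on the image you may need apt options such as -o Acquire::Check-Valid-Until=false for the long-expired squeeze release files:

  docker pull --platform linux/386 debian/eol:squeeze
  docker run --platform linux/386 -it --name rust-bootstrap debian/eol:squeeze bash
  # then, inside the container:
  apt-get update
  apt-get install -y g++ make ocaml ocaml-native-compilers python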
rust
The next problem is figuring out the code to build. Not totally trivial but not too hard. The best resource for tracking this period of time in rust's history is actually the rust-dev mailing list archive. There's a copy online at mail-archive.com (and Brian keeps a public backup of the mbox file in case that goes away). Here's the announcement that we hit a fixpoint in April 2011. You kinda have to just know that's what to look for. So that's the rust commit to use: 6daf440037cb10baab332fde2b471712a3a42c76. This commit still exists in the rust-lang/rust repo, no problem getting it (besides having to copy it into the container since the container can't contact github, haha).
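One way to handle the "container can't contact github" problem is to clone with a modern git on the host and copy the checkout in; the container name and /src path are placeholders, not from the original build:

  git clone https://github.com/rust-lang/rust.git
  git -C rust checkout 6daf440037cb10baab332fde2b471712a3a42c76
  docker cp rust rust-bootstrap:/src/rust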
LLVM
Unfortunately we only started pinning LLVM to specific versions, using submodules, after bootstrap, closer to the initial "0.1 release". So we have to guess at the LLVM version to use. To add some difficulty: LLVM at the time was developed on subversion, and we were developing rust against a fork of a git mirror of their SVN. Fishing around in that repo at least finds a version that builds -- 45e1a53efd40a594fa8bb59aee75bb0984770d29, which is "the commit that exposed LLVMAddEarlyCSEPass", a symbol used in the rustc LLVM interface. I bootstrapped with that (brson/llvm) commit but subversion also numbers all commits, and they were preserved in the conversion to the modern LLVM repo, so you can see the same svn id 129087 as e4e4e3758097d7967fa6edf4ff878ba430f84f6e over in the official LLVM git repo, in case brson/llvm goes away in the future.
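Same trick for LLVM; brson/llvm is the fork named above, and the /src/llvm path is again just a placeholder:

  git clone https://github.com/brson/llvm.git
  git -C llvm checkout 45e1a53efd40a594fa8bb59aee75bb0984770d29
  docker cp llvm rust-bootstrap:/src/llvm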
Configuring LLVM for this build is also a little bit subtle. The best bet is to actually read the rust 0.1 configure script -- when it was managing the LLVM build itself -- and work out what it would have done. But I have done that and can now save you the effort: ./configure --enable-targets=x86 --build=i686-unknown-linux-gnu --host=i686-unknown-linux-gnu --target=i686-unknown-linux-gnu --disable-docs --disable-jit --enable-bindings=none --disable-threads --disable-pthreads --enable-optimized
So: configure and build that, stick the resulting bin dir in your path, and configure and make rust, and you're good to go!
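A condensed sketch of that sequence, run inside the squeeze container; the /src/* paths and the LLVM output bin directory are assumptions (old LLVM trees put binaries under Release/bin or Release+Asserts/bin depending on configuration):

  cd /src/llvm
  ./configure --enable-targets=x86 --build=i686-unknown-linux-gnu \
      --host=i686-unknown-linux-gnu --target=i686-unknown-linux-gnu \
      --disable-docs --disable-jit --enable-bindings=none \
      --disable-threads --disable-pthreads --enable-optimized
  make
  export PATH="/src/llvm/Release/bin:$PATH"   # or Release+Asserts/bin, depending on the tree
  cd /src/rust
  ./configure && make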
root@65b73ba6edcc:/src/rust# sha1sum stage*/rustc
  639f3ab8351d839ede644b090dae90ec2245dfff  stage0/rustc
  81e8f14fcf155e1946f4b7bf88cefc20dba32bb9  stage1/rustc
  81e8f14fcf155e1946f4b7bf88cefc20dba32bb9  stage2/rustc
observations
On my machine I get: 1m50s to build stage0, 3m40s to build stage1, 2m2s to build stage2. Also stage0/rustc is a 4.4mb binary whereas stage1/rustc and stage2/rustc are (identical) 13mb binaries.
While this is somewhat congruent with my recollections -- rustboot produced code faster, but its code ran slower -- the effect size is actually much less than I remember. I'd convinced myself retroactively that rustboot produced abysmally worse code than rustc-with-LLVM. But out of the gate, LLVM only boosted performance by 2x (at a cost of 3x the code size)! Of course I also have a faster machine now. At the time bootstrap cycles took about a half hour each (according to this: 15 minutes for the 2nd stage).
Of course you can still see this as a condemnation of the entire "super slow dynamic polymorphism" model of rust-at-the-time, either way. It may seem funny that this version of rustc bootstraps faster than today's rustc, but this "can barely bootstrap" version was a mere 25kloc. Today's rustc is 600kloc. It's really comparing apples to oranges.
superkuh 14 hours ago [-]
You're not the only one getting blocked. I emailed Dreamwidth about this in the past, and they say it's something their upstream network host does and that they couldn't fix it even if their site's users wanted it fixed. They're a somewhat limited and broken host, partially repackaging some other company's services.
>Dreamwidth Studios Support: I'm sorry about the frustrations you're having. The "semi-randomly selected to solve a CAPTCHA" interstitial with a visual CAPTCHA is coming from our hosting provider, not from us: ... and we don't have any control over whether or not someone from a particular network is shown a CAPTCHA or not because we aren't the ones who control the restriction.
This also applies to the 403's.
neilv 12 hours ago [-]
This needs a catchy name, but I don't have a good one. CloudFlaritis? CloudFlareup? (CloudFlareDown?)
Regardless of whether Cloudflare is the particular infra company, the company who uses them responds to blocked people: "We don't know why some users can't access our Web site, and we don't even know the percentage of users who get blocked, but we're just cargo-culting our jobs here, so sux2bu."
The outsourced infra company's response is: "We're running a business here, and our current solution works well enough for that purpose, so sux2bu."
o11c 11 hours ago [-]
Hmm, "cloudfail" is already in use, and "cloudfuckyou" while descriptive is profane enough that it will cause unnecessary friction with certain people, and "clownflare" is too vague/silly (and is less applicable to other service providers).
So I propose "cloudfart" - just rude enough it can't be casually dismissed, but still tolerable in polite company. "I can't access your website (through the cloudfart |, it's just cloudfarting at me)."
Other names (not all applicable for this exact use): cloudfable, cloudunfair, cloudfalse, cloudfarce, cloudfault, cloudfear, cloudfeeble, cloudfeudalism, cloudflake, cloudfluke, cloudfreeze, cloudfuneral.
neilv 11 hours ago [-]
Would be nice if the name punished a perpetrator's brand.
Not just sound like we're taking in stride an unavoidable fact of nature.
Want people to stop saying "CloudFlareup" (like a social disease)? Stop causing it.
tmtvl 7 hours ago [-]
I'd say Clownflare, but that sits too close to Clown Care, who do really great work.
gregorvand 6 hours ago [-]
It works just using a VPN and picking a US server. The internet is becoming one giant reverse firewall.
15155 12 hours ago [-]
I imagine you just need to update CA certs and the known_hosts file to get GitHub communication working again.
oasisbob 12 hours ago [-]
A few more hurdles might involve expectations of SHA-1 cert signing, and TLS1.0 deprecation
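For what it's worth, the TLS side of this is easy to see from a modern machine (whether your openssl build still accepts the -tls1 flag varies):

  # modern servers refuse the TLS 1.0 handshake a 14-year-old client would offer
  openssl s_client -connect github.com:443 -tls1 </dev/null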
eptcyka 12 hours ago [-]
Can’t say I’m a fan of Nix evangelists pointing their finger at any problem and yelling about how it would be solved better by using Nix, but in this case, one could pin a nixpkgs version and all the sources for llvm, gcc and ocaml, and thus have a reproducible bootstrap. Ultimately, it wouldn't do anything different to what was done manually here, but pinning commits would save the next bootstrapper this archaeological burden.
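A minimal sketch of the pinning idea: enter a shell whose toolchain comes from one fixed nixpkgs revision (the revision is a placeholder, and package attribute names drift across nixpkgs versions):

  nix-shell -p gcc ocaml gnumake \
      -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz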
chubot 12 hours ago [-]
Does re-bootstrapping Rust like this actually work? How much work is it?
LegionMammal978 10 hours ago [-]
Lots of work, you need hundreds of steps across the snapshots, and patches for each one to get them to work. (E.g., the makefile had hardcoded -Werror for ages.) Not to mention that if you want to make it portable, you must always start with the i686 version and cross-compile from there. (Preferably leaving x86 as late as possible: the old LLVM versions are full of architecture-specific quirks.)
neilv 12 hours ago [-]
> Debian has maintained both EOL'ed docker images and still-functioning fetchable package archives at the same URLs as 14 years ago.
Debian FTW.
https://snapshot.debian.org/