In the articles and talks from that time, people often take the perspective of what the whole society (with its organizations) wants from the "automatic computers" and from programming as a profession. Compare also something like Grace Hopper's 1982 talk on YouTube. Now I think it's mostly the perspective of companies, teams, the industry. Did this shift happen in the 1990s? I'm guessing here.
I guess there is still something left from that era of the concept of a programming language as a tool for top-down shaping and guiding the thinking of its users. Pascal is the classic example, and Golang tries to be like that. I get how annoying it can be. I don't know how JS/TypeScript constructs evolve, but I suspect it's more Fortran-style committee planning than trying to "enlighten" people into doing the "right" things. Happy to be corrected on this.
Maybe the hardest point to interpret in hindsight is that in the sixties programming was an overpaid profession, that hardware costs would keep dropping, and that software costs could not stay the same (you cannot expect society to accept this, and therefore we must learn to program an order of magnitude more effectively). Yeah, in some sense, what does paying for software even mean anymore?
But interestingly, the situation now is kind of similar to the very old days: a bunch of mainframe ("cloud") owners paying programmers to program and manage their machines. And maybe the effectiveness really has gone up dramatically: there's relatively little software running compared to the crazy volume of physical machines, even though the programmers working at that scale are still paid a lot. It's not like you get a team of 10 guys programming each individual server.
> Secondly, we have got machines equipped with multi-level stores, presenting us problems of management strategy that, in spite of the extensive literature on the subject, still remain rather elusive.
NUMA has only gotten more complicated over time, and the range of latency differences is more extreme than ever: L1 at nanosecond latency on one end, cold tape that can take a whole day to load on the other. Which kind of memory and compute to use in a heterogeneous (CPU/GPU) system can also be difficult to figure out. Multicore is likely the most devastating dragon to arrive since this article was written.
Premature optimization might be evil, but it's the only way to efficiently align the software with the memory architecture. E.g., in a Unity application, rewriting from game objects to ECS is basically like starting over.
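To make the ECS point concrete, here's a minimal C sketch of the layout change behind that kind of rewrite (type and field names are illustrative, not Unity's API or any real ECS framework): the data flips from array-of-structs to struct-of-arrays, so every loop that touches it changes shape too, which is why it feels like starting over.

```c
/* Illustrative sketch only: "game object" style vs. ECS-style data layout. */
#include <stddef.h>

/* Array-of-structs: hot and cold fields interleaved per object. */
struct GameObject {
    float x, y, z;       /* hot: touched every frame */
    char  name[64];      /* cold: rarely touched     */
    int   material_id;   /* cold                     */
};

void move_objects(struct GameObject *objs, size_t n, float dx) {
    for (size_t i = 0; i < n; i++)
        objs[i].x += dx; /* objects sit ~80 bytes apart, so each 64-byte
                            cache line carries mostly cold data */
}

/* Struct-of-arrays: each system streams only the fields it needs. */
struct Positions {
    float *x, *y, *z;    /* parallel arrays, one entry per entity */
    size_t count;
};

void move_entities(struct Positions *p, float dx) {
    for (size_t i = 0; i < p->count; i++)
        p->x[i] += dx;   /* contiguous 4-byte elements: cache lines and
                            prefetching are fully used */
}
```

The code is trivial either way; the cost of the rewrite is that every access pattern in the program is coupled to which of the two layouts you picked.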
If you could only focus on one aspect, I would keep the average temperature of L1 in mind constantly. If you can keep it semi-warm, nothing else really matters. There are very few problems that a modern CPU can't chew through ~instantly assuming the working set is in L1 and there is no contention with other threads.
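A rough illustration of that working-set point, with the caveat that the ~32 KiB L1 size and the timing remarks in the comments are assumptions about a typical desktop CPU, not measurements: the loop below does the same number of element touches twice, once over a buffer that fits in L1 and once over one that spills far past it.

```c
/* Sketch: same total work, different working-set sizes.
   Build with optimizations, e.g. cc -O2 l1_sketch.c */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static uint64_t churn(const uint32_t *buf, size_t n, size_t iters) {
    uint64_t sum = 0;
    for (size_t it = 0; it < iters; it++)
        for (size_t i = 0; i < n; i++)
            sum += buf[i];   /* repeated passes stay hot if n*4 bytes fit in L1 */
    return sum;
}

int main(void) {
    const size_t small = 4 * 1024;          /* 16 KiB working set: lives in L1 (assumed ~32 KiB) */
    const size_t big   = 16 * 1024 * 1024;  /* 64 MiB working set: spills to DRAM                */
    const size_t total = (size_t)1 << 28;   /* same number of element touches in both passes     */

    uint32_t *a = malloc(big * sizeof *a);
    if (!a) return 1;
    for (size_t i = 0; i < big; i++) a[i] = (uint32_t)i;

    for (int pass = 0; pass < 2; pass++) {
        size_t n = pass ? big : small;
        clock_t t0 = clock();
        uint64_t s = churn(a, n, total / n);
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
        printf("working set %zu elements: %.2f s (sum=%llu)\n",
               n, secs, (unsigned long long)s);
    }
    free(a);
    return 0;
}
```

Sequential access with hardware prefetching narrows the gap compared to pointer chasing, but the cache-resident pass still wins; the difference gets far uglier once the access pattern is random or contended.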
This is the same thinking that drives some of us to use SQLite over hosted SQL providers. Thinking not just about the information, but about the latency domain the information lives in, is what can unlock those bananas 1000x+ speedups.
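As a sketch of what the latency domain buys you (plain C against SQLite's public C API; the schema and the round-trip numbers in the comments are made up for illustration): every query below is an in-process library call, where a hosted database would pay a network round trip per query.

```c
/* Sketch: point queries against in-process SQLite.
   Build with: cc sqlite_sketch.c -lsqlite3 */
#include <sqlite3.h>
#include <stdio.h>

int main(void) {
    sqlite3 *db;
    sqlite3_stmt *stmt;
    if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;

    sqlite3_exec(db, "CREATE TABLE t(id INTEGER PRIMARY KEY, v INTEGER);", 0, 0, 0);
    sqlite3_exec(db, "INSERT INTO t(v) VALUES (1),(2),(3);", 0, 0, 0);

    /* Point queries in a loop: against a hosted database each iteration pays a
       network round trip (commonly ~0.5-1 ms); here it's a call into a library
       living in the same address space (microseconds). */
    sqlite3_prepare_v2(db, "SELECT v FROM t WHERE id = ?;", -1, &stmt, 0);
    long sum = 0;
    for (int i = 1; i <= 3; i++) {
        sqlite3_bind_int(stmt, 1, i);
        if (sqlite3_step(stmt) == SQLITE_ROW)
            sum += sqlite3_column_int(stmt, 0);
        sqlite3_reset(stmt);
    }
    sqlite3_finalize(stmt);
    sqlite3_close(db);

    printf("sum = %ld\n", sum);
    return 0;
}
```

The design consequence is that the classic N+1 query pattern mostly stops hurting: a pile of tiny in-process queries can cost on the order of a single round trip to a database on the other side of a network hop.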
mitch_said 3 hours ago
For the "I haven't read it before and I ain't reading all that" crowd, I made a top-down, Q&A-based mind map summary of Dijkstra's argument:
> The sooner we can forget that FORTRAN has ever existed, the better, for as a vehicle of thought it is no longer adequate: it wastes our brainpower, is too risky and therefore too expensive to use.
Apparently the ISO/IEC 1539-1:2023 [1] committee didn't get the memo.
Modern Fortran is quite neat, and much better than having to deal with Python plus rewriting the hot code in C or C++.
[1] https://www.iso.org/standard/82170.html
enord 4 hours ago
It’s a real shame Dijkstra rubbed so many people the wrong way.
Maybe his incisive polemic, which I greatly enjoy, was all but pandering to a certain elitist sensibility in the end.
To make manageable programs, you have to trade off execution speed both on the CPU and in the organization. His rather mathematized prescriptions imply we should hire quarrelsome academics such as him to reduce performance and slow down product development (initially…), all in the interest of his stratified sensibilities of elegance and simplicity.
Sucks to be right when that’s the truth.