Is this going to be re-posted every month?
Anyway, I’ve since learned that the proposal was not part of a damage-control campaign, but rather a single person’s attempt at proposing a real solution, if only a theoretical one. He misguidedly thought there was actual interest in real solutions. There wasn’t, and there isn’t.
The empire is continuing with its strategy of scamming people into believing that it will produce, at some unspecified point, complete magical-mushroom guidelines and real, specified, implemented profiles. The proposal is destined to become perma-vaporware. The dreamy guidelines are going to be perma-WIP, the magical profiles are going to be perma-vapordocs (as in, they will never actually exist, not even in theoretical form), and the bureaucracy’s checks will continue to be cashed.
So not only was there no concrete strike back, it wasn’t even the empire that did it.
The inherent problem with this kind of solution is that if you don’t break backwards compatibility, you don’t get rid of all the insecure code.
And if you do break backwards compatibility, there’s not much reason to stick to C++ rather than going for Rust with its established ecosystem…
Given how long C++ has been a dominant language, and how widely it’s used, I don’t think anyone can reasonably expect to get rid of all the unsafe code, regardless of approach. There is simply a lot of it.
However, changing the proposition from “get good at Rust and rewrite these projects from scratch” to “adopt some incremental changes using the existing tooling and skills you already have” would lower the barrier to entry considerably. I think this more practical approach would be likely to reach far more projects.
There have been plenty of interop options between C++ and just about anything for decades. If languages like D, which made it piss easy, didn’t change people’s minds, nothing will. Ditching C++ is the only way forward.
wake me up when Rust fixes its susceptibility to supply-chain attacks (a solid stdlib and rejecting external crates, including transitive deps)
I’ve done a bit of C++ coding in my time. The feature list of the language is so long at this point that it is pretty much impossible for anyone new to learn C++ and grok the design decisions anymore. I don’t know whether it’s a good thing to keep adding and extending, or whether C++ should sail into the sunset like Fortran and others before it.
Fortran is still a good language for some purposes I think.
And I feel the same way: C++ tries to solve the problem of having too many features by adding more features.
Don’t get me wrong. There is still a time and a place for Fortran. And this will also likely always be the case for C++. But I’m not sure it is entirely wise to choose it if you’re creating a new project anymore.
I’m barely competent at programming. What is the use case for Fortran, besides maintaining ancient code?
A lot of computationally heavy tasks in science were done in Fortran at least ten years ago (and I think they still are). I was told that’s mainly because Fortran has a good number of libraries for exactly that, and it was widely taught in academia, so it’s common ground between the older and newer generations.
I think it may be gradually superseded by Python, but I don’t know if it is.
The big downside is that, for backwards compatibility, the default must still be unsafe code. Ideally this could be toggled with a compiler flag, rather than having to wrap most code in “safe” blocks (like Rust, but backwards).
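To sketch what I mean (the safe qualifier is from the proposal; the flag and the unsafe function marker are purely hypothetical):

```cpp
// Under the proposal as written: unsafe is the default, and every
// checked function has to opt in individually.
void checked() safe;   // compile-time safety checks apply inside
void legacy();         // ordinary C++, no checks

// What a compiler flag could give us instead (hypothetical, e.g.
// something like -fsafe-by-default): everything is checked unless
// it explicitly opts out, so only the escape hatches need marking.
//
//   void checked();         // checked by default
//   unsafe void legacy();   // hypothetical opt-out annotation
```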
One potential upside that people don’t seem to be discussing is that the safe subset could also be the place to finally start cutting down the bloat of C++. We could encourage most developers to write exclusively in the safe subset, and aim to make that the “much smaller and cleaner language” trying to get out of C++.
I’m a bit skeptical that a borrow checker in C++ can be as powerful as Rust’s, since C++ doesn’t have lifetime annotations. Without lifetime annotations you have to do whole-program analysis to get equivalent checks, which isn’t even possible if you’re e.g. loading dynamic libraries, and is prohibitively slow otherwise. Without that, you can only really do local analysis, which is good, of course, but not that powerful.
Lifetime annotations in the type system are the right call, since they allow library authors to impose ownership-related invariants on their consumers. I doubt C++ will add them to its type system, though.
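To make the local-versus-whole-program point concrete, in plain current C++ (no proposal syntax needed):

```cpp
#include <string>

// Locally detectable: the dangling reference is visible from this
// function body alone, and most compilers already warn about it.
const std::string& local_dangle() {
    std::string s = "temp";
    return s;  // reference to a local that is about to be destroyed
}

// Not locally detectable: nothing in the signature says whether the
// result borrows from `a` or from `b`, so a checker would have to
// analyze every call site together with this body. Rust-style
// lifetime annotations put that contract into the signature, which
// is what lets each function be checked in isolation.
const std::string& pick(const std::string& a, const std::string& b,
                        bool first) {
    return first ? a : b;
}

int main() {
    const std::string& r = pick(std::string("x"), std::string("y"), true);
    // Both temporaries are destroyed by this point, so `r` dangles,
    // and pick()'s signature gave the compiler no way to see it coming.
    return static_cast<int>(r.size());  // use after destruction: UB
}
```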
Read the proposal: lifetime annotations, a Rust-style standard library (incl. basic types like Vec, Arc, …), first-class tuples, pattern matching, destructive moves, unsafe. It’s all in there.
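For flavor, the proposal’s headline example looks roughly like this (reproduced from memory, so treat the details as approximate):

```cpp
#feature on safety
#include <std2.h>

int main() safe {
  std2::vector<int> vec { 11, 15, 20 };

  for(int x : vec) {
    // Rejected at compile time: pushing into vec while iterating
    // would invalidate the iterator the ranged-for is holding.
    if(x % 2)
      mut vec.push_back(x);
  }
}
```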
The proposal is really to bolt Rust onto the side of C++, with all the compatibility problems that necessarily brings.
Google started work on Carbon due to the difficulty of getting the C++ standards committee to accept any real, fundamental changes to the language. If Google, a grandmaster at manipulating standards committees, couldn’t get something passed, I don’t foresee this proposal getting anywhere.
The C++ standards committee don’t see memory safety or UB as a problem. If they did, they wouldn’t keep introducing new footguns, e.g. forgetting return_void() in a coroutine. They still think everyone should just learn the entire C++ spec and not make mistakes.
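For anyone who hasn’t been bitten by this one: if the promise type has no return_void() and control flows off the end of the coroutine body, the behavior is undefined, and the code compiles without complaint:

```cpp
#include <coroutine>

struct Task {
    struct promise_type {
        Task get_return_object() { return {}; }
        std::suspend_never initial_suspend() { return {}; }
        std::suspend_never final_suspend() noexcept { return {}; }
        void unhandled_exception() {}
        // Forgot: void return_void() {}
    };
};

Task oops() {
    co_await std::suspend_never{};
    // Control flows off the end here. Because promise_type has no
    // return_void(), this is undefined behavior per
    // [dcl.fct.def.coroutine], yet it compiles cleanly.
}

int main() { oops(); }
```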
On one hand, I’m pleased that C++ is answering the call for what I’ll call “safety as default”, since, as The Register and everyone else have pointed out, if safety constructs are “bolted on” like an afterthought, then of course they’re not going to see very high adoption. Contrast this with Rust and its “unsafe” keyword, which marks all the places where the minimum safety of the language might not hold.
On the other hand, while this Safe C++ proposal adopts a similar notion of an “unsafe” context, it also adds a “safe” keyword, to specify that a function will conform to compile-time safety checks. But as the proposal readily admits:
Rust’s functions are safe by default. C++’s are unsafe by default.
While the proposal will surely continue to evolve before being implemented, I foresee a situation similar to that of const in C, where code that lacked const-correctness from the start struggles to work with newer code and libraries. In this case, it would be the “unsafe” keyword that proliferates everywhere, just to call older, unsafe code from newer, safe callers.
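Concretely, I’d expect a lot of shims like this (the safe/unsafe syntax is borrowed from the proposal; the names are made up):

```cpp
#include <cstddef>
#include <string>

// Existing, unchecked C++ that nobody is rewriting any time soon:
int legacy_parse(const char* buf, std::size_t len);

// A new, checked caller under the proposal (sketch):
int modern(const std::string& input) safe {
    // Every call into the old world needs an escape hatch, much like
    // const-correctness retrofits ended up sprinkling const_cast around:
    unsafe {
        return legacy_parse(input.data(), input.size());
    }
}
```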
Rust has the advantage that there isn’t much (if any) legacy Rust to keep up, which means the volume of unsafe code in Rust programs is minimal, making them safer overall today. But Safe C++ code will coexist with a lot of unsafe legacy C++, and that reduces the overall safety benefit for programs, at least for the time being.
Even as this proposal progresses, the question of whether to start rewriting some code anew in Rust remains relevant. But this is still exciting as a new option to raise the bar in memory safety in C++.
Null safety is orders of magnitude simpler than memory safety. Kotlin is null-safe by default. Java infamously is not. Anyone who has worked on a mixed-language Kotlin project can tell you how quickly null safety becomes a pain once the guarantees break down - and that’s in a language where these issues are flagged instantly and you can “fix” the problem in a couple of characters! Mixed memory-safe/unsafe codebases would be a nightmare in comparison.
Also, C++'s ecosystem consists of deeply entrenched libraries with ancient codebases. Safe C++ might be useful in a decade or two if library maintainers could be pushed to make the switch (good luck with that, if it’s half as much of a paradigm shift as Rust), but by then there will probably be multiple competing language features that claim to solve the same problem. It’s the C++ Way™.