I’m working through the Vulkan tutorial and came across GLFW_TRUE and GLFW_FALSE. I presume there’s a good reason, but looking at the docs, it’s just defining 1 and 0, so I’m sorta at a loss as to why some libraries do this (especially in C++?).
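For reference, the definitions in the header really are that thin; roughly this (paraphrased from glfw3.h, not copied verbatim):

```c
/* Roughly what GLFW's glfw3.h defines (paraphrased) */
#define GLFW_TRUE   1   /* documented, memorably, as "One." */
#define GLFW_FALSE  0   /* documented as "Zero." */
```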
Tangentially related is having things like VkResult, which is an enum full of integer codes.
Wouldn’t it be easier to replace these variables with raw int codes or in the case of GLFW just 1 and 0?
Coming mostly from C, and having my caps lock bound to escape for vim, the amount of all caps variables is arduous for my admittedly short fingers.
Anyway, hopefully one of you knows why libraries do this. Thanks!
I work with young people starting out in IT, so I’m used to getting screenshots, and I’m so used to screenshots made with a phone instead of just capturing the screen that I’ve stopped complaining… But come on! At least evaluate the result of the first picture and maybe take another if it’s illegible.
I love the description as well. “One.” “Zero.”
I found the comments/answers about backwards compatibility, undefined Booleans, and negative true interesting and plausible.
What I first thought of was that TRUE and FALSE can be redefined, so it serves as insurance that consistent values are being used within the library, no matter what other libs and callers do with their typing and definitions.
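A rough sketch of that idea, with a made-up MYLIB_ prefix standing in for GLFW_:

```c
/* Imagine some legacy platform header that redefines the plain names: */
#define TRUE  (-1)
#define FALSE 0

/* The library's own prefixed constants are untouched by that, so every
   comparison inside and against the library stays consistent: */
#define MYLIB_TRUE   1
#define MYLIB_FALSE  0
```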
My brain is so used to seeing political content that I read “why do liberals define their own true and false” and was already like “what kind of shit take am I going to have fun reading today”
I can give you a shit take if you want one, but I don’t have shiitake mushrooms.
It’s for the extra helpful documentation. You see, in this fantastic example, after the author set GLFW_TRUE to 1, he explained the deep and profound meaning of the value. This exemplifies that the number 1 can also be written as a word: “One”! Some people might be able to figure this out, but the author clearly went above and beyond to make the code accessible to the open source community, encouraging contributions from anyone who’s considering improving the code. Furthermore, this follows the long held tradition of man pages - explaining the nuance of the code, in preparation for telling others to RTFM when they arrogantly ask a question.
It’s because Booleans are sometimes flipped in display-server technology from the 1980s, particularly anything with X11 lineage, and C didn’t have Boolean values back then. More generally, sometimes it’s useful to have truth encoded as low or 0, as in common Forths or many lower-level electrical-engineering protocols. The practice died off as popular languages started to have native Boolean values; today, about three quarters of new developers learn Python or ECMAScript as their first language, and FFI bindings are designed to paper over such low-level details. You’ll also sometimes see newer C/C++ libraries depending on newer standards which add native Booleans.
As a fellow vim user with small hands, here are some tricks. The verb `gU` will uppercase letters but not underscores or hyphens, so sentences like `gUiw` can be used to uppercase an entire constant. The immediate action `~`, which switches case, can be turned into a verb with `:set tildeop`, after which it can be used in a similar way to `gU`. If constants are all namespaced with a prefix followed by something unique like an underscore, then the prefix can be left out of new sections of code and added back in with a macro or a `:%s` replacement.

In Visual Basic, “true” would be represented as -1 when converted to an int because it’s all 1s in two’s complement.
My boss insisted, before I arrived at the company, that everything in the database be coded so that 1 = Yes and 2 = No, because that’s the way he likes to think of it. It causes us daily pain.
why not just take it a step further and make true = “Yes” and false = “No”
It would probably carry less risk, but in terms of bytes used this would be even worse. And we have other problems there that I’d tell you about but it would make me too sad.
Microsoft SQL Server has a bit type and you always use 0 and 1 and cast/convert them. No native bool type. It’s a hassle.
Well that would be ok, because any standard tool for interfacing with the database would transparently treat bit in the DB as bool in the code. I think many DBs call it a bit rather than a bool.
that assumes you don’t write any SQL
If that is something your boss is managing, get the fuck out of there.
Does your boss frequently browse the database table records outside the API?
Oh you have no idea. There is no teaching this guy.
Something like
if (stupid_bool & 0x01)
should work for those.

I imagine this would still lead to a never-ending stream of subtle logic errors.
```python
from bossland import billysbool, billysand
from geography import latlong

def send_missile_alert(missiles_incoming: billysbool, is_drill: billysbool, target: latlong) -> billysbool:
    if billysand(missiles_incoming, not is_drill):
        for phone in phone.get_all_residents():
            phone.send_alert("Missiles are inbound to your location")
```
Can you spot the bug?
The conventional ‘not’ would not behave differently for the two non-zero values. Insidious.
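A quick C illustration of the same trap, assuming the 1 = yes / 2 = no convention from above:

```c
#include <stdio.h>

/* Both encodings are non-zero, so plain logical negation treats them
   identically: the "no" value is just as truthy as the "yes" value. */
int main(void) {
    int yes = 1, no = 2;
    printf("%d %d\n", !yes, !no);   /* prints "0 0" */
    return 0;
}
```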
Correct! I made a number of other mistakes (edited away now due to shame), but that’s the one I made on purpose.
Yeah of course we convert, but it effectively means you need this little custom conversion layer between every application and its database. It’s a pain.
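For what it’s worth, the shim ends up looking something like this (names hypothetical, and assuming the stored values really are 1 and 2):

```c
#include <stdbool.h>

/* Hypothetical conversion layer between the 1 = yes / 2 = no column encoding
   and a normal bool; anything else (like the NULL/0 "don't know" case) still
   needs its own policy. */
static bool yesno_to_bool(int db_value) {
    return db_value == 1;
}

static int bool_to_yesno(bool value) {
    return value ? 1 : 2;
}
```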
Now I have heard everything. What is zero? Missing value?
Zero is something you always have to watch out for and handle, because he likes to use NULL for “don’t know”. I should really have deleted the database while it was still young, before they had backups.
Some languages define True as -1, which is NOT False…
which is NOT False…
You really didn’t need this; I would have just assumed that you were speaking the truth.
CONST False = 0, True = NOT False
NOT as in the bitwise operator. What’s NOT of 0 in a 32-bit space? 0xFFFFFFFF, which is -1, which is ≠ 1.
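A quick sanity check of that arithmetic in C, assuming a typical 32-bit two’s-complement int:

```c
#include <stdio.h>

/* Bitwise NOT of 0 sets every bit; read back as a signed value,
   the all-ones pattern is -1, not 1. */
int main(void) {
    int x = ~0;
    printf("%d 0x%08X\n", x, (unsigned int)x);   /* prints "-1 0xFFFFFFFF" */
    return 0;
}
```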
Different languages, and even different programmers, might interpret the concept and definition of True and False differently, so to avoid any ambiguity and uncertainty, defining your own critical constants in your own library helps make sure your code is robust.
So… all that is NOT False either, I presume?
Probably readability. Correct typing maybe too. Also better error checking.
I’m not sure I understand readability? I guess it disambiguates numeric variables if you used 1 and 0. But with true and false available, that would seemingly do the same thing. You still have to know what the arguments you’re passing are for regardless.
A function call of “MyFunction(parameter: GLFW_TRUE)” is more readable than “MyFunction(parameter: 1)”. Not by much, mind you, but if given the choice between these two, one is clearly better. It requires no assumptions about what the reader may or may not already know about the system. It communicates intent without any ambiguity.
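As a concrete sketch (GLFW_RESIZABLE is just picked as a familiar example hint), both of these calls do the same thing, but only one of them reads cleanly:

```c
#include <GLFW/glfw3.h>

/* Two equivalent calls; the named constant states the intent at the call site. */
void configure_window_hints(void) {
    glfwWindowHint(GLFW_RESIZABLE, GLFW_FALSE);   /* obviously "not resizable" */
    glfwWindowHint(GLFW_RESIZABLE, 0);            /* same effect, but 0 of what? */
}
```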
Does C have a logical type these days? Never used to.
I guess from reading, not until C99 (see other comment); they just used integers in place of Booleans, in which case your readability statement makes more sense given the historical context.
stdbool.h’s true and false are macros that expand to integers 1 and 0
C23 adds a proper bool type
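A minimal example showing both worlds:

```c
/* C99..C17: <stdbool.h> provides bool/true/false as macros
   (bool expands to _Bool, true to 1, false to 0).
   C23: bool, true and false are keywords, so the include becomes optional. */
#include <stdbool.h>
#include <stdio.h>

int main(void) {
    bool ready = true;
    printf("%d\n", ready);   /* prints 1 */
    return 0;
}
```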
Only 50 years after its creation.
For what it is worth. I learned C in 1990. Switched largely to Python in 1998.
This is often done for backward compatibility: stdbool.h, which provides true and false, wasn’t standard before C99, and even though that’s more than 25 years ago now, a lot of old habits die hard.
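A sketch of the fallback pattern you’d see in headers that had to build on pre-C99 compilers (the MYLIB_ names are made up):

```c
/* Pre-C99-friendly pattern: define a boolean type and constants instead of
   relying on <stdbool.h>, which may not exist on older compilers. */
#ifndef MYLIB_BOOL_DEFINED
#define MYLIB_BOOL_DEFINED
typedef int mylib_bool;      /* plain int stands in for a boolean type */
#define MYLIB_TRUE  1
#define MYLIB_FALSE 0
#endif
```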
Yeah in the late 90’s I was coding in C++ and I’m pretty sure I had to define true and false manually.
I seem to recall using the true and false literals in C++ in the late ’90s… looks like they were in the C++98 standard, but it’s not clear which pre-standard compilers might have supported them.
Also, plenty of embedded systems don’t use the C standard library.
Ahh this makes some sense