typedef struct { bool a: 1; bool b: 1; bool c: 1; bool d: 1; bool e: 1; bool f: 1; bool g: 1; bool h: 1; } __attribute__((__packed__)) not_if_you_have_enough_booleans_t;
Or just
std::bitset<8>
for C++. Bit fields are neat though; they can store weird stuff like a 3-bit integer packed next to booleans.
That's only for C++. As far as I can tell, that struct is valid C.
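For example, something like this (just a sketch; the field names are made up, and bit-field layout/packing is implementation-defined):

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint8_t mode  : 3;  /* a 3-bit integer, values 0-7 */
    bool    ready : 1;
    bool    error : 1;
    bool    dirty : 1;
} __attribute__((__packed__)) status_t;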
You beat me to it!
This was gonna be my response to OP so I’ll offer an alternative approach instead:
typedef enum flags_e : unsigned char {
    F_1 = (1 << 0),
    F_2 = (1 << 1),
    F_3 = (1 << 2),
    F_4 = (1 << 3),
    F_5 = (1 << 4),
    F_6 = (1 << 5),
    F_7 = (1 << 6),
    F_8 = (1 << 7),
} Flags;

int main(void) {
    Flags f = F_1 | F_3 | F_5;
    if (f & F_1 && f & F_3) {
        // do F_1 and F_3 stuff
    }
}
Why not
if (f & (F_1 | F_3)) {
? I use this all the time in embedded code. Edit: never mind; you're checking for both flags. I'd probably use
(f & (F_1 | F_3)) == (F_1 | F_3)
but that’s not much different than what you wrote.
I set all 8 bits to 1 because I want it to be really true.
01111111 = true
11111111 = negative true = false
00001111 = maybe
10101010 = I don’t know
100001111 = maybe not
0011 1111 = could you repeat the question
00000001 00000000 00001111 10101010
Is this quantum computing? 😜
Schrödingers Boolean
What if it’s an unsigned boolean?
Cthulhu shows up.
Common misconception… Unsigned booleans (ubool) are always 16-bits.
Super true.
Could also store our bools as floats.
00111111100000000000000000000000
is true and 10111111100000000000000000000000
is negative true. Has the fun twist that true & false is true and true | false is false.
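If anyone wants to check the twist at home, a small sketch (assumes 32-bit IEEE 754 floats):

#include <inttypes.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    float t = 1.0f, nt = -1.0f;   /* "true" and "negative true" (a.k.a. false) */
    uint32_t a, b;
    memcpy(&a, &t, sizeof a);     /* 0x3F800000 */
    memcpy(&b, &nt, sizeof b);    /* 0xBF800000 */
    printf("true & false = %08" PRIX32 "\n", a & b);  /* 3F800000, i.e. 1.0f -> "true"   */
    printf("true | false = %08" PRIX32 "\n", a | b);  /* BF800000, i.e. -1.0f -> "false" */
}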
negative true = negative non-zero = non-zero = true.
Why do alternative facts always gotta show up uninvited to the party? 🥳
So all this time true was actually false and false was actually true ?
Depends on if you are on a big endian or little endian architecture.
Come on man, I’m not gonna talk about my endian publicly
TIL, 255 is the new 1.
Aka -1 >> 1 : TRUE
But only if you really mean it. If not, it’s a syntax error and the compiler will know.
I was programming in assembly for ARM (some Cortex chip) and, I kid you not, the C program we were integrating with required 255; with just 1 it read it as false.
You jest, but on some older computers, all ones was the official truth value. Other values may also have been true in certain contexts, but that was the guaranteed one.
Depending on the language
And compiler. And hardware architecture. And optimization flags.
As usual, it’s some developer that knows little enough to think the walls they see around enclose the entire world.
Fucking lol at the downvoters haha that second sentence must have rubbed them the wrong way for being too accurate.
deleted by creator
I don’t think so. Apart from dynamically typed languages which need to store the type with the value, it’s always 1 byte, and that doesn’t depend on architecture (excluding ancient or exotic architectures) or optimisation flags.
Which language/architecture/flags would not store a bool in 1 byte?
Things that store it as word size for alignment purposes (most common, afaik), things that pack multiple bools into one byte (normally only for things like bool sequences/structs), etc.
things that store it as word size for alignment purposes
Nope. bools only need to be naturally aligned, so 1 byte.
If you do
struct SomeBools { bool a; bool b; bool c; bool d; };
it's 4 bytes.
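You can check it yourself (sketch; the exact sizes are implementation-defined, but this is what mainstream compilers typically give):

#include <stdbool.h>
#include <stdio.h>

struct SomeBools  { bool a; bool b; bool c; bool d; };
struct PackedBits { bool a : 1; bool b : 1; bool c : 1; bool d : 1; };

int main(void) {
    printf("%zu\n", sizeof(struct SomeBools));   /* typically 4 */
    printf("%zu\n", sizeof(struct PackedBits));  /* typically 1 */
}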
Sure, but if you have a single bool in a stack frame it's probably going to be more than a byte. On the heap, definitely more than a byte.
Apart from dynamically typed languages which need to store the type with the value
You know that depending on what your code does, the same C that people are talking upthread doesn’t even need to allocate memory to store a variable, right?
How does that work?
I think he’s talking about if a variable only exists in registers. In which case it is the size of a register. But that’s true of everything that gets put in registers. You wouldn’t say
uint16_t
is word-sized because at some point it gets put into a word-sized register. That’s dumb.
Wait until you hear about alignment
The alignment of the language and the alignment of the coder must be similar on at least one metric, or the coder suffers a penalty to develop for each degree of difference from the language's alignment. This penalty stacks for each phase of the project.
So, let’s say that the developer is a lawful good Rust
zealotPaladin, but she’s developing in Python, a language she’s moderately familiar with. Since Python is neutral/good, she suffers a -1 penalty for the first phase, -2 for the second, -3 for the third, etc. This is because Rust (the Paladin’s native language) is lawful, and Python is neutral (one degree of difference from lawful), so she operates at a slight disadvantage. However, they are both “good”, so there’s no further penalty.The same penalty would occur if using C, which is lawful neutral - but the axis of order and chaos matches, and there is one degree of difference on the axis of good and evil.
However, if that same developer were to code in Javascript (chaotic neutral), it would be at a -3 (-6, -9…) disadvantage, due to 2 and 1 degree of difference in alignment, respectively.
Malbolge (chaotic evil), however, would be a -4 (-8, -12) plus an inherent -2 for poor toolchain availability.
…hope this helps. have fun out there!
string boolEnable = "True";
Violence
Maybe json is named after Jason Voorhees
That looks like directly taken from someone’s code exposed on TheDailyWTF.
Back in the day when it mattered, we did it like
#define BV00 (1 << 0)
#define BV01 (1 << 1)
#define BV02 (1 << 2)
#define BV03 (1 << 3)
...etc

#define IS_SET(flag, bit)     ((flag) & (bit))
#define SET_BIT(var, bit)     ((var) |= (bit))
#define REMOVE_BIT(var, bit)  ((var) &= ~(bit))
#define TOGGLE_BIT(var, bit)  ((var) ^= (bit))

....then...

#define MY_FIRST_BOOLEAN BV00

SET_BIT(myFlags, MY_FIRST_BOOLEAN)
With embedded stuff it's still done like that. And if you go from the Arduino functions to writing the registers directly, it's a hell of a lot faster.
Okay. Gen z programmer here. Can you explain this black magic? I see it all the time in kernel code but I have no idea what it means.
It’s called bitshifting and is used to select which bits you want to modify so you can toggle them individually.
1 << 0 is the flag for the first bit
1 << 1 for the second
1 << 2 for the third, and so on. I think that's correct. It's been years since I've used this technique tbh 😅
The code is a set of preprocessor macros to stuff loads of booleans into one int (or similar), in this case named ‘myFlags’. The preprocessor is a simple (some argue too simple) step at the start of compilation that modifies the source code on its way to the real compiler by substituting #defines, prepending #include’d files, etc.
If myFlags is equal to, e.g., 67, that's 01000011, meaning that BV00, BV01, and BV06 are all TRUE and the others are FALSE.
The first part is just for convenience and readability. BV00 represents the 0th bit, BV01 is the first etc. (1 << 3) means 00000001, bit shifted left three times so it becomes 00001000 (aka 8).
The middle chunk defines macros to make bit operations more human-readable.
SET_BIT(myFlags, MY_FIRST_BOOLEAN)
gets turned into
((myFlags) |= ((1 << 0)))
, which could be simplified as
myFlags = myFlags | 00000001
. (Ignore the flood of parentheses; they're there for safety due to the loaded-shotgun nature of the preprocessor.)
Which part?
Edit - oops, responded to wrong comment…
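A minimal, runnable sketch of those macros in action, for anyone who wants to poke at it:

#include <stdio.h>

#define BV00 (1 << 0)
#define BV02 (1 << 2)
#define IS_SET(flag, bit)     ((flag) & (bit))
#define SET_BIT(var, bit)     ((var) |= (bit))
#define REMOVE_BIT(var, bit)  ((var) &= ~(bit))

#define MY_FIRST_BOOLEAN BV00

int main(void) {
    unsigned char myFlags = 0;
    SET_BIT(myFlags, MY_FIRST_BOOLEAN);   /* myFlags is now 00000001 */
    SET_BIT(myFlags, BV02);               /* myFlags is now 00000101 */
    REMOVE_BIT(myFlags, BV02);            /* back to 00000001 */
    printf("%d\n", IS_SET(myFlags, MY_FIRST_BOOLEAN) ? 1 : 0);  /* prints 1 */
}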
In the industrial automation world and most of the IT industry, data is aligned to the nearest word. Depending on architecture, that’s usually either 16, 32, or 64 bits. And that’s the space a single Boolean takes.
That’s why I primarily use booleans in return parameters, beyond that I’ll try to use bitfields. My game engine’s tilemap format uses a 32 bit struct, with 16 bit selecting the tile, 12 bit selecting the palette, and 4 bit used for various bitflags (horizontal and vertical mirroring, X-Y axis invert, and priority bit).
Bit fields are a necessity in low level networking too.
They’re incredibly useful, I wish more people made use of them.
I remember I interned at a startup programming microcontrollers once and created a few bitfields to deal with something. Then the lead engineer went ahead and changed them to masked ints. Because. The most aggravating thing is that an int size isn’t consistent across platforms, so if they were ever to change platforms to a different word length, they’d be fucked as their code was full of platform specific shenanigans like that.
/rant
Good rant.
I always use stdint.h so that my types are compatible across any MCU. And it makes the data type easily known instead of guessing an int size.
Yeah. I once had to do stuff to code that had bit-fields like that and after a while, realised (by means of StackOverflow) that that part is UB and I had to go with bitwise operations instead.
Undefined Behavior…?
Ok, I recalled wrong, it was unspecified
Or you could just use Rust
Then you need to ask yourself: performance or memory efficiency? Is it worth the extra cycles and instructions to put 8 bools in one byte and AND a bitmask against it to pull out the relevant one?
Sounds like a compiler problem to me. :p
A lot of times using less memory is actually better for performance because the main bottleneck is memory bandwidth or latency.
Yep, and ANDing with a bitmask is incredibly fast to process, so it's not a big issue for performance.
It’s not just less memory though - it might also introduce spurious data dependencies, e.g. to store a bit you now need to also read the old value of the byte that it’s in.
Could definitely be worse for latency in particular cases, but if we imagine a write-heavy workload it still might win. Writing a byte/word basically has to do the same thing: a read-modify-write of the cache line; it just doesn't confuse the dependency tracking quite as much. So rather than stalling on a read, I think that would end up stalling on store buffers. Writing to bits usually means less memory, and thus less memory to read in that read-modify-write part, so it might still be faster.
It might also introduce spurious data dependencies
Those need to be in the smallest cache or a register anyway. If they are in registers, a modern instruction-reordering CPU will deal with that fine.
to store a bit you now need to also read the old value of the byte that it’s in.
Many architectures read the cache line on write-miss.
The only cases I can see where byte-sized bools seem better are either using so few that they all fit in one cache line anyway (in which case performance will be great either way), or repeatedly accessing a bitvector from multiple threads, in which case you should make sure that's actually what you want to be doing.
And you may ask yourself: where is my beautiful house? Where is my beautiful wife?
- Soon to be
- That’s me
Talking Heads - Once in a Lifetime
Letting the days go by, let the water hold me down
Wait till you find out about alignment and padding
Tell me the truth, i can handle it
boolean bloat
I first thought you wrote boolean float, not sure if that’s even worse.
boolean root beer float
deleted by creator
This reminds me that I actually once made a class to store bools packed in a uint8 array to save bytes.
Had forgotten that. I think I have to update the list of the top 10 dumbest things I ever did.
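For the record, the idea itself isn't that dumb; the whole thing is only a few lines (C sketch of the same trick, helper names made up):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Bit i lives in byte i/8, at position i%8. */
static inline void set_bool(uint8_t *bits, size_t i, bool v) {
    if (v) bits[i / 8] |=  (uint8_t)(1u << (i % 8));
    else   bits[i / 8] &= (uint8_t)~(1u << (i % 8));
}

static inline bool get_bool(const uint8_t *bits, size_t i) {
    return (bits[i / 8] >> (i % 8)) & 1u;
}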
Wait till you hear about every ASCII letter...
what about them?
ASCII was originally a 7-bit standard. If you type in ASCII on an 8-bit system, the leading bit of every byte is always 0. (Edited to specify context)
At least ASCII is forward compatible with UTF-8
Is ascii base-7 fandom’s strongest argument…
ASCII needs seven bits but is almost always encoded as bytes, so every ASCII letter has a throwaway bit.
Let’s store the boolean there then!!
That boolean can indicate if it’s a fancy character, that way all ASCII characters are themselves but if the boolean is set it’s something else. We could take the other symbol from a page of codes to fit the users language.
Or we could let true mean that the character is larger, allowing us to transform all of Unicode into a format consisting of 8-bit parts.
Some old software does use 8-bit ASCII for special/locale-specific characters. Also, there's this Unicode hack where the high bit is used to determine whether the byte is part of a multi-byte sequence.
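That hack is just the high bit: clear for plain 7-bit ASCII, set for every byte of a multi-byte UTF-8 sequence. A quick sketch:

#include <stdbool.h>
#include <stdint.h>

/* High bit clear: a plain 7-bit ASCII byte. */
static bool is_ascii_byte(uint8_t b) { return (b & 0x80) == 0; }

/* 10xxxxxx: a continuation byte inside a multi-byte UTF-8 sequence. */
static bool is_utf8_continuation(uint8_t b) { return (b & 0xC0) == 0x80; }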
The 8-bit Intel 8051 family provides a dedicated bit-addressable memory space (addresses 20h-2Fh in internal RAM), giving 128 directly addressable bits. Used them for years. I’d imagine many microcontrollers have bit-width variables.
bit myFlag = 0;
Or even return from a function:
bit isValidInput(unsigned char input) {
    // Returns true (1) if input is valid, false (0) otherwise
    return (input >= '0' && input <= '9');
}
Nothing like that in ARM. Even microcontrollers have enough RAM that nobody cares, I guess.
ARM has bit-banding specifically for this. I think it’s limited to M-profile CPUs (e.g. v7-M) but I’ve definitely used this before. It basically creates a 4-byte virtual address for every bit in a region. So the CPU itself can’t “address” a bit but it can access an address backed by only 1 bit of SRAM or registers (this is also useful to atomically access certain bits in registers without needing to use SW atomics).
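Roughly, the bit-band alias address works out like this (sketch for the Cortex-M SRAM bit-band region; double-check the constants against your part's reference manual):

#include <stdint.h>

/* SRAM 0x20000000-0x200FFFFF is aliased at 0x22000000: each bit gets its own
   32-bit word in the alias region, so a normal word write touches just that bit. */
#define BITBAND_SRAM(addr, bit) \
    (*(volatile uint32_t *)(0x22000000u + \
        (((uint32_t)(addr) - 0x20000000u) * 32u) + ((uint32_t)(bit) * 4u)))

/* Hypothetical usage: BITBAND_SRAM(&some_flags, 3) = 1;  sets bit 3 of some_flags */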
Tell this to the LPC1114 I'm working with. Did you ever run a multilingual GUI from 2 kB of RAM on a 256x32 pixel display?
I did a multilingual display with an 8031 in 1995 on a 2x16 text LCD. I had 128 bytes of RAM and an EPROM. Did English, Spanish and German.
You kids have it so easy nowadays. 🤣
Last counting was 114 languages on the LPC1114. And yes, with normal LCDs I’ve done similar things on an 8051 before.
We could go the other way as well: TI’s C2000 microcontroller architecture has no way to access a single byte, let alone a bit. A Boolean is stored in 16-bits on that one.
And, you can have pointers to bits!
It’s far more often stored in a word, so 32-64 bytes, depending on the target architecture. At least in most languages.
No it isn’t. All statically typed languages I know of use a byte. Which languages store it in an entire 32 bits? That would be unnecessarily wasteful.
It’s not wasteful, it’s faster. You can’t read one byte, you can only read one word. Every decent compiler will turn booleans into words.
You can’t read one byte
lol what. You can absolutely read one byte: https://godbolt.org/z/TeTch8Yhd
On ARM it’s
ldrb
(load register byte), and on RISC-V it’slb
(load byte).Every decent compiler will turn booleans into words.
No compiler I know of does this. I think you might be getting confused because they’re loaded into registers which are machine-word sized. But in memory a
bool
is always one byte.Sorry, but you’re very confused here.
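Roughly what's behind that godbolt link (sketch):

#include <stdbool.h>

/* Compiles to a single-byte load, e.g. ldrb on ARM. */
bool load_flag(const bool *p) {
    return *p;
}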
You said you can’t read one byte. I showed that you can. Where’s the confusion?
Weird how I usually learn more from the humor communities than the serious ones… 😎