I once debugged a problem caused by a left shift of 1ULL by 64, in a routine that tried to set the lowest n bits by shifting 1 left and subtracting 1 (a shift by the full operand width is undefined behaviour in C). I had to read the Intel docs to find out what actually happens, so I could work out how badly we'd corrupted the customer data and how to write a fix to unmangle it.
What Intel processors do is ignore the upper bits of the shift count (masking it to the low 6 bits for a 64-bit operand), so the expression is effectively (1ULL << (x % 64)) - 1. The function that should have returned all ones for x = 64 therefore returned all zeros.