As far as I am aware, decimal and hexadecimal are simply representations of (let's say) an int.
This means that if I define an integer, x I should be able to print x as:
- a decimal: printf("%d", x);
- a hexadecimal: printf("%x", x);
What I don't understand is how this behaves when x exceeds MAXINT.
Take the below code for example:
#include <stdio.h>

int main(int argc, char **argv) {
    // Define two numbers that are both less than MAXINT
    int a = 808548400;
    int b = 2016424312;
    int theSum = a + b;      // 2824972712 -> larger than MAXINT
    printf("%d\n", theSum);  // -1469994584 -> Overflowed
    printf("%x\n", theSum);  // A861A9A8 -> Correct representation
}
As my comments suggest, the sum of these two decimal numbers is a number larger than MAXINT. This number has overflowed when printed as a decimal (as I would expect), but when printed as hexadecimal it appears to be perfectly fine.
Interestingly, if I continue adding to this number, and cause it to overflow again, it returns to representing the decimal number correctly. The hexadecimal number is always correct.
Could anyone explain why this is the case?
TIA
Citing n1570 (latest C11 draft), §7.21.6.1 p8:

    o,u,x,X — The unsigned int argument is converted to unsigned octal (o), unsigned decimal (u), or unsigned hexadecimal notation (x or X) in the style dddd; the letters abcdef are used for x conversion and the letters ABCDEF for X conversion.
So, in a nutshell, you're using a conversion specifier for the wrong type here: %x expects an unsigned int, but you pass a (signed) int. Moreover, your signed overflow is undefined behavior, but your implementation obviously uses 2's complement for negative numbers and the overflow wraps around on your system, so the result has exactly the same representation as the correct unsigned number.
If you used %u instead of %x, you would see the same number in decimal notation. Again, this would be the result of undefined behavior that happens to be "correct" on your system. Always avoid signed overflow; the result is allowed to be anything.