I'm a beginner and I need help converting a 16-bit binary number to hex. I have done most of the code, but I need help with a couple of things.
- How do I make it accept only 0 and 1 in the input and ignore the other digits and letters?
- After the conversion I'm getting the wrong hex number. What did I do wrong?
Example input:
Expected output:
Current output:
Here's my code:
.MODEL SMALL
.STACK 1000h
.DATA
title db 'Convert BIN to HEX:.',13,10,'$'
HEX_Map DB '0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F'
HEX_Out DB "00", 13, 10, '$' ; string with line feed and '$'-terminator
.CODE
main PROC
mov ax, @DATA ; Initialize DS
mov ds, ax
mov ah, 0
mov al, 3 ;clearing
int 10h
mov ah, 9
lea dx, title
int 21h ;displays title
mov dx, 0
loop16:
mov cx, 16 ;loop goes 16 Times because I need 16 bit binary input
mov bx, 0
;here I'm checking if the input number is 0 or 1, but it doesn't work as I want
read:
mov ah, 10h
int 16h
cmp al, '0'
jb read
cmp al, '1'
ja read10
read10:
mov ah, 0eh
int 10h
sub al, 48 ;conversion, sub 48 from ascii since 0 is on 48th place in ascii, but I'm not sure if this part is must to be or not
jmp end_loop
end_loop:
mov ah, 0 ;ah=0 so we can add ax to bx
add bx, ax
loop read
push bx ;here I push bx on stack, bx is as my input number
mov al, 13
mov ah, 0eh
int 10h
mov al, 10
mov ah, 0eh
int 10h
mov di, OFFSET HEX_Out ; First argument: pointer
pop bx ;Here I take input number from stack
mov ax, bx
call IntegerToHexFromMap ; Call with arguments
mov ah, 09h ; Int 21h / 09h: Write string to STDOUT
mov dx, OFFSET HEX_Out ; Pointer to '$'-terminated string
int 21h ; Call MS-DOS
mov ah, 10h
int 16h
mov ax, 4C00h ; Int 21h / 4Ch: Terminate program (Exit code = 00h)
int 21h ; Call MS-DOS
main ENDP
IntegerToHexFromMap PROC
mov si, OFFSET Hex_Map ; Pointer to hex-character table
mov bx, ax ; BX = argument AX
and bx, 00FFh ; Clear BH (just to be on the safe side)
shr bx, 1
shr bx, 1
shr bx, 1
shr bx, 1 ; Isolate high nibble (i.e. 4 bits)
mov dl, [si+bx] ; Read hex-character from the table
mov [di+0], dl ; Store character at the first place in the output string
mov bx, ax ; BX = argument AX
and bx, 00FFh ; Clear BH (just to be on the safe side)
shr bx, 1
shr bx, 1
shr bx, 1
shr bx, 1 ; Isolate high nibble (i.e. 4 bits)
mov dl, [si+bx] ; Read hex-character from the table
mov [di+1], dl ; Store character at the first place in the output string
mov bx, ax ; BX = argument AX
and bx, 00FFh ; Clear BH (just to be on the safe side)
shr bx, 1
shr bx, 1
shr bx, 1
shr bx, 1 ; Isolate high nibble (i.e. 4 bits)
mov dl, [si+bx] ; Read hex-character from the table
mov [di+2], dl ; Store character at the first place in the output string
mov bx, ax ; BX = argument AX (just to be on the safe side)
and bx, 00FFh ; Clear BH (just to be on the safe side)
and bl, 0Fh ; Isolate low nibble (i.e. 4 bits)
mov dl, [si+bx] ; Read hex-character from the table
mov [di+3], dl ; Store character at the second place in the output string
ret
IntegerToHexFromMap ENDP
IntegerToHexCalculated PROC
mov si, OFFSET Hex_Map ; Pointer to hex-character table
mov bx, ax ; BX = argument AX
shr bl, 1
shr bl, 1
shr bl, 1
shr bl, 1 ; Isolate high nibble (i.e. 4 bits)
cmp bl, 10 ; Hex 'A'-'F'?
jl .1 ; No: skip next line
add bl, 7 ; Yes: adjust number for ASCII conversion
.1:
add bl, 30h ; Convert to ASCII character
mov [di+0], bl ; Store character at the first place in the output string
mov bx, ax ; BX = argument AX
shr bl, 1
shr bl, 1
shr bl, 1
shr bl, 1 ; Isolate high nibble (i.e. 4 bits)
cmp bl, 10 ; Hex 'A'-'F'?
jl .2 ; No: skip next line
add bl, 7 ; Yes: adjust number for ASCII conversion
.2:
add bl, 30h ; Convert to ASCII character
mov [di+1], bl ; Store character at the first place in the output string
mov bx, ax ; BX = argument AX
shr bl, 1
shr bl, 1
shr bl, 1
shr bl, 1 ; Isolate high nibble (i.e. 4 bits)
cmp bl, 10 ; Hex 'A'-'F'?
jl .3 ; No: skip next line
add bl, 7 ; Yes: adjust number for ASCII conversion
.3:
add bl, 30h ; Convert to ASCII character
mov [di+2], bl ; Store character at the first place in the output string
mov bx, ax ; BX = argument AX (just to be on the safe side)
and bl, 0Fh ; Isolate low nibble (i.e. 4 bits)
cmp bl, 10 ; Hex 'A'-'F'?
jl .4 ; No: skip next line
add bl, 7 ; Yes: adjust number for ASCII conversion
.4:
add bl, 30h ; Convert to ASCII character
mov [di+3], bl ; Store character at the second place in the output string
ret
IntegerToHexCalculated ENDP
END main ; End of assembly with entry-procedure
You can't use int 10h (0Eh) for char output while you collect bits into bx. That int call requires bl set to the foreground colour of the text and bh to point to the text page.
Also, in bx you will count the number of ones, not the input number. Try it in a debugger (your original code): put a breakpoint after loop and enter (blindly, if it doesn't show) for example "1100110011001100", and bx will be 8 (I may be wrong if some int call destroys bx; I didn't run it, just in my head).
So to fix your input part I would go for int 21h, 2 instead for displaying the chars, like this (this also fixes the accumulation of the result in bx):
; read 16 bits from keyboard ('0'/'1' characters accepted only)
mov cx, 16 ; loop goes 16 Times because I need 16 bit binary input
xor bx, bx ; result number (initialized to zero)
read:
mov ah, 10h
int 16h ; read character from keyboard
cmp al, '0'
jb read ; ASCII character below '0' -> re-read it
cmp al, '1'
ja read ; ASCII character above '1' -> re-read it
mov dl,al ; keep ASCII for output in DL
shr al,1 ; turn ASCII '0'(0x30)/'1'(0x31) into CF=0/1 (Carry Flag)
rcl bx,1 ; enrol that CF into result from right (and shift previous bits up)
mov ah,2 ; output character in DL on screen
int 21h
loop read ; read 16 bits
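If you want to sanity-check the shr/rcl trick in the snippet above without a DOS environment, here is the same accumulation logic sketched in Python (an illustration only; the function name is mine, not part of the original program):

```python
def read_bits(chars):
    """Mimic the shr al,1 / rcl bx,1 accumulation from the assembly above."""
    bx = 0
    for ch in chars:                      # each ch is an accepted '0' or '1' key
        cf = ord(ch) & 1                  # shr al,1: ASCII 0x30 -> CF=0, 0x31 -> CF=1
        bx = ((bx << 1) | cf) & 0xFFFF    # rcl bx,1: shift CF in from the right
    return bx

print(hex(read_bits("1100110011001100")))  # -> 0xcccc
```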
I didn't check the rest of the code, because if I did, I would have a strong itch to rewrite it completely, so let's stick with the input part only for the moment.
The debugger should allow you to step one instruction at a time (or to put breakpoints on any line and run up to them).
So you can examine values in registers and memory after each step.
If you, for example, put a breakpoint ahead of your add bx,ax in the original code, you should be able to read in the debugger (after hitting the "1" key and the debugger breaking on the add) that ax is 1 (according to the key pressed), and bx goes from 0 to the count of "1" key presses (in further iterations).
After four "1" key presses it should be obvious to you that bx equal to 4 (0100 in binary) is far off from 1111, so something doesn't work as you wanted, and you have to readjust from "what I wanted to write there" to "what I really wrote": read your code again and understand what needs to be changed to get the expected result.
In your case, for example, adding the instruction shl bx,1 ahead of the add would fix the situation (moving the old bits by one position "up" and leaving the least significant bit set to zero, i.e. "ready for add ax").
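The difference between the original add-only loop and the fixed shl-then-add version can be seen in a small Python sketch (hypothetical helper names, just to illustrate the two behaviours):

```python
def collect_add_only(keys):
    bx = 0
    for k in keys:
        bx += k             # original code: add bx,ax merely counts the ones
    return bx

def collect_shl_then_add(keys):
    bx = 0
    for k in keys:
        bx = (bx << 1) + k  # shl bx,1 then add bx,ax builds the binary number
    return bx

ones = [1, 1, 1, 1]                  # four "1" key presses
print(collect_add_only(ones))        # 4  (0100 in binary)
print(collect_shl_then_add(ones))    # 15 (1111 in binary)
```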
Keep trying the debugger stuff hard; it's almost impossible to do anything in Assembly without figuring out a debugger. Or keep asking here about what you see and what you don't understand. It's really absolutely essential for Assembly programming.
The other option is just to "emulate" the CPU in your head and run the instructions from the screen with help notes (I strongly suggest paper; the PC somehow doesn't work well for me). This is much more difficult and tedious than using a debugger. It may take weeks/months before you start to "emulate" without too many mistakes, so that you usually spot bugs on the first try. On the bright side, this gives you a deep understanding of how the CPU works.
About the second part (number to hexadecimal string conversion).
I will try to help you understand what you have at hand, and pick up some mistakes from original code to demonstrate how to work with it.
So you have a 16-bit number, like:
1010 1011 1100 1101 (unsigned decimal 43981)
I put spaces between each group of 4 bits (such a group is called a "nibble"), because there's a funny fact: each nibble forms exactly one hexadecimal digit. So the number above is, in hexadecimal:
A B C D (10, 11, 12, 13)
Check how each hex digit corresponds to the 4 bits above.
So what you want is to break the original 16b value into four 4 bit numbers, from most significant to least significant (b12-b15, b8-b11, b4-b7, b0-b3 => particular bits from 16 bit number: "b15 b14 b13 ... b2 b1 b0").
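The splitting described above can be written directly as shifts and masks; a quick Python check (illustrative only, not part of the assembly):

```python
n = 0b1010101111001101   # 43981, i.e. 0xABCD
# take nibbles from most significant (b12-b15) to least significant (b0-b3)
nibbles = [(n >> shift) & 0xF for shift in (12, 8, 4, 0)]
print(nibbles)           # [10, 11, 12, 13], the hex digits A B C D
```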
Each such number will be a value 0-15 (because they are 4 bits, using all possible combinations), so then you want to turn that into the ASCII characters '0'-'9' for values 0-9, and 'A'-'F' for values 10-15.
And each converted value is stored into the memory buffer at the next byte position, so in the end they form the string "ABCD".
This may sound "obvious", but it's a complete description of the inner calculation of part 2, so make sure you really understand each step, so that you can check your code against this any time and search for differences.
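Putting that whole description together, here is a minimal Python model of the conversion (my own sketch with assumed names, useful for checking your assembly against):

```python
def to_hex_string(n):
    out = ""
    for shift in (12, 8, 4, 0):               # most significant nibble first
        digit = (n >> shift) & 0xF            # each digit is a value 0-15
        if digit < 10:
            out += chr(ord('0') + digit)      # values 0-9  -> '0'-'9'
        else:
            out += chr(ord('A') + digit - 10) # values 10-15 -> 'A'-'F'
    return out

print(to_hex_string(43981))   # ABCD
```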
Now I will show you some of the bugs I see in the second part, trying to connect them to the "theory" above.
Data and structures first:
HEX_Out DB "00", 13, 10, '$'
This compiles to bytes: '0', '0', 13, 10, '$' (or 30 30 0D 0A 24 when viewed as hexadecimal bytes).
If you write 'A', 'B', 'C', 'D' over it, can you spot the problem?
Now about IntegerToHexFromMap: from the code it looks like you don't understand what and and shr do (search for an explanation of the bitwise operations).
You extract the same b4-b7 bits from bx (a copy of ax) for the first three characters, then for the fourth letter you extract bits b0-b3. So this is your attempt to extend 8-bit conversion code to 16 bits, but you don't extract the correct bits.
I will try to comment the first part of it extensively, to give you an idea of what you did.
; bx = 16 bit value, mark each bit as "a#" from a0 to a15
and bx, 00FFh
; the original: a15 a14 a13 ... a2 a1 a0 bits get
; AND-ed by: 0 0 0 ... 1 1 1
; resulting into bx = "a7 to a0 remains, rest is cleared to 0"
shr bx, 1
; shifts bx to right by one bit, inserting 0 into top bit
; bx = 0 0 0 0 0 0 0 0 0 a7 a6 a5 a4 a3 a2 a1 (a0 is in CF)
shr bx, 1
; shifts it further
; bx = 0 0 0 0 0 0 0 0 0 0 a7 a6 a5 a4 a3 a2 (a1 is in CF)
shr bx, 1
; bx = 0 0 0 0 0 0 0 0 0 0 0 a7 a6 a5 a4 a3 (a2 ...)
shr bx, 1
; bx = 0 0 0 0 0 0 0 0 0 0 0 0 a7 a6 a5 a4
; so if bx was value 0x1234 at the beginning, now bx = 0x0003
; conversion to ASCII and write is OK.
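You can verify that last comment numerically; the masking and shifting shown above really does map 0x1234 to 0x0003 (a quick Python check, illustration only):

```python
bx = 0x1234
bx &= 0x00FF      # and bx,00FFh  -> 0x0034
bx >>= 4          # four shr bx,1 -> 0x0003
print(hex(bx))    # 0x3
```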
So you take bits b4-b7 for the first character, but you need bits b12-b15. I hope you fully get this one; I know it can be confusing at the start which bit is which, and why sometimes something is on the right and then on the left.
Bits are usually named from the least significant (value 2^0 = 1, so I call it "b0") to the most significant (value 2^15 = 32768 in the case of a 16-bit number; I call it "b15").
But for numeric reasons bits are written from most significant to least significant (in binary numbers), so the bits on the "left" start with b15, and the bits on the "right" end with b0.
Shifting to the right means moving b_i to b_(i-1), which effectively halves its value, so shr value,1 can also be viewed as unsigned division by two.
Shifting to the left is from b_i to b_(i+1), effectively multiplying the value by two (instructions shl and sal, both producing the same result, as b0 is set to zero by both).
sar is the "arithmetic" shift right, keeping the value of the most significant bit intact (the sign bit), so for -1 (all bits set) it will again produce -1; for all other numbers it works as signed division by two.
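The three shift flavours can be modelled on 16-bit values like this (a Python sketch; Python's own >> is already arithmetic on negative ints, so sar is emulated explicitly):

```python
def shr(v, n):                    # logical shift right: fills with zeros
    return (v & 0xFFFF) >> n

def sar(v, n):                    # arithmetic shift right: keeps the sign bit
    v &= 0xFFFF
    if v & 0x8000:                # negative in 16-bit two's complement
        v -= 0x10000              # reinterpret as a signed Python int
    return (v >> n) & 0xFFFF

print(shr(6, 1))            # 3: unsigned divide by two
print(6 << 1)               # 12: shl/sal multiplies by two
print(hex(sar(0xFFFF, 1)))  # 0xffff: -1 stays -1
```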
BTW, since the 80286 CPU you can use shr bx,4 (which can also be seen as dividing by 16 = 2*2*2*2). Are you really forced to code for 8086? Then it may be worth loading cl with 4 and doing shr bx,cl instead of four times shr bx,1. Four identical lines annoy the hell out of me.
Also, if you already understand what and does, this must look ridiculous to you now:
and bx, 00FFh ; why not 0Fh already here???
and bl, 0Fh
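A one-line check shows why the wider mask is pointless here; the second and makes the first one redundant:

```python
x = 0x1234
# and bx,00FFh followed by and bl,0Fh keeps only the low nibble anyway
assert (x & 0x00FF) & 0x0F == x & 0x0F
print("the 00FFh mask changes nothing")
```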
Now contemplate for a while how to extract bits b12-b15 for the first character, and how to fix your IntegerToHexFromMap.
And ultimately I will show you how I would rewrite it to make the code very short, meaning the source, but also the binary size. (For performance I would write different code, and not for 8086, but this one should work on 8086.)
WARNING - try to fix your version on your own using the advice above. Only once you have a fixed version, look at my code, as inspiration for new ideas about how some things were written 30 years ago. Also, if you are doing a school assignment, make sure you can say everything about the XLAT instruction off the top of your head, because as a lecturer I would be highly suspicious of any student using this one: it's total history, and since compilers don't use it, it's obvious the code was written by a human, and probably an experienced one.
IntegerToHexFromMap PROC
; ax = number to convert, di = string buffer to write to
; modifies: ax, bx, cx, dx, di
; copy of number to convert (AX will be used for calculation)
mov dx, ax
; initialize other helpful values before loop
mov bx, OFFSET HEX_Map ; Pointer to hex-character table
mov cx, 00404h ; for rotation of bits and loop counter
; cl = 4, ch = 4 (!) Hexadecimal format allows me
; to position the two "4" easily in single 16b value.
FourDigitLoop: ; I will do every digit with same code, in a loop
; move next nibble (= hexa digit) in DX into b0-b3 position
rol dx, cl
; copy DX b0-b3 into AL, clear other bits (AL = value 0-15)
mov al, dl
and al, 0Fh
; convert 0-15 in AL into ASCII char by special 8086 instruction
; designed to do exactly this task (ignored by C/C++ compilers :))
xlat
; write it into string, and move string pointer to next char
mov [di],al
inc di
; loop trough 4 digits (16 bits)
dec ch
jnz FourDigitLoop
ret
IntegerToHexFromMap ENDP
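If you want to check the rol/xlat loop above without a DOS environment, here is a direct Python port of the same logic (my own sketch, not the original code):

```python
HEX_MAP = "0123456789ABCDEF"                    # same table as HEX_Map

def integer_to_hex_from_map(ax):
    dx = ax & 0xFFFF                            # copy of the number to convert
    out = ""
    for _ in range(4):                          # ch = 4: one pass per digit
        dx = ((dx << 4) | (dx >> 12)) & 0xFFFF  # rol dx,cl with cl = 4
        out += HEX_MAP[dx & 0x0F]               # and al,0Fh + xlat table lookup
    return out

print(integer_to_hex_from_map(43981))           # ABCD
```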
If you just use this code without understanding how it works, god will kill a kitten... you don't want that, right?
Final disclaimer: I don't have any 16-bit x86 environment, so I wrote all the code without testing (I only try to compile it sometimes, but the syntax must be NASM-like, so I didn't do that for these MASM/TASM/emu8086 sources). Thus some syntax bugs may be there (maybe even a functional bug? :-O); in case you are unable to make it work, comment.