Gentoo Forums
Learning 64-bit ASM (amd64) on Linux

Gentoo Forums Forum Index » Portage & Programming

steveL
Advocate


Joined: 13 Sep 2006
Posts: 2717
Location: The Peanut Gallery

PostPosted: Fri Oct 18, 2013 3:48 pm    Post subject: Learning 64-bit ASM (amd64) on Linux Reply with quote

Hi,

I often hear people on IRC ask how to learn assembler, which is IMO essential knowledge for any programmer: even if you don't actually write assembly code on a daily basis in your career, if you have ever coded asm, then nothing computers do is ever that mysterious to you. You gain a much better intuitive understanding of what computers really do, which informs all of your other code. As such it is something I believe people should learn before they learn C, C++ or Java. To my mind it should be your second language, once you've understood the basics of typing in text which a computer processes to decide what to do, and of how variables work.

If you're completely new to coding, I highly recommend the awkbook: "The AWK Programming Language" (Aho, Kernighan and Weinberger, 1988). Buy a copy when you can; you will not regret it. It is the best introduction to programming I have ever read, and it covers Core Computing much better than any course at a University, afaic.
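For a taste of the awkbook's style, here's a classic one-liner in its spirit: a word-frequency counter (the input sentence is just a made-up example):

```shell
# A word-frequency counter in one short awk program; the input
# sentence is only an example.
printf 'the cat sat on the mat\n' |
awk '{ for (i = 1; i <= NF; i++) count[$i]++ }
     END { for (w in count) print w, count[w] }' |
sort
```

Three lines of awk, and you have a little text-processing program: that's the kind of thing the book teaches you to think in.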

So let's assume you know a bit about programming, and that you're a Gentoo user, which means you know your way around a terminal, so commands don't scare you. First off, the best book for learning asm that I've been able to find is the second edition of "Introduction to 64 Bit Intel Assembly Language Programming for Linux", Seyfarth (2012): http://rayseyfarth.com/asm

If you want to learn asm, you will need to buy a copy. Basically you can take it from there, just by reading the book and working through every exercise. The author provides a nice build environment called ebe, which you can download from his site. Hmm, looks like it's on sourceforge now, so maybe we should look at an ebuild. I've also set up kate to do exactly the same thing, but in my usual work environment: I'll explain after you've set up your assembler (the most vital part).

To actually assemble, you'll need yasm on Gentoo: a nasm-compatible assembler that works well for 64-bit coding, and produces correct debugging info for gdb.

However, you should install nasm too, with the doc USE flag set in package.use. That gives you the html docs for nasm, which you'll need in order to understand the syntax etc. of yasm.

For me this was as simple as update -ia nasm yasm, then hitting 'e' to Edit the list and setting the USE flag for nasm. I didn't want the python flag for yasm, but you might: I don't know what it does (I imagine it's useful in conjunction with gdb python), and I'm not interested at this point. Alternatively:
Code:
echo 'dev-lang/nasm doc' >> /etc/portage/package.use
emerge -av nasm yasm

I have DOC_SYMLINKS_DIR="/doc" in /etc/portage/make.conf which means the html docs are symlinked from /doc/dev-lang/nasm
Code:
$ ls -ld /doc/dev-lang/nasm
lrwxrwxrwx 1 root root 32 Oct 17 13:36 /doc/dev-lang/nasm -> /usr/share/doc/nasm-2.10.07/html

I did this yesterday when I found that man yasm doesn't explain the syntax or usage: it refers you to nasm(1). It turned out I already had the doc USE flag set, so I must have installed it before, then removed it when I thought I only needed yasm.
I don't really need to do asm at the moment: I've done it before many years ago; I bought the Seyfarth book for my kid, but she ofc is not interested heh. I just wanted a bit of a downtime thing to play with, to take my mind off other code, and insanely long bash scripts ;)

Since I love kate so much, I decided just to start with the first chapter, in order to see whether I could use kate easily enough. I tried ebe a few months ago when I got the book, and while I like it, and may use it at some point, I'm already used to the Build Plugin and the GDB Plugin for C code. So I'll document how to do it with kate, for those of us who use it: feel free to post how to set up the same thing in vim, emacs, jupp and so on. The idea is to get into asm, not to argue about editor choices, which is about as useful as arguing about what desktop you prefer. It doesn't improve anything: so please don't. :-)

If you don't know the Build Plugin in kate, I highly recommend it. You can set up multiple "Targets" per session, each with their own working directory and separate command-line scriptlets for Build, Clean and Quick Compile. I have Build shortcut to Ctrl-Shift-B and Quick Compile to Ctrl-Shift-C, and I run Clean from the menubar. If you leave the working directory empty, kate uses the directory of the current file, which is what we want for small asm files; for a larger project, Build would run make in the main project directory, and for pretty much any project, Compile would pass the name of the source file, transformed to eg make foo.o

For my small asm session though, I just wanted to compile the current file. If you look at Seyfarth, there are two small asm files we can start with: exit.asm, the most minimal program you could think of, in Section 1.4, which just exits with a value of 5 (returned to the shell as $?); and fp.asm in Section 2.4, which defines 7 float values in the data segment.

This is exit.asm
Code:
; exit: exit(5): Seyfarth, 1.4
segment   .text
global   _start

_start:
   mov eax, 1   ; syscall number: exit (32-bit int 0x80 interface)
   mov ebx, 5   ; param: the exit status
   int 0x80     ; trap into the kernel

; kate: replace-tabs off
; kate: syntax Intel x86 (NASM)

The last modeline is required to get kate to open the file with the correct syntax highlighter (you can use hl instead of syntax, but the latter is clearer) in any session. I recommend you also set the highlighter from the Tools menu, as otherwise on save you get a little flicker/glitch as kate first applies the highlighter for the default mimetype (6502 asm) and then picks up on the modeline. If it's set from the menubar, then kate saves that setting for the file in the session, and never glitches. I'll ask #kate about that; some of you may actually prefer it, as it's a visual indicator of save, but I found it a bit annoying, personally. The nasm highlighter needs some stuff added to it, perhaps as a yasm.xml, eg SSE2 insns, and some of it could do with a tweak, so we'll sort that out soon (patches welcome;)

Now the Build Plugin, which you enable from Settings -> Configure Kate -> Plugins: while you're there add the GDB Plugin.
I also recommend the Multiline tab bar, which you have to set up from the spanner icon once it's loaded. Eg I use 2 tablines, ordered by Document Name, with the width set to minimum 75, maximum 150, and highlighting colours for modified (red), active (green) and previous (blue): note you should turn the opacity up (I go to half) to make the highlighting clearly visible. After that you can highlight other tabs with custom colours by right-clicking on them; eg in a busier session, I sometimes highlight central header files in purple, or the makefile in orange, etc.

So first off, click on the Build Plugin's Target Settings tab, and type Run into the drop-down in the top left corner, just above the three small New, Copy and Delete buttons: we'll set up a target to run the above program, which we can use with any asm file we want to build as a main program and run.
You can leave the working directory empty, or click the directory folder on the right, and it will start you off in the directory of the current file: if that's exit.asm above, which I saved to ~/code/amd64, then that's fine; you may prefer simply to leave it blank. Either way, if you hover over the Quick Compile bar, you'll see a tooltip about %f, %n and %d: these all expand to absolute paths, which is a bit annoying for the listing files you get. So we'll tweak %f (source filename) and %n (filename without suffix, similar in intent to $* in make).

Here's what I have in the Quick Compile bar:
Code:
f='%f'; f=${f##*/};  n=${f%.*}; yasm -f elf64 -g dwarf2 -l "$n.lst" "$f" && ld -o "$n" "$n.o" && { ./"$n"; echo "$f: $n: $?" >&2; }

With the above exit.asm file, Ctrl-Shift-C (or selecting it from the Build menu) assembles and links it, runs the output file and displays the exit status on stderr, which shows in both the Output tab and the Errors & Warnings tab. From the Filesystem Browser (I have that on the left, and moved my Documents view to the right) you can then simply click on exit.lst to see the listing file. You'll notice that it says
Code:
%line 1+1 exit.asm
at the top; if you run yasm using %f instead of $f above, you get the absolute path there instead.
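If the parameter expansions in that scriptlet are new to you, you can try them by hand in any shell; the path here is a made-up example standing in for kate's %f:

```shell
# The expansions from the Quick Compile scriptlet, run by hand;
# the absolute path is just an illustrative stand-in for %f.
f='/home/user/code/amd64/exit.asm'
f=${f##*/}   # strip the directory part: exit.asm
n=${f%.*}    # strip the suffix:         exit
echo "$f $n"
```

That prints "exit.asm exit": ${f##*/} removes the longest match of */ from the front, and ${f%.*} removes the shortest match of .* from the end.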

Within a project we would not want to run the actual output, just build it; so for a Prog target (in an assembler session) you'd remove the last part, the && { .. ; } above.

Here's fp.asm
Code:
; fp.asm: Seyfarth, 2.4
; float data file
segment   .data

zero  dd     0.0
one   dd     1.0
neg1  dd    -1.0
a     dd     1.75
b     dd   122.5
d     dd     1.1
e     dd  10000000000.0

; kate: syntax Intel x86 (NASM)

Now, to build a data file, we just want the standard yasm command to run; iow we're doing the normal Quick Compile, to build foo.o.
So click either the New or Copy button in the Target Settings tab, and call the new target Asm (or something else you prefer). Then in Quick Compile you just want:
Code:
f='%f'; f=${f##*/};  n=${f%.*}; yasm -f elf64 -g dwarf2 -l "$n.lst" "$f"
Then click on fp.lst to see the output.

That's enough to get you started: I'm sure you can see how to use yasm yourself, and all you need to do is work through Seyfarth. If you don't know binary and hexadecimal, be sure to spend enough time on 2.1 and 2.2: working through examples with pencil and paper is the best way. Note that he covers SSE2 and AVX instructions, as well as how to do standard things like interfacing with C, structs, and pure-assembly hash tables. ;-)
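Pencil and paper is how you learn it, but the shell is handy for checking your conversions afterwards; eg (the last line uses bash's base#n arithmetic):

```shell
# Quick sanity checks for the binary/hex exercises:
printf '%d\n' 0x7d      # hex 7d -> decimal 125
printf '%x\n' 122       # decimal 122 -> hex 7a
echo "$((2#1111101))"   # binary 1111101 -> decimal 125 (bash)
```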

As stated, please do post how to set up vim, emacs, joe, jupp, geany et al. to do the same thing, and tie in with gdb, for yasm coding on Gentoo; or ask if you're having an issue understanding something in the book. Between us, we know (or can find out;) just about all we need to help :-) And don't forget ##asm on IRC: chat.freenode.net, and I'm always around in #friendly-coders when online: my nick is igli.

HTH,
steveL.
_________________
creaker wrote:
systemd. It is a really ass pain

update - "a most excellent portage wrapper"

#friendly-coders -- We're still here for you™ ;)


Last edited by steveL on Sat Oct 19, 2013 11:16 am; edited 1 time in total
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 32099
Location: 56N 3W

PostPosted: Fri Oct 18, 2013 6:24 pm    Post subject: Reply with quote

steveL,

I'm a hardware guy: I had to design and build my CPU before I could program it; assembler would have been a luxury :)

I see knowing assembler as a double edged sword. It depends on where your software output is targeted.
If you are writing target hardware independent code, you have to trust the compiler, so it makes little difference if you know assembler or not.
There is even an argument that says not knowing assembler is best.

If you write in a high-level language for hardware that you understand at the machine level, you tend to know how your compiler of choice plants code, so you pick some code constructs over others to help the compiler produce leaner, meaner code. It may still be portable too, but not as efficient on other hardware where the compiler does things differently.

Don't get me started on byte code based interpreters, where the 'machine' doesn't actually exist.

I don't program much any more; it's not as much fun now as it was on 8-bit micros, where every instruction you squeezed out made it faster.
Assembler for PPC made my head hurt - all that out of order execution, which you had to write by hand, now in Intel/AMD too.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
steveL
Advocate


Joined: 13 Sep 2006
Posts: 2717
Location: The Peanut Gallery

PostPosted: Fri Oct 18, 2013 7:38 pm    Post subject: Reply with quote

NeddySeagoon wrote:
I'm a hardware guy: I had to design and build my CPU before I could program it; assembler would have been a luxury :)

Heh: and it shows OM ;-)
Quote:
I see knowing assembler as a double edged sword. It depends on where your software output is targeted.
If you are writing target hardware independent code, you have to trust the compiler, so it makes little difference if you know assembler or not.

The first part of your statement is true. As a whole however it is a non-sequitur. The difference it makes is the perspective it gives you as a programmer: as I said at the beginning: "if you have ever coded asm, then nothing computers do is ever that mysterious to you. You gain a much better intuitive understanding of what computers really do, which informs all of your other code."

You are falling into the trap of conflating "inform" with some other idea about making you write code in a particular language to suit some CPU you might know. But you just said above that you have no influence at all over what the compiler outputs: and indeed most novices are amazed at what the compiler outputs. At least, the ones who actually know asm in the first place. The rest have no clue what a machine really does.

A hardware guy does, sure. But we're not talking about hardware nor EEng, we're talking about programmers. For programmers to be able to appreciate what a compiler does, they need to know asm in the first place, or its output might as well be a net layout.
Quote:
There is even an argument that says not knowing assembler is best.

"An argument can be made" is the biggest crap I've heard: in general an argument can be made for anything. So let's discuss the argument itself. Sorry: I react badly to that phrase, even when implicit, since it was used by the Attorney-General in his rewritten-to-Bliar's-order "advice" on the Iraq War. Where he had stated unequivocally that it would be illegal, in the suppressed Opinion, the rewrite was all about "an argument can be made," an argument everyone involved knew to be specious.
Quote:
If you write in a high-level language for hardware that you understand at the machine level, you tend to know how your compiler of choice plants code, so you pick some code constructs over others to help the compiler produce leaner, meaner code. It may still be portable too, but not as efficient on other hardware where the compiler does things differently.

Those are all classic mistakes of someone who isn't really a programmer. If you were a programmer, and not a hardware guy, you'd know damn well by now to keep your C clean and portable first and foremost, and never to do anything as idiotic as what you suggest. With all your years of experience, you'd have read UPE many decades ago, in fact you'd embody it since the philosophy is very much your modus operandi, and K&R would be at your desktop, along with several other books. And you would have programmed across several architectures, with an awareness that different machines do things very differently: which is why 99% of a *nix is in C.

"Knowledge comes from experience. And experience comes from making mistakes," and learning from them.
Quote:
I don't program much any more, its not as much fun now as it was on 8 bit micros, where every instruction you squeezed out made it faster.
Assembler for PPC made my head hurt - all that out of order execution, which you had to write by hand, now in Intel/AMD too.

Read the book: it's by a guy of about your age. In fact he makes it clear to his students that they should not replicate looping constructs in the same way, but use the standard C for. And remember: some of us were around back then too. ;-)

Essentially, everyone's gone back to "RISC" style coding, and further the memory model means all our instincts are spot-on; I'd argue they always were, it just took the Marketing^W ICT people who bury everything in layers, 20 years to realise it, same as it took C++ people 30 years to emulate LISP, badly. So enjoy your golden years: you're the old road-warrior all the youngsters can learn from, and this time they realise it. :-)


Last edited by steveL on Sat Mar 15, 2014 12:11 am; edited 1 time in total
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 32099
Location: 56N 3W

PostPosted: Fri Oct 18, 2013 8:03 pm    Post subject: Reply with quote

steveL,

Heh - like I said at the outset, I'm a hardware guy.

steveL wrote:
For programmers to be able to appreciate what a compiler does, they need to know asm in the first place, or its output might as well be a net layout.
:) if you want to upset the FPGA or HDL guys, call them programmers :)

steveL wrote:
Essentially, everyone's gone back to "RISC" style coding

I hadn't thought of it that way, but it's very true.
eccerr0r
Advocate


Joined: 01 Jul 2004
Posts: 4004
Location: USA

PostPosted: Sat Oct 19, 2013 12:06 am    Post subject: Reply with quote

NeddySeagoon wrote:

Assembler for PPC made my head hurt - all that out of order execution, which you had to write by hand, now in Intel/AMD too.

This sort of caught my attention; I've never programmed PPC: is the OOO exposed to the assembly writer? I suppose the "coding rules" of VLIW in ia64 (meaning, the fixed instruction templates) are a kind of OOO exposure, but even on the 2-banger processors, code compiled for the 1-banger will work just fine. Yes, it may not run as quickly as if it were optimally scheduled, but it still should work fine.

As hardware designers we all should have studied the MIPS R2000's exposure of pipeline details to the assembly writer ("Delayed Branch"); we all knew this was great at the time, but horrible for future processors.

For backwards compatibility, I thought it didn't matter whether x86/x86_64 instructions were running on an in-order or OOO machine: the machine handles it. Granted, those CPU core designers have to deal with it...

And yes, what do you call Verilog coders... And I have my beef with Verilog as well: the current extensions, though great for reusing code quickly, I'm sure add gate bloat to the synthesized model...
_________________
Intel Core i7 2700K@ 4.1GHz/HD3000 graphics/8GB DDR3/180GB SSD
What am I supposed to be advocating?
steveL
Advocate


Joined: 13 Sep 2006
Posts: 2717
Location: The Peanut Gallery

PostPosted: Sat Oct 19, 2013 10:19 am    Post subject: Reply with quote

If you mean VHDL/RTL, we call those people: microcoders. (note: not micro-coders.)
eccerr0r
Advocate


Joined: 01 Jul 2004
Posts: 4004
Location: USA

PostPosted: Sat Oct 19, 2013 3:00 pm    Post subject: Reply with quote

Microcode is yet another aspect of chip/design, gets even more confusing there.

Who uses VHDL these days, is it still popular? I thought most people had switched over to Verilog. Not sure; I've seen a bit of switchover though...
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 32099
Location: 56N 3W

PostPosted: Sat Oct 19, 2013 7:18 pm    Post subject: Reply with quote

eccerr0r,

The out-of-order execution was exposed in the dim and distant past. If you wrote in a high-level language, the compiler did it.
True - it was never essential to get the right answers.

I don't know how it's handled on modern PPC.
steveL
Advocate


Joined: 13 Sep 2006
Posts: 2717
Location: The Peanut Gallery

PostPosted: Sun Oct 20, 2013 7:08 am    Post subject: Reply with quote

eccerr0r wrote:
Who uses VHDL these days, is it still popular? I thought most people had switched over to Verilog. Not sure; I've seen a bit of switchover though...

No idea: I'm a software guy ;)
eccerr0r
Advocate


Joined: 01 Jul 2004
Posts: 4004
Location: USA

PostPosted: Sun Oct 20, 2013 11:24 pm    Post subject: Reply with quote

I suppose I/O instructions and memory instructions may be dumped out onto the bus out of order; that may be the only externally visible architectural issue with OOO machines. But that is sort of a problem with caches in general, too.

I'll always remember what the EIEIO instruction was for... and it was indeed for the PPC. x86/ia64 had special uncacheable bits and fences to force things to exit the CPU in proper order, but mostly that was for writeback cache issues.
John R. Graham
Administrator


Joined: 08 Mar 2005
Posts: 7838
Location: Somewhere over Atlanta, Georgia

PostPosted: Mon Oct 21, 2013 2:19 am    Post subject: Reply with quote

I never really understood what those transistors all the hardware guys were talking about did until I studied physics.

- John
_________________
This space intentionally left blank.
miket
Apprentice


Joined: 28 Apr 2007
Posts: 225
Location: Gainesville, FL, USA

PostPosted: Mon Oct 21, 2013 4:26 am    Post subject: Reply with quote

John R. Graham wrote:
I never really understood what those transistors all the hardware guys were talking about did until I studied physics.
Funny, I was a physics major before I switched to computer science. I really loved it when I got to the electrical-engineering courses we were required to take--they filled in the gap between transistors and logic gates. After that, I never thought of processors as being full of magic.

Microprocessors back then were very straightforward: fetch the opcode, fetch immediate operands as needed, fetch operands from memory as needed, execute the operation, store the result if that's what the operation required. There were no execution pipelines or branch prediction, and there was only one processor mode. There weren't many addressing modes. There was no floating point, and there were no multiplication or division instructions. On machines like this, if you needed to eke out performance, you'd pay attention to the processor spec sheets to find the number of cycles it took to perform each instruction, and to the amount of storage you had. Now this was a great starting point for learning to use assembler! I hand-assembled a lot of Z-80 code.

I did more assembler when I moved to 16 bits. The 8086 had a much larger address space and new addressing modes to go with it, but (like them or not) it also had segment registers. Oh my, though, that was assembler in all its glory. With the stack-relative addressing mode on that processor, I could easily set up and use automatic variables (allocated on the stack) just as well as I could use them in C. This meant that I could do painless linkage between C functions and functions I wrote in assembler--a good thing for the places where I needed to improve performance. Even by the 80286, I was mindful of instruction timings.

But the fun stopped with the 80386. Compilers were generating better code, and the multiplicity of word-size modes was getting hairy to deal with. The things that really started to get me, though, were the concepts of pipelining and out-of-order execution. I was coming to understand that my choice of instruction ordering could cause pipeline stalls and suboptimal branches that could make my hand-written assembler perform more poorly than the output of a good compiler. Bummer.

Oh well. Concerns like the need for portability to different architectures also militated against most projects' using assembler code.


I do indeed see steveL's point, though. Just as I often say that many programmers today are impoverished by never having programmed in C, with its need to worry about memory allocation and freeing, array addressing, and guarding against dereferencing null pointers, they are also missing out by not having written in assembler, worrying about instruction timings, bit-shift operations, and addressing modes. Just as we can talk about taking away the mystery surrounding how the hardware works, we should clear away the mystery of the foundation of the software. (I also think that programmers should have taken a course on compilers.)

While setting up an environment for writing assembler that targets the architecture of the machine you're using has the merit of being able to link with other programs and libraries on the machine and do something useful, I don't think that you should count on writing typical user-level programs this way. Though it would make a nice learning environment, or a test bed for writing device drivers or specialized functions for unusual cases, most of the time--even if you might do a better job with branch prediction than the compiler--I wouldn't be surprised if the compiler came out better than you in terms of optimal instruction ordering. Bigger gains would come from improvements in the algorithm.

It would, however, be profitable to examine compiler output and note what it has to do for the funky things you might write in C.

Where I think you'd really get good bang for your buck as a learning environment would be 80286 assembler running in an emulator. Pipelining was too primitive in that processor to be much of a crutch. Also, this is an environment where you could kick the compiler's butt.

Having seen the kind of output a C compiler might generate for a 64-bit machine, you'd have a basic idea of what a C compiler does to set up and use a stack frame for a function. You'd find that going from 64- to 16-bit code involves both complication and simplification. The biggest simplification is having to deal with only 16-bit values and pointers, not 64-bit ones.

In any event, using '286 assembler gets you closer, faster, to a lot of things that matter when seeing what kinds of instructions you are generating. Bit-shift operations, for example, are a big deal to me. I had to be very familiar with them way back in those early days, and I still use them a lot now. Here's one example of a very practical use for bit shifting when programming in assembler: being able to address entries in an array of structures. For quite a long time (and probably even now, even though I'm not entirely sure about the complete set of amd64 addressing modes), it has been much more efficient to multiply the structure size by the index using bit-shift-and-add operations than with an integer multiply instruction. I wrote macros that did multiply-by-N operations just for this purpose.
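The decomposition those macros would perform is easy to check in a shell; eg for a hypothetical 12-byte structure, index*12 splits into two shifts and an add:

```shell
# Shift-and-add in place of an integer multiply: for an illustrative
# 12-byte structure, i*12 == i*8 + i*4 == (i << 3) + (i << 2).
i=7
echo $(( (i << 3) + (i << 2) ))   # 7*12 = 84
```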

What I don't have, alas, are recommendations for setting up such an environment, or even for those tie-ins that steveL is looking for, for using vim (which is the only editor I'd want to use).
steveL
Advocate


Joined: 13 Sep 2006
Posts: 2717
Location: The Peanut Gallery

PostPosted: Mon Oct 21, 2013 8:55 am    Post subject: Reply with quote

miket wrote:
Funny, I was a physics major before I switched to computer science. I really loved it when I got to the electrical-engineering courses we were required to take--they filled in the gap between transistors and logic gates. After that, I never thought of processors as being full of magic.

Yeah, EEng is essential: unfortunately what they call "Computer Science" nowadays doesn't include it; I had to take first-year EEng myself, for no credit, and that was in '96. It seems to me that most CS depts now are really ICT, which is a Marketing term for Business Studies, and even many professors have never coded. Which sucks for their students, afaic. We see the effects on IRC, especially in ##c when someone comes in with completely idiotic advice from their "teacher."
Quote:
Microprocessors back then were very straightforward: fetch the opcode, fetch immediate operands as needed, fetch operands from memory as needed, execute the operation, store the result if that's what the operation required. There were no execution pipelines or branch prediction, and there was only one processor mode. There weren't many addressing modes. There was no floating point, and there were no multiplication or division instructions. On machines like this, if you needed to eke out performance, you'd pay attention to the processor spec sheets to find the number of cycles it took to perform each instruction, and to the amount of storage you had. Now this was a great starting point for learning to use assembler! I hand-assembled a lot of Z-80 code.

Yeah, I loved Z80; it could have lost half its insns quite happily, but that's the price of 8080 compat. Much nicer to work with than 6502, though. DJNZ and LDIR were lovely, for example. Nowadays it's ARM for me.
Quote:
I do indeed see steveL's point, though. Just as I often say that many programmers today are impoverished by never having programmed in C, with its need to worry about memory allocation and freeing, array addressing, and guarding against dereferencing null pointers, they are also missing out by not having written in assembler, worrying about instruction timings, bit-shift operations, and addressing modes. Just as we can talk about taking away the mystery surrounding how the hardware works, we should clear away the mystery of the foundation of the software. (I also think that programmers should have taken a course on compilers.)

Exactly (underlined for effect); as for compiler theory, the awkbook teaches it much better than Core Computing ever did. Not surprising when you consider who the authors are; they basically invented what Core Computing 101 teaches. After that, once you have K&R done, you work through UPE (linked above) for implementation, along with the Dragon book for theory.
Quote:
While setting up an environment for writing assembler that targets the architecture of the machine you're using has the merit of being able to link with other programs and libraries on the machine and do something useful, I don't think that you should count on writing typical user-level programs this way. Though it would make a nice learning environment, or a test bed for writing device drivers or specialized functions for unusual cases, most of the time--even if you might do a better job with branch prediction than the compiler--I wouldn't be surprised if the compiler came out better than you in terms of optimal instruction ordering. Bigger gains would come from improvements in the algorithm.

Oh yeah, totally: algorithmic improvement opens the door for other optimisations as well. "The Practice of Programming" (Kernighan & Pike, 1999), which is the other book I recommend to beginners, lays this out well: "Once you have chosen the right algorithm, performance optimization is generally the last thing to worry about as you write a program."

Quote:
It would, however, be profitable to examine compiler output and note what it has to do for the funky things you might write in C.

Where I think you'd really get good bang for your buck as a learning environment would be 80286 assembler running in an emulator. Pipelining was too primitive in that processor to be much of a crutch. Also, this is an environment where you could kick the compiler's butt.

To my mind the combination of a decent editor you are comfortable with, and the Seyfarth book, is all that's needed in this context. We're not trying to raise a generation of hardcore asm coders like we used to be, so "kicking the compiler's butt" is both a non-goal and a dangerous game to play, since it encourages them to think they can out-do the compiler, which nowadays is simply not feasible except for very controlled, very specific things like interrupt handling. Instruction sets change too fast; and even if we can sometimes, the goal really is to get them to understand that a computer is a really simple thing: it works at a very basic level, and is effectively a very quick idiot.

Once they get that, the insecurity of not really knowing what's happening goes away. That in turn leads to more self-confident, and less elitist, coders, since they don't need the crutch of snobbery about other languages; they also learn to be more careful, since a crash really is a crash, and at every stage you know that whatever it did is exactly what you told it to do: it would not have jumped to that bad address if you hadn't fed it garbage. Grokking GIGO is really vital, and much less appreciated nowadays, afaict.

amd64 feels a lot more like ARM to me than x86 ever did (though I've never coded x86 and never wanted to), since you have more registers and things like cmov.
Quote:
In any event, using '286 assembler gets you closer faster to a lot of things that matter when seeing what kinds of instructions you are generating. Bit-shift operations, for example, are a big deal to me. I had to be very familiar with them way back in those early days, and I still use them a lot now. Here's one example of a very practical use for bit shifting when programming in assembler: being able to address entries in an array of structures. For quite a long time (and probably even now, even though I'm not entirely sure about the complete set of amd64 addressing modes), it has been much more efficient to multiply the structure size by the index using bit-shift-and-add operations than with an integer multiply instruction. I wrote macros that did multiply-by-N operations just for this purpose.

You should check out "Hacker's Delight" (Warren); there's a 2nd edition out now as well, though it's not really worth buying both unless you're a compiler-writer. Essentially, multiplication by a small constant is a standard compiler optimisation nowadays.
Quote:
What I don't have, alas, are recommendations for setting up such an environment or even for those tie-ins that steveL is looking for for using vim (which is the only editor I'd want to use).

Well, the vim link in the main post is actually to Kate's vi input mode ;-) so you could try that.. Nah, just kidding: if you have vim set up for make then you can use that, or just tell it to run the same commands as given for the kate Quick Compile; apart from that you just need a syntax highlighter for nasm, which must be available already.
Back to top
View user's profile Send private message
creaker
l33t
l33t


Joined: 14 Jul 2012
Posts: 651

PostPosted: Mon Oct 21, 2013 2:07 pm    Post subject: Reply with quote

It was relevant back when my box had only 16 kB of RAM. It was a real treat - fitting the program into 16 kB.
The last time I wrote in assembler was for my old Athlon XP 2400. That was in 2008, when I could still (with great difficulty) compete with gcc and vc on optimization. Since I switched to a Core2Duo, I use it very rarely, for random tasks like PIC or Atmega programming.
Though if masm32 could be used in linux, maybe...
Back to top
View user's profile Send private message
steveL
Advocate
Advocate


Joined: 13 Sep 2006
Posts: 2717
Location: The Peanut Gallery

PostPosted: Mon Oct 21, 2013 2:30 pm    Post subject: Reply with quote

creaker wrote:
It was relevant back when my box had only 16 kB of RAM. It was a real treat - fitting the program into 16 kB.
The last time I wrote in assembler was for my old Athlon XP 2400. That was in 2008, when I could still (with great difficulty) compete with gcc and vc on optimization. Since I switched to a Core2Duo, I use it very rarely, for random tasks like PIC or Atmega programming.
Though if masm32 could be used in linux, maybe...

I don't understand why you'd want to code in x86 when you can use amd64/x86_64, where SSE is guaranteed and you're not register-starved. At least on an 8-bit machine it was understandable, and even then you could bank-switch with EXX on the Z80, which was the "CISC" chip compared to 6502 "RISC" in those days.

Got a nice email from Ray Seyfarth (I wrote to him to ask about using exit.asm and fp.asm) which relates to the OoO execution discussion:
Seyfarth wrote:

Thanks for your write-up on the Gentoo forum. You're very kind.

Have you tried my new ebe based on Qt? (qtebe.sf.net) It has a lot of
worthwhile features. My 2 favorites are the "toy box" and the "bit
bucket". These 2 are things which are not normally part of an IDE. The
rest is mostly as expected in an IDE.

I had some fun figuring out how to get some efficiency out of the AVX
instructions. By using separate registers and unrolling, I managed to get
some benefit from the out-of-order execution and the multiple pipelines.
My Core i7 produced about 6 double precision floating point results per
machine cycle. I doubt that I would have hung in there if I had to design
the actual flow as one would have done on a vector processor years ago. I
viewed this as "giving the CPU some freedom in instruction ordering".

The AVX correlation function in chapter 19 mentions this rate; I imagine that's what he meant.

As for "you're very kind" that's referring to my summary of his book:
All in all, a remarkable work: concise and comprehensive, while taking the reader step-by-step through how to build effective applications, from the beginning. It stands well alongside Kernighan's work, and is as useful for the modern toolchain, imo.
Back to top
View user's profile Send private message
eccerr0r
Advocate
Advocate


Joined: 01 Jul 2004
Posts: 4004
Location: USA

PostPosted: Mon Oct 21, 2013 3:55 pm    Post subject: Reply with quote

I always thought the register bank switch opcode on the Z80 was kind of pointless except in a true embedded application where you won't even run general purpose code... When I coded assembly on the Z80 it was on a general purpose computer (a TRS-80) - that's how I got this impression...

Programs have pretty much gotten too complicated for a mere human to write in assembly anymore, because there are too many variables to keep track of - yes, a large register file makes it easier, but the problems get even larger. Plus it seems that abstracting data structures tends to make it easier for humans to write code faster...

I try not to write in assembly anymore, mainly because it's very difficult to maintain and to reuse from one program to another. It does help save memory though. I was thinking about writing a nonstandard 24-bit floating point library for AVR to save register memory, but my attempt to write it in C failed pretty badly.
_________________
Intel Core i7 2700K@ 4.1GHz/HD3000 graphics/8GB DDR3/180GB SSD
What am I supposed to be advocating?
Back to top
View user's profile Send private message
steveL
Advocate
Advocate


Joined: 13 Sep 2006
Posts: 2717
Location: The Peanut Gallery

PostPosted: Mon Oct 21, 2013 5:14 pm    Post subject: Reply with quote

eccerr0r wrote:
I always thought the register bank switch opcode on the Z80 was kind of pointless except in a true embedded application where you won't even run general purpose code... When I coded assembly on the Z80 it was on a general purpose computer (a TRS-80) - that's how I got this impression...

The "application where you won't even run general purpose code" was any game you wanted to have a chance of selling, ime. But then I had to hand-draw and mask every pixel; man did I want a C64.. ;-)
Quote:
Programs have pretty much gotten too complicated for a mere human to write in assembly anymore, because there are too many variables to keep track of - yes, a large register file makes it easier, but the problems get even larger. Plus it seems that abstracting data structures tends to make it easier for humans to write code faster...

Agreed: that's what high-level languages like C are for; and as I stated above there are reasons why 99% of a Unix is in C.
Quote:
I try not to write in assembly anymore, mainly because it's very difficult to maintain and to reuse from one program to another. It does help save memory though. I was thinking about writing a nonstandard 24-bit floating point library for AVR to save register memory, but my attempt to write it in C failed pretty badly.

Yup: if you're writing that 3% of code that isn't a premature optimisation, then nothing beats it. But nothing will save you from an exponential algorithm, or you wouldn't be trying to sweat it out with asm. N is not usually large, but when it is the fundamental thing to get right is the algorithm, and the data structures: everything else flows from there (or not.)
Back to top
View user's profile Send private message
NeddySeagoon
Administrator
Administrator


Joined: 05 Jul 2003
Posts: 32099
Location: 56N 3W

PostPosted: Mon Oct 21, 2013 8:20 pm    Post subject: Reply with quote

eccerr0r,

The register bank switch on the Z-80 was really only useful for interrupt routines.
No need to push things onto the stack and pop them off afterwards.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
Back to top
View user's profile Send private message
eccerr0r
Advocate
Advocate


Joined: 01 Jul 2004
Posts: 4004
Location: USA

PostPosted: Mon Oct 21, 2013 9:05 pm    Post subject: Reply with quote

Right, the bank was great for interrupt routines that have other interrupts masked (and hope that NMI doesn't happen), which means you can't use it in user applications, where you don't know when interrupts (like clock or i/o) will happen and overwrite your values. So general purpose applications can't use it. Or are you going to disable interrupts whenever you're using the other bank and pray for no NMI?

Or will people end up pushing everything to stack anyway...
_________________
Intel Core i7 2700K@ 4.1GHz/HD3000 graphics/8GB DDR3/180GB SSD
What am I supposed to be advocating?
Back to top
View user's profile Send private message
NeddySeagoon
Administrator
Administrator


Joined: 05 Jul 2003
Posts: 32099
Location: 56N 3W

PostPosted: Tue Oct 22, 2013 5:55 pm    Post subject: Reply with quote

eccerr0r,

You just connect the NMI pin on the Z-80 to +5v
If you get an NMI then, you have bigger things to worry about :)
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
Back to top
View user's profile Send private message
steveL
Advocate
Advocate


Joined: 13 Sep 2006
Posts: 2717
Location: The Peanut Gallery

PostPosted: Wed Oct 23, 2013 5:11 am    Post subject: Reply with quote

Well, I'm not sure what the firmware did with NMI on the CPC-464, but I just used to DI at the start and get on with it.

I doubt it did much, since you could bank-switch it out (and we did, for video; to the best of my recollection this was 1982, I think, so bear with me ;) and iirc it was only jump vectors in page 0. Interrupts weren't a concern in any case, for me. Sync was to the frame refresh, which I polled once everything had been done for the current cycle, and it was all about making that deadline. That's the sense in which I mean this was not a general-purpose application as you described, eccerr0r: interrupts were always disabled. With hindsight it's what would be called hard-realtime now, I guess, albeit with no real-world disastrous consequence: if you missed that pulse you'd failed, since your program stalled until the next one, which was ages in terms of what got done. At the time I'd never even heard the word "realtime", nor "embedded", but there was a hard constraint on game-engine performance. That's just the way it was, and it wasn't considered anything special: just game coding.

I certainly never did anything with the clock, and the only i/o was a tape drive, which at the time didn't seem a big deal; it was only used to load your game (though cassette copy-protection was a fun diversion when I finally got a disk-drive and the games companies refused a reasonable upgrade.. ;) Oh, and keyboard, display and audio, ofc, but those go without saying. I spent most time, in every sense of the word, on graphics and never did audio; but envy of the C64 made me weep. ;)
Back to top
View user's profile Send private message
eccerr0r
Advocate
Advocate


Joined: 01 Jul 2004
Posts: 4004
Location: USA

PostPosted: Wed Oct 23, 2013 4:49 pm    Post subject: Reply with quote

Well, that's exactly the issue at hand. If you were to switch banks, you had to know exactly what interrupts were coming in or not, and how to deal with them. This means that general purpose software can't use bank swaps unless you could disable the OS and random housekeeping in the system. Seems most people are pointing Z80s at embedded apps - exactly. If you know your application, you can stop using things like NMI and interrupts when they can cause realtime issues.

However, it seems that computers have gotten so fast that people don't even care about cycle timing. Even worse, with the newer CPUs you just don't know how many cycles pass between when the instruction gets read and when the bus responds to that instruction.
_________________
Intel Core i7 2700K@ 4.1GHz/HD3000 graphics/8GB DDR3/180GB SSD
What am I supposed to be advocating?
Back to top
View user's profile Send private message
steveL
Advocate
Advocate


Joined: 13 Sep 2006
Posts: 2717
Location: The Peanut Gallery

PostPosted: Thu Oct 24, 2013 5:41 pm    Post subject: Reply with quote

eccerr0r wrote:
Well, that's exactly the issue at hand. If you were to switch banks, you had to know exactly what interrupts were coming in or not, and how to deal with them. This means that general purpose software can't use bank swaps unless you could disable the OS and random housekeeping in the system. Seems most people are pointing Z80s at embedded apps - exactly. If you know your application, you can stop using things like NMI and interrupts when they can cause realtime issues.

Heh, it's funny to look back on it like that: at the time it didn't seem "embedded"; it felt like my whole world.
Yeah as I remember it, we had double buffer in top 32K of RAM, and switched between each 16K half, at refresh.

What kind of thing were you coding? Sounds complex (and a bit grown-up;)
Quote:
However it seems that computers have gotten so fast that people don't even care about cycle timing. Even worse, the newer CPUs you just don't know how many cycles it takes when the instruction gets read and when the bus responds to that instruction.

Man, I used to know the number of T-states for everything (well, the things I used, not the whole insn set(!) ofc) and add them up as I went along; Zaks was my constant companion. I still consider it one of the best Computing books I own.

On the wider point, I see it as similar to processes running on a multi-tasking OS: write simple code as if we are the only process, and let the system worry about it. The same thinking that leads to small, tight code using minimal RAM helps in the world of i-caches and d-caches; favour sequential access for data streams, since that is a case everyone optimises. The processor pre-"compiling" i-cache sequences is quite freaky to me, but not something I have to worry about, just be aware of. I never did like self-modifying code in any case (it's hard enough getting normal code to work.)
Back to top
View user's profile Send private message
eccerr0r
Advocate
Advocate


Joined: 01 Jul 2004
Posts: 4004
Location: USA

PostPosted: Thu Oct 24, 2013 6:10 pm    Post subject: Reply with quote

Me, actually, I didn't do much software. I had this general purpose computer (a TRS-80 Model III, 2.5MHz Z-80) when I was 10-ish, and that's where I learned BASIC. One program I really wanted to write was a graphics editor I could paint on the screen with (with its hideous graphics resolution). However, when it came time to save creations to disk, I wrote it in BASIC.

It took like a few minutes to write it to disk and read it back. Awful!

So I looked into assembly language. Unfortunately I had no assembler, so I had to hand-assemble.

The two USR() routines I ended up writing for the TRS-80 were one to read/write the screen to disk (that took seconds to do!) and one that filled the screen with random characters, or an arbitrary character. I did this because I was fascinated by such routines written in a few games, and wanted to write something equally fast where BASIC wouldn't do. I had a whiteout routine that someone had written as a template, but I still had to write the assembly language, refer to the Z80 spec sheets to hand-assemble it to machine code, and then POKE it into a variable... Ah, the memories...

I never bankswitched as I didn't need to, it was the foreground task. As the Model III had disk drives and a clock, there'd be plenty of interrupts flying around. I don't remember how DRAM refresh affected user programs but I do recall reading the refresh register to seed my "random" screen routine.

Sometimes I wish SMC had never been created. It's the most vile creation ever made, yet it's almost ingenious for saving space. I disapprove of SMC because it's a PITA to implement on chips, and it seems the only reason for SMC these days is to write viruses. I really hate having to make special RTL code to make sure viruses work. That just seems wrong.
_________________
Intel Core i7 2700K@ 4.1GHz/HD3000 graphics/8GB DDR3/180GB SSD
What am I supposed to be advocating?
Back to top
View user's profile Send private message
Yamakuzure
Veteran
Veteran


Joined: 21 Jun 2006
Posts: 1402
Location: Bardowick, Germany

PostPosted: Fri Oct 25, 2013 10:06 am    Post subject: Reply with quote

A bit back on topic:

If you do C/C++ (or any other compiled language) programming and sometimes have problems with debuggers getting stuck in instructions between your code lines, you'll end up looking at disassembler output. This often looks rather different from what you'd have written in asm yourself. If you have the hex available along with the disassembled code (for gdb, use the /r modifier on the "disas" command), you might find this opcode reference helpful:
http://ref.x86asm.net/

It helped me greatly when working on a JIT compiler. (Inject opcodes directly into memory and execute them.)
_________________
systemd - The biggest fallacies
Back to top
View user's profile Send private message
Reply to topic    Gentoo Forums Forum Index Portage & Programming All times are GMT
Goto page 1, 2  Next
Page 1 of 2

 