Even in our business environment our development work was run overnight, so we too got only one shot a day at running a programme. I agree that it is good training. Personally I somehow never acquired the habit of making mistakes in the first place, because I did what I was taught and nobody taught me to make them. The first programme that I ever wrote on a training course was flawless at the first attempt, and I carried on that way for years. Our managers were scared by the way that I worked, using all the available scheduled time to design my programmes and leaving virtually no time to correct errors because I didn't plan to make any, but they got used to it. Nowadays trial-and-error development is so fast that even I resort to it, though mainly because modern computer systems don't necessarily do what they're supposed to do, or are inadequately documented. I have to be extremely careful building my H200, as the parts that I have are irreplaceable and burning any out could end the project.
I went on a FORTRAN training course in 1972 because our company normally used COBOL, but valuation of our liabilities involved complicated actuarial calculations which would have been inefficient in COBOL, and our actuaries used FORTRAN for their research tasks. I discovered that FORTRAN couldn't read or write the large COBOL tape files that we used then, so instead I wrote a COBOL programme to handle the files, with an EASYCODER module embedded in it to do the calculations. EASYCODER was the assembly language of the H200, but the brilliant design of the H200 hardware meant that it sat halfway between more modern low-level assembler languages and something like BASIC, so it was not too great a strain on the brain. This wasn't anything brand new, just another step in the work already done by IBM, Honeywell and others. By building the replica H200 I can demonstrate the versatility of the H200 machine language better than by simply writing an emulator, which I also need to do anyway.
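As a taste of what such an emulator has to deal with, here is a minimal sketch in Python of fetching one variable-length instruction from a character-addressable memory. It assumes a simplified model in which each six-bit character carries a word-mark punctuation bit and an instruction runs from one word mark up to, but not including, the next; the opcode values and the demo program are invented for illustration rather than taken from the real H200 instruction set.

    # Minimal sketch of variable-length instruction fetch on a
    # character-addressable machine with word marks (simplified H200-style).
    # Each memory cell holds a six-bit character plus a word-mark flag;
    # an instruction starts at a word-marked character and extends up to,
    # but not including, the next word-marked character.

    from dataclasses import dataclass

    @dataclass
    class Char:
        value: int        # six-bit character, 0..63
        word_mark: bool   # punctuation bit delimiting fields and instructions

    def fetch_instruction(memory, pc):
        """Return the characters of one instruction and the next PC."""
        assert memory[pc].word_mark, "instructions must begin on a word mark"
        chars = [memory[pc].value]
        addr = pc + 1
        # Gather characters until the next word mark (the start of the
        # next instruction) or the end of memory.
        while addr < len(memory) and not memory[addr].word_mark:
            chars.append(memory[addr].value)
            addr += 1
        return chars, addr

    # Tiny demo: two 'instructions' of different lengths, set apart purely
    # by where the word marks fall, not by anything in the opcodes.
    demo = [Char(0o30, True), Char(1, False), Char(2, False), Char(3, False),
            Char(0o65, True), Char(7, False)]
    instruction, next_pc = fetch_instruction(demo, 0)
    print(instruction, next_pc)   # [24, 1, 2, 3] 4

Because the word marks rather than the opcode decide where an instruction ends, the same machine can mix instructions of quite different lengths, which is part of the flexibility that makes the machine language worth demonstrating on real hardware.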
Computer architecture is a balancing act between cost, performance and complexity. Magnetic core memory was extremely expensive when it was made by hand, so early computer logic did as much as possible in one instruction to keep programmes small. Semiconductor RAM became very cheap and processors became much faster, so instructions could do less and RISC processors became viable. Then processors became so complex that they could do highly specialised tasks again, like the video processors in modern gaming computers. The H200 is an example of where this balancing act started. I have several Honeywell Level 6 and DPS6 minicomputers which I am about to donate to a computer museum. They represent an interesting transitional phase in computer architecture, having RISC-style bit-slice processors that execute microcode held in ROM to implement the more complex machine language that the programmes actually use. If you changed the internal plug-in ROMs in a DPS6 it could behave like some other sixteen-bit computer, which would be fun; there is a toy sketch of the idea below. Perhaps one could even be converted into a PC. I've heard that they were versatile enough to be used on the Space Shuttle.
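As a rough illustration of that microcode arrangement, here is a hypothetical sketch in Python in which the visible instruction set is nothing more than a table of micro-operation sequences: swapping in a different table, the software equivalent of changing the plug-in ROMs, makes the same underlying engine present a different machine language. The opcodes and micro-operations are invented for the example and bear no relation to the real DPS6 microcode.

    # Hypothetical sketch of a microcoded processor: the visible instruction
    # set is defined entirely by a 'ROM' mapping opcodes to sequences of
    # micro-operations that the underlying engine executes.

    def micro_load(state, operand):   # move a value into the accumulator
        state["acc"] = operand

    def micro_add(state, operand):    # add a value to the accumulator
        state["acc"] += operand

    def micro_store(state, operand):  # write the accumulator to 'memory'
        state["mem"][operand] = state["acc"]

    # One plug-in ROM: opcode -> list of micro-operations.
    rom_a = {0x01: [micro_load], 0x02: [micro_add], 0x03: [micro_store]}

    # A different ROM giving the same engine a different visible instruction
    # set, including an 'add and store' opcode built from two micro-ops.
    rom_b = {0x10: [micro_load], 0x11: [micro_add, micro_store]}

    def run(rom, program):
        """Execute (opcode, operand) pairs by expanding them through the ROM."""
        state = {"acc": 0, "mem": {}}
        for opcode, operand in program:
            for micro_op in rom[opcode]:
                micro_op(state, operand)
        return state

    print(run(rom_a, [(0x01, 5), (0x02, 3), (0x03, 0)]))  # {'acc': 8, 'mem': {0: 8}}
    print(run(rom_b, [(0x10, 5), (0x11, 3)]))             # {'acc': 8, 'mem': {3: 8}}

The point is that nothing in the engine itself fixes the instruction set; change the tables and the same hardware answers to a different machine language, which is exactly what swapping the DPS6 ROMs would have exploited.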