
Coding and computer logic for a complete beginner

gladders

Hi all,

I'm going to dip my toes in to coding. I'm checking out Code Academy as a start, which begins with HTML as a nice simple introduction. I'm hoping I can use these basic coding principles on some of my old machines.

But as more of a curiosity than anything I want to seriously get into, I have to ask, and this may be a real dense question and I apologise for it. I understand that languages like HTML, C and Java and so on are useful as they turn human-intelligible text into binary values that computers can understand and execute.

But what *is* it about all those ones and zeroes that, say, can culminate in something non-numerical like a spaceship on Defender or a drop-down menu on a Mac?

I can picture, for example, those values being applied to things like numerical calculation, as it's numbers. But how does, say, 4K of numbers magically produce Space Invaders or Visicalc?

I realise this may be a massively complex thing to answer that can't be done simply...but anyone want to bite?
 
There are a lot of textbooks covering compiler design. http://www.diku.dk/hjemmesider/ansatte/torbenm/Basics/ has a free PDF text which doesn't seem too bad at a quick glance.

Assemblers are the piece lying between compilers and machine code. Assembly code has a readable equivalent for each machine instruction at each step of the program. Some early compilers didn't produce machine code directly but instead output assembly instructions that could in turn be turned into machine code. Look at the output of one of those and the specific mechanics become readily visible. It also shows how some constructs produce poor code.

As a simple example, consider an empty for loop. That would translate to instructions to store the initial values, to add to the counter, to compare the counter to the end value, and to jump back to the beginning of the loop if the end value hasn't been reached. A line of high-level code could equate to 20 lines of assembly.
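
For the curious, here's a minimal sketch of that in C, with one plausible translation written out as a comment. The "assembly" is generic pseudo-assembly for illustration only, not the output of any particular compiler.

int main(void)
{
    int i;

    /* An empty counted loop. */
    for (i = 0; i < 10; i++) {
        /* nothing to do */
    }

    /* One plausible translation into generic pseudo-assembly
       (illustrative only; real output depends on the CPU and compiler):

           MOVE    i, 0        ; store the initial value
       loop:
           COMPARE i, 10       ; compare the counter to the end value
           JUMP_GE done        ; leave when the end value has been reached
           ADD     i, 1        ; add 1 to the counter
           JUMP    loop        ; jump back to the beginning of the loop
       done:
    */

    return 0;
}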
 
“[The Analytical Engine] might act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations, and which should be also susceptible of adaptations to the action of the operating notation and mechanism of the engine. Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.” -- Ada Lovelace


A computer pushes ones and zeros around. Programmers assign meaning to the ones and zeros. How do you make Visicalc out of ones and zeros? A display is a grid of pixels, each pixel with a numeric brightness. Turn pixels on and off by pushing ones and zeros at the screen. Note that you don't want duplicate code for writing to each place on the screen, so abstract a piece of code that has three inputs -- on/off, x, y. Note that you don't want duplicate code for displaying a given digit, so abstract a piece of code that draws a '0' on the screen at x,y, a piece that draws a '1', and so on. Abstract these together to make a piece with three inputs -- x, y, n -- with n a number between 0 and 9. Add some logic to write a multi-digit value at x,y: it does divide-by-10 logic to figure out the digits and writes them out, adding a value to x so that the digits sit next to each other. A small amount of coding and a lot of abstraction, and now you can write arbitrary numbers at arbitrary locations on the screen.
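
Here's a rough sketch of that layering in C. set_pixel() just stands in for whatever the hardware or operating system actually provides, and the tiny font table is hypothetical, with only '0' and '1' filled in.

#include <stdio.h>

/* Lowest layer: turn one pixel on or off at (x, y).
   Stubbed with printf here so the sketch is self-contained. */
static void set_pixel(int on, int x, int y)
{
    printf("pixel %s at (%d,%d)\n", on ? "on" : "off", x, y);
}

/* A tiny 3x5 bitmap font; 1 bits are lit pixels. Only '0' and '1' given. */
static const unsigned char font[10][5] = {
    { 0x7, 0x5, 0x5, 0x5, 0x7 },   /* '0' */
    { 0x2, 0x6, 0x2, 0x2, 0x7 },   /* '1' */
    /* digits 2..9 would be filled in the same way */
};

/* Middle layer: draw one digit n (0..9) with its top-left corner at (x, y). */
static void draw_digit(int x, int y, int n)
{
    for (int row = 0; row < 5; row++)
        for (int col = 0; col < 3; col++)
            set_pixel((font[n][row] >> (2 - col)) & 1, x + col, y + row);
}

/* Top layer: draw a whole non-negative number at (x, y),
   using divide-by-10 logic to pick the digits apart. */
static void draw_number(int x, int y, int value)
{
    int digits[10], count = 0;

    do {                              /* peel digits off, least significant first */
        digits[count++] = value % 10;
        value /= 10;
    } while (value > 0);

    for (int i = count - 1; i >= 0; i--) {   /* write them left to right */
        draw_digit(x, y, digits[i]);
        x += 4;                       /* 3 pixels wide plus 1 pixel of spacing */
    }
}

int main(void)
{
    draw_number(10, 10, 10);   /* draws '1' then '0' side by side */
    return 0;
}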

Pressing a key on the keyboard sends a number to the code; the h/w provides an abstraction here -- the 'A' key sends the number 65. Allocate two memory locations and assign them the semantics of 'current row' and 'current column'. When the 'up arrow' key is pressed, subtract 1 from 'current row'; for 'down arrow', add 1. 'Left' and 'right' subtract 1 from or add 1 to 'current column'.
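
In C that might look something like the sketch below. The KEY_* values are made-up placeholders, since the real numbers depend on what the machine's keyboard hardware actually sends.

#include <stdio.h>

enum { KEY_UP = 128, KEY_DOWN = 129, KEY_LEFT = 130, KEY_RIGHT = 131 };

static int current_row = 0;    /* two memory locations with assigned meaning */
static int current_col = 0;

static void handle_key(int key)
{
    if (key == KEY_UP)         current_row -= 1;
    else if (key == KEY_DOWN)  current_row += 1;
    else if (key == KEY_LEFT)  current_col -= 1;
    else if (key == KEY_RIGHT) current_col += 1;
    /* 'A' would arrive here as 65, '0'..'9' as 48..57, and so on */
}

int main(void)
{
    handle_key(KEY_DOWN);
    handle_key(KEY_RIGHT);
    printf("cursor now at row %d, column %d\n", current_row, current_col);
    return 0;
}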

Allocate an array of memory to hold cell values. (Start simple; assume a small, fixed maximum size -- say 10 rows and 10 columns.) When a digit key is pressed on the keyboard, find the word of memory in the array that 'current row' and 'current col' indicate, multiply that value by 10 and add in the value of the digit pressed on the keyboard, and write it on the screen at the appropriate location.
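
Continuing the sketch: a small fixed-size grid plus the digit-entry logic. The draw_number() here is just a stand-in for the drawing routine sketched earlier, and '0'..'9' arriving as 48..57 follows the same convention that delivers 'A' as 65.

#include <stdio.h>

#define ROWS 10
#define COLS 10

static int cells[ROWS][COLS];          /* the cell values, all starting at 0 */
static int current_row = 0;            /* same two "meaningful" locations as before */
static int current_col = 0;

/* Stand-in for the draw_number() from the earlier drawing sketch. */
static void draw_number(int x, int y, int value)
{
    printf("draw %d at (%d,%d)\n", value, x, y);
}

/* A digit key arrives as a number: '0'..'9' come in as 48..57. */
static void handle_digit_key(int key)
{
    if (key >= 48 && key <= 57) {
        int *cell = &cells[current_row][current_col];

        *cell = *cell * 10 + (key - 48);                        /* append the digit */
        draw_number(current_col * 32, current_row * 8, *cell);  /* redraw the cell */
    }
}

int main(void)
{
    handle_digit_key(52);   /* '4' */
    handle_digit_key(50);   /* '2' -> the current cell now holds 42 */
    return 0;
}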

Now you can move around a spreadsheet and enter numbers. It's not Visicalc, but it is code at the heart of it.

You get from ones and zeros to Visicalc with tools --
* The fundamental instructions of the computer -- add, subtract, save to memory, recall from memory, compare and do different things based on the comparison result.
* Semantics -- define what a given collection of ones and zeroes represents -- a key on the keyboard, current row, the number typed in.
* Abstraction -- collect together specific cases into abstract cases -- turning pixels on and off --> writing a digit --> writing a number.
* Using existing tools -- it turns out that you don't need to write the code that turns pixels on and off yourself; these functions are usually supplied by the host operating system and programming libraries. But if you want to understand how to get from ones and zeroes to Visicalc, it helps to understand what they are doing.

Visicalc as a whole is a mind-bogglingly complex piece of software, but each piece can be understood as a construct that ties together other pieces by extending the semantics of those pieces to solve a particular problem. (In the example above, the code that looks at the arrow keys to move the current location extends the semantics of the 'up arrow' to mean 'change the current location to the cell above the current location' by subtracting one from the current row. For Visicalc, it also needs to prevent 'going off the edge' of the spreadsheet with range checking, redraw the spreadsheet at an offset so as to bring a row or column into view, and probably do a bunch of other stuff.)

When you read the textbooks, you will learn how to draw boxes on the screen, how to evaluate an equation entered on the keyboard, how to efficiently allocate memory for the spreadsheet data, and hundreds of other techniques -- and how to combine these to build an application.

-- Charles
 
The numbers in the computer fall into two broad categories: code and data. 'Code' numbers are the operation codes that the processor can interpret. 'Data' is pretty much everything else.

Regarding code, the processor has a number of simple instructions (add 2 numbers, compare 2 numbers, jump to a different instruction if the most recent result was zero, and so on). Each instruction is represented by a number, which means a sequence of instructions takes the form of a sequence of numbers.
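
To make that concrete, here's a tiny illustration in C: a handful of raw numbers intended to correspond to Intel 8080 instructions. The mnemonics are in the comments; treat the exact opcode values as illustrative and check a real opcode table before relying on them.

#include <stdio.h>

int main(void)
{
    /* A tiny program expressed as plain numbers. */
    unsigned char program[] = {
        0x3E, 0x05,   /* MVI A, 5   ; load the value 5 into register A */
        0xC6, 0x03,   /* ADI 3      ; add 3 to register A              */
        0x76          /* HLT        ; stop                             */
    };

    for (size_t i = 0; i < sizeof program; i++)
        printf("%02X ", (unsigned)program[i]);   /* the "code", printed as numbers */
    printf("\n");
    return 0;
}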

Likewise, numbers can also be data. For example, a screen showing a picture of a spaceship is composed of pixels, where each pixel is represented by a number stored in a particular memory location (or in certain video modes, by a small group of numbers). The video hardware then generates video output by reading those memory locations in the correct sequence and translating those numbers into screen colors or screen brightnesses.

It's also common to use numbers to represent text, by assigning a number to each letter of the alphabet. Sequences of characters are thus stored as sequences of numbers. Likewise, audio can be encoded as sequences of numbers.
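
A quick C illustration of the text convention (ASCII, in this case): the same bytes print as numbers or as characters, depending on what the program decides they mean.

#include <stdio.h>

int main(void)
{
    /* The same five numbers, read two ways. In ASCII, 72 is 'H',
       101 is 'e', 108 is 'l', 111 is 'o'. */
    unsigned char bytes[] = { 72, 101, 108, 108, 111 };

    for (int i = 0; i < 5; i++)
        printf("%d ", bytes[i]);        /* as numbers: 72 101 108 108 111 */
    printf("\n");

    for (int i = 0; i < 5; i++)
        printf("%c", bytes[i]);         /* as text: Hello */
    printf("\n");
    return 0;
}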

If you have a processor that is able to manipulate numbers, and you have conventions on how to represent text, audio and graphics as numbers, then you have a processor that is able to manipulate all those things as well.

-ken
 
I would not call C human readable. It's human decipherable.

I would not recommend starting with HTML. Many of us started with machine language or BASIC. I would highly recommend the machine language route. There are simple processors with simple machine language, and trainers and emulators for them.

There's nothing in the world that I know of that's better to get started than this thing:

http://www.ebay.com/itm/Science-Fai...119048?hash=item2a83fe79c8:g:qoMAAOSwuLZY4Zc8

Now, those are getting rare, and sometimes you can find other equivalent things, especially from Heath. I have no experience with those, but I assume they are just as good.
 
First, you need to be able to represent things as numbers. A graphical thing can be a bunch of numbers in an array, for instance. Letters can be ASCII encoded. Next, the most important part that makes a computer something other than a calculator is the ability to make decisions based on inputs. Actually creating a complex program comes down to breaking complex things into a number of simpler steps.

Start out with a general plan. Divide the plan into a number of easier-to-understand pieces. The most difficult thing is coming up with the actual idea in the first place. To learn, you can copy someone else's idea. Start with something simple and then do more as you get some experience. Something like making digital dice: read up on how to create random numbers, then decide how you want to display them and how the user interacts with them to roll the dice.
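
Digital dice really do fit in a handful of lines. Here's a sketch in C using the standard library's rand() as the random-number source; a real old machine might read a timer or count keystrokes instead.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    char line[16];

    srand((unsigned) time(NULL));          /* seed the random-number generator */

    printf("Press Enter to roll the dice (q to quit).\n");
    while (fgets(line, sizeof line, stdin) != NULL && line[0] != 'q') {
        int die1 = rand() % 6 + 1;         /* map the random number to 1..6 */
        int die2 = rand() % 6 + 1;
        printf("You rolled %d and %d. Press Enter to roll again, q to quit.\n",
               die1, die2);
    }
    return 0;
}
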
Look at other people's source code when you can and see how they solve particular problems. I should add that one of the hardest things to learn is that you will make many, many mistakes. Debugging code that you know you got right is one of the hardest things. You have to set aside your pride and look at it not as something you did, but just as something that obviously has something wrong with it and needs fixing.
Dwight
 
I'm going to dip my toes in to coding. I'm checking out Code Academy as a start, which begins with HTML as a nice simple introduction. I'm hoping I can use these basic coding principles on some of my old machines.

If you want to learn programming for and on your old machines, I recommend checking out one of the "Learn to program in BASIC" books that came out for your platform the first year or two your platform was around. It will gently introduce you to programming concepts, and give you small quick wins on your old system so you are motivated to continue.

But what *is* it about all those ones and zeroes that, say, can culminate in something non-numerical like a spaceship on Defender or a drop-down menu on a Mac?

For a complete beginner, it might be best to think of programming like cooking food. How do you cook food? To cook food, you need raw ingredients, cooking utensils, and recipes. You read the recipe, which tells you which utensils to use on which ingredients and in which order. The end result is a meal. How does this relate to programming? Like this:

Cooking utensils = CPU instructions
Raw ingredients = data
Recipe = A computer program

A program (recipe) is a list of steps that use different CPU instructions (utensils) to process data (ingredients) into a new format (a meal).

Okay, but how does one program turn into something very complex, like a game or a drop-down menu? We can explain this by extending our cooking metaphor: Let's say you had to provide a ton of different, complex meals for a large catered event. To do this, you have several recipes, each producing a different result. For such a large event, it's possible that some recipes exist only to create portions of food that themselves are used in larger recipes for a more complex meal. So the process is no different than making a single meal -- you're just doing it a lot more, with more recipes, for a bigger and more complex result. How does this relate to programming? Like this:

Catered event = A complex thing the user sees on the screen
Many recipes = Many different programs
Smaller recipes that produce food to be fed into the larger recipes = Subroutines, library calls, API calls
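
In code, a "smaller recipe used inside a bigger one" is just a subroutine. A toy sketch in C, with made-up function names purely for illustration:

#include <stdio.h>

/* A small "recipe" that produces one ingredient of the final result. */
static int area_of_cell(int width, int height)
{
    return width * height;
}

/* A bigger "recipe" that calls the smaller one several times. */
static int area_of_row(int cell_width, int cell_height, int cells)
{
    int total = 0;
    for (int i = 0; i < cells; i++)
        total += area_of_cell(cell_width, cell_height);
    return total;
}

int main(void)
{
    printf("Row area: %d\n", area_of_row(8, 10, 5));   /* prints 400 */
    return 0;
}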

Hope this helps. And now, if you'll excuse me, I'll go back to eating my lunch.
 
I learned BASIC by typing in programs from computer magazines back in the day. I always found things I didn't like or thought of enhancements. Debugging typos and customizing these programs gave me a great start. You can buy old books with type-in programs in BASIC. It doesn't really matter whether you actually want the programs; getting them to work and then modifying things to see what happens will give you a good start. From there, I found Pascal to be very easy to learn. It is much like BASIC, but more powerful.
 
If you want to learn programming for and on your old machines, I recommend checking out one of the "Learn to program in BASIC" books that came out for your platform the first year or two your platform was around. It will gently introduce you to programming concepts, and give you small quick wins on your old system so you are motivated to continue.
"It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration."
Edsger Dijkstra
Read more at: https://www.brainyquote.com/quotes/authors/e/edsger_dijkstra.html
;)
 
He may well have written a paper saying why he hates Pascal, too.
Well, he does also say
"Object-oriented programming is an exceptionally bad idea which could only have originated in California."
and
"APL is a mistake, carried through to perfection. It is the language of the future for the programming techniques of the past: it creates a new generation of coding bums."
not to mention
"The use of COBOL cripples the mind; its teaching should, therefore, be regarded as a criminal offense."

Note the ;) in my OP; I'm actually a big fan of BASIC (and COBOL for that matter...)
 
Dijkstra was full of it.

I agree...I think BASIC is a great first language. When I was in college, the language used in CS101 was Java. It was easy to tell who had no prior programming experience, because they were confused as hell.
 
Choose a language that fits your needs. Needs differ; that's why there are so many.

But nothing will take the place of knowing the basic hardware underneath it all.

That may be a dated idea, however. Consider the steep learning curve of programming "bare iron" of some of the more recent ARM MCUs. I can see where that could be daunting to a beginner.
 
He may well have written a paper saying why he hates Pascal, too.
I remember picking up a copy of From BASIC to Pascal back when I was first starting to learn to program and marveling at just how bone-headed Pascal was (at least in its early incarnations.) Even as a kid, I could see how stupid it was to make the size of an array an immutable part of its type, and I couldn't begin to fathom how anybody was supposed to write practical applications in it. It absolutely boggles my mind that anybody ever used it as a teaching language.

That may be a dated idea, however. Consider the steep learning curve of programming "bare iron" of some of the more recent ARM MCUs. I can see where that could be daunting to a beginner.
This is very true. It's a shame that there isn't a good "beginner board" for machine-language programming out there these days - seems like everything is either 8-bit MCUs (capable, sure, but not great for someone who's just learning the ropes) or 32-bit RISC solutions (which, as you note, require so much setup just to get things running and aren't really designed with machine-language programming in mind.) Maybe somebody who's handy with FPGAs should roll up a basic "home computer" system designed around an LSI-11 clone or something like that...
 
...and, in the case of ARM MCUs, you have another layer to deal with if you program in C or C++--support libraries. Often buggy when furnished by the chip manufacturer--and worse, subject to change. Consider the STMicro line--used to be that you'd code using their Standard Peripheral Libraries. Now deprecated--new designs are to use STM32Cube. You could use a third-party library, such as libopencm, but that's no guarantee of success--and they're often just as buggy and incomplete. Your best bet is to use CMSIS, which is at least a required ARM standard--but it's very low-level and you'd best be programming with your notebook open to the MCU datasheet. Even simple timers have morphed into rather complicated devices.

Definitely not for neophytes.
 
When I write machine code I often look at the library code and just remove the noise, keeping the part I want to use. Seeing how things are solved at a lower level is worth the additional effort. There will always come a day when there is no library to solve your particular problem. Seeing what is in the libraries and how they work can often give you the ideas you need to solve it yourself. Even buggy libraries have value.
Dwight
 
The problem, Dwight, is that manufacturer-specific libraries don't always have source. See any here? Suppose that you change vendors (lots of companies, for example, offer ARM-based MCUs)? Given that there's no standard set of library calls, you're out of luck.

CMSIS is about as close to a real standard as there is. It's not a library set, but rather a set of symbol definitions.
 
I'm going to dip my toes in to coding.

A life-long adventure; demanding but well worth the effort.

But what *is* it about all those ones and zeroes that, say, can culminate in something non-numerical like a spaceship on Defender or a drop-down menu on a Mac?

I can picture, for example, those values being applied to things like numerical calculation, as it's numbers. But how does, say, 4K of numbers magically produce Space Invaders or Visicalc?

Any computer is limited in what it can do by two factors: hardware and software. The hardware contains the circuits that define what the machine can theoretically accomplish (what it can do, how much and how fast) and the software defines what it actually can do within those parameters.

A Kaypro II computer, for example, could only show text on its screen, 80 characters per line, 25 lines per screen. There was no ability to light up just one dot on the screen, a feature known as "dot-addressable graphics". The Apple and Commodore machines added this capability with new hardware ("chips") and programmers turned this ability into Defender spaceships and other wonders.
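
A rough way to picture the difference in C: on a character-mapped display the smallest thing you can change is a whole character cell, while on a dot-addressable display it's a single pixel. The array sizes and layouts below are generic placeholders, not any particular machine's memory map.

/* Character-mapped display: e.g. 25 rows of 80 character cells. */
static char text_screen[25][80];

static void put_char_cell(int row, int col, char c)
{
    text_screen[row][col] = c;
}

/* Dot-addressable (bitmapped) display: here packed 8 pixels per byte. */
static unsigned char bitmap[200][320 / 8];

static void put_pixel(int x, int y, int on)
{
    if (on)
        bitmap[y][x / 8] |= (unsigned char)(0x80 >> (x % 8));
    else
        bitmap[y][x / 8] &= (unsigned char)~(0x80 >> (x % 8));
}

int main(void)
{
    put_char_cell(0, 0, 'A');   /* text-only machines stop here */
    put_pixel(10, 20, 1);       /* graphics machines can light one dot */
    return 0;
}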

Every machine has as its "core chip" a processor of some sort, and different processors, either from different manufacturers or even the same manufacturer in different eras, have different capabilities, called "instruction sets". And there is more than one way of arranging the circuits in a computer. Since it is really only sequences of high and low voltages, representing the binary digits 0 and 1 (or 1 and 0), that the processor understands and acts upon, "machine language" means just that: the particular pattern of 1s and 0s that the processor understands.

Machine language is hard for people to understand and keep straight in their heads. But that's where all computer programming begins and ends. All other "languages", C, Pascal, Cobol, Assembly, Basic or whatever, perform some sort of translation of the programmer's intentions - as best they can be expressed and as best as the program can understand them - into a series of ones and zeroes as their final or "executable" product.

Which one is best for a beginner? (Tastes great? Less filling?) Basic is probably most accessible; it's easy to see your results immediately. Assembly is probably most powerful, but has a steep learning curve. C and its successor C++ are very popular if you want to work in the Arduino single-board area.

Ultimately it's up to you, and there are new "flavors" coming along all the time. You can't code in the same bitstream twice, as the Zen masters say. But the concepts of programming are the same whatever language you decide upon. Learning to use them appropriately, concisely and effectively is where the lifetime of study (and practice!) comes in.

Good luck with your endeavors.

-CH-

 
Basic is my least favorite language. Other than assembly for the 8080, Basic was my first actual language. From there I did PL/M on Intel machines. Later I found Forth. I've piddled in other languages such as Pascal and C as needed, but I haven't found anything as productive as Forth. My feeling about Basic is that one will quickly hit a wall. It is just clumsy. The world just isn't flat.
Dwight
 