
What makes COBOL well suited for commercial applications?

About the PERFORM statement (there are lots of variations).
Given paragraphs named P1...P10: You can enter any of the paragraphs and execute them in sequence with a GOTO and drop out the bottom (P10), or you can treat them as parts of a subroutine (PERFORM ...THRU...). I'm not aware of any other HLL that allows this sort of thing--of course, you can code workarounds that accomplish the same thing.

COBOL's rules for data conversion can be rather complex. Note also that decimal and binary arithmetic are explicitly supported, as well as fixed-point (not simply integer, but scaled integers with a specified decimal point). PICTURE clauses in the Data Division are particularly powerful. PL/I adopted much of this from COBOL--and you can even see a bit of it in PL/M.
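
As a rough modern analogy (this is Python's decimal module, not COBOL semantics, and the field width and rounding choices below are just illustrative assumptions), scaled fixed-point arithmetic plus PICTURE-style edited output looks something like this:

Code:
# Analogy only: Python's decimal module standing in for a COBOL field
# declared something like PIC 9(7)V99 (seven integer digits, two decimals).
from decimal import Decimal, ROUND_HALF_UP

CENT = Decimal('0.01')                       # implied scale: two decimal places

price    = Decimal('19.99')
quantity = Decimal('3')
tax_rate = Decimal('0.0825')

subtotal = (price * quantity).quantize(CENT, rounding=ROUND_HALF_UP)
total    = (subtotal * (1 + tax_rate)).quantize(CENT, rounding=ROUND_HALF_UP)

# Roughly what a PICTURE edit mask like ZZ,ZZ9.99 gives you on output:
print(f'{total:>12,.2f}')                    # e.g. '       64.92'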

Of course, you can emulate this in any language that claims Turing completeness, even Brainf*ck.
 
About the PERFORM statement (there are lots of variations).
Given paragraphs named P1...P10: You can enter any of the paragraphs and execute them in sequence with a GOTO and drop out the bottom (P10), or you can treat them as parts of a subroutine (PERFORM ...THRU...). I'm not aware of any other HLL that allows this sort of thing...
I'm surprised you've never heard of Python, Ruby, or a host of similar languages. :-)

Here's one version of what the resulting syntax might look like using a Python perform() function.

Code:
# x is assumed to be a module-level variable, hence the global declarations
def f1000_display_foo(): print('foo')
def f2000_add_one():     global x; x = x + 1
def f3000_display_x():   print(x)
def f4000_clear():       global x; x = 0

perform('f')
perform('f2000', through='f3000', times=3)
perform('f2000', through='f3000', varying=('x', range(3, 8)))
perform('f3000', varying=('x', range(1, 100, 4)), test='before', until=lambda x: x > 10)

This kind of thing is not at all difficult to do in any language with the most basic functional programming techniques (essentially, functions as first-class objects) and introspection. (And you can get by without the introspection if you are willing to make a list of the paragraphs, e.g. def f(): ...; paras = [f, g, h], and doing that is probably clearer anyway.) That's quite a number of languages since the '90s. (And of course it's been in Lisp since the '60s.)
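
For what it's worth, here is a minimal sketch of the introspection version. The perform() function, its keyword arguments, and the name-prefix convention are all assumptions for illustration; it covers only the plain and THRU/TIMES forms above, not VARYING or UNTIL, and it assumes the f1000...f4000 paragraph functions defined earlier.

Code:
import inspect

def perform(start, through=None, times=1):
    # Collect the module-level "paragraph" functions, in name order.
    paragraphs = sorted(
        (name, func) for name, func in globals().items()
        if inspect.isfunction(func) and func is not perform
    )
    if through is None:
        through = start                     # a single paragraph (or name prefix)
    for _ in range(times):
        for name, para in paragraphs:
            # Select everything from `start` up to and including `through`
            # (by name order) and fall through each paragraph in turn.
            if start <= name <= through + '\uffff':
                para()

x = 0
perform('f')                                # f1000 .. f4000 from the block above
perform('f2000', through='f3000', times=3)  # f2000 and f3000, three times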

--of course, you can code workarounds that accomplish the same thing.
I think it's more reasonable not to accept "workarounds" that don't give you essentially the same syntax. If language A gives you significantly better syntax than language B for doing something you need to do, that clearly makes it a better language for that particular purpose. The point of languages is not just to allow you to do something, but to allow you to clearly express what you're wanting to do.

So, when you say,
Of course, you can emulate this in any language that claims Turing completeness, even Brainf*ck.
I disagree; you cannot do this in Brainf*ck. You can substitute an entirely different program that produces the same output, but if the program doesn't express the intention, and it's difficult to modify in the ways you can easily modify the original program, it's clearly vastly inferior.
 
def f1000_display_foo(): print('foo')
def f2000_add_one():     global x; x = x + 1
def f3000_display_x():   print(x)
def f4000_clear():       global x; x = 0
How about the first form that I cited? "goto f1000" with the execution path dropping through f4000 and continuing to the next statement.

I disagree; you cannot do this in Brainf*ck.
Yes you can, even if you have to construct an emulator in Brainf*ck for a machine that can run COBOL. It's not going to be simple, but it can be done. From WikiP:
Brainfuck is an example of a so-called Turing tarpit: it can be used to write any program, but it is not practical to do so because it provides so little abstraction that the programs get very long or complicated. While Brainfuck is fully Turing-complete, it is not intended for practical use but to challenge and amuse programmers. Brainfuck requires one to break down commands into small and simple instructions.

On a tangential note, many years ago (>50), I worked with a very smart fellow who started at IBM working on the COMTRAN implementation. When he was faced with being assigned to the PL/I group, he quit, moved to California, and hung out with a bunch of gypsies (his term) in the Santa Cruz mountains. He worked under contract to CDC to keep his pockets lined.

He talked about accepting a challenge from Bill Norris and implementing COBOL on a 6000-series PPU (4K of 12-bit words).
 
In another thread some folks touched on the "rubbishing of COBOL," and @thunter0512 mentioned that:

I've never written any COBOL, just read it here and there, and it doesn't strike me as a very good language due to its verbosity. (Then again, I'm a rather experienced programmer; plenty of people seem to think that verbosity isn't a problem, or is even good for the vast majority of lesser-experienced programmers out there—see Rob Pike on Go for more on this sort of attitude.)

I have written plenty of commercial code, including several financial systems, ranging from accounting for ISPs to trading systems. I can't really imagine using COBOL for any of those.

So what is it that makes COBOL good for commercial applications, and is that as applicable today as it was back in the '60s and '70s? Is it really the case that the bulk of the finance sector is still using COBOL in their back-end systems? (I've never seen any.) And are they using it because it's good, or just because it's legacy software that would be expensive and error-prone to replace?
Let's assume we are talking 1960s IBM 360-era code. So the choices were FORTRAN, COBOL, assembler, and perhaps RPG or PL/I.

We wrote in COBOL because it was the least worst language available. Our managers loved it because they thought they could understand it.

The ability to do decimal arithmetic sets it apart from most other languages. That was important in Finance.
 
How about the first form that I cited? "goto f1000" with the execution path dropping through f4000 and continuing to the next statement.
Ah, that one you can't do without a GOTO, which Python doesn't have.

Nonetheless, I think it's pretty clear that, overall, even Python has far better and more flexible control structures than COBOL.

Yes you can [do this in Brainfuck], even if you have to construct an emulator in Brainf*ck for a machine that can run COBOL. It's not going to be simple, but it can be done.
But that's not Brainf*ck supporting COBOL syntax and constructions, any more than IBM 360 assembly language or C support COBOL syntax and constructions.

Compare with my example above, where the syntax is very similar to COBOL, but the code is actually Python code, not a compiler for another language or an emulator for another system. (And reasonably natural Python code, to boot, albeit you are unlikely to want that exact control structure. But that general technique for developing your own custom control structures is widely used in many languages, even C to some degree. Just not COBOL or BASIC or similar languages without first class references to functions.)
 
We wrote in COBOL because it was the least worst language available. Our managers loved it because they thought they could understand it.
Lol, that makes sense.

The ability to do decimal arithmetic sets it apart from most other languages. That was important in Finance.
Could you expand a little on what this "decimal" arithmetic was, and what advantages it gave you?

I'm familiar with two forms of decimal arithmetic. One is BCD representations of integers (as seen on 8080, 6502, etc.), which is really just plain old binary integer arithmetic with the conversions being done differently in different places. That can be useful for efficiency, but isn't used much these days because those efficiency gains no longer matter compared to other areas where you can look for efficiency. The other is decimal-based floating point, which is indeed different from binary floating point (each can represent numbers that the other can't), but both have the same accuracy issues; you'll just see the inaccuracies with different numbers.
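
To make that concrete, here's a quick sketch with Python's decimal module standing in for decimal floating point (the precision setting is an arbitrary choice):

Code:
# Both radixes round somewhere; they just round different numbers.
from decimal import Decimal, getcontext

getcontext().prec = 28                      # 28-digit decimal floating point

print(0.1 + 0.2 == 0.3)                     # False: 0.1 and 0.2 are inexact in binary
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))   # True: exact in decimal
third = Decimal(1) / Decimal(3)
print(third * 3 == 1)                       # False: 1/3 is inexact in decimal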
 
Regarding DECIMAL: what that means in my experience is BCD (some mainframes of that era actually did arithmetic in BCD, possibly exclusively). The reason it is valuable is that it greatly simplifies the printing of numbers in human-readable form (i.e. as decimal numbers). On the old machines, that was a significant optimization. Many of these machines actually mapped the character codes 00-09 to the digits '0' through '9' on output devices (printers, punch cards), so there was essentially no translation at all. In addition, using BCD avoids the errors that crept into floating point operations and decimal/binary/decimal translations. Of course, floating point was even worse as far as expense goes, either for the hardware to support it directly or the software overhead to implement it (in the absence of hardware).
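
As a rough illustration of why the printing is nearly free, here is a hypothetical Python sketch that unpacks IBM-style packed decimal into characters using nothing but nibble shifts (the C-positive/D-negative sign convention is the common one; no division or binary-to-decimal conversion is involved):

Code:
def packed_decimal_to_text(data: bytes) -> str:
    """Unpack packed decimal (two digits per byte, sign in the final nibble)
    into a printable string using only shifts and masks."""
    digits, sign = [], ''
    for i, byte in enumerate(data):
        hi, lo = byte >> 4, byte & 0x0F
        digits.append(chr(ord('0') + hi))            # high nibble is always a digit
        if i < len(data) - 1:
            digits.append(chr(ord('0') + lo))        # low nibble is a digit too...
        else:
            sign = '-' if lo in (0x0B, 0x0D) else '' # ...except the last: the sign
    return sign + ''.join(digits)

print(packed_decimal_to_text(bytes([0x12, 0x34, 0x5C])))  # 0x12345C -> '12345'
print(packed_decimal_to_text(bytes([0x98, 0x7D])))        # 0x987D   -> '-987'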

But decimal data types were not exclusive to COBOL. Since the machine did decimal arithmetic, assembly language (at the least) allowed use of decimal. Up until FORTRAN/COBOL compilers became abundant (and even after), assembly language was what was used.
 
Regarding DECIMAL: what that means in my experience is BCD (some mainframes of that era actually did arithmetic in BCD, possibly exclusively). The reason it is valuable is that it greatly simplifies the printing of numbers in human-readable form (i.e. as decimal numbers). On the old machines, that was a significant optimization.
Ah, so BCD integers then. Yes, it is indeed a significant optimisation when it comes to printing, and input too, as I learned when I started writing a bigint library way back when. (I have yet to finish it. :-).) But that doesn't make any difference these days, except for those of us still programming on 6502s and the like.

In addition, using BCD avoids the errors that crept into floating point operations and decimal/binary/decimal translations.
Binary integers also avoid the errors that can creep into floating point operations. If there are errors in decimal/binary/decimal translations, that's very broken code. Such translations are not entirely trivial, but they're not terribly difficult; they just use a fair amount of CPU compared to just using BCD (assuming your CPU supports BCD).
 
Worked on TTD & DDA systems for one of Baltimore's biggest banks in the late '80s, COBOL/JCL; had to punch out those program card decks myself, and I'm a terrible typist. It still gives me a headache just thinking about it.
 
Regarding DECIMAL: what that means in my experience is BCD (some mainframes of that era actually did arithmetic in BCD, possibly exclusively).
Binary integers also avoid the errors that can creep into floating point operations. If there are errors in decimal/binary/decimal translations, that's very broken code. Such translations are not entirely trivial, but they're not terribly difficult; they just use a fair amount of CPU compared to just using BCD (assuming your CPU supports BCD).
What's the exact binary floating point representation of 0.1 decimal? Of course, you can use scaled binary, but you have to pre-specify the precision required. That's asking a lot of the programmer.
 
But that's not Brainf*ck supporting COBOL syntax and constructions, any more than IBM 360 assembly language or C support COBOL syntax and constructions.
That's just a lexical convenience, as any HLL is, not functional. You stated: "I disagree; you cannot do this in Brainf*ck." You can, in fact, do this in any Turing-complete system.
 
I wish I could find the quote, but I swear I read somewhere that, when pressed on the verbose English syntax of either COBOL or one of the -MATIC languages, Grace Hopper indicated the alternative was trying to explain mathematical formulas and symbols to military commanders, and they would have none of it. While it is in name a "business" language, one must remember how big a business the military is, and that, for better or for worse, Grace Hopper's view of computers was mostly about their role in military, not civilian, affairs.
 
Binary integers also avoid the errors that can creep into floating point operations.
What's the exact binary floating point representation of 0.1 decimal?
There isn't one, as I pointed out earlier. While binary integers map exactly to decimal integers, binary floating point does not map exactly to decimal floating point.
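
(For concreteness, Python will happily show you the exact value of the double nearest to 0.1:)

Code:
from decimal import Decimal
from fractions import Fraction

# The IEEE 754 double closest to 0.1, written out exactly:
print(Decimal(0.1))    # 0.1000000000000000055511151231257827021181583404541015625
print(Fraction(0.1))   # 3602879701896397/36028797018963968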

And I'm pretty sure you well know this. So what's your game here? Are you just not reading what's right in front of you (what you even quote in your posts), or are you trolling?

That's just a lexical convenience, as any HLL is, not functional. You stated: "I disagree; you cannot do this in Brainf*ck." You can, in fact, do this in any Turing-complete system.
Well, I think you're trolling here, too, because I bet you actually can distinguish between 360 assembly language, COBOL and Python. But if you're really going to go down that route, you have just admitted that COBOL gives you nothing, and doesn't even exist as a language, because it's always implemented as a translation to some other Turing-complete system.
 
Well, I think you're trolling here, too, because I bet you actually can distinguish between 360 assembly language, COBOL and Python. But if you're really going to go down that route, you have just admitted that COBOL gives you nothing, and doesn't even exist as a language, because it's always implemented as a translation to some other Turing-complete system
Now, you're finally getting my point. There's nothing wrong with COBOL or any other language; some are worse than others at making use of the instruction set of the host, but as long as the host is Turing complete, there's nothing inherently wrong with that. If your language of choice doesn't incorporate immediate access to a set of machine instructions, there are always subroutine calls; e.g. LRLTRAN Q8INLINE().

Don't get me wrong--there are some truly novel machine implementations, such as Dataflow, but that's a different matter.
 
Lol, that makes sense.


Could you expand a little on what this "decimal" arithmetic was, and what advantages it gave you?

I'm familiar with two forms of decimal arithmetic. One is BCD representations of integers (as seen on 8080, 6502, etc.), which is really just plain old binary integer arithmetic with the conversions being done differently in different places. That can be useful for efficiency, but isn't used much these days because those efficiency gains no longer matter compared to other areas where you can look for efficiency. The other is decimal-based floating point, which is indeed different from binary floating point (each can represent numbers that the other can't), but both have the same accuracy issues; you'll just see the inaccuracies with different numbers.
COBOL will do fixed-point decimal arithmetic, so you can specify a field to be 9999v99, where "v" represents the implied decimal point; as a programmer you don't need to worry about scaling.
Usually this will be stored as packed decimal and may sometimes be converted to decimal floating point depending on number size.

https://www.ibm.com/docs/en/cobol-zos/6.4?topic=6-packed-decimal-comp-3

Then there is the "ON SIZE ERROR" phrase:

https://www.ibm.com/docs/en/i/7.5?topic=operations-size-error-phrase

... so you can check when something goes wrong.
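
Something vaguely equivalent in Python might look like the following. This is a hypothetical helper, not anything COBOL or IBM ships; it just shows the idea of checking that a result still fits a declared number of digits.

Code:
from decimal import Decimal

class SizeError(Exception):
    pass

def store(value: Decimal, int_digits: int, dec_digits: int) -> Decimal:
    """Round to the declared scale and reject values that overflow the
    declared number of integer digits (an ON SIZE ERROR-style check)."""
    scaled = value.quantize(Decimal(1).scaleb(-dec_digits))
    if abs(scaled) >= Decimal(10) ** int_digits:
        raise SizeError(f'{value} does not fit {int_digits} integer digits')
    return scaled

total = store(Decimal('9999.99'), 4, 2)      # fits a 9999v99-style field
try:
    store(Decimal('10000.00'), 4, 2)         # too big: the "size error" path
except SizeError as err:
    print('ON SIZE ERROR:', err)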
 
Now, you're finally getting my point. There's nothing wrong with COBOL or any other language; some are worse than others at making use of the instruction set of the host, but as long as the host is Turing complete, there's nothing inherently wrong with that.
And you've totally missed my point. If being "Turing complete" were enough, we wouldn't need more than one language, and nobody would have made COBOL (or any other high-level language).

But the big problem is not can you do something (if you have a computer, of course you can!), but how easy is it to do something, and to maintain and change that code after it's written. And there Turing completeness helps you not at all; we rely on language design for that. And that's very much the point of my head post in this thread: to discuss that, not drastically naïve notions like, "it's Turing complete."

Then there is the "ON SIZE ERROR" phrase:
https://www.ibm.com/docs/en/i/7.5?topic=operations-size-error-phrase
... so you can check when something goes wrong.
If I'm reading that correctly, it looks as if you need to add that phrase to every statement (except for floating point ones, where this is a job-level setting) where you want to check for overflow. That sounds like a nightmare to me. Most languages with integer arithmetic good for business use (Python integral numbers, Haskell Integer, etc.) will automatically check this for you and raise an exception. Admittedly it may sometimes not be a completely obvious exception, being an out-of-memory error, but I reckon that integers so large that they can no longer fit in the entire virtual memory of your computer are rare and probably symptoms of some other sort of problem.
 
And you've totally missed my point. If being "Turing complete" were enough, we wouldn't need more than one language, and nobody would have made COBOL (or any other high-level language).

But the big problem is not can you do something (if you have a computer, of course you can!), but how easy is it to do something, and to maintain and change that code after it's written. And there Turing completeness helps you not at all; we rely on language design for that. And that's very much the point of my head post in this thread: to discuss that, not drastically naïve notions like, "it's Turing complete."
We actually don't need any HLLs--as I said earlier, they are a lexical convenience. Machine code should suffice.

In your second paragraph cited above, you've made the case for COBOL: it meets the needs of business.

Historically, there have been Turing-incomplete commercial systems. Note, for example, Dijkstra's paper on the IBM 1620. But the 1620 was a mildly popular machine in the early 1960s and accomplished a lot of productive work.
 
We actually don't need any HLLs--as I said earlier, they are a lexical convenience. Machine code should suffice.
We actually don't need any aircraft, or even cars; they are just a transportation convenience. Walking and canoes should suffice.

But such an attitude betrays a complete lack of understanding of how to program computers, which again, is obviously not the case with you. So cut with the trolling already, will you? You might be a bad programmer, based on some of your comments here, but there is no way you're that clueless about programming.

Historically, there have been Turing-incomplete commercial systems. Note, for example, Dijkstra's paper on the IBM 1620. But the 1620 was a mildly popular machine in the early 1960s and accomplished a lot of productive work.
Indeed. More evidence that "blah blah blah Turing complete so all languages are the same" is just a completely stupid thing to say.
 
We actually don't need any HLLs--as I said earlier, they are a lexical convenience. Machine code should suffice.
Going to agree to disagree here.

May as well advocate keying the machine code in through dozens of toggle switches, except that's just an abstraction of routing the patch cables instead.

I prefer (de)magnetizing the individual beads in the core memory.
 