
Memory addressing/architecture and system responsiveness

Chuck(G)

It depends to a very large degree on the application.

If you're running the typical business applications (AP, AR, GL, Inventory and Payroll), you can estimate pretty accurately what the storage requirements will be and pre-allocate appropriately. No need for elaborate dynamic allocation. Similarly, if you're writing in COBOL, program sections can be treated by the compiler and run-time as segmented code.
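For instance, sizing the files for a fixed-record batch application is just arithmetic. A minimal sketch in C (the record size, employee count, and retention period below are invented purely for illustration, not taken from any real system):

    #include <stdio.h>

    /* Hypothetical fixed-length payroll record: with a known employee count
       and retention period, total storage can be computed up front. */
    #define RECORD_BYTES   256   /* one fixed-length record        */
    #define EMPLOYEES     5000   /* expected master-file entries   */
    #define PAY_PERIODS     26   /* one year of bi-weekly history  */

    int main(void)
    {
        long master  = (long)EMPLOYEES * RECORD_BYTES;
        long history = (long)EMPLOYEES * PAY_PERIODS * RECORD_BYTES;
        printf("master file : %ld bytes\n", master);
        printf("history file: %ld bytes\n", history);
        return 0;
    }

With numbers like these known in advance, the space can simply be pre-allocated when the system is installed.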

Sometimes you just have to do things a certain way. Consider the CDC 6000 series architecture, one of the earlier Cray-designed supercomputers. Most of the operating system resides in the peripheral processors (each with 4K of 12-bit words), which have access to main memory but run at one-tenth the speed of the CPU. Memory is monolithic: each user is given a relocation address that is added to all memory addresses and a field length that limits addresses on the high end. Multiprogramming is performed by "rolling" whole user areas in and out of disk storage--and a user area must be locked down in place during I/O (so the PPs can reach it). The only real CPU-resident part of the operating system is a storage-move routine that squeezes out gaps in main memory allocation.
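A minimal sketch of that relocation-address/field-length scheme in C (the names and octal sizes are mine, not CDC's): every user address is checked against the field length, then the relocation address is added before the reference goes to memory.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical base-plus-bound check in the spirit of the 6000's RA/FL:
       RA = relocation address (where the user's area starts in real memory),
       FL = field length (size of the user's area in words). */
    typedef struct {
        uint32_t ra;   /* relocation address, in words */
        uint32_t fl;   /* field length, in words       */
    } exchange_t;

    /* Translate a user-relative word address to an absolute one,
       or return -1 if it falls outside the user's field. */
    static long translate(const exchange_t *x, uint32_t user_addr)
    {
        if (user_addr >= x->fl)
            return -1;                 /* address out of range: abort the job */
        return (long)(x->ra + user_addr);
    }

    int main(void)
    {
        exchange_t job = { .ra = 040000, .fl = 020000 };   /* octal, CDC-style */
        printf("%ld\n", translate(&job, 017777));   /* last legal word        */
        printf("%ld\n", translate(&job, 020000));   /* one past the end: -1   */
        return 0;
    }

The storage-move routine mentioned above is the other half of the trick: when jobs of different sizes come and go, it slides whole user areas down in memory and adjusts their relocation addresses so the free space ends up in one contiguous block.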

You'd think that running real-time interactive code on a setup like this would be terrible--but it's the OS behind the PLATO project.

Similarly, consider the CDC STAR--a massive (for the time) supercomputer. Virtual memory, paged in units of either 512 or 64K 64-bit words. Disk allocation is contiguous, with up to 4 extensions allowed. Up to 512KW of total memory. Most of the instruction set consists of vector (SIMD) memory-to-memory operations with 48-bit virtual addresses.
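A rough sketch of how a 48-bit word address splits under those two page sizes (the field widths are simply inferred from the word counts--512 = 2^9 and 64K = 2^16--not taken from STAR documentation):

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical split of a 48-bit word address for the two page sizes
       mentioned above: 512 words (2^9) and 65,536 words (2^16). */
    #define SMALL_PAGE_BITS 9    /* 512-word pages */
    #define LARGE_PAGE_BITS 16   /* 64K-word pages */

    static void split(uint64_t vaddr, unsigned page_bits)
    {
        uint64_t page   = vaddr >> page_bits;
        uint64_t offset = vaddr & ((1ULL << page_bits) - 1);
        printf("page %llu, offset %llu\n",
               (unsigned long long)page, (unsigned long long)offset);
    }

    int main(void)
    {
        uint64_t va = 0x123456789ABCULL & ((1ULL << 48) - 1);  /* any 48-bit address */
        split(va, SMALL_PAGE_BITS);
        split(va, LARGE_PAGE_BITS);
        return 0;
    }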

For the typical application workload, this worked pretty well. The main applications were simulation codes with large arrays that would run for hours or days (you need this kind of thing when designing nukes)--and, most importantly, usually a single job at a time. The biggest downside was that scalar operations were performed as vectors of length 1--and the startup overhead for an instruction was a killer. That, however, was a problem with the hardware architecture, not the operating system.
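To see why length-1 vectors hurt, here's a toy timing model (the constants are made up purely for illustration, not actual STAR figures): total time is a fixed startup cost plus a small per-element cost, so short vectors are dominated almost entirely by startup.

    #include <stdio.h>

    /* Toy model: t(n) = STARTUP + n * PER_ELEMENT, costs in clock cycles.
       The numbers are illustrative, not measured STAR-100 values. */
    #define STARTUP     100.0
    #define PER_ELEMENT   1.0

    int main(void)
    {
        int lengths[] = { 1, 10, 100, 1000, 10000 };
        for (int i = 0; i < 5; i++) {
            int n = lengths[i];
            double total = STARTUP + n * PER_ELEMENT;
            printf("n=%5d  cycles/element = %.2f\n", n, total / n);
        }
        return 0;
    }

Under this model a scalar operation pays the full startup cost for a single result, while a long vector amortizes it down to nearly nothing per element.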

Again, it all depends on the application.
 