
MP/M file sharing

tomjennings

Experienced Member
Joined
Jun 30, 2022
Messages
125
Location
Los Angeles CA
I can't find this in the manuals or errata but I swear I saw somewhere how to do this... for MP/M II 2.1.

I have a file open for writing in a text editor on one console. In a second console I want to open that same file read-only (compiler). I get "Bdos Err on D: File Currently Open".

I know there's some attribute to set on the files, or something, but damned if I can find it.

Any hints?
 
Answering my own question:

In the errata document for MP/M II 2.1, dated 1982, the "Enable compatibility attributes" question was added to GENSYS.COM, and user flags, applied to the PROGRAMS doing the file opening/closing, modify the behavior when one program (e.g. a text editor) has write-lock access to a file and another program (e.g. a compiler) tries to open the same file for read-only access.

The following works for me, and I know PMATE is a crude and often not-well-behaved program (not bad as in buggy or erratic; bad as in shortcuts and cheats, in the typical old CP/M tradition). Attribute F1' tells MP/M to treat that program's (PMATE's) file opens as locked for writing, but RO (read-only) in the system tables. The errata implies that this should be sufficient; but I also had to set F1' ON for CC.COM, the first-pass program of the BDS C compiler.

Re-run GENSYS.COM and answer YES to "Enable compatibility attributes". And obviously, boot into that MPM.SYS.

SET PM24.COM [SYS,RO,F1'=ON]

SET CC.COM [SYS,RO,F1'=ON]

In one window I can now have PMate editing a source file, and in window 2 compile it. Lovely and modern-ish enough to actually get some work done.

(All executables should be SYS and RO anyway, I include them here for completeness.)

 
tangent: Isn't the problem at least partially that the editor keeps the file open?
Is it an editor that can edit files larger than the free RAM? If not then it seems like a weird thing with the editor.
 
Most text editors, on CP/M or on modern OSs, keep the file open for writing: for scrolling through files larger than RAM, for frequent saving, etc. PMate, as well as WordStar, certainly does this. I've never heard of a text editor that doesn't! ED.COM explicitly explains how the file is kept open. It is something that needs to be accommodated. The classic edit-compile-go program development case requires it.

EDIT: PMate says that the XJ command, commonly used after an edit to save the file to disk and reopen for further editing, closes then reopens the file.

WordStar has an explicit MP/M accommodation configurable with WSCHANGE or WSINSTALL: it can set file attributes that indicate that the file is open for writing, but allow other processes (users) to open the editing-file read-only. Most CPM/MPM programs aren't so accommodating; the old MP/M 1.1 method (which this SET F1 business invokes) is a "good enough" solution for me, and probably most casual use.
 
Modern editors at least, and I think also some of the CP/M "word processor" ones, do not actually "edit in place". They create a new file to contain the results of the edit, then delete the old and rename when finished - possibly keeping the old as a backup. In order to "share" a file being edited, more work must be done. Even on modern systems (Linux), sharing of files being edited doesn't really work well.
 
Modern editors at least, and I think also some of the CP/M "word processor" ones, do not actually "edit in place". They create a new file to contain the results of the edit, then delete the old and rename when finished - possibly keeping the old as a backup. In order to "share" a file being edited, more work must be done. Even on modern systems (Linux), sharing of files being edited doesn't really work well.

Right! I'd left the discrete operations unmentioned, my bad.

For even lowly PMate, the minimum atomic operations for editing and saving a file are:

1) open input FILE.EXT. Even though PMate does not modify this file directly (until step 4, SAVE) MP/M considers it open and essentially locked/owned by the process PMate.

2) delete, then create the temporary working output file FILE.$$$

3) (data read from input, edit, write to output; user operations; various complexity seeking back and forth within input and output files).

4) A "SAVE" command generally: a) writes in-memory changes to the output file, copying the remainder of the input file to the output file as necessary; b) closes the input and output files; c) renames the input file to FILE.BAK, or deletes it if no backup is made; d) renames the output file to FILE.EXT.
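The save sequence above can be sketched in modern terms. This is a minimal Python sketch of the same pattern, not PMate's actual code; the function name and filenames are illustrative:

```python
import os

def save_with_backup(name, new_text, keep_backup=True):
    """Sketch of the classic editor save: write a temporary output
    file, then rotate it into place (steps 2 and 4 above)."""
    tmp = name + ".$$$"                  # step 2: temporary working output file
    if os.path.exists(tmp):
        os.remove(tmp)
    with open(tmp, "w") as out:          # steps 3-4a: edited text goes to the temp
        out.write(new_text)
    # step 4b: the 'with' block closes the output file on exit
    if keep_backup and os.path.exists(name):
        bak = os.path.splitext(name)[0] + ".BAK"
        if os.path.exists(bak):
            os.remove(bak)
        os.replace(name, bak)            # step 4c: input file becomes the backup
    elif os.path.exists(name):
        os.remove(name)                  # step 4c: ...or is deleted, no backup
    os.replace(tmp, name)                # step 4d: output file takes the name

# usage: "edit" a file and save it, keeping a .BAK
with open("FILE.EXT", "w") as f:
    f.write("original\n")
save_with_backup("FILE.EXT", "edited\n")
```

After the save, FILE.EXT holds the edited text and FILE.BAK the previous contents; as discussed below, the "same" file is in fact a brand-new directory entry.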


So after a "save" the original input file, from the user's point of view, exists with the desired changes. In fact a new file of the same name, with the edited contents, has been created, then optionally re-opened by the editor (in my usage example). It is at THIS point that the compiler might want to open the file, additionally, as read-only; the assumption being that editing will not change the file until the compiler is done with it.

The edit/compile/test lather-rinse-repeat cycle requires some care from the user! Unsurprisingly, especially on antique 8-bit multiprocessing systems. The fact this can be done AT ALL is a testament to DRI's fantastic work! Someone (not me) somewhere wrote that MP/M may be the most sophisticated piece of software ever written for the 8080; it's hard to argue with that!

There's plenty of opportunity for hanky panky in here; the editor making changes to the file WHILE the compiler has it open for reading is at minimum a Really Bad Idea. Later *nix OSs probably do about as good a job as is possible with silliness like this (and there are pipes and other complex things well out of our scope here).
 
My point being that when the editor finishes, the resulting text is in a different file (entry), even though it now has the same name. Another process will be reading the original file, and then at some point after the editor changes things will be getting entries from the new file. It's really not clear how that would go. On a more sophisticated OS the process reading the original file would essentially continue to see only the original, even after the editor renames the output - it would have to re-open the file by name to see the changes. I'm not sure how MP/M handles this situation - I doubt it actually keeps the original file complete and isolated. At some point, the process reading the file just asks for the next extent and I suspect it gets something inconsistent with the extent it had been reading.
 
Semi-relevant tangent:
This reminds me of OpenVMS, which has version numbers for files. Each time you edit a file the newly saved file gets a higher version number. That way any program having a file open will still have the existing version, and won't just randomly end up continuing to read content from a file that has changed. I.e., it won't get the rug pulled out from underneath it, so to say.

I would think that when editors do as durgadas311 explains, you might end up with temporary files that are the old versions that the editor couldn't delete if some other program has those files open for read. (It seems like a good idea to rename the old file to a temp name, rename the new file to the correct name, and only then delete the old file).

I wonder to what extent OSes can queue up a delete to happen automatically as soon as every program has closed a file marked for deletion?
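The safer ordering suggested in the parenthetical above - old file aside first, new file into place, delete last - can be sketched in a few lines. A Python sketch under my own naming assumptions (the .OLD and .$$$ names are illustrative):

```python
import os

def safe_replace(name, new_text):
    """Rename the old file aside first, put the new file in place,
    and only then delete the old one. A reader still holding the
    old version open keeps its (renamed) file undisturbed."""
    new = name + ".$$$"
    old = name + ".OLD"
    with open(new, "w") as f:    # build the replacement first
        f.write(new_text)
    os.replace(name, old)        # 1: old file moves aside to a temp name
    os.replace(new, name)        # 2: new file takes the correct name
    os.remove(old)               # 3: only now is the old file deleted

# usage
with open("NOTES.TXT", "w") as f:
    f.write("v1\n")
safe_replace("NOTES.TXT", "v2\n")
```

On an OS where delete is refused (or deferred) while the file is open, step 3 is the only step that can fail, and the directory is already consistent by then.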
 
...
I would think that when editors do as durgadas311 explains, you might end up with temporary files that are the old versions that the editor couldn't delete if some other program has those files open for read. (It seems like a good idea to rename the old file to a temp name, rename the new file to the correct name, and only then delete the old file).

I wonder to what extent OSes can queue up a delete to happen automatically as soon as every program has closed a file marked for deletion?
Unix, and now Linux, do essentially that. Files exist as "inodes", and file names are just entries in directories that refer to an inode. When a program opens a file, it references the inode, which then cannot be deleted until all references go away (including name references in directories). So, open a file and remove all name references and the file still exists, but can't be accessed except via the open file reference. Once that last reference is closed, the inode disappears, and with it the last remnant of the file. In the meantime, that program can read (and write) the file as it wishes. This was often used for temporary files: create a file (open it), remove the name entry, then write the file and read it back; close it and its storage is reclaimed. Since open files can be inherited, you can use this model to spool a file (for example).
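The unlink-while-open temporary-file trick described above can be demonstrated directly. A POSIX-only Python sketch (the filename is illustrative):

```python
import os

# Create a scratch file and open it for reading and writing.
f = open("scratch.tmp", "w+")
f.write("temporary data\n")
f.flush()

# Remove the only name reference. The inode survives because our
# open file descriptor still references it.
os.unlink("scratch.tmp")
assert not os.path.exists("scratch.tmp")   # no name on disk, but...

f.seek(0)
data = f.read()                            # ...the data is still readable

# Closing the last reference lets the kernel reclaim the storage.
f.close()
```

Until the close, `data` reads back exactly what was written, even though no directory entry names the file any longer.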

I haven't spent a lot of time thinking about it, but I believe the CP/M filesystem does not lend itself to this. The current file extent is kept in memory (in the FCB), but that is not the whole file (if it exceeds 16K). And preventing that from being written to disk (if dirty) is practically impossible. MP/M does more to protect the integrity of the FCB, too (although the compatibility attributes can disable that).

I'm not sure what level of "sharing" we're exploring here. But I'm reminded of the OLPC (One Laptop Per Child) program where the "XO" models had some neat "collaboration" tools. In that case, two people editing the same file actually see each other's changes as they are made. Some modern cloud editors have similar behavior, or at least specific locking mechanisms designed around sharing. But there it's built into the editor, which it probably has to be in order to be very functional.
 
Example from elsewhere: I don't know how DOS handles open files, but on disk you can sort of have hard links by having multiple directory entries point to the same file. However any disk diagnostic tool will flag that as a problem and offer to remove one of them, likely the one you don't want to remove.

related tangent: Whoever decided to use "inode" both to refer so something on disk, and also to something in memory, ought to be smeared in tar and rolled in feathers...
 
My point being that when the editor finishes, the resulting text is in a different file (entry), even though it now has the same name. Another process will be reading the original file ...
Ahh. But there's a human in the loop.

In console 1, I'm editing... when ready to try a compile, I do a save. It works as we know, and yes, FILE.EXT has been deleted and recreated.

Then I switch to console 2, type "compile FILE.EXT". It opens the "new" file.

Definitely true though -- if a program in console 2 is trying to programmatically use a file that's being manipulated in console 1, it would be very confusing at least. But that's not the use case at all, here.
 
Example from elsewhere: I don't know how DOS handles open files, but on disk you can sort of have hard links by having multiple directory entries point to the same file. However any disk diagnostic tool will flag that as a problem and offer to remove one of them, likely the one you don't want to remove.

Really? The *nix inode scheme is amazing. Hard links enable all sorts of amazing features. It's reliable as can be. My backup scheme (not "mine", it's age-old) generates hourly snapshots of my entire 400 GB home directory: hard links duplicate each file by creating a new directory entry pointing to the same inode, then rsync replaces only the actually-changed files, which is of course a microscopic subset of that 400 GB. Three months of *hourly* copies of 400 GB consume 750 GB total. Each snapshot (folder 2025/Oct/22/1200/tomj/) is an ordinary folder containing 450,000 ordinary files you can copy out, consuming almost no space on the disk: just the files changed since the previous hour, plus a bunch of directory space.
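The hard-link mechanics behind that snapshot scheme fit in a few lines. A Python sketch (filenames are illustrative; rsync's link-dest machinery does the same thing at scale):

```python
import os

# An "original" file, as it would exist in the previous snapshot.
with open("report.txt", "w") as f:
    f.write("unchanged contents\n")

# The new snapshot gets a hard link: a second directory entry for
# the same inode, costing directory space but no file data.
os.link("report.txt", "report-snapshot.txt")

same_inode = os.stat("report.txt").st_ino == os.stat("report-snapshot.txt").st_ino
link_count = os.stat("report.txt").st_nlink   # now 2: two names, one inode

# Deleting one name leaves the data reachable through the other.
os.unlink("report.txt")
snapshot_text = open("report-snapshot.txt").read()
```

Only when an hourly run finds a file actually changed does it write new data; unchanged files remain single inodes with many names, one per snapshot.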

DOS disk structure is kinda awful; the FAT contains all these linked lists of blocks, easily disconnected from the directory entry that names them. CP/M is "worse" -- but they both contain beauty. CP/M is a master of frugality: one bit per allocation block, in RAM. CP/M and DOS had to work in extremely constrained environments. FAT is terrible but FAT is great -- the problems of the 80s are mostly gone; when was the last time we needed to do CHKDSK fixes?! Code is so much better today.
 
Unix, and now Linux, do essentially that. Files exist as "inodes", and file names are just entries in directories that refer to an inode. When a program opens a file, it references the inode which then cannot be deleted until all references go away (including name references in directories).

Yes! It's quite lovely, and performance is excellent. But requires more RAM than anyone could afford or would fit in a box in 1975, of course.

Poor little CP/M has no inherent ability to share resources like that. But at the BDOS level you can make it work, if you adhere to their reasonable rules, e.g. this narrow-but-common-and-useful compile-after-edit case. DRI was aware of it; MP/M 1.1 explicitly supports it. DRI was way ahead of their time.

I'm not sure what level of "sharing" we're exploring here. But I'm reminded of the OLPC (One Laptop Per Child) program where the "XO" models had some neat "collaboration" tools. In that case, two people editing the same file actually see each other's changes as they are made. Some modern cloud editors have similar behavior, or at least specific locking mechanisms designed around sharing. But there it's built into the editor, which it probably has to be in order to be very functional.

I wasn't aware of that! Google Docs had or has a similar collaborative-edit mode where you can see others' live work.

Multi-laptop, multi-collaborator work, many people in a room editing the same "thing", is something that I don't think has been much explored yet. Multiplayer games probably come closest.
 