
Allocating memory in an interrupt handler (C/C++)


Experienced Member
Mar 29, 2009
Sweden, way up north
I'm working on a program/lib that sometimes needs to allocate more memory while in an interrupt handler. I guess this is generally frowned upon, but I like the idea better than hogging memory in advance without knowing if we actually need it. I'm using the regular malloc()/_fmalloc() (compact memory model, so far pointers) to allocate memory, and it all works fine until the current heap is full. When that happens it can't allocate any more memory and everything goes mad.
My workaround, which seems to work, is to allocate all available memory at startup and then immediately free it again, to force the CRT to create the heaps in advance.
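The workaround above can be sketched in portable C (block size and count are arbitrary assumptions; on the DOS runtimes discussed here, the point is that the freed space stays committed to the CRT's heaps, so later malloc calls inside the ISR don't have to grow them):

```c
#include <stdlib.h>

/* Sketch of the "prime the heap up front" trick: grab blocks until
   malloc fails (or we hit a cap), remember them, then free them all.
   BLOCK_SIZE and MAX_BLOCKS are illustrative values, not anything
   from the original post. */
#define BLOCK_SIZE 4096
#define MAX_BLOCKS 1024

size_t prime_heap(void)
{
    void *blocks[MAX_BLOCKS];
    size_t n = 0, grabbed;

    while (n < MAX_BLOCKS && (blocks[n] = malloc(BLOCK_SIZE)) != NULL)
        n++;
    grabbed = n;            /* how many blocks we managed to commit */
    while (n > 0)
        free(blocks[--n]);  /* give them all back immediately */
    return grabbed;
}
```

On DOS this leaves the memory owned by the C runtime rather than the OS, which is exactly why exec()/system() become a problem afterwards.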

So that works, but it's not exactly ideal. All memory is still available to my app, so no problem there. But I can probably forget about using exec..()/system() etc., and I may actually need those functions at some point. I'm sure there's a way to free up the heaps again, but that would just make everything go mad again.

Is there a proper solution for this, or should I just reserve the memory in advance?

Don't allocate memory in an interrupt handler. Have whatever buffers you need allocated and ready to go on a free list that you can access from your interrupt handler; that's much cleaner. Remember, an interrupt handler should be minimal. Save the management of the free list and replenishing the buffers for the main loop of your program.
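The free-list idea might look something like this (a minimal sketch in portable C; the buffer size, count, and names are made up, and a real DOS handler would need interrupts disabled or some other guard around the list operations):

```c
#include <stddef.h>

/* Buffers are carved out up front. The ISR only pops from the free
   list (O(1), no malloc); the main loop pushes consumed buffers back
   and replenishes the pool as needed. */
#define BUF_SIZE  1514   /* e.g. one Ethernet frame */
#define NUM_BUFS  32

struct buf {
    struct buf *next;
    unsigned char data[BUF_SIZE];
};

static struct buf pool[NUM_BUFS];
static struct buf *free_list;

void pool_init(void)
{
    int i;
    free_list = NULL;
    for (i = 0; i < NUM_BUFS; i++) {
        pool[i].next = free_list;
        free_list = &pool[i];
    }
}

/* Called from the interrupt handler. */
struct buf *buf_get(void)
{
    struct buf *b = free_list;
    if (b)
        free_list = b->next;
    return b;
}

/* Called from the main loop once a packet has been processed. */
void buf_put(struct buf *b)
{
    b->next = free_list;
    free_list = b;
}
```

If buf_get() returns NULL the handler has to drop the packet, which is the price of keeping the ISR minimal.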

Open Watcom has a call to grow the near heap to its maximum size, allowing you to preallocate that space and reserve it for malloc if you need to. This is the library version of what you're doing by hand: allocating all available memory and then freeing it immediately. Look for the _nheapgrow function.
The problem is that the heapgrow functions only grow the current heap to 64 KB, and they are always at maximum size in large data memory models. Say we're already using 60 KB of it and we need another 8 KB: then it has to start a new heap, and that's where it all starts to fail.

This is from Watcom's C Library Reference:
The _fheapgrow function doesn’t do anything to the heap because the far heap will be extended
automatically when needed. If the current far heap cannot be extended, then another far heap will be started.
In a small data memory model, the _heapgrow function is equivalent to the _nheapgrow function;
in a large data memory model, the _heapgrow function is equivalent to the _fheapgrow function.
Yes, it's possible, but let's look at your application.

  • How frequent are the interrupts and how big are the data packets that you need to buffer?
  • Can you tell a couple of interrupts ahead of time whether you're running out of memory?
Generally speaking, you want to spend as little time in the interrupt routine as possible, so offloading the allocation and memory management to an independent task is usually the best way to go.
This is for a networking app, and the interrupt is the callback/receiver function from the packet driver. So depending on traffic, the frequency can be anything from 0 calls/sec up to several thousand calls/sec. It's not going to be allocating thousands of times per second; the buffers are re-used, and a simple garbage collector frees them when it sees we have more buffers than we need. I just don't like keeping them around when we don't need them, e.g. when there is no traffic.

Telling in advance whether or not we're going to need more memory... not really, I guess. It'd be possible to always keep at least one spare buffer, so we'd know one call in advance that we need more. Waiting for another task to allocate memory for us would be too slow, I think; the receiver would either have to sit and wait for that or drop the packet.
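The "one spare buffer" idea amounts to a low-watermark scheme, which could be sketched like this (all names and thresholds are hypothetical; the ISR never allocates, it only raises a flag that the main loop acts on outside interrupt context):

```c
#include <stdbool.h>

/* If the free count drops to the watermark, flag the main loop to
   refill the pool. The counts here are placeholders. */
#define LOW_WATERMARK 2

static volatile int  free_count  = 8;     /* maintained by get/put */
static volatile bool need_refill = false;

/* Called at the end of the receiver ISR. */
void check_watermark(void)
{
    if (free_count <= LOW_WATERMARK)
        need_refill = true;
}

/* Polled from the main loop; returns 1 if it refilled the pool. */
int refill_poll(void)
{
    if (need_refill) {
        need_refill = false;
        /* allocate/link more buffers here, with interrupts enabled */
        free_count += 4;                  /* pretend we added four */
        return 1;
    }
    return 0;
}
```

The lag you're worried about is exactly the window between the flag being raised and the main loop getting around to the refill, which is why the watermark has to sit above zero.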
Sure, you can call the INT 21h memory services as long as the InDOS flag is clear. If it's set, you might still get the job done by saving the DOS swap area and then issuing your request, or by simply managing the MCBs yourself.

In the past, I've done this sort of thing with a timer-tick-driven ISR that slowly increased or decreased the number of buffers in the free list according to load. There will be something of a lag in the response time, though, so leave lots of slop in the allocation mechanism (i.e. always keep significantly more buffers than you need).
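The tick-driven adjustment described above could be sketched as a small policy function (the constants and the one-step-per-tick pacing are assumptions, not the original implementation):

```c
/* Each timer tick, nudge the target buffer count toward recent
   demand plus a fixed slop, one step at a time, clamped to a range.
   Slow adjustment avoids thrashing; the slop absorbs bursts during
   the lag. */
#define MIN_BUFS 4
#define MAX_BUFS 64
#define SLOP     8    /* always keep this many more than demand */

int adjust_target(int target, int used_last_tick)
{
    int want = used_last_tick + SLOP;
    if (want > target && target < MAX_BUFS)
        target++;         /* grow slowly, one buffer per tick */
    else if (want < target && target > MIN_BUFS)
        target--;         /* shrink slowly too */
    return target;
}
```

The main loop (or the tick ISR itself, if it stays short) would then allocate or free buffers to track the returned target.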