If you set a typical composite monitor side by side with a TTL-interface *mono* monitor (shadow masks obviously break any comparison to mono monitors), even one that runs at NTSC scan rates (like the internal monitors you'd find in a lot of terminal-shaped computers and portables), in my experience the composite monitor usually looks just a *little bit* softer around the edges. I think you can mostly chalk that up to "analogue-ness", or sometimes "cheap-ness"... but in this context, when I said "blurry" I was using it loosely to describe the fact that the characters are going to be lower resolution than on MDA.
Well, if we set aside the "lower [vertical] resolution than MDA" thing (obviously 350 lines is going to produce a nicer 25-line text display, vertically at least, than 200 lines), I think there is no difference.
I don't see what difference a TTL interface (by which I am taking you also to mean separated sync signals) is going to make. Composite sync is both outside the display area and
much lower frequency than the video signal itself. (A horizontal sync pulse is ~4.7 µs wide, which works out to roughly 200 kHz, versus >10 MHz for an alternating on-off 640 pixel/line display, if I've got my math right.) And the monitor quantising the luminance input to "on/off" versus an analogue black/white range makes no difference at all to the frequency that has to be handled by the generation system and the cable connected to the monitor. (In fact, it might make it worse if it's forcing you to use 0-5 V TTL levels rather than 0-0.7 V video levels.)
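To sanity-check those numbers, here's a quick back-of-envelope calculation. The timing figures (a ~4.7 µs sync pulse, ~45 µs of active video per NTSC scanline) are my assumed round numbers, not measurements:

```python
# Rough frequency comparison for a composite signal, assuming NTSC-ish
# timings. All figures here are approximations, not measurements.

SYNC_PULSE_US = 4.7      # assumed horizontal sync pulse width
ACTIVE_LINE_US = 45.0    # assumed visible portion of a 63.5 us scanline
PIXELS_PER_LINE = 640

# The sync pulse's equivalent "frequency", taken as 1 / (pulse width):
sync_khz = 1e3 / SYNC_PULSE_US            # -> ~213 kHz

# 640 pixels spread over the active line gives the pixel rate; an
# alternating on/off pattern is a square wave at half that rate.
pixel_rate_mhz = PIXELS_PER_LINE / ACTIVE_LINE_US   # -> ~14.2 MHz
alternating_mhz = pixel_rate_mhz / 2                # -> ~7.1 MHz

print(f"sync pulse: ~{sync_khz:.0f} kHz")
print(f"pixel rate: ~{pixel_rate_mhz:.1f} MHz "
      f"(alternating-pixel fundamental ~{alternating_mhz:.1f} MHz)")
```

So even taking the alternating-pixel fundamental rather than the raw pixel rate, the video content sits a couple of orders of magnitude above anything the sync path has to carry.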
We can have a look at what the horizontal bandwidth limitations look like on an MDA display by looking at the output from the
MDA Video SOC-FPGA Project. This feeds out an image stored as a 720×350 bitmap with one bit per pixel (so it's not using the intensity input on the monitor, just the video input); thus all pixels should theoretically be the same brightness. But have a look at some crops of the high-resolution photos of the output on both the original IBM 5151 monitor and an amber clone (a "GM-1230"):
Here you can clearly see the bandwidth limitations of the 5151: the off pixels come nowhere near full black when surrounded by adjacent on pixels, and the on pixels are perceptibly below full "on" brightness when surrounded by adjacent off pixels.
Interestingly, the clone monitor (presumably cheaper, but I'm also guessing newer) shows this issue less. It does make me wonder whether the issues with the 5151 might be related to phosphor behaviour instead of, or as well as, bandwidth limitations.
We can compare this to other more standard (15.7 kHz) systems of the day by looking at an Apple IIe running in 80 column mode with the standard Apple monitor. From
this blog entry I've grabbed a couple of screenshots (you'll want to click on these to blow them up to full resolution because I'm too lazy to do more cropping):

The system is running at about 22% lower horizontal resolution (560 pixels instead of 720), and you can pretty clearly see the horizontal bandwidth limitations on characters such as the 'm' and the '0'. Here, too, a single on pixel isn't as bright as a run of adjacent on pixels (though the difference seems smaller to me), and off pixels do seem to go fully off, but for a much shorter time (essentially, the on pixels creep into adjacent off pixels).
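For what it's worth, the commonly quoted dot clocks let us compare the worst-case (alternating-pixel) frequencies the two monitors are being asked to handle; treat the clock figures below as assumptions from memory rather than measurements:

```python
# Compare the fundamental frequency of an alternating on/off pixel
# pattern (half the dot clock) on the two displays being discussed.
# Dot clocks are the commonly quoted figures, assumed, not measured.

DOT_CLOCKS_MHZ = {
    "MDA, 720 px/line": 16.257,
    "Apple IIe 80-column, 560 px/line": 14.318,
}

for name, clock in DOT_CLOCKS_MHZ.items():
    print(f"{name}: dot clock {clock} MHz, "
          f"alternating-pixel fundamental {clock / 2:.2f} MHz")
```

That puts the two within about 1 MHz of each other in the worst case, which is why the comparison is interesting despite the different scan rates.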
Unfortunately for this comparison, it's unclear what's causing the "on pixel extension" on the Apple IIe system, and whether that might be intentional, so it's hard to tell if this is really a bandwidth issue or not. (I suspect something else might be doing this.)
I will try at some point soon to get my workbench clean enough that I can haul out some old computers and try them on my old monitor, and maybe 'scope out the signal to see what the actual widths are between the on and off pixels.