
Rest in Pieces

Moore's "law" says nothing about software, threading or programming. It simply says that the number of transistors on a die will double about every 18 months. That includes transistors dedicated to memory. If Moore's "law" is truly dead, then our current memory density is about as good as it will get--which is clearly ridiculous. Note also that Gordon Moore didn't say how the transistors would be fabricated on a chip, so multilayer architectures are certainly in the cards. Just that the number of transistors on a chip (of any size using any fabrication method) should double about every 18 months.

Secondly, it's not a "law"--it's an empirical prediction made by Moore, and it has been amazingly accurate for a surprisingly long time. Clearly, it can't go on ad infinitum, but it's done a very good job thus far.
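
To put the 18-month doubling in concrete terms, here's a rough back-of-the-envelope sketch; the starting point (the Intel 4004's roughly 2,300 transistors) and the exact doubling period are just illustrative inputs, not anything Moore specified:

```python
# Rough illustration of exponential transistor growth.
# Doubling period and starting count are illustrative assumptions only.

def transistor_count(years_elapsed, start_count=2_300, doubling_years=1.5):
    """Project a transistor count forward, doubling every 18 months."""
    return start_count * 2 ** (years_elapsed / doubling_years)

# Starting from ~2,300 transistors (Intel 4004, 1971), project 30 years out:
print(f"{transistor_count(30):,.0f}")   # roughly 2.4 billion
```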
 
Even if Moore's law gets shattered one day,
Murphy's law will last forever and ever and ever...

Murphy's Original Law: If there are two or more ways to do something, and one of
those ways can result in a catastrophe, then someone will do it.

Murphy's Law: If anything can go wrong -- it will.

Murphy's First Corollary: Left to themselves, things tend to go from bad to worse.

Murphy's Second Corollary: It is impossible to make anything foolproof because fools are so ingenious.

Quantised Revision of Murphy's Law: Everything goes wrong all at once.

Murphy's Constant: Matter will be damaged in direct proportion to its value.

The Murphy Philosophy: Smile... tomorrow will be worse.


ziloo :mrgreen:

p.s. ...... If everything seems to be going well, you have obviously overlooked something...
 
They hit a brick wall with raw speed, so they went with more cores. Most of the speed improvements have come from branch prediction, and security issues have screwed that up. So where are the speed improvements going to come from now?
 
So where are the speed improvements going to come from now?

I don't think we need any more single-core speed. We are fantastically fast already. To the extent that I want my computer to be faster, it's usually to solve a complex problem that can be parallelized, so I'm happy to see them move in the direction of more cores. That's the direction the CPU manufacturers will continue to move in, and it puts pressure on software developers to exploit the available parallelism.
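
For what it's worth, here's a minimal sketch of what exploiting that parallelism can look like on the software side, assuming an embarrassingly parallel workload and using Python's standard concurrent.futures; the crunch function is a made-up stand-in for real work:

```python
# Minimal sketch: spread an embarrassingly parallel workload across cores.
from concurrent.futures import ProcessPoolExecutor

def crunch(chunk):
    """Placeholder for some CPU-bound work on one chunk of data."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::8] for i in range(8)]     # split into 8 slices
    with ProcessPoolExecutor() as pool:         # one worker per core by default
        partials = list(pool.map(crunch, chunks))
    print(sum(partials))                        # combine the partial results
```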
 
Golly, the single-CPU concept has been dead in supercomputing for 30 years now.

Massive parallelism is the ticket--and has always been thus. Of course, Amdahl's law still applies.
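
For anyone who wants the numbers behind that caveat, Amdahl's law is easy to sketch; the 95% parallel fraction and 1024-core count below are just an illustration:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n)
# where p is the parallelizable fraction of the work and n is the core count.

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelizable, 1024 cores give under 20x:
print(round(amdahl_speedup(0.95, 1024), 1))   # ~19.6
```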
 
There is plenty of room for speed improvement. They just need to write software that isn't so bloated and clunky....

Right. Not going to happen.

Edit: Well, perhaps with the exception of Trixter :p
 
Trading requires low latency for execution of trades. Trades are placed in real time, now often in nanoseconds or faster. This is especially critical when moving large blocks of options in and out of cash. Timing is everything.

Statistical/probabilistic calculations use different resources: floating point helps, but fast integer calculations (i.e., raw CPU) became the norm as I was leaving Investment Banking in 2015.

Geoff, MSF International Finance, MBA - former Securities Accountant at BNYMellon.
 
Monte Carlo simulations? The physicists I work with run these simulations, which can sometimes run for two weeks. Single core, multi core, threading... it doesn't matter; the raw speed of a CPU is irrelevant, and combined computing might be the only way to achieve faster calculations. So what does that mean for home users? Decentralized computing? Possibly going back to the days of terminals and leased computing.
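
As a toy illustration of why these jobs care about total compute rather than single-core speed, here's a minimal Monte Carlo estimate of pi (the sample count is arbitrary); every sample is independent, so the loop could be split across any number of cores or machines and the partial counts summed at the end:

```python
# Toy Monte Carlo estimate of pi: throw random points at the unit square
# and count how many land inside the quarter circle.
import random

def estimate_pi(samples):
    inside = sum(1 for _ in range(samples)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * inside / samples

print(estimate_pi(1_000_000))   # converges slowly toward 3.14159...
```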
 
IMHO, for home users reading email, surfing the web and streaming video, we're long past the point where Moore's law meant anything to them. One only needs to look at tablets and/or smart phones as proof.

I'm not a physicist. I do know that atmospheric modeling, nuclear testing and the like require some sort of deep parallelism approach to perform calculations. The level of mathematics is quite complex and beyond the scope of most people, including me.

In finance and banking, modeling is less complex and more two-dimensional in nature. Even with 5000 x 5000 sheets of deeply nested cells, modern spreadsheets can do multiple regression analysis with statistical significance testing rather quickly. Running R calculations using matrix mathematics can get more complex, but again, a decent workstation can handle that part pretty quickly with off-the-shelf parts. Speed is more critical to decision making, which is why low latency is the preferred attribute over brute-force CPU power.
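
As a rough sketch of the matrix side of that, ordinary least squares on a synthetic 5000-row data set is near-instant on a modern workstation; numpy stands in here for whatever R or the spreadsheet actually uses, and the data is made up:

```python
# Sketch: multiple regression on synthetic data via least squares.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))                  # 5000 observations, 10 predictors
true_beta = rng.normal(size=10)
y = X @ true_beta + rng.normal(scale=0.1, size=5000)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None) # solves min ||X b - y||
print(np.allclose(beta_hat, true_beta, atol=0.01))   # recovered coefficients
```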

I suspect that as AI takes on greater decision-making roles, the requirements for computing resources in finance will change dramatically as decision-making systems take on more responsibility and independence with little or no oversight.

Geoff, MSF MBA
 
I could be wrong, but I think having massive amounts of RAM (modern computers being 64-bit) has helped more than brute-force processing in some areas, for storing and working on data in simulations.
 
Agreed. 64-bit everything helps in finance. When dealing with larger numbers and more spreadsheet cells, 64 bits runs circles around 32 bits.
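
For a sense of the headroom involved (plain arithmetic, nothing finance-specific):

```python
# Largest signed values a 32-bit and a 64-bit integer can hold.
print(2**31 - 1)   # 2,147,483,647              (~2.1e9)
print(2**63 - 1)   # 9,223,372,036,854,775,807  (~9.2e18)
```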
 