The Nightstar Zoo

Nightstar IRC Network - irc.nightstar.net

Chalain
PostPosted: Thu Oct 06, 2005 9:50 am
Yesterday we released a beta version of our software to the customer. It was immediately rejected because it was too slow. We connect to a small testing database here in-house, but in the Real World™:

  • they connect to a remote database over the internet
  • their database has gazillions of records

End result: it takes 11 seconds to pull up a customer record in-house and 90 seconds to pull it up in the field. A search returning 340 records takes 5 seconds in-house, 70 seconds in the field.

I've gone off on premature optimization in the past. I'm gonna do it again.

We sat down with unoptimized code that was fairly readable. Care had been taken to make sure the code worked correctly. We had a working program to test against. We drew a line in the sand: all database-related UI operations must complete in 10 seconds.

It took us 20 minutes to read through the code and take benchmarks. It took us 15 minutes to replace an expensive recordset query (get column count, get column names) that was running on every row of data with a method that queried the recordset once and cached the result for all subsequent rows.
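
A minimal sketch of the shape of that fix, assuming a made-up recordset API (column_count(), column_name(), and row iteration are hypothetical stand-ins, not the actual library we used): ask for the column metadata once and reuse it for every row.

[code]
class CachedRecordsetMetadata:
    """Wraps a recordset-like object and caches its column metadata."""

    def __init__(self, recordset):
        self._rs = recordset
        self._columns = None  # filled in on first use

    @property
    def columns(self):
        # The expensive calls happen once; every later row reuses the cached list.
        if self._columns is None:
            self._columns = [self._rs.column_name(i)
                             for i in range(self._rs.column_count())]
        return self._columns


def rows_as_dicts(recordset):
    meta = CachedRecordsetMetadata(recordset)
    for row in recordset:
        # Before the fix, the column calls were effectively happening here, per row.
        yield dict(zip(meta.columns, row))
[/code]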

35 minutes. New benchmark: 340 rows in three seconds. Customers load in seven.

I am utterly convinced that optimization is unnecessary unless you have a finished, working, measured program that is too slow. This example really drives home how trivial it is to optimize well-written code.

My rant, however, is this: if you are optimizing code early, you are a complete moron. It took us 35 minutes to refactor that code. How many hours... how many days would have been lost to our team as we tried to implement features on top of "optimized" code? How many days would have been lost optimizing the database code with no clear idea how effective we were being, or how much faster (if at all!) our changes were making the program? How many weeks would have been lost as we repeated this unnecessary optimization + maintenance hassle process for other modules in the system?


PostPosted: Thu Oct 06, 2005 11:37 am 
As a related point, I would argue that it is important that the original, unoptimized code remain available in the source revision tree, as this is the version you will want when making later revisions - especially feature additions. As bugs get repaired and features are added, the optimization requirements will change. While it will be important to have the optimized versions recorded as a branch off the main source tree, so that the optimization techniques aren't lost, it would still be easier to fix the unoptimized code and then reoptimize it than it would be to try to force the alterations onto the existing optimized code - especially since the new code may need radically different optimizations, or conversely may make the old optimizations unnecessary. Optimized code should always be a branch off the source tree, not the primary revision.


Chalain
PostPosted: Thu Oct 06, 2005 12:34 pm
Interesting. I'm not sure I agree with the always part, or even the branching.

If I had two methods, one unoptimized but demonstrably correct and one optimized, I would probably want to build a unit test that ran both algorithms and compared their output. So in this case I would not want branching at all.
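
A minimal sketch of that kind of test, with hypothetical stand-ins (a naive but obviously-correct sort as the reference, the built-in sort playing the role of the optimized version): the slow version survives purely as an oracle.

[code]
import random
import unittest


def sort_reference(xs):
    """Unoptimized but demonstrably correct: a naive exchange sort."""
    out = list(xs)
    for i in range(len(out)):
        for j in range(i + 1, len(out)):
            if out[j] < out[i]:
                out[i], out[j] = out[j], out[i]
    return out


def sort_optimized(xs):
    """Whatever the fast version ends up being."""
    return sorted(xs)


class OptimizedMatchesReference(unittest.TestCase):
    def test_same_output_on_random_input(self):
        for _ in range(100):
            data = [random.randint(-1000, 1000)
                    for _ in range(random.randint(0, 50))]
            self.assertEqual(sort_optimized(data), sort_reference(data))


if __name__ == "__main__":
    unittest.main()
[/code]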

Our database optimization actually represented an underlying change in the architecture. Leaving the unoptimized code in place didn't make sense, and later work will probably want to build directly on top of the caching db code, so that there are no integration woes later from trying to merge back from the branch.

What if you muddy up a module with optimizations, but the interface is identical? Should you leave the unoptimized code on the main branch? If the interface is identical, I would worry that you're needlessly slowing everybody down by not giving them the optimized code. I would worry more that you're not testing the optimized code if you don't give it to the developers.

In certain specific cases, such as grinding through a gnarly calculation, I think it might make sense to give the team some easy code to step through. If your optimized code is orders of magnitude harder to debug (say you spin off a blocking task into its own thread), I could maybe see it.

Dunno... I guess this goes against my grain somehow. Depositing code into a branch and then not using it as development moves forward seems kinda scary.

Motherhuge projects excepted, of course; Pi's company apparently has dozens of teams writing entire applications that are in their own branch and never see the light of day until the operating system is ready for integration testing.


PostPosted: Fri Oct 07, 2005 8:45 pm 
Fair enough; in retrospect, I was making the point too strongly, and should have stated it as a recommendation or something to consider doing, rather than an absolute rule. As you may have noticed, I tend to get carried away by my own rhetoric, often with absurd results. :roll: Part of it is from having been in too many projects that used no revision control at all, or used it lackadaisically, with disastrous results.


anthonyr
PostPosted: Sat Oct 08, 2005 3:26 pm
Some optimizations are design issues, and those can be hard to make after the fact.

I'm not saying that excessive amounts of time should be spent on optimization, but a bit of time spent making sure the design doesn't do anything obviously substantially slower than it has to be can be a good idea. The trick, as always, is to pick the right amount for the situation. Your example was a situation where optimization after the fact was easy and successful, but I've seen situations where ignoring performance in the planning stages has led to the project having to be essentially reimplemented. Now, it's handy to have something to compare the new code against, but not so handy that it was worth the wasted time.

There's a difference between optimizing for the sake of optimizing and making careful design choices when it's known (eg the contract says) that good performance is required.


Chalain
PostPosted: Sat Oct 08, 2005 7:24 pm
anthonyr wrote:
Some optimizations are design issues, and those can be hard to make after the fact.


I disagree. I've been programming for juuust over 20 years now, and I have never found optimization to be as difficult as writing the code in the first place. That held even when I worked in videogames and we were handed the insane task of taking a physics engine that ran at 0.25fps and making it run faster than 30fps. We got it to run at 15fps in one day of effort. The second day, we called in a Playstation expert who ran a profiler we didn't know existed, spotted one function that was overflowing the 1K instruction cache, split that function into two parts, and presto: four hours of his time later we were running at 40fps.

Changing the design of a working program is called refactoring. It was a crazy idea five years ago, now it's a well-studied and widely implemented practice.

Caveat: I'm not saying you shouldn't spend some time up front thinking about good design, and I'm not saying you should write willfully stupid code.

The problem is, if you are looking at a design on the whiteboard and you are thinking about making the program go faster, you have already crossed over to the dark side.

anthonyr wrote:
The trick, as always, is to pick the right amount for the situation.


...Aaaaand you've already crossed over to the dark side.

The right amount is zero. With perhaps one exception.

Before you have working code, the only thoughts you should have about speed are in terms of O-notation. If you know you're processing a dataset that's got 2^48 elements, it's okay to write "no O(n^2) algorithms!" on the whiteboard. But if you are spending cycles considering two algorithms that execute in comparable time--e.g. they're both linear, or both exponential--then you need to scroll up to my original post where I called you a complete moron. Furthermore, if your dataset is small--and you can pick a number here, but I generally draw the line around 100k elements--then you shouldn't spend any time on optimization, period. Even considering algorithms that execute in times that are orders of magnitude apart. Why? Because your 2GHz processor is going to make short work of it even at O(n^2). Don't waste time arguing over O(n log n) versus O(n).
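
For what it's worth, the whiteboard version of that argument is just arithmetic. A crude back-of-the-envelope sketch (assuming, very generously, one element-step per clock on a 2GHz machine, which is not how real code behaves):

[code]
import math


def estimated_seconds(n, ops_per_sec=2e9):
    """Crude runtime estimates for a few complexity classes."""
    return {
        "O(n)":       n / ops_per_sec,
        "O(n log n)": n * math.log2(n) / ops_per_sec,
        "O(n^2)":     n * n / ops_per_sec,
    }


for n in (100_000, 2 ** 48):
    print(n, {k: "%.3g s" % v for k, v in estimated_seconds(n).items()})
[/code]

At 100k elements even the quadratic estimate is a handful of seconds; at 2^48 elements it is geological time. The complexity class is the only speed question worth whiteboard space.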

anthonyr wrote:
I've seen situations where ignoring performance in the planning stages has led to the project having to be essentially reimplemented. Now, it's handy to have something to compare the new code against, but not so handy that it was worth the wasted time.


Okay, rant mode off. Please share, I would be interested to hear your experience. While my experience has been completely the opposite of this, I'm open to the possibility that I have not, in fact, seen it all. :-) I have worked on projects that had to be scrapped and reworked, but in hindsight there were several other factors at work larger than inability to refactor.

anthonyr wrote:
There's a difference between optimizing for the sake of optimizing and making careful design choices when it's known (eg the contract says) that good performance is required.


I agree, but if we were on a team together even this statement would put the wind up me. Making careful design choices is a good call. But unless you know what the performance characteristics are, you're wasting time writing needlessly cryptic code.

Essentially, what I'm saying is, unless you know the dataset is O(huge), you should use the cleanest and easiest algorithm to implement. Don't waste a single brain cycle on binary trees versus hashes versus vectors. Just use the simplest collection type that could possibly work. If it's too slow, change it. Replacing one collection with a different type of collection is not a design issue.
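
To make that concrete with a hypothetical example (none of this is from the actual program): if a membership check lives behind a small interface, swapping the collection later is a one-line change, not a redesign.

[code]
class SeenTracker:
    """Tracks which customer IDs have already been processed."""

    def __init__(self):
        # First cut: the simplest thing that could possibly work was a list,
        # with O(n) membership checks. If it ever shows up in a benchmark,
        # swapping in a set (O(1) average lookups) touches only these lines.
        self._seen = set()

    def add(self, customer_id):
        """Returns True the first time an ID is seen, False on repeats."""
        if customer_id in self._seen:
            return False
        self._seen.add(customer_id)
        return True
[/code]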

I want to clarify here that I make no exception for code that has to run at speeds of x-per-second. If your O(data/s) is comparable to your O(processor cycles/s), then you need to spend some time making sure you have enough microseconds to complete each operation. But if you have languid milliseconds stretching to the horizon, I will stake you through the heart if you say anything about making it go faster just for the sake of making it go faster. Writing clean code first and then optimizing will get cleaner, better tested code to market sooner, and it will run just as fast.

Now, for all that I've said, there is a major design flaw in the database program I mentioned in my first post. I'll have to fix it in the next 2 weeks. I concede that I would have made them change the design had I been present when the original code was written. But at the time of this writing, I'm still confident that optimizing it at this late date will not be a major crisis. I'll report in when I'm finished.


Chalain
PostPosted: Sat Oct 08, 2005 7:43 pm
One other thought. When I write code I don't typically slap in O(n^2) code.

I have spent 20 years learning to write code, and I write code that is much faster and much more readable than I did a decade ago. When confronted with a problem, I tend to throw in the easiest solution that seems to fit, and frequently these are faster than the naive or trivial solution. If I know two ways to do a thing, and one is O(n) and the other is O(k), I will typically use the O(k) method if it's maybe less than twice as complex.

It is good to understand and know how to write fast code.

What I am after here is that peculiar itch that programmers the world over seem to get, to explore complicated new solutions to problems that don't exist. If you know quicksort by heart, go ahead and use it instead of bubblesort. If you only know bubblesort and want to learn a new sorting method and you have the time to do this, go ahead and invest the time educating yourself. But don't go inventing a new type of sort just because you're sure bubblesort is too slow. Now you're just wasting time and complicating the code, and you're going to end up with a muddled implementation of radixsort that gains you nothing and makes other programmers wonder what the heck you were thinking. ("Radixsort is used when the entire sortable set doesn't fit in available memory at one time... yet this array is maybe 1k in size, and we've got a gig of RAM. What's going on?")


anthonyr
PostPosted: Sun Oct 09, 2005 2:39 pm
Chalain wrote:
Changing the design of a working program is called refactoring. It was a crazy idea five years ago, now it's a well-studied and widely implemented practice.


Yes, and if the initial design is bad enough then refactoring becomes equivalent to reimplementing.

Chalain wrote:
Caveat: I'm not saying you shouldn't spend some time up front thinking about good design, and I'm not saying you should write willfully stupid code.


Well, that all comes down to how you define optimization. If you call anything other than the simplest possible implementation "optimization", then you're going to paint yourself into a lot of corners.

Chalain wrote:
Before you have working code, the only thoughts you should have about speed are in terms of O-notation.


Precisely. And how is picking fast algorithms not an optimization? I'm not saying I worry about cache size on the whiteboard (if ever; a Xeon is about the smallest cache I worry about these days), but I do worry about complexity.

Chalain wrote:
Okay, rant mode off. Please share, I would be interested to hear your experience. While my experience has been completely the opposite of this, I'm open to the possibility that I have not, in fact, seen it all. :-) I have worked on projects that had to be scrapped and reworked, but in hindsight there were several other factors at work larger than inability to refactor.


Okay... well I'm not sure how much I can say because of the NDA, but basically to get the kind of speed improvements we needed, a completely different algorithm was required and this algorithm required that we have certain information at a much earlier stage of processing (and thus impacted the overall design of the project). We got that information basically for free by the end of processing with the old way of doing it, but we basically had to rewrite the engine and give it a different API to get that information earlier.

There have been other miscellaneous situations where optimizations were made beforehand when it was completely obvious they'd be needed. An example that comes to mind is a multitude of projects where we assumed we'd need a client-side cache to keep network traffic within reason, even though worst-case complexity isn't any better. Caches tend to be fairly modular, but we did it the first time round anyway because there was no possibility that the project could reach the required performance with the network and server hardware we knew we were dealing with. E.g., if you know you'll have to make at least X transactions per second and each one takes at least Y bits, and X*Y is several times the network bandwidth Z, then there's just no possibility that it can be done without a cache. We don't have to profile it to know that.
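
That kind of feasibility check fits on a napkin, or in a few lines. A sketch with made-up numbers (the real figures are obviously not mine to share):

[code]
def cache_required(tx_per_sec, bits_per_tx, link_bits_per_sec):
    """True if the raw traffic X*Y cannot possibly fit in the bandwidth Z."""
    demand = tx_per_sec * bits_per_tx
    return demand > link_bits_per_sec, demand / link_bits_per_sec


# Hypothetical numbers: 500 tx/s, 40 KB per transaction, a 10 Mbit/s link.
needed, ratio = cache_required(500, 40 * 1024 * 8, 10_000_000)
print(needed, round(ratio, 1))   # True, ~16.4x over budget: no profiler needed
[/code]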

Chalain wrote:
Essentially, what I'm saying is, unless you know the dataset is O(huge), you should use the cleanest and easiest algorithm to implement.


Sun had some of our customers in mind when they made ZFS 128-bit instead of 64-bit.

Chalain wrote:
What I am after here is that peculiar itch that programmers the world over seem to get, to explore complicated new solutions to problems that don't exist. If you know quicksort by heart, go ahead and use it instead of bubblesort. If you only know bubblesort and want to learn a new sorting method and you have the time to do this, go ahead and invest the time educating yourself. But don't go inventing a new type of sort just because you're sure bubblesort is too slow. Now you're just wasting time and complicating the code, and you're going to end up with a muddled implementation of radixsort that gains you nothing and makes other programmers wonder what the heck you were thinking. ("Radixsort is used when the entire sortable set doesn't fit in available memory at one time... yet this array is maybe 1k in size, and we've got a gig of RAM. What's going on?")


I think we agree. If it's not free there has to be a damned good reason to do it, like "We can say for sure it's impossible without this." If there are two similarly hard ways of doing it and experience tells you one will be significantly faster without your having to mull it over, it's okay. The rhetoric is a necessary evil so people who don't have the experience to do this well won't try until they learn where the exceptions are through experience.

EDIT: It's quite clear that not only are we not on the same planet wrt what we do, we're not even in the same solar system, so different attitudes towards this are understandable.


Chalain
PostPosted: Sun Oct 09, 2005 7:46 pm
anthonyr wrote:
Yes, and if the initial design is bad enough then refactoring becomes equivalent to reimplementing.


Agreed, though this still doesn't ring as much of an alarm with me. *shrug* The XP guys call it a "spike refactoring". :-)

When I was at Evans & Sutherland, we were developing a graphics driver for an amazing video chip. The typical video card back then was 32MB, and you could get 64MB if you were willing to pay $400 for your card. The card we were developing had a bottom-end price point of $2500 and came with 256MB of RAM, though you could get a $6500 version with a full gigabyte.

Anyway. The chip was in the fab when we discovered a major flaw in its design. If you did 3D rendering to a texture, then did a 2D blit to the texture, you ended up with undefined results because the 3D operations were piped but 2D operations were instantaneous. If you rendered a cube and blitted it to the screen, you got garbage: the blit was over and done long before the 3D operation came out of the pipe.

But the chip was in the fab, and fabbing chips of that complexity was a 3-month process back then (this was in late 2001). It would cost about US$1M to turn the chip, and we'd instantly lose 1-2 months from our ship date. We couldn't change the hardware. We had to fix it in the driver. The task: figure out how to synchronize 3D and 2D without sacrificing 2D speed unnecessarily. It took a week of yelling and shouting in the conference room, and several whiteboards were completely filled with calculations to determine the speed impact to various operations. We ended up with a sync mechanism that had to be implemented everywhere in the driver, top to bottom. I figure maybe 1% of the codebase was changed, but that 1% was in every single file.

It took 6 engineers a week to implement, and the remaining 46 engineers on the team another 2 weeks to get adjusted to using it. But in the end, the project never really stopped moving forward.

I remember those weeks as being a bear to slog through, but it wasn't a total nightmare. That's the experience I think of when you mention "a complete reimplementation", and it's why I don't flinch (much) when I insist on doing things simply at first: I've counted the cost, I've paid the cost, and my experience tells me that paying it again is cheaper than muddling things up front.

anthonyr wrote:
EDIT: It's quite clear that not only are we not on the same planet wrt what we do, we're not even in the same solar system, so different attitudes towards this are understandable.


You know, I get that a lot. :-)

I don't know how far apart we are, really, though. I've never done operating systems, but I've done device drivers, video games, vibration control, realtime networking, and the odd bit of motherhuge dataset processing. I got to the opinion I hold today because of those industries, not in spite of them. If my perceptions are vastly different from your own, chalk it up to weirdness on my part. :-)

anthonyr wrote:
Well, that all comes down to how you define optimization. If you call anything other than the simplest possible implementation "optimization", then you're going to paint yourself into a lot of corners.


You're exactly right! And, in fact, what I'm advocating to the younger programmers out there is that they should paint themselves into these very corners. See my earlier comment about counting the cost. To borrow an XP phrase, "when you've painted yourself into a corner, you've located the precise spot to place a door."

anthonyr wrote:
And how is picking fast algorithms not an optimization? ... but I do worry about complexity.


I think we're on the same page, here. I'll use a faster algorithm if it's easy to implement and doesn't massively impact the complexity of the application. Otherwise, I'll use the simplest, easiest to read algorithm that comes to mind.

anthonyr wrote:
E.g., if you know you'll have to make at least X transactions per second and each one takes at least Y bits, and X*Y is several times the network bandwidth Z, then there's just no possibility that it can be done without a cache. We don't have to profile it to know that.


Here we're definitely on the same page. This is what I meant by O(data/s) being a significant fraction of O(processor/s). Or in your case, O(bandwidth/s). If a quick bit of back-of-the-envelope math tells you that you have a design constraint to worry about, then go ahead and worry about it. Ignoring it is willfully stupid.

anthonyr wrote:
I think we agree. If it's not free there has to be a damned good reason to do it, like "We can say for sure it's impossible without this." If there are two similarly hard ways of doing it and experience tells you one will be significantly faster without your having to mull it over, it's okay. The rhetoric is a necessary evil so people who don't have the experience to do this well won't try until they learn where the exceptions are through experience.


Yes. This is exactly what I was on about. I recently read some code of another programmer. It was heinously complex, using parallel data structures and the clients of his module had to know how the internals worked in order to keep the data synchronized. When I asked him why (in the hell!) he did it that way, he said, "Well, it needs to be fast. If I just use a queue it'll be too slow."

Poor guy thought he'd win a cookie; instead I went four sigma past tantrum. I had already done a quick bit of envelope math and determined that his code needed to handle sustained throughput of 100 messages per second, with rare bursts of up to 1000 messages per second. He could have been writing them to disk and reading them back in that amount of time. My only regret is that, after burning him at the stake, I didn't make his next-of-kin rip out his code and implement a queue. :twisted:
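
For the curious, that envelope math is easy to reproduce. A throwaway sketch (a plain in-memory queue standing in for whatever the real module did) shows how far 1000 messages per second is from stressing anything:

[code]
import time
from collections import deque

queue = deque()
messages = ["msg-%d" % i for i in range(1000)]   # one second's worth of burst traffic

start = time.perf_counter()
for m in messages:
    queue.append(m)        # producer side
while queue:
    queue.popleft()        # consumer side
elapsed = time.perf_counter() - start

print("1000 messages queued and drained in %.6f s" % elapsed)  # a tiny fraction of the 1 s budget
[/code]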


anthonyr
PostPosted: Sun Oct 09, 2005 11:38 pm
Chalain wrote:
I remember those weeks as being a bear to slog through, but it wasn't a total nightmare. That's the experience I think of when you mention "a complete reimplementation", and it's why I don't flinch (much) when I insist on doing things simply at first: I've counted the cost, I've paid the cost, and my experience tells me that paying it again is cheaper than muddling things up front.


In my case we're a bit short on people and we don't really have the option of reimplementing complicated stuff like that. It takes forever to validate it and we have to be extra-careful when it's a wholesale reimplementation like the example I gave (a thorough test can take CPU years). If we screw up, the consequences can potentially include words like "multi-kiloton explosion".

Chalain wrote:
I think we're on the same page, here. I'll use a faster algorithm if it's easy to implement and doesn't massively impact the complexity of the application. Otherwise, I'll use the simplest, easiest to read algorithm that comes to mind.


Something else that I've noticed is that a data structure that represents the way the data naturally is will often be faster because it exposes relationships that aren't always trivial with standard data structures. That's not premature optimization, that's just good design. I won't use a tree (or whatever) for the sheer unadulterated hell of it, but it's great for checking dependencies (or whatever). It'll be easier to write, easier to read, way harder to screw up, and faster as a consequence of the data being laid out in a way that reflects what it does.

Sometimes that way has a performance penalty. For example, I use regexes when most people seem happy to write a little state machine. A regex is faster to write, but slow as hell (I've seen them tax an Athlon64 box on surprisingly small amounts of input). But I find that when I'm actually writing a formal definition of what something is supposed to look like, there are fewer bugs down the road while the state machines take weeks of tweaks to get right (and even then there are hidden cases waiting in ambush).
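
To illustrate the "formal definition" point with a toy format (hypothetical, not anything from my actual project): the regex reads like the spec, while the equivalent hand-rolled scanner is a page of bookkeeping.

[code]
import re

# A signed decimal with an optional fractional part, e.g. "42", "-3.5", "+0.25".
# The pattern *is* the definition.
NUMBER = re.compile(r"^[+-]?\d+(\.\d+)?$")

for s in ("42", "-3.5", "+0.25", "3.", "--7"):
    print(s, bool(NUMBER.match(s)))
[/code]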


Raif
PostPosted: Mon Oct 10, 2005 11:14 am
anthonyr wrote:
But I find that when I'm actually writing a formal definition of what something is supposed to look like, there are fewer bugs down the road while the state machines take weeks of tweaks to get right (and even then there are hidden cases waiting in ambush).

Interesting... I've usually found regexes to be buggier and harder to maintain, not least because they're difficult to read once written. At least, that's true for complicated ones.


Chalain
PostPosted: Mon Oct 10, 2005 11:46 am
anthonyr wrote:
In my case we're a bit short on people


This is just an extension of the fundamental disconnect between our positions. I'm asserting that simple-then-complex-if-needed costs less; you're asserting that there are cases where it costs much more. So I read that line and threw up my hands, saying, if you're short on resources, then you NEED to be doing simple design first!

I'm still not convinced that your technology situation makes refactoring too painful, BUT:
  • You've agreed with the gist of my post, so I'm really just beating a dead horse now,
  • you may be in an environment where refactoring is extra difficult (for example, if your code can blow up a building, you end up with acceptance tests that take weeks to run)

anthonyr wrote:
If we screw up, the consequences can potentially include words like "multi-kiloton explosion".


Okay, we're not on different planets, or even in different towns. I worked quite a bit in vibration control, and still consult in that field today. One of my client's customers uses our control software to shake howitzer shells. After 30 years at his job, he is considered an expert in his field... because he's only blown up his lab twice. :-)

That was also the company where we got a bug report that literally began with the words "Fortunately no one was killed." Our controller had an accumulation bug that would make it "pop" when you resumed a test after a pause. Toyota engineers had a Camry frame on a 25kVA shaker table. It didn't go "pop" so much as "BANG". It bent the car's frame... and the safety restraints were the only thing that kept the "launch" from being successful.

I'm not arguing anymore, now I'm just rambling. :-)

anthonyr wrote:
Something else that I've noticed is that a data structure that represents the way the data naturally is will often be faster because it exposes relationships that aren't always trivial with standard data structures. That's not premature optimization, that's just good design.


Yes! Yes yes yes! So here's what's interesting: posit the theoretical "problem coder" I have been on about. Now give him a natural-feeling design and a heinously complicated trifurcated steppinghash network. He'll go for the second one every time.

Why is that? Looking at my own youthful indiscretions, I remember frequently thinking, "I don't want to write a full object implementation of this because it will take too long. I just want to get this working." This is especially true if the object implementation is actually a set of two or three separate objects that exist naturally and cooperate to solve the problem cleanly. Dunno if others have that experience; it may just be me.

anthonyr wrote:
Sometimes that way has a performance penalty. For example, I use regexes when most people seem happy to write a little state machine.


This is a great example of the right way to do it. A solid implementation that covers exactly what needs to be covered. Later, if you discover you need more speed out of that loop, you can convert the regex to a state machine. It will go faster, and because you've already got code that does it correctly, you should be able to create a state machine in one round of coding that catches all of the correct cases.
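
A sketch of what that later conversion can look like, reusing the toy number pattern from the sketch a couple of posts up: the hand-rolled state machine gets checked against the regex, so it inherits the regex's correctness.

[code]
import re

PATTERN = re.compile(r"^[+-]?\d+(\.\d+)?$")   # the known-good definition


def is_number_fsm(s):
    """Hand-rolled state machine intended to accept exactly what PATTERN accepts."""
    state = "sign"
    for ch in s:
        if state == "sign":
            state = "int" if ch.isdigit() else ("int_need" if ch in "+-" else "fail")
        elif state == "int_need":
            state = "int" if ch.isdigit() else "fail"
        elif state == "int":
            state = "int" if ch.isdigit() else ("frac_need" if ch == "." else "fail")
        elif state == "frac_need":
            state = "frac" if ch.isdigit() else "fail"
        elif state == "frac":
            state = "frac" if ch.isdigit() else "fail"
        else:
            return False
    return state in ("int", "frac")


# The regex is the oracle: any disagreement is a bug in the new code.
for s in ("42", "-3.5", "+0.25", "", "+", "3.", ".5", "1.2.3"):
    assert bool(PATTERN.match(s)) == is_number_fsm(s), s
[/code]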


anthonyr
PostPosted: Mon Oct 10, 2005 3:07 pm
Raif wrote:
Interesting... I've usually found regexes to be buggier and harder to maintain, not least because they're difficult to read once written. At least, that's true for complicated ones.


I dunno, they've never given me much trouble. They require a lot of documentation for other people to be able to read them, but in my experience the probability of bugs is so low that they only need to be updated when the language changes. Most bugs are because the language was designed wrong.

They don't usually get too complicated before you end up writing a full parser, which is a whole other issue.

Chalain wrote:
This is just an extension of the fundamental disconnect between our positions. I'm asserting that simple-then-complex-if-needed costs less; you're asserting that there are cases where it costs much more. So I read that line and threw up my hands, saying, if you're short on resources, then you NEED to be doing simple design first!


Well, to be fair, the engines are the bit that gets the treatment I talked about; I'm focusing on them because they're counterexamples that show exceptions to your position. There's simpler stuff and front ends that account for most of what we do, and I think we'd agree on how to deal with that.

The engines defy good design principles. They're based on weird math most of us don't get (we love Python because the engineers can give us working code to test against), they're tightly integrated by necessity, etc.

Chalain wrote:
You've agreed with the gist of my post, so I'm really just beating a dead horse now,


beating a dead horse == chatting 'cause it's fun

Chalain wrote:
you may be in an environment where refactoring is extra difficult (for example, if your code can blow up a building, you end up with acceptance tests that take weeks to run)


It's not so much a building that would blow up as it is a pipeline out in the middle of nowhere, and I'm pretty sure it would have to be gaseous at ambient temps to give you that kind of explosion. The far more likely outcome is massive environmental damage. And anyone who wants to refactor 20-year-old FORTRAN is welcome to take my place. :)

Chalain wrote:
I'm not arguing anymore, now I'm just rambling. :-)


Yeah, me too. :)

Chalain wrote:
Why is that? Looking at my own youthful indiscretions, I remember frequently thinking, "I don't want to write a full object implementation of this because it will take too long. I just want to get this working." This is especially true if the object implementation is actually a set of two or three separate objects that exist naturally and cooperate to solve the problem cleanly. Dunno if others have that experience; it may just be me.


I was never afraid of building big class hierarchies (probably not afraid enough...), and I don't think I was ever really that concerned with performance except for complexity and trying to cut down on system calls. I cut my teeth on a Pentium II. On a VAX or whatever you started on, you might actually notice optimizations.

