Thursday, December 24, 2009
audit tracking on Sql Server
Tuesday, November 17, 2009
the excel syndrome
Thursday, November 12, 2009
data access
Wednesday, November 11, 2009
today
Wednesday, November 4, 2009
Workflow and Exceptions
Thursday, October 29, 2009
SQL Server post Insert Trigger
CREATE TRIGGER trinsert ON kjh
AFTER INSERT
AS
    -- Inserted is the pseudo-table that holds the rows just added by the INSERT
    SELECT a
    FROM Inserted;
Wednesday, October 28, 2009
Missing Technical Phrases
Thursday, October 22, 2009
bitlocker
Thursday, October 15, 2009
why your systems suck
It's been a while since I posted, I know. The delay hasn’t been for lack of interest or distraction as much as it has been from the fact that I just haven’t been doing anything interesting. Most of my time lately has gone to fixing “little stuff.” You know, that old app that works most of the time but has an issue that wakes everyone up once a month. Or that emergency “gotta-have-it” report that no one will bother to look at. Or that special request that comes from the local potentate that is meaningless, but that everything stops for because the potentate wants it.
It’s given me time to think about the overall question though: why do computer systems suck so bad?
If you're bothering to read this blog, I assume you have to be a geek. Otherwise, you'd not be wasting your time. But we all know that geeks like to waste their time, so you're here. So, for a minute, pretend you're not a geek. Let's say you're a high-powered manager in a decent organization. You were that guy who played second string on the football team in college and drank a bit too much the night before the early morning practice. Or you were that young woman whose social calendar was filled with must-do events and whose classes just kept getting in the way.
But you got your MBA and found you had a real aptitude for understanding and managing how companies actually make money.
Now, you find yourself going in each day and looking at a spreadsheet with your morning coffee. This spreadsheet gives you a snapshot of yesterday's performance and today's challenges. And it sucks. The tabs aren't right. The numbers are usually off. The thing is often late. You've learned to rely on it and you need it to get the competitive edge you crave. But the thing can't even reliably calculate something as simple as a sales conversion rate.
You call your IT director and scream into your phone in what becomes a daily ritual. And you just can’t understand why creating a simple spreadsheet should be so beyond your IT department.
Why is it? You’ve got a bunch of good people who all work too many hours and really want to make it right. You’ve got a – maybe not ideal but – reasonable budget. What’s wrong?
What IT groups usually miss is the big picture and the end goal. That business manager cares about his or her spreadsheet. They couldn’t care less that their IT department just worked a 14 hour day to install a new version of a J2EE engine.
What tends to happen in IT is that you get a developer. Let’s say that developer’s name is Marvin (you know… the guy who wears white socks with black shoes and reads comic books at lunch). And he’s pretty good. The code that he writes incorporates complex logic to pull data across a variety of distributed platforms, converts data types – because we all know that duplicated data isn’t really cloned, it’s mutated – and does these complex mathematical calculations that would make a physics professor blink. And it works pretty well – say about 90% of the time. Let’s say that 3 days each month (or about 3/30) it has some issue. It could be something small, like it tried to write data, but couldn’t get a lock for one of the rows. It could be an all-out crash that was really simple to fix by just restarting the app, but required a little manual intervention. Or it could be anything in between – maybe Marvin has a bug in one of his calculations, so that if some value is less than zero he gets the wrong results. But, all in all, it only has issues 3 days each month. Not a huge deal. Someone calls Marvin, he gets out of bed, logs in, fixes the issue and republishes the data.
But suppose the server admin (Alvin) has the same success rate: 9 in 10. About 3 days each month, the server runs out of memory or disk, or drops offline without warning to auto-install new patches or something.
If the failure rate of Marvin's process is 1 in 10 and the failure rate of Alvin's server is 1 in 10, then the combined failure rate is already roughly 20% (strictly, 1 - (0.9 × 0.9) = 19%, if the two failures are independent). But here's the kicker. The whole is greater than the sum of the parts, because a failure in the server can cause downstream consequences that may not be visible instantly. (This is one of the things I can't get my infrastructure staff to see, by the way.) So as part of his process, Marvin, let's say, writes some temp files. When he gets called because the server did an unplanned restart and he needs to re-run his process, he logs in, goes to a command prompt and runs the process. But he runs it with his credentials, not the credentials of the scheduler's account, thereby causing the scheduled process to run into privilege issues on its next run. Now he's just created a problem that was not part of either 10% failure rate. It's a perfectly reasonable mistake to make. It doesn't make Marvin a bad guy.
But the end result is that the total failure rate of the end process is now 10% for Marvin's process + 10% for Alvin's server + X% for mistakes made during cleanup + Y% for other failures in the process caused by fallout from the server crash: missing temp files, lost session data, etc.
In the end, the failure rate of the whole process may be something like 25% or 30%. And that likely doesn't include the failure of whatever systems Marvin is pulling the data from. If those systems also have a 10% failure rate, and their failures can cause downstream problems, they can add another 15% or more to the overall failure rate of the process.
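To put rough numbers on that, here's a back-of-the-envelope sketch in C#. The 10% figures are the made-up rates from the example above; the knock-on rate and all of the names (FailureArithmetic and so on) are my own, purely for illustration.

using System;

class FailureArithmetic
{
    static void Main()
    {
        double marvinsProcess = 0.10;    // Marvin's code fails about 3 days in 30
        double alvinsServer = 0.10;      // Alvin's server fails about 3 days in 30
        double knockOnProblems = 0.08;   // assumed: bad cleanups, missing temp files, stale credentials

        // The process only works on a given day if every piece works, so the
        // success rates multiply and the failures compound.
        double chanceItAllWorks = (1 - marvinsProcess) * (1 - alvinsServer) * (1 - knockOnProblems);

        Console.WriteLine("Daily failure rate: {0:P1}", 1 - chanceItAllWorks);
        // Prints roughly 25%, and that's before counting the source systems Marvin
        // pulls data from, which push the number toward 30% and beyond.
    }
}

Run it and the point makes itself: three "pretty good" pieces turn into a process that breaks about one day in four.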
So the reason computer systems suck is that failure rates don't just add up, they compound: every new component brings its own failures, and every failure tends to spawn new ones.
How do you fix it?
Naturally, you need to fix the individual issues. You need to get Marvin's 10% failure rate down to 5% and then under 2%. But the real answer is that the systems need to be more loosely coupled and resilient on their own. Any cross-system dependency needs to be identified and planned for. And someone needs to own not the architecture of the individual pieces, but the architecture of the interaction between the pieces. This is probably the most overlooked piece of the puzzle.
Someone once said: "the more complex you make the drain, the easier it is to stop up the plumbing."
Wednesday, September 2, 2009
just a little annoyance
Saturday, August 22, 2009
Windows 7
Sunday, August 16, 2009
a quickie Unix thing
This is a really quick thing, but by far the most useful thing about POSIX to me is the way you can chain commands. Probably the coolest command chain that I regularly use, or at least the one I use most often, is:
find ./ -name "*.something" -exec egrep -il "somepattern" {} \;
Yeah, for those non-"ix"-ers that syntax is ugly. Ok, for those of us who have been there, it's ugly too.
But what it will do is look through every file named *.something for the pattern "somepattern", ignoring case, and provide a list of all the files that contain a match (the -i is what ignores case, and the -l is what prints just the file names).
I use it constantly. Of course there are lots of variations, but the ability to look through a complex directory tree of php files for anything with a certain pattern, when one is totally unfamiliar with the code, is wildly useful.
It's not difficult, but I've run into a few people who don't know how to do this and stumble over it using pipes and the like.
Hope it helps someone.
Wednesday, July 22, 2009
Creating Custom Attributes
Wednesday, July 15, 2009
another MS rave
Friday, July 3, 2009
WCF
Wednesday, July 1, 2009
Windows WorkFun
In Windows Workflow Foundation, you pass your inputs to a workflow as a dictionary of name/value pairs:
Dictionary<string, object> r = new Dictionary<string, object>();
Now the WF framework will dutifully walk through the Dictionary, and for each string key, it will use .NET reflection to look for a public workflow property with the same name.
So if you create a workflow and, in the code of the class, you add:
private string someVar;
public string thisIsAHack
{
    get { return someVar; }
    set { someVar = value; }
}
then you add a Dictionary entry like this:
r.Add("thisIsAHack", "someDummyValue");
and if you pass the Dictionary to the workflow, then ("*ding*") the workflow property will be populated.
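Under the hood, it's presumably doing something like the following. This is just my sketch of the idea, not the actual WF implementation; ApplyInputs and the other names are mine.

using System;
using System.Collections.Generic;
using System.Reflection;

static class InputBinder
{
    // Walk the dictionary and push each value into the public property
    // of the same name on the workflow instance.
    public static void ApplyInputs(object workflowInstance, Dictionary<string, object> inputs)
    {
        Type workflowType = workflowInstance.GetType();
        foreach (KeyValuePair<string, object> pair in inputs)
        {
            // GetProperty throws AmbiguousMatchException if more than one
            // property in the hierarchy matches the name (more on that below).
            PropertyInfo prop = workflowType.GetProperty(pair.Key, BindingFlags.Public | BindingFlags.Instance);
            if (prop == null || !prop.CanWrite)
                throw new ArgumentException("No writable public property named " + pair.Key);

            prop.SetValue(workflowInstance, pair.Value, null);
        }
    }
}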
Ok... on one level that sounds cool.
On another level, it makes me say "what were you guys *THINKING*?"
First of all, there is no validation at all. It's very loosely coupled. So in many ways, it's a throwback to the old JScript days when all variables were "var x =". There's no strong typing. So you have no clue until runtime whether or not things are going to actually... you know... work? The least they could have done was provide a tool to do easy type-checking on the inputs. Or an "is this ok" method we could call beforehand that would do a test and return true/false, so we're not stuck with a runtime exception.
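Something along these lines would have done the job. To be clear, nothing like this ships with WF; it's just my sketch of the kind of pre-flight check the framework could have offered, and WorkflowInputValidator and InputsLookOk are made-up names.

using System;
using System.Collections.Generic;
using System.Reflection;

static class WorkflowInputValidator
{
    // Returns true only if every dictionary entry maps to a writable public
    // property of a compatible type, so you find out before the runtime does.
    public static bool InputsLookOk(Type workflowType, Dictionary<string, object> inputs)
    {
        foreach (KeyValuePair<string, object> pair in inputs)
        {
            PropertyInfo prop;
            try
            {
                prop = workflowType.GetProperty(pair.Key, BindingFlags.Public | BindingFlags.Instance);
            }
            catch (AmbiguousMatchException)
            {
                return false;   // more than one property with this name in the hierarchy
            }

            if (prop == null || !prop.CanWrite)
                return false;   // no writable public property with this name

            if (pair.Value != null && !prop.PropertyType.IsAssignableFrom(pair.Value.GetType()))
                return false;   // the value isn't assignable to the property's type
        }
        return true;
    }
}

Call WorkflowInputValidator.InputsLookOk(typeof(MyWorkflow), r) before creating the workflow (MyWorkflow being whatever your workflow class is) and you at least get a boolean instead of a surprise exception.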
But, ok.
The second (minor, I'll admit) thing is that they have to be public properties. The public part I kind of understand (although it throws out data protection for things in the same namespace). But the *property* part annoys me.
I'll be honest. I love properties. I get it. But it drives me *UP A WALL* that the coding standard is:
private string _someProp;
public string SomeProp
{
    get { return _someProp; }
    set { _someProp = value; }
}
That code adds *ZERO* value and just clutters the source. I mean please. Properties are great if you want to do some special logic with the variables. But come on. What value does it add to put a wrapper on top of the accessor that just passes and sets the value anyway?
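For what it's worth, the auto-implemented properties that arrived with C# 3.0 at least shrink that ceremony down to one line, with the compiler generating the backing field for you:

public string SomeProp { get; set; }

Same behavior, no hand-written wrapper cluttering the source.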
So far, it's not so terrible. But here's the awful part.
Ok... first of all, the workflow classes are "partial" classes. Partial classes are a sin. I hope Microsoft repents from this before God smites the world because of it. Or at least the guys who invented OOA/D smite them.
Honestly, I think Microsoft has never liked object oriented development. All their products talk about it, but the products themselves rarely practice it.
I remember when VB4 came out. I still have the box. Right on it, it says that it's an "object oriented RAD" tool. Forget the fact that OO and RAD are nearly polar opposites; VB4 was about as OO as... well... C. Yeah, it had an "object" keyword, but no inheritance or data protection, and it only had polymorphism because the data was not typed. I mean, please.
We've seen this a lot. Databound Grid controls. Need I say more? That's about as non-OO as you can get. Oh, and MVC? Well, it's in the docs, but you have to work pretty hard to do it.
Alright. So partial classes are an abomination. They're a contradiction of everything object-based design stands for. The whole point of a class is to put all the logic and values together. Partial classes are... well... not classes. They're pieces of things. At my last job, I told my team that if I ever ran across one in a code review I'd slap the developer.
(note to self: *remember to breathe*)
Now here's where it gets good. The partial class workflow is a partial to... what? Oh, you can't tell. MS locked that down. So, suppose you're walking along and you add a public (*grumble*) property to your part of the partial class. And suppose there's a name collision of some weird type with the other side of the partial class. Or... even better, suppose there's a name collision with something in the hierarchy tree that... oh yeah... that's right, you can't see.
When you dutifully add the item to the input dictionary
r.add("Site", "somesite");
everything is great. But when you create the workflow, you'll get a runtime error with this very helpful exception message :
"Ambiguous match found".
Microsoft, very cleverly, doesn't tell you *which* property caused the ambiguous match, by the way. It's similar to some of the SQL Server errors I've seen where they tell you there's a data mismatch on an insert, but don't bother to mention which column is actually mismatched.
Gee. Thanks, guys.
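For the curious, here's a minimal repro of the kind of collision that produces that message. It's my own example, not the actual workflow hierarchy (which, again, you can't see); the class names are invented.

using System;
using System.Reflection;

class HiddenBaseActivity
{
    // Imagine this living somewhere up the locked-down hierarchy.
    public object Site { get { return null; } }
}

class MyWorkflow : HiddenBaseActivity
{
    // The half of the class you can see innocently adds a property with the
    // same name, hiding the inherited one. Now two public "Site" properties exist.
    public new string Site { get; set; }
}

class Repro
{
    static void Main()
    {
        // Throws System.Reflection.AmbiguousMatchException: "Ambiguous match found."
        // Note that the exception doesn't say which name collided.
        typeof(MyWorkflow).GetProperty("Site", BindingFlags.Public | BindingFlags.Instance);
    }
}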